package destination

v0.8.6
Published: Mar 23, 2017 License: BSD-2-Clause-Views Imports: 18 Imported by: 0

Documentation

Overview

Like bufio.Writer (Copyright 2009 The Go Authors. All rights reserved.) but with instrumentation around flushing. Because this codebase is the only user:

* it doesn't implement the entire bufio.Writer API, because it doesn't need to
* some simplifications can be made, fewer edge cases, etc.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewKeepSafe

func NewKeepSafe(initialCap int, periodKeep time.Duration) *keepSafe

func NewSlowChan

func NewSlowChan(backend chan []byte, sleep time.Duration) chan []byte

Note: the internal goroutine doesn't get cleaned up until the backend channel closes, so don't instantiate gazillions of these.
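
For illustration, a hedged sketch of rate-limiting reads from an existing channel (the consumer is hypothetical):

backend := make(chan []byte, 100)
slow := NewSlowChan(backend, 100*time.Millisecond) // each read is delayed by ~100ms

go func() {
	for buf := range slow {
		handle(buf) // handle is a hypothetical consumer
	}
}()

// later: closing backend is what lets the internal goroutine exit
close(backend)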

func SetLogger

func SetLogger(l *logging.Logger)

Types

type Conn

type Conn struct {
	In chan []byte
	// contains filtered or unexported fields
}

func NewConn

func NewConn(addr string, dest *Destination, periodFlush time.Duration, pickle bool) (*Conn, error)

func (*Conn) Close

func (c *Conn) Close() error

func (*Conn) Flush

func (c *Conn) Flush() error

func (*Conn) HandleData

func (c *Conn) HandleData()

func (*Conn) HandleStatus

func (c *Conn) HandleStatus()

func (*Conn) Write

func (c *Conn) Write(buf []byte) (int, error)

Write returns a network/write error, so that it can be retried later. It deals with pickle errors internally, because retrying wouldn't help anyway.
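
A hedged usage sketch; per the note above, only the network/write error surfaces, so the caller decides what to do with the data (conn would come from NewConn, requeue is a hypothetical helper):

buf := []byte("some.metric 1 1490000000")
if _, err := conn.Write(buf); err != nil {
	// network/write error: the data didn't make it out,
	// so keep it around and retry later
	requeue(buf)
}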

type Datapoint

type Datapoint struct {
	Name string
	Val  float64
	Time uint32
}
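
For illustration, constructing one (the uint32 Time field suggests a Unix timestamp in seconds):

dp := Datapoint{
	Name: "servers.host1.load",
	Val:  0.85,
	Time: uint32(time.Now().Unix()),
}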

type Destination

type Destination struct {
	Matcher matcher.Matcher `json:"matcher"`

	Addr         string `json:"address"`  // tcp dest
	Instance     string `json:"instance"` // Optional carbon instance name, useful only with consistent hashing
	SpoolDir     string // where to store spool files (if enabled)
	Spool        bool   `json:"spool"`        // spool metrics to disk while dest down?
	Pickle       bool   `json:"pickle"`       // send in pickle format?
	Online       bool   `json:"online"`       // state of connection online/offline.
	SlowNow      bool   `json:"slowNow"`      // did we have to drop packets in current loop
	SlowLastLoop bool   `json:"slowLastLoop"` // did we have to drop packets in the previous loop

	// set in/via Run()
	In chan []byte `json:"-"` // incoming metrics
	// contains filtered or unexported fields
}

func New

func New(prefix, sub, regex, addr, spoolDir string, spool, pickle bool, periodFlush, periodReConn time.Duration) (*Destination, error)

New creates a destination object. Note that it still needs to be told to run via Run().
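
A construction sketch; the matcher arguments (prefix/sub/regex), address, spool dir and timings are illustrative assumptions:

dest, err := New("", "servers.", "", "localhost:2003", "/var/spool/carbon-relay-ng",
	false, false, time.Second, 10*time.Second)
if err != nil {
	log.Fatal(err)
}
dest.Run() // presumably starts the internal goroutines and sets up dest.In
dest.In <- []byte("servers.host1.load 0.85 1490000000")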

func (*Destination) Flush

func (dest *Destination) Flush() error

func (*Destination) GetMatcher

func (dest *Destination) GetMatcher() matcher.Matcher

func (*Destination) Match

func (dest *Destination) Match(s []byte) bool

func (*Destination) Run

func (dest *Destination) Run()

func (*Destination) Shutdown

func (dest *Destination) Shutdown() error

func (*Destination) Snapshot

func (dest *Destination) Snapshot() *Destination

a "basic" static copy of the dest, not actually running

func (*Destination) Update

func (dest *Destination) Update(opts map[string]string) error

Update applies the given options. These can't be changed yet: pickle, spool, flush, reconn.
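
A hedged sketch of an update call; the "address" key is a guess based on the struct's json tag, not confirmed by this page:

// "address" is an assumed option key
if err := dest.Update(map[string]string{"address": "10.0.0.5:2003"}); err != nil {
	log.Println("update failed:", err)
}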

func (*Destination) UpdateMatcher

func (dest *Destination) UpdateMatcher(matcher matcher.Matcher)

func (*Destination) WaitOnline

func (dest *Destination) WaitOnline() chan struct{}
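
Presumably the returned channel unblocks (e.g. by closing) once the connection is up, so a caller can wait on it; a sketch under that assumption:

select {
case <-dest.WaitOnline():
	// destination is online, safe to send
case <-time.After(30 * time.Second):
	log.Println("destination still offline after 30s")
}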

type Spool

type Spool struct {
	InRT   chan []byte
	InBulk chan []byte
	Out    chan []byte
	// contains filtered or unexported fields
}

Spool sits in front of the nsqd diskqueue. It provides buffering (to accept input while storage is slow, e.g. while sync() runs, every 1000 items), QoS (RT vs Bulk) and controllable I/O rates.

func NewSpool

func NewSpool(key, spoolDir string) *Spool

Parameters should be tuned so that the spool can buffer packets for the duration of one sync, and buffers no more than needed, especially if we know the queue is slower than the ingest rate.
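
A wiring sketch, assuming NewSpool starts the internal Buffer/Writer goroutines and that callers feed InRT/InBulk and drain Out (inferred from the channel names; replayed and send are hypothetical):

spool := NewSpool("dest1", "/var/spool/carbon-relay-ng")
spool.InRT <- []byte("realtime.metric 1 1490000000") // RT path: live traffic
go spool.Ingest(replayed)                            // Bulk path: replayed is a hypothetical [][]byte
for buf := range spool.Out {
	send(buf)
}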

func (*Spool) Buffer

func (s *Spool) Buffer()

func (*Spool) Close

func (s *Spool) Close()

func (*Spool) Ingest

func (s *Spool) Ingest(bulkData [][]byte)

func (*Spool) Writer

func (s *Spool) Writer()

Writer provides a channel-based API to the queue.

type Writer

type Writer struct {
	// contains filtered or unexported fields
}

Writer implements buffering for an io.Writer object. If an error occurs writing to a Writer, no more data will be accepted and all subsequent writes will return the error. After all data has been written, the client should call the Flush method to guarantee all data has been forwarded to the underlying io.Writer.
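
Usage mirrors bufio.Writer; a minimal sketch (the key argument presumably labels this writer in the flush instrumentation):

var sink bytes.Buffer
w := NewWriter(&sink, 4096, "dest1")
if _, err := w.Write([]byte("some.metric 1 1490000000\n")); err != nil {
	log.Fatal(err)
}
if err := w.Flush(); err != nil { // only Flush guarantees the data reached sink
	log.Fatal(err)
}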

func NewWriter

func NewWriter(w io.Writer, size int, key string) *Writer

NewWriter returns a new Writer whose buffer has at least the specified size. If the argument io.Writer is already a Writer with a large enough buffer, it returns the underlying Writer.

func (*Writer) Available

func (b *Writer) Available() int

Available returns how many bytes are unused in the buffer.

func (*Writer) Buffered

func (b *Writer) Buffered() int

Buffered returns the number of bytes that have been written into the current buffer.

func (*Writer) Flush

func (b *Writer) Flush() error

Flush writes any buffered data to the underlying io.Writer.

func (*Writer) Write

func (b *Writer) Write(p []byte) (nn int, err error)

Write writes the contents of p into the buffer. It returns the number of bytes written. If nn < len(p), it also returns an error explaining why the write is short.
