Version: v1.7.1-0...-29a0ac7

This package is not in the latest version of its module.

Published: Apr 4, 2016 License: Apache-2.0 Imports: 25 Imported by: 0








func AmqpExecutor

func AmqpExecutor(fn GraphiteReturner, consumer rabbitmq.Consumer, cache *lru.Cache)

AmqpExecutor reads jobs from rabbitmq, executes them, and acknowledges them if they processed successfully or encountered a fatal error (i.e. an error that we know won't recover on future retries, so there is no point in retrying).

func ChanExecutor

func ChanExecutor(fn GraphiteReturner, jobQueue JobQueue, cache *lru.Cache)

func Construct

func Construct()

func Dispatcher

func Dispatcher(jobQueue JobQueue)

Dispatcher dispatches, every second, all jobs that should run for that second. Every job has an id, so you can run multiple dispatchers (for HA) while still processing each job only once (provided jobs get consistently routed to executors).
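The "consistently routed" requirement can be illustrated by hashing the job id to pick an executor, so every HA dispatcher makes the same choice. This is a minimal sketch under that assumption; `routeJob` is a hypothetical helper, and the real routing happens in the queueing layer.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// routeJob picks an executor for a job id by hashing it, so that
// multiple dispatchers independently route the same job to the same
// executor, and each job is processed only once.
func routeJob(jobID string, numExecutors int) int {
	h := fnv.New32a()
	h.Write([]byte(jobID))
	return int(h.Sum32() % uint32(numExecutors))
}

func main() {
	// the same id always routes to the same executor,
	// regardless of which dispatcher computed it
	fmt.Println(routeJob("monitor-42", 4) == routeJob("monitor-42", 4))
}
```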

func GraphiteAuthContextReturner

func GraphiteAuthContextReturner(org_id int64) (bgraphite.Context, error)

func Init

func Init(metrics met.Backend)

Init initializes all metrics. Run this function when statsd is ready, so we can create the series.

func LoadOrSetOffset

func LoadOrSetOffset() int


type Check

type Check struct {
	// do we need these members here?
	//Id           int64
	//OrgId        int64
	//DataSourceId int64
	Definition CheckDef
}

type CheckDef

type CheckDef struct {
	CritExpr string
	WarnExpr string
}

func (CheckDef) String

func (c CheckDef) String() string

type CheckEvaluator

type CheckEvaluator interface {
	Eval() (*m.CheckEvalResult, error)
}

type GraphiteCheckEvaluator

type GraphiteCheckEvaluator struct {
	Context graphite.Context
	Check   CheckDef
	// contains filtered or unexported fields
}

func NewGraphiteCheckEvaluator

func NewGraphiteCheckEvaluator(c graphite.Context, check CheckDef) (*GraphiteCheckEvaluator, error)

func (*GraphiteCheckEvaluator) Eval

Eval evaluates the crit/warn expressions and returns the result, along with any non-fatal error (implying the query should be retried later, once a temporary infrastructure problem is resolved) as well as fatal errors. TODO: instrument error scenarios.
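The crit/warn precedence implied above (a firing crit expression outranks a firing warn expression) can be sketched as follows. The state constants and `evalCheck` are assumptions for illustration; the real evaluator compiles and runs graphite expressions and returns an `m.CheckEvalResult`.

```go
package main

import "fmt"

// hypothetical states, loosely mirroring m.CheckEvalResult
const (
	EvalResultOK = iota
	EvalResultWarn
	EvalResultCrit
)

// evalCheck sketches crit-before-warn precedence: if the crit
// expression fires, the result is critical regardless of warn.
// (assumed semantics, not the package's actual evaluation logic)
func evalCheck(critFired, warnFired bool) int {
	if critFired {
		return EvalResultCrit
	}
	if warnFired {
		return EvalResultWarn
	}
	return EvalResultOK
}

func main() {
	fmt.Println(evalCheck(true, true))   // crit wins over warn
	fmt.Println(evalCheck(false, true))  // warn only
	fmt.Println(evalCheck(false, false)) // all clear
}
```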

type GraphiteReturner

type GraphiteReturner func(org_id int64) (bgraphite.Context, error)

type Job

type Job struct {
	OrgId           int64
	MonitorId       int64
	EndpointId      int64
	EndpointName    string
	EndpointSlug    string
	Settings        map[string]string
	MonitorTypeName string
	Notifications   m.MonitorNotificationSetting
	Freq            int64
	Offset          int64 // offset on top of "even" minute/10s/.. intervals
	Definition      CheckDef
	GeneratedAt     time.Time
	LastPointTs     time.Time
	AssertMinSeries int       // to verify during execution at least this many series are returned (would be nice at some point to include actual number of collectors)
	AssertStart     time.Time // to verify timestamps in response
	AssertStep      int       // to verify step duration
	AssertSteps     int       // to verify during execution this many points are included
}

Job is a job for an alert execution. Note that LastPointTs denotes the timestamp of the last point to run against; this way the check always runs on the right data, irrespective of execution delays. That said, for convenience, we also track the GeneratedAt timestamp.

func (Job) StoreResult

func (job Job) StoreResult(res m.CheckEvalResult)

func (Job) String

func (job Job) String() string

type JobQueue

type JobQueue interface {
	Put(job *Job)
}

type PreAMQPJobQueue

type PreAMQPJobQueue struct {
	// contains filtered or unexported fields
}

func (PreAMQPJobQueue) Put

func (jq PreAMQPJobQueue) Put(job *Job)

type Ticker

type Ticker struct {
	C chan time.Time
	// contains filtered or unexported fields
}

Ticker is a ticker to power the alerting scheduler. It is like a time.Ticker, except:

* it doesn't drop ticks for slow receivers; rather, they queue up, so that callers stay in control and can instrument what's going on
* it automatically ticks every second, which is the right thing in our current design
* it ticks on second marks, or very shortly after, which provides a predictable load pattern (this shouldn't cause too many load contention issues, because the next steps in the pipeline just process at their own pace)
* the timestamps are used to mark the "last datapoint to query for" and as such are a configurable number of seconds in the past
* because we want to allow:

- a clean "resume where we left off" and "don't yield ticks we already did"
- adjusting the offset over time, to compensate for storage backing up or speeding up and providing lower latency

you specify a lastProcessed timestamp as well as an offset at creation time, or at runtime.

func NewTicker

func NewTicker(last time.Time, initialOffset time.Duration, c clock.Clock) *Ticker

NewTicker returns a Ticker that ticks on second marks, or very shortly after, and never drops ticks.
