haproxy

package
v3.6.1+incompatible
Published: Oct 22, 2017 · License: Apache-2.0 · Imports: 17 · Imported by: 0

Documentation

Overview

Package haproxy is inspired by https://github.com/prometheus/haproxy_exporter

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Exporter

type Exporter struct {
	// contains filtered or unexported fields
}

Exporter collects HAProxy stats from the given URI and exports them using the prometheus metrics package.

func NewExporter

func NewExporter(opts PrometheusOptions) (*Exporter, error)

NewExporter returns an initialized Exporter. The BaseScrapeInterval option controls how often to scrape per 1000 entries (servers, backends, and frontends).
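
A minimal usage sketch, assuming the placeholder import path is replaced with this package's real one and that the exporter is registered with the default Prometheus registry; the ScrapeURI and option values are illustrative only:

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"

	haproxy "example.com/placeholder/haproxy" // placeholder: replace with this package's import path
)

func main() {
	// Illustrative option values; see PrometheusOptions below for field semantics.
	exporter, err := haproxy.NewExporter(haproxy.PrometheusOptions{
		ScrapeURI:          "http://localhost:1936/haproxy?stats;csv", // hypothetical stats endpoint
		Timeout:            5 * time.Second,
		BaseScrapeInterval: 5 * time.Second,
		ServerThreshold:    500,
	})
	if err != nil {
		log.Fatalf("failed to create HAProxy exporter: %v", err)
	}

	// Register the exporter so the registry drives Collect/Describe on each scrape.
	prometheus.MustRegister(exporter)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Registering the exporter is what wires the Collect and Describe methods below into the /metrics scrape path.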

func NewPrometheusCollector

func NewPrometheusCollector(opts PrometheusOptions) (*Exporter, error)

NewPrometheusCollector starts collectors for prometheus metrics from the configured HAProxy stats endpoint. It starts background goroutines. Use the default prometheus handler to access these metrics.
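
A sketch of the fire-and-forget pattern this constructor suggests, assuming (as the note about the default handler implies) that the started collectors feed the default registry; the import path and option values are placeholders:

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus/promhttp"

	haproxy "example.com/placeholder/haproxy" // placeholder import path
)

func main() {
	// Start the background collection goroutines; the returned *Exporter can be
	// ignored when the metrics are only served through the default handler.
	if _, err := haproxy.NewPrometheusCollector(haproxy.PrometheusOptions{
		Timeout:            5 * time.Second, // illustrative values
		BaseScrapeInterval: 5 * time.Second,
	}); err != nil {
		log.Fatalf("failed to start HAProxy collector: %v", err)
	}

	// Serve the default registry, i.e. "the default prometheus handler".
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}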

func (*Exporter) Collect

func (e *Exporter) Collect(ch chan<- prometheus.Metric)

Collect fetches the stats from the configured HAProxy location and delivers them as Prometheus metrics. It implements prometheus.Collector. It will not collect metrics more often than the configured scrape interval.
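
Collect is normally driven by a registry rather than called directly; a minimal sketch with a dedicated registry (placeholder import path, empty options relying on the unix-socket default for ScrapeURI):

package main

import (
	"fmt"
	"log"

	"github.com/prometheus/client_golang/prometheus"

	haproxy "example.com/placeholder/haproxy" // placeholder import path
)

func main() {
	exporter, err := haproxy.NewExporter(haproxy.PrometheusOptions{})
	if err != nil {
		log.Fatal(err)
	}

	reg := prometheus.NewRegistry()
	reg.MustRegister(exporter)

	// Gather invokes exporter.Collect under the hood; per the note above, Collect
	// will not re-scrape HAProxy more often than the configured interval, so
	// repeated Gather calls within that window do not trigger additional scrapes.
	families, err := reg.Gather()
	if err != nil {
		log.Fatal(err)
	}
	for _, mf := range families {
		fmt.Println(mf.GetName())
	}
}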

func (*Exporter) CollectNow

func (e *Exporter) CollectNow()

CollectNow performs a collection immediately. The next Collect() will report the metrics.
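
One hedged use of CollectNow is forcing a fresh scrape on an external event; the SIGHUP trigger and the refreshOnSignal helper below are purely illustrative, not part of this package:

package haproxyexample

import (
	"os"
	"os/signal"
	"syscall"

	haproxy "example.com/placeholder/haproxy" // placeholder import path
)

// refreshOnSignal is a hypothetical helper: it forces an immediate collection
// whenever SIGHUP arrives (for example after an HAProxy reload), so the next
// Collect reports freshly gathered stats.
func refreshOnSignal(e *haproxy.Exporter) {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGHUP)
	go func() {
		for range ch {
			e.CollectNow()
		}
	}()
}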

func (*Exporter) Describe

func (e *Exporter) Describe(ch chan<- *prometheus.Desc)

Describe describes all the metrics ever exported by the HAProxy exporter. It implements prometheus.Collector.

type PrometheusOptions

type PrometheusOptions struct {
	// ScrapeURI is the URI from which HAProxy stats are read. Defaults to the unix domain socket.
	ScrapeURI string
	// PidFile is optional; if set, process metrics are collected for the process it identifies.
	PidFile string
	// Timeout is the maximum interval to wait for stats.
	Timeout time.Duration
	// BaseScrapeInterval is the minimum time to wait between stat calls per 1000 servers.
	BaseScrapeInterval time.Duration
	// ServerThreshold is the maximum number of servers that can be reported before switching
	// to only using backend metrics. This reduces metrics load when there is a very large set
	// of endpoints.
	ServerThreshold int
	// ExportedMetrics is a list of HAProxy stats to export.
	ExportedMetrics []int
}
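
A sketch of a fully populated options value, with each field commented per the descriptions above; every concrete value, including the indices in ExportedMetrics, is a placeholder rather than a recommendation:

package haproxyexample

import (
	"time"

	haproxy "example.com/placeholder/haproxy" // placeholder import path
)

// exampleOptions shows one way the fields might be filled in; the values are
// illustrative only, not defaults of this package.
func exampleOptions() haproxy.PrometheusOptions {
	return haproxy.PrometheusOptions{
		// Read stats over HTTP instead of the default unix domain socket.
		ScrapeURI: "http://localhost:1936/haproxy?stats;csv", // hypothetical endpoint
		// Collect process metrics for the HAProxy process named in this pid file.
		PidFile: "/var/run/haproxy.pid",
		// Give up on a stats call after five seconds.
		Timeout: 5 * time.Second,
		// Wait at least five seconds between stat calls per 1000 servers.
		BaseScrapeInterval: 5 * time.Second,
		// Switch to backend-only metrics once more than 500 servers are reported.
		ServerThreshold: 500,
		// Export only a subset of HAProxy stat columns (indices are placeholders).
		ExportedMetrics: []int{2, 4, 5, 7},
	}
}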
