README

InfluxDB

An Open-Source, Distributed, Time Series Database

InfluxDB v0.9.0 is now out. Going forward, the 0.9.x series of releases will not make breaking API changes or breaking changes to the underlying data storage. However, 0.9.0 clustering should be considered an alpha release.

InfluxDB is an open source distributed time series database with no external dependencies. It's useful for recording metrics and events, and for performing analytics.

Features

  • Built-in HTTP API so you don't have to write any server side code to get up and running.
  • Data can be tagged, allowing very flexible querying.
  • SQL-like query language.
  • Clustering is supported out of the box, so that you can scale horizontally to handle your data.
  • Simple to install and manage, and fast to get data in and out.
  • It aims to answer queries in real-time. That means every data point is indexed as it comes in and is immediately available in queries, which should return in under 100ms.

Getting Started

The following directions apply only to the 0.9.0 release or building from source on master.

Building

You don't need to build the project to use it - you can use any of our pre-built packages to install InfluxDB. That's the recommended way to get it running. However, if you want to contribute to the core of InfluxDB, you'll need to build. For those adventurous enough, you can follow along on our docs.

Starting InfluxDB
  • service influxdb start if you have installed InfluxDB using an official Debian or RPM package.
  • systemctl start influxdb if you have installed InfluxDB using an official Debian or RPM package, and are running a distro with systemd. For example, Ubuntu 15 or later.
  • $GOPATH/bin/influxd if you have built InfluxDB from source.
Creating your first database
curl -G 'http://localhost:8086/query' --data-urlencode "q=CREATE DATABASE mydb"
Insert some data
curl -XPOST 'http://localhost:8086/write?db=mydb' \
-d 'cpu,host=server01,region=uswest load=42 1434055562000000000'

curl -XPOST 'http://localhost:8086/write?db=mydb' \
-d 'cpu,host=server02,region=uswest load=78 1434055562000000000'

curl -XPOST 'http://localhost:8086/write?db=mydb' \
-d 'cpu,host=server03,region=useast load=15.4 1434055562000000000'
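The payloads above use the line protocol: a measurement name, comma-separated tags, a space, the fields, and an optional nanosecond timestamp. As a sketch of how such a line can be assembled (the `linePoint` helper is illustrative, not part of the InfluxDB API):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// linePoint builds a single line-protocol entry of the form
// measurement,tag=value field=value timestamp, matching the
// payloads in the curl examples above. This helper is a sketch,
// not part of the InfluxDB codebase.
func linePoint(measurement string, tags, fields map[string]string, ts int64) string {
	var b strings.Builder
	b.WriteString(measurement)

	// Sort tag keys for a stable, canonical ordering.
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(&b, ",%s=%s", k, tags[k])
	}

	b.WriteByte(' ')
	fkeys := make([]string, 0, len(fields))
	for k := range fields {
		fkeys = append(fkeys, k)
	}
	sort.Strings(fkeys)
	for i, k := range fkeys {
		if i > 0 {
			b.WriteByte(',')
		}
		fmt.Fprintf(&b, "%s=%s", k, fields[k])
	}

	fmt.Fprintf(&b, " %d", ts)
	return b.String()
}

func main() {
	fmt.Println(linePoint("cpu",
		map[string]string{"host": "server01", "region": "uswest"},
		map[string]string{"load": "42"},
		1434055562000000000))
	// → cpu,host=server01,region=uswest load=42 1434055562000000000
}
```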
Query for the data
curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" \
--data-urlencode "q=SELECT * FROM cpu WHERE host='server01' AND time < now - 1d"
Analyze the data
curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" \
--data-urlencode "q=SELECT mean(load) FROM cpu WHERE region='uswest'"
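The same query can be issued from Go with nothing but the standard library; `url.Values` performs the encoding that `--data-urlencode` handles in the curl examples. The `queryURL` helper is illustrative; the host and port are InfluxDB's default HTTP listener.

```go
package main

import (
	"fmt"
	"net/url"
)

// queryURL assembles a /query endpoint URL like the curl examples
// above. url.Values.Encode percent-encodes the query string and
// sorts the parameters.
func queryURL(host, db, q string) string {
	v := url.Values{}
	v.Set("db", db)
	v.Set("q", q)
	return fmt.Sprintf("http://%s/query?pretty=true&%s", host, v.Encode())
}

func main() {
	u := queryURL("localhost:8086", "mydb",
		"SELECT mean(load) FROM cpu WHERE region='uswest'")
	fmt.Println(u)
	// Passing u to http.Get against a running server returns the JSON result.
}
```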

Documentation

Index

Constants

This section is empty.

Variables

var (
	// ErrFieldsRequired is returned when a point does not have any fields.
	ErrFieldsRequired = errors.New("fields required")

	// ErrFieldTypeConflict is returned when a new field already exists with a different type.
	ErrFieldTypeConflict = errors.New("field type conflict")
)
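These are sentinel errors, so callers can compare a returned error against them directly. A minimal sketch of that pattern, using a stand-in `point` type and a hypothetical `validate` helper (neither is the package's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel errors mirroring the package variables above.
var (
	ErrFieldsRequired    = errors.New("fields required")
	ErrFieldTypeConflict = errors.New("field type conflict")
)

// point is a minimal stand-in for a parsed point; the real tsdb
// types carry much more state.
type point struct {
	name   string
	fields map[string]interface{}
}

// validate rejects a point with no fields, returning the sentinel
// so callers can test for it with a simple comparison.
func validate(p point) error {
	if len(p.fields) == 0 {
		return ErrFieldsRequired
	}
	return nil
}

func main() {
	err := validate(point{name: "cpu"})
	fmt.Println(err == ErrFieldsRequired) // → true
}
```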

Functions

func ErrDatabaseNotFound

func ErrDatabaseNotFound(name string) error

func ErrMeasurementNotFound

func ErrMeasurementNotFound(name string) error

func Errorf

func Errorf(format string, a ...interface{}) (err error)

func IsClientError

func IsClientError(err error) bool

    IsClientError indicates whether an error is a known client error.

func NewStatistics

func NewStatistics(key, name string, tags map[string]string) *expvar.Map

    NewStatistics returns an expvar-based map registered under the given key. Within that map is another map in which "name" holds the measurement name, "tags" holds the tags, and values are placed under the key "values".

Types

type Balancer

type Balancer interface {
	// Next returns the next Node according to the balancing method,
	// or nil if there are no nodes available.
	Next() *meta.NodeInfo
}

    Balancer represents a load-balancing algorithm for a set of nodes.

func NewNodeBalancer

func NewNodeBalancer(nodes []meta.NodeInfo) Balancer

    NewNodeBalancer creates a shuffled, round-robin balancer, so that multiple instances return nodes in randomized order and each returned node repeats once per cycle.
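A shuffled round-robin balancer of this shape can be sketched as follows. The `nodeInfo` type here is a local stand-in for `meta.NodeInfo`, and the implementation is an illustration of the described behavior, not the package's own code:

```go
package main

import (
	"fmt"
	"math/rand"
)

// nodeInfo is a stand-in for meta.NodeInfo; only the ID matters here.
type nodeInfo struct{ ID uint64 }

// balancer mirrors the Balancer interface above over the local stand-in.
type balancer interface {
	Next() *nodeInfo
}

// nodeBalancer cycles through a shuffled copy of the node list, so
// separate instances hand out nodes in different orders while each
// instance still visits every node exactly once per cycle.
type nodeBalancer struct {
	nodes []nodeInfo
	p     int
}

func newNodeBalancer(nodes []nodeInfo) balancer {
	b := &nodeBalancer{nodes: append([]nodeInfo(nil), nodes...)}
	rand.Shuffle(len(b.nodes), func(i, j int) {
		b.nodes[i], b.nodes[j] = b.nodes[j], b.nodes[i]
	})
	return b
}

func (b *nodeBalancer) Next() *nodeInfo {
	if len(b.nodes) == 0 {
		return nil // no nodes available
	}
	n := &b.nodes[b.p]
	b.p = (b.p + 1) % len(b.nodes)
	return n
}

func main() {
	b := newNodeBalancer([]nodeInfo{{1}, {2}, {3}})
	for i := 0; i < 4; i++ {
		fmt.Println(b.Next().ID) // the 4th call wraps back to the first node
	}
}
```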

Directories

Path	Synopsis
internal	Package internal is a generated protocol buffer package.
cmd
importer
v8
influxql	Package influxql implements a parser for the InfluxDB query language.
internal	Package internal is a generated protocol buffer package.
pkg
services
copier/internal	Package internal is a generated protocol buffer package.
hh	Package hh implements a hinted handoff for writes.
udp
tests
tsdb	Package tsdb implements a durable time series database.
engine/wal	Package WAL implements a write ahead log optimized for write throughput that can be put in front of the database index.
internal	Package internal is a generated protocol buffer package.