etcd

README version 0.2.0

A highly-available key value store for shared configuration and service discovery. etcd is inspired by zookeeper and doozer, with a focus on:

  • Simple: curl'able user facing API (HTTP+JSON)
  • Secure: optional SSL client cert authentication
  • Fast: benchmarked 1000s of writes/s per instance
  • Reliable: Properly distributed using Raft

Etcd is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log.

See etcdctl for a simple command line client. Or feel free to just use curl, as in the examples below.
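
For instance, with etcdctl installed and etcd running on the default 127.0.0.1:4001 endpoint, the basic operations below look roughly like this (exact subcommands may vary between etcdctl releases):

etcdctl set /message "Hello world"
etcdctl get /message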

Contact

Getting Started

Getting etcd

The latest release and setup instructions are available at Github.

Building

You can build etcd from source:

git clone https://github.com/coreos/etcd
cd etcd
./build

This will generate a binary called ./etcd in the base directory.

NOTE: you need Go 1.1+. Please check your installation with:

go version
Running a single machine

These examples will use a single machine cluster to show you the basics of the etcd REST API. Let's start etcd:

./etcd -data-dir machine0 -name machine0

This will bring up etcd listening on port 4001 for client communication and on port 7001 for server-to-server communication. The -data-dir machine0 argument tells etcd to write machine configuration, logs and snapshots to the ./machine0/ directory. The -name machine0 argument tells the rest of the cluster that this machine is named machine0.
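
As a quick sanity check that the server is up, you can hit the version endpoint on the client port (the exact version string depends on your build):

curl -L http://127.0.0.1:4001/version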

Usage

Setting the value to a key

Let’s set the first key-value pair to the datastore. In this case the key is /message and the value is Hello world.

curl -L http://127.0.0.1:4001/v2/keys/message -X PUT -d value="Hello world"
{
    "action": "set",
    "node": {
        "createdIndex": 2,
        "key": "/message",
        "modifiedIndex": 2,
        "value": "Hello world"
    }
}

This response contains four fields. We will introduce three more fields as we try more commands.

  1. The action of the request; we set the value via a PUT request, thus the action is set.

  2. The key of the request; we set /message to Hello world, so the key field is /message. We use a file system like structure to represent the key-value pairs so each key starts with /.

  3. The current value of the key; we set the value to Hello world.

  4. Modified Index is a unique, monotonically incrementing index created for each change to etcd. Requests that change the index include set, delete, update, create and compareAndSwap. Since the get and watch commands do not change state in the store, they do not change the index. You may notice that in this example the index is 2 even though it is the first request you sent to the server. This is because there are internal commands that also change the state like adding and syncing servers.

Response Headers

etcd includes a few HTTP headers that provide global information about the etcd cluster that serviced a request:

X-Etcd-Index: 35
X-Raft-Index: 5398
X-Raft-Term: 0
  • X-Etcd-Index is the current etcd index as explained above.
  • X-Raft-Index is similar to the etcd index but is for the underlying raft protocol.
  • X-Raft-Term increases when an etcd master election happens. If this number is increasing rapidly, you may need to tune the election timeout. See the tuning section for details.
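
You can inspect these headers on any request by asking curl to include the response headers in its output, for example with the -i flag (the index and term values you see will depend on your cluster's history):

curl -i -L http://127.0.0.1:4001/v2/keys/message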
Get the value of a key

We can get the value that we just set in /message by issuing a GET request:

curl -L http://127.0.0.1:4001/v2/keys/message
{
    "action": "get",
    "node": {
        "createdIndex": 2,
        "key": "/message",
        "modifiedIndex": 2,
        "value": "Hello world"
    }
}
Changing the value of a key

You can change the value of /message from Hello world to Hello etcd with another PUT request to the key:

curl -L http://127.0.0.1:4001/v2/keys/message -XPUT -d value="Hello etcd"
{
    "action": "set",
    "node": {
        "createdIndex": 3,
        "key": "/message",
        "modifiedIndex": 3,
        "prevValue": "Hello world",
        "value": "Hello etcd"
    }
}

Notice that node.prevValue is set to the previous value of the key, Hello world. This is useful when you want to atomically set a new value for a key and retrieve its old value in one request.

Deleting a key

You can remove the /message key with a DELETE request:

curl -L http://127.0.0.1:4001/v2/keys/message -XDELETE
{
    "action": "delete",
    "node": {
        "createdIndex": 3,
        "key": "/message",
        "modifiedIndex": 4,
        "prevValue": "Hello etcd"
    }
}
Using key TTL

Keys in etcd can be set to expire after a specified number of seconds. You can do this by setting a TTL (time to live) on the key when sending a PUT request:

curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar -d ttl=5
{
    "action": "set",
    "node": {
        "createdIndex": 5,
        "expiration": "2013-12-04T12:01:21.874888581-08:00",
        "key": "/foo",
        "modifiedIndex": 5,
        "ttl": 5,
        "value": "bar"
    }
}

Note the two new fields in the response:

  1. The expiration is the time that this key will expire and be deleted.

  2. The ttl is the time to live for the key, in seconds.

NOTE: Keys can only be expired by a cluster leader so if a machine gets disconnected from the cluster, its keys will not expire until it rejoins.

Now you can try to get the key by sending a GET request:

curl -L http://127.0.0.1:4001/v2/keys/foo

If the TTL has expired, the key will have been deleted, and the request will return error code 100 (Key Not Found):

{
    "cause": "/foo",
    "errorCode": 100,
    "index": 6,
    "message": "Key Not Found"
}
Waiting for a change

We can watch for a change on a key and receive a notification by using long polling. This also works for child keys by passing recursive=true in curl.

In one terminal, we send a get request with wait=true :

curl -L http://127.0.0.1:4001/v2/keys/foo?wait=true

Now we are waiting for any changes at path /foo.

In another terminal, we set a key /foo with value bar:

curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar

The first terminal should get the notification and return with the same response as the set request.

{
    "action": "set",
    "node": {
        "createdIndex": 7,
        "key": "/foo",
        "modifiedIndex": 7,
        "value": "bar"
    }
}

However, the watch command can do more than this. Using the index, we can watch for commands that have happened in the past. This is useful for ensuring you don't miss events between watch commands.

Let's try to watch for the set command of index 7 again:

curl -L http://127.0.0.1:4001/v2/keys/foo?wait=true\&waitIndex=7

The watch command returns immediately with the same response as before.
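
As mentioned earlier, a watch can also cover a whole subtree by adding recursive=true. A minimal sketch, assuming a directory such as /foo_dir exists (one is created later in this guide):

curl -L 'http://127.0.0.1:4001/v2/keys/foo_dir?wait=true&recursive=true'

This request blocks until any key under /foo_dir changes.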

Atomically Creating In-Order Keys

Using POST on a directory, you can create keys with key names that are created in-order. This can be used in a variety of useful patterns, like implementing queues of keys that need to be processed in strict order. An example use case is the locking module, which uses it to ensure clients get fair access to a mutex.

Creating an in-order key is easy:

curl -X POST http://127.0.0.1:4001/v2/keys/queue -d value=Job1
{
    "action": "create",
    "node": {
        "createdIndex": 6,
        "key": "/queue/6",
        "modifiedIndex": 6,
        "value": "Job1"
    }
}

If you create another entry some time later it is guaranteed to have a key name that is greater than the previous key. Also note the key names use the global etcd index so the next key can be more than previous + 1.

curl -X POST http://127.0.0.1:4001/v2/keys/queue -d value=Job2
{
    "action": "create",
    "node": {
        "createdIndex": 29,
        "key": "/queue/29",
        "modifiedIndex": 29,
        "value": "Job2"
    }
}
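
To drain such a queue, a worker can simply list the directory and process the entries in ascending createdIndex order (a minimal sketch; the listing format is covered in the Listing a directory section below):

curl -L http://127.0.0.1:4001/v2/keys/queue?recursive=true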
Using a directory TTL

Like keys, directories in etcd can be set to expire after a specified number of seconds. You can do this by setting a TTL (time to live) on a directory when it is created with a PUT:

curl -L http://127.0.0.1:4001/v2/keys/dir -XPUT -d ttl=30 -d dir=true
{
    "action": "set",
    "node": {
        "createdIndex": 17,
        "dir": true,
        "expiration": "2013-12-11T10:37:33.689275857-08:00",
        "key": "/newdir",
        "modifiedIndex": 17,
        "ttl": 30
    }
}

The directory's TTL can be refreshed by making an update. You can do this by making a PUT with prevExist=true and a new TTL.

curl -L http://127.0.0.1:4001/v2/keys/dir -XPUT -d ttl=30 -d dir=true -d prevExist=true

Keys that are under this directory work as usual, but when the directory expires a watcher on a key under the directory will get an expire event:

curl -X GET http://127.0.0.1:4001/v2/keys/dir/asdf\?consistent\=true\&wait\=true
{
    "action": "expire",
    "node": {
        "createdIndex": 8,
        "key": "/dir",
        "modifiedIndex": 15
    }
}
Atomic Compare-and-Swap (CAS)

Etcd can be used as a centralized coordination service in a cluster, and CompareAndSwap is the most basic operation for building a distributed lock service.

This command will set the value of a key only if the client-provided conditions are equal to the current conditions.

The current comparable conditions are:

  1. prevValue - checks the previous value of the key.

  2. prevIndex - checks the previous index of the key.

  3. prevExist - checks existence of the key: if prevExist is true, it is an update request; if prevExist is false, it is a create request.

Here is a simple example. Let's create a key-value pair first: foo=one.

curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=one

Let's try some invalid CompareAndSwap commands first.

Trying to set this existing key with prevExist=false fails as expected:

curl -L http://127.0.0.1:4001/v2/keys/foo?prevExist=false -XPUT -d value=three

The error code explains the problem:

{
    "cause": "/foo",
    "errorCode": 105,
    "index": 39776,
    "message": "Already exists"
}

Now let's provide a prevValue parameter:

curl -L http://127.0.0.1:4001/v2/keys/foo?prevValue=two -XPUT -d value=three

This will compare the current value of the key with the prevValue we provided. If they are equal, the value of the key will change to three.

{
    "cause": "[two != one] [0 != 8]",
    "errorCode": 101,
    "index": 8,
    "message": "Test Failed"
}

which means CompareAndSwap failed.

Let's try a valid condition:

curl -L http://127.0.0.1:4001/v2/keys/foo?prevValue=one -XPUT -d value=two

The response should be

{
    "action": "compareAndSwap",
    "node": {
        "createdIndex": 8,
        "key": "/foo",
        "modifiedIndex": 9,
        "prevValue": "one",
        "value": "two"
    }
}

We successfully changed the value from "one" to "two" since we gave the correct previous value.
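
The prevIndex condition works the same way, but compares against the key's modifiedIndex rather than its value. A sketch, assuming the modifiedIndex of /foo is currently 9 as in the response above:

curl -L http://127.0.0.1:4001/v2/keys/foo?prevIndex=9 -XPUT -d value=three

If the index does not match, the request fails with the same Test Failed error shown earlier.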

Creating Directories

In most cases directories for a key are automatically created. But there are cases where you will want to create a directory or remove one.

Creating a directory is just like creating a key, except you cannot provide a value and you must add the dir=true parameter.

curl -L http://127.0.0.1:4001/v2/keys/dir -XPUT -d dir=true
{
    "action": "set",
    "node": {
        "createdIndex": 30,
        "dir": true,
        "key": "/dir",
        "modifiedIndex": 30
    }
}
Listing a directory

In etcd we can store two types of things: keys and directories. Keys store a single string value. Directories store a set of keys and/or other directories.

In this example, let's first create some keys:

We already have /foo=two so now we'll create another one called /foo_dir/foo with the value of bar:

curl -L http://127.0.0.1:4001/v2/keys/foo_dir/foo -XPUT -d value=bar
{
    "action": "set",
    "node": {
        "createdIndex": 2,
        "key": "/foo_dir/foo",
        "modifiedIndex": 2,
        "value": "bar"
    }
}

Now we can list the keys under root /:

curl -L http://127.0.0.1:4001/v2/keys/

We should see the response as an array of items:

{
    "action": "get",
    "node": {
        "dir": true,
        "key": "/",
        "nodes": [
            {
                "createdIndex": 2,
                "dir": true,
                "key": "/foo_dir",
                "modifiedIndex": 2
            }
        ]
    }
}

Here we can see that /foo_dir is a directory under /. We can also recursively get all the contents under a directory by adding recursive=true.

curl -L http://127.0.0.1:4001/v2/keys/?recursive=true
{
    "action": "get",
    "node": {
        "dir": true,
        "key": "/",
        "nodes": [
            {
                "createdIndex": 2,
                "dir": true,
                "key": "/foo_dir",
                "modifiedIndex": 2,
                "nodes": [
                    {
                        "createdIndex": 2,
                        "key": "/foo_dir/foo",
                        "modifiedIndex": 2,
                        "value": "bar"
                    }
                ]
            }
        ]
    }
}
Deleting a Directory

Now let's try deleting a directory.

You can remove an empty directory using the DELETE verb and the dir=true parameter.

curl -L -X DELETE 'http://127.0.0.1:4001/v2/keys/dir?dir=true'
{
    "action": "delete",
    "node": {
        "createdIndex": 30,
        "dir": true,
        "key": "/dir",
        "modifiedIndex": 31
    }
}

To delete a directory that holds keys, you must add recursive=true.

curl -L http://127.0.0.1:4001/v2/keys/dir?recursive=true -XDELETE
{
    "action": "delete",
    "node": {
        "createdIndex": 10,
        "dir": true,
        "key": "/dir",
        "modifiedIndex": 11
    }
}
Creating a hidden node

We can create a hidden key-value pair or directory by adding a _ prefix. The hidden item will not be listed when sending a GET request for a directory.

First we'll add a hidden key named /_message:

curl -L http://127.0.0.1:4001/v2/keys/_message -XPUT -d value="Hello hidden world"
{
    "action": "set",
    "node": {
        "createdIndex": 3,
        "key": "/_message",
        "modifiedIndex": 3,
        "value": "Hello hidden world"
    }
}

Next we'll add a regular key named /message:

curl -L http://127.0.0.1:4001/v2/keys/message -XPUT -d value="Hello world"
{
    "action": "set",
    "node": {
        "createdIndex": 4,
        "key": "/message",
        "modifiedIndex": 4,
        "value": "Hello world"
    }
}

Now let's try to get a listing of keys under the root directory, /:

curl -L http://127.0.0.1:4001/v2/keys/
{
    "action": "get",
    "node": {
        "dir": true,
        "key": "/",
        "nodes": [
            {
                "createdIndex": 2,
                "dir": true,
                "key": "/foo_dir",
                "modifiedIndex": 2
            },
            {
                "createdIndex": 4,
                "key": "/message",
                "modifiedIndex": 4,
                "value": "Hello world"
            }
        ]
    }
}

Here we see the /message key but our hidden /_message key is not returned.
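
Hidden items are only hidden from directory listings; they can still be fetched directly by their full path:

curl -L http://127.0.0.1:4001/v2/keys/_message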

Advanced Usage

Transport security with HTTPS

Etcd supports SSL/TLS and client cert authentication for client-to-server as well as server-to-server communication.

First, you need to have a CA cert clientCA.crt and signed key pair client.crt, client.key. This site has a good reference for how to generate self-signed key pairs: http://www.g-loaded.eu/2005/11/10/be-your-own-ca/

For testing you can use the certificates in the fixtures/ca directory.

Let's configure etcd to use this keypair:

./etcd -f -name machine0 -data-dir machine0 -cert-file=./fixtures/ca/server.crt -key-file=./fixtures/ca/server.key.insecure

There are a few new options we're using:

  • -f - forces a new machine configuration, even if an existing configuration is found. (WARNING: data loss!)
  • -cert-file and -key-file specify the location of the cert and key files to be used for transport layer security between the client and server.

You can now test the configuration using HTTPS:

curl --cacert ./fixtures/ca/server-chain.pem https://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar -v

You should be able to see the handshake succeed.

OSX 10.9+ Users: curl 7.30.0 on OSX 10.9+ doesn't understand certificates passed in on the command line. Instead you must import the dummy ca.crt directly into the keychain or add the -k flag to curl to ignore errors. If you want to test without the -k flag run open ./fixtures/ca/ca.crt and follow the prompts. Please remove this certificate after you are done testing! If you know of a workaround let us know.

...
SSLv3, TLS handshake, Finished (20):
...

And also the response from the etcd server:

{
    "action": "set",
    "key": "/foo",
    "modifiedIndex": 3,
    "prevValue": "bar",
    "value": "bar"
}
Authentication with HTTPS client certificates

We can also do authentication using CA certs. The clients will provide their cert to the server and the server will check whether the cert is signed by the CA and decide whether to serve the request.

./etcd -f -name machine0 -data-dir machine0 -ca-file=./fixtures/ca/ca.crt -cert-file=./fixtures/ca/server.crt -key-file=./fixtures/ca/server.key.insecure

-ca-file is the path to the CA cert.

Try the same request to this server:

curl --cacert ./fixtures/ca/server-chain.pem https://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar -v

The request should be rejected by the server.

...
routines:SSL3_READ_BYTES:sslv3 alert bad certificate
...

We need to give the CA signed cert to the server.

curl --key ./fixtures/ca/server2.key.insecure --cert ./fixtures/ca/server2.crt --cacert ./fixtures/ca/server-chain.pem -L https://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar -v

You should be able to see:

...
SSLv3, TLS handshake, CERT verify (15):
...
TLS handshake, Finished (20)

And also the response from the server:

{
    "action": "set",
    "node": {
        "createdIndex": 12,
        "key": "/foo",
        "modifiedIndex": 12,
        "prevValue": "two",
        "value": "bar"
    }
}

Clustering

Example cluster of three machines

Let's explore the use of etcd clustering. We use Raft as the underlying distributed protocol which provides consistency and persistence of the data across all of the etcd instances.

Let's start by creating 3 new etcd instances.

We use -peer-addr to specify the server port, -addr to specify the client port, and -data-dir to specify the directory to store the log and info of the machine in the cluster:

./etcd -peer-addr 127.0.0.1:7001 -addr 127.0.0.1:4001 -data-dir machines/machine1 -name machine1

Note: If you want to run etcd on an external IP address and still have access locally, you'll need to add -bind-addr 0.0.0.0 so that it will listen on both external and localhost addresses. A similar argument, -peer-bind-addr, is used to set up the listening address for the server port.
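
For example, to advertise an external address while still listening on all interfaces, the first machine might be started like this (10.0.1.10 is only a placeholder address):

./etcd -peer-addr 10.0.1.10:7001 -addr 10.0.1.10:4001 -bind-addr 0.0.0.0 -peer-bind-addr 0.0.0.0 -data-dir machines/machine1 -name machine1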

Let's join two more machines to this cluster using the -peers argument:

./etcd -peer-addr 127.0.0.1:7002 -addr 127.0.0.1:4002 -peers 127.0.0.1:7001 -data-dir machines/machine2 -name machine2
./etcd -peer-addr 127.0.0.1:7003 -addr 127.0.0.1:4003 -peers 127.0.0.1:7001 -data-dir machines/machine3 -name machine3

We can retrieve a list of machines in the cluster using the HTTP API:

curl -L http://127.0.0.1:4001/v2/machines

We should see that there are three machines in the cluster:

http://127.0.0.1:4001, http://127.0.0.1:4002, http://127.0.0.1:4003

The machine list is also available via the main key API:

curl -L http://127.0.0.1:4001/v2/keys/_etcd/machines
{
    "action": "get",
    "node": {
        "createdIndex": 1,
        "dir": true,
        "key": "/_etcd/machines",
        "modifiedIndex": 1,
        "nodes": [
            {
                "createdIndex": 1,
                "key": "/_etcd/machines/machine1",
                "modifiedIndex": 1,
                "value": "raft=http://127.0.0.1:7001&etcd=http://127.0.0.1:4001"
            },
            {
                "createdIndex": 2,
                "key": "/_etcd/machines/machine2",
                "modifiedIndex": 2,
                "value": "raft=http://127.0.0.1:7002&etcd=http://127.0.0.1:4002"
            },
            {
                "createdIndex": 3,
                "key": "/_etcd/machines/machine3",
                "modifiedIndex": 3,
                "value": "raft=http://127.0.0.1:7003&etcd=http://127.0.0.1:4003"
            }
        ]
    }
}

We can also get the current leader in the cluster:

curl -L http://127.0.0.1:4001/v2/leader

The first server we set up should still be the leader unless it has died during these commands.

http://127.0.0.1:7001

Now we can do normal SET and GET operations on keys as we explored earlier.

curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar
{
    "action": "set",
    "node": {
        "createdIndex": 4,
        "key": "/foo",
        "modifiedIndex": 4,
        "value": "bar"
    }
}
Killing Nodes in the Cluster

Now if we kill the leader of the cluster, we can get the value from one of the other two machines:

curl -L http://127.0.0.1:4002/v2/keys/foo

We can also see that a new leader has been elected:

curl -L http://127.0.0.1:4002/v2/leader
http://127.0.0.1:7002

or

http://127.0.0.1:7003
Testing Persistence

Next we'll kill all the machines to test persistence. Type CTRL-C on each terminal and then rerun the same command you used to start each machine.

Your request for the foo key will return the correct value:

curl -L http://127.0.0.1:4002/v2/keys/foo
{
    "action": "get",
    "node": {
        "createdIndex": 4,
        "key": "/foo",
        "modifiedIndex": 4,
        "value": "bar"
    }
}
Using HTTPS between servers

In the previous example we showed how to use SSL client certs for client-to-server communication. Etcd can also do internal server-to-server communication using SSL client certs. To do this just change the -*-file flags to -peer-*-file.

If you are using SSL for server-to-server communication, you must use it on all instances of etcd.
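
Following the -peer-*-file naming described above, a sketch of a server configured for server-to-server TLS, reusing the test certificates from the fixtures/ca directory, might look like:

./etcd -f -name machine0 -data-dir machine0 -peer-ca-file=./fixtures/ca/ca.crt -peer-cert-file=./fixtures/ca/server.crt -peer-key-file=./fixtures/ca/server.key.insecure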

Modules

etcd has a number of modules that are built on top of the core etcd API. These modules provide things like dashboards, locks and leader election.

Dashboard

An HTML dashboard can be found at http://127.0.0.1:4001/mod/dashboard/

Lock

The Lock module implements a fair lock that can be used when many clients want access to a single resource. A lock can be associated with a name. Names are unique, so if a client requests a lock with a name that is already queued, it will find that entry and wait until that name obtains the lock. If you lock the same name on a key from two separate curl sessions, they'll both return at the same time.

Here's the API:

Acquire a lock (with no name) for "customer1"

curl -X POST http://127.0.0.1:4001/mod/v2/lock/customer1?ttl=60

Acquire a lock for "customer1" that is associated with the name "bar"

curl -X POST http://127.0.0.1:4001/mod/v2/lock/customer1?ttl=60 -d name=bar

Renew the TTL on the "customer1" lock for index 2

curl -X PUT http://127.0.0.1:4001/mod/v2/lock/customer1?ttl=60 -d index=2

Renew the TTL on the "customer1" lock for the name "bar"

curl -X PUT http://127.0.0.1:4001/mod/v2/lock/customer1?ttl=60 -d name=bar

Retrieve the current name for the "customer1" lock.

curl http://127.0.0.1:4001/mod/v2/lock/customer1

Retrieve the current index for the "customer1" lock

curl http://127.0.0.1:4001/mod/v2/lock/customer1?field=index

Delete the "customer1" lock with the index 2

curl -X DELETE http://127.0.0.1:4001/mod/v2/lock/customer1?index=2

Delete the "customer1" lock with the name "bar"

curl -X DELETE http://127.0.0.1:4001/mod/v2/lock/customer1?name=bar
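
Putting these together, a rough sketch of the blocking behavior using two terminals; the index value 2 is only illustrative, and stands in for whatever index your lock is assigned:

# terminal 1: acquire the lock for 60 seconds
curl -X POST http://127.0.0.1:4001/mod/v2/lock/customer1?ttl=60

# terminal 2: this request hangs until the first lock is released or its TTL expires
curl -X POST http://127.0.0.1:4001/mod/v2/lock/customer1?ttl=60

# terminal 1: release the lock by its index (2 here is illustrative)
curl -X DELETE http://127.0.0.1:4001/mod/v2/lock/customer1?index=2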
Leader Election

The Leader Election module wraps the Lock module to allow clients to come to consensus on a single value. This is useful when you want one server to process at a time but allow other servers to fail over. The API is similar to the Lock module but is limited to simple string values.

Here's the API:

Attempt to set a value for the "order_processing" leader key:

curl -X POST http://127.0.0.1:4001/mod/v2/leader/order_processing?ttl=60 -d name=myserver1.foo.com

Retrieve the current value for the "order_processing" leader key:

curl http://127.0.0.1:4001/mod/v2/leader/order_processing
myserver1.foo.com

Remove a value from the "order_processing" leader key:

curl -X DELETE http://127.0.0.1:4001/mod/v2/leader/order_processing?name=myserver1.foo.com

If multiple clients attempt to set the value for a key then only one will succeed. The other clients will hang until the current value is removed because of TTL or because of a DELETE operation. Multiple clients can submit the same value and will all be notified when that value succeeds.

To update the TTL of a value simply reissue the same POST command that you used to set the value.
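
For example, a server that wants to hold leadership could keep renewing it at an interval shorter than the TTL; a rough sketch (not production-ready, and the name is just an example):

while true; do
  curl -X POST "http://127.0.0.1:4001/mod/v2/leader/order_processing?ttl=60" -d name=myserver1.foo.com
  sleep 30
done

Note that the request will block whenever another name currently holds the key, so the loop also acts as a crude wait-for-leadership.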

Contributing

See CONTRIBUTING for details on submitting patches and contacting developers via IRC and mailing lists.

Libraries and Tools

Tools

  • etcdctl - A command line client for etcd

Go libraries

Java libraries

Python libraries

Node libraries

Ruby libraries

C libraries

Clojure libraries

Erlang libraries

Chef Integration

Chef Cookbook

Projects using etcd

FAQ

What size cluster should I use?

Every command the client sends to the master is broadcast to all of the followers. The command is not committed until the majority of the cluster peers receive that command.

Because of this majority voting property, the ideal cluster should be kept small to keep speeds up and should be made up of an odd number of peers.

Odd numbers are good because if you have 8 peers the majority will be 5 and if you have 9 peers the majority will still be 5. The result is that an 8 peer cluster can tolerate 3 peer failures and a 9 peer cluster can tolerate 4 machine failures. And in the best case when all 9 peers are responding the cluster will perform at the speed of the fastest 5 machines.
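
In general, for a cluster of n peers the majority is floor(n/2) + 1, so the cluster can tolerate n - (floor(n/2) + 1) failures; a 5-peer cluster, for example, has a majority of 3 and can tolerate 2 failures.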

Why SSLv3 alert handshake failure when using SSL client auth?

Go's crypto/tls package checks the key usage of a certificate before using it. To use a certificate for client auth, you need to add clientAuth to its Extended Key Usage when creating the certificate.

Here is how to do it:

Add the following section to your openssl.cnf:

[ ssl_client ]
...
  extendedKeyUsage = clientAuth
...

When creating the cert be sure to reference it in the -extensions flag:

openssl ca -config openssl.cnf -policy policy_anything -extensions ssl_client -out certs/machine.crt -infiles machine.csr
Tuning

The default settings in etcd should work well for installations on a local network where the average network latency is low. However, when using etcd across multiple data centers or over networks with high latency you may need to tweak the heartbeat and election timeout settings.

The underlying distributed consensus protocol relies on two separate timeouts to ensure that nodes can handoff leadership if one stalls or goes offline. The first timeout is called the Heartbeat Timeout. This is the frequency with which the leader will notify followers that it is still the leader. etcd batches commands together for higher throughput so this heartbeat timeout is also a delay for how long it takes for commands to be committed. By default, etcd uses a 50ms heartbeat timeout.

The second timeout is the Election Timeout. This timeout is how long a follower node will go without hearing a heartbeat before attempting to become leader itself. By default, etcd uses a 200ms election timeout.

Adjusting these values is a trade off. Lowering the heartbeat timeout will cause individual commands to be committed faster but it will lower the overall throughput of etcd. If your etcd instances have low utilization then lowering the heartbeat timeout can improve your command response time.

The election timeout should be set based on the heartbeat timeout and your network ping time between nodes. Election timeouts should be at least 10 times your ping time so it can account for variance in your network. For example, if the ping time between your nodes is 10ms then you should have at least a 100ms election timeout.

You should also set your election timeout to at least 4 to 5 times your heartbeat timeout to account for variance in leader replication. For a heartbeat timeout of 50ms you should set your election timeout to at least 200ms - 250ms.

You can override the default values on the command line:

# Command line arguments:
$ etcd -peer-heartbeat-timeout=100 -peer-election-timeout=500

# Environment variables:
$ ETCD_PEER_HEARTBEAT_TIMEOUT=100 ETCD_PEER_ELECTION_TIMEOUT=500 etcd

Or you can set the values within the configuration file:

[peer]
heartbeat_timeout = 100
election_timeout = 500

The values are specified in milliseconds.

Project Details

Versioning

etcd uses semantic versioning. However, new minor versions may add additional features to the API.

You can get the version of etcd by issuing a request to /version:

curl -L http://127.0.0.1:4001/version

During the pre-v1.0.0 series of releases we may break the API as we fix bugs and get feedback.

License

etcd is under the Apache 2.0 license. See the LICENSE file for details.
