badgerauth

v1.65.0
Published: Oct 24, 2023 License: AGPL-3.0 Imports: 37 Imported by: 0

README

Auth Database

Package badgerauth implements an eventually consistent auth database built on top of BadgerDB.

The implementation is based on the design from the New Auth Database blueprint.

The implementation differs slightly from what's described in the blueprint. Specifically, the ability to invalidate and delete records through the KV interface has been removed, which drastically simplified the implementation: there's no need to handle special cases around invalidation/deletion, no need to handle out-of-sync nodes (there won't be any), and no need to prune the replication log, since nothing is deleted through the KV interface.

Usage

With badgerauth as the auth database backend, authservice acts as a standalone database node designed to be run in a cluster (though it can also run alone).

Configuration

To run authservice with the badgerauth backend, --kv-backend='badger://' must be specified; several additional parameters tune the cluster/storage engine.

Several parameters need to be specified to run a production-grade cluster; others are optional or have otherwise sensible default values. Required parameters are:

  • node.id
  • node.path
  • node.certs-dir
  • node.join

node.address is a commonly changed parameter, and there are also backup-related parameters that might be considered required depending on individual needs.

Storage engine configuration
| Parameter | Description | Default value |
| --- | --- | --- |
| node.conflict-backoff.delay | The active time between retries, typically not set | 0s |
| node.conflict-backoff.max | The maximum total time to allow retries | 5m |
| node.conflict-backoff.min | The minimum time between retries | 100ms |
| node.first-start | Whether to allow starting with empty storage | dev: true / release: false |
| node.id | Unique identifier for the node | |
| node.path | A path where to store data (WARNING: data will be stored in RAM only if empty) | |

node.conflict-backoff.* settings control backing off when retrying write transactions. The current underlying storage engine uses concurrent ACID transactions, so transactions need retrying in the rare case of a conflict (see https://dgraph.io/blog/post/badger-txn/).

node.first-start is needed when starting nodes in production for the first time and shouldn't ever be used afterwards. It guards against dangerous restarts of nodes with empty storage attached, which often signals that the underlying storage has stopped being reliable.

Backups configuration
| Parameter | Default value |
| --- | --- |
| node.backup.access-key-id | |
| node.backup.bucket | |
| node.backup.enabled | false |
| node.backup.endpoint | |
| node.backup.interval | 1h |
| node.backup.prefix | |
| node.backup.secret-access-key | |
Cluster configuration
| Parameter | Description | Default value |
| --- | --- | --- |
| node.address | address that the node listens on | :20004 |
| node.certs-dir | directory for certificates for mutual authentication | |
| node.join | comma-delimited list of cluster peers (addresses) | |
| node.replication-interval | how often to replicate | 30s |
| node.replication-limit | maximum entries returned in replication response | 1000 |

Note that it's not possible to start the cluster without mutual authentication. Currently, the only supported transport for replication is TLS (except for unit tests where it's possible to start an insecure cluster). For details, see the Cluster security configuration section.

Cluster security configuration

Disclaimer: the following documentation section is heavily inspired by https://www.cockroachlabs.com/docs/v22.1/create-security-certificates-openssl (badgerauth's cluster security configuration is similar).

To secure the cluster's inter-node communication, a Certificate Authority (CA) certificate that has been used to sign the nodes' keys and certificates must be provided.

To create these certificates and keys, use openssl commands, or use a custom CA (for example, a public CA or organizational CA).

Subcommands
| Subcommand | Usage |
| --- | --- |
| openssl genrsa | Create an RSA private key. |
| openssl req | Create the CA certificate and CSRs (certificate signing requests). |
| openssl ca | Create node certificates using the CSRs. |
Configuration files

To use openssl req and openssl ca subcommands, the following configuration files are needed:

| Filename pattern | File usage |
| --- | --- |
| ca.cnf | CA configuration file |
| node.cnf | Server configuration file |
Certificate directory

Access to a local copy of the CA certificate and key is needed to create node certificates using the OpenSSL commands. It's recommended to create all certificates (node and CA certificates) and node keys in one place and then distribute them appropriately. Store the CA key somewhere safe and keep a backup; if it's lost, it will not be possible to add new nodes to the cluster.

Required keys and certificates

Use the openssl genrsa and openssl req subcommands to create all certificates and node keys in a single directory, with the files named as follows:

| Filename pattern | File usage |
| --- | --- |
| ca.crt | CA certificate |
| node.crt | Server certificate |
| node.key | Key for server certificate |

Note the following:

  • The CA key should not be uploaded to the nodes, so it should be created in a separate directory.
  • Keys (files ending in .key) must pass the file permission requirements check on macOS, Linux, and other UNIX-like systems.
Example of creating security certificates using OpenSSL
Step 1. Create the CA key and certificate pair
  1. Create two directories:
$ mkdir certs safe-directory
  • certs: create a CA certificate and all node certificates and keys in this directory, and then upload the relevant files to the nodes.
  • safe-directory: create a CA key in this directory and then reference the key when generating node certificates. After that, keep the key safe and secret; do not upload it to nodes.
  2. Create the ca.cnf file and copy the following configuration into it.

The CA certificate expiration period can be set using the default_days parameter. Using the default value of 365 days is recommended.

# OpenSSL CA configuration file
[ ca ]
default_ca = CA_default

[ CA_default ]
default_days = 365
database = index.txt
serial = serial.txt
default_md = sha256
copy_extensions = copy
unique_subject = no

# Used to create the CA certificate.
[ req ]
prompt=no
distinguished_name = distinguished_name
x509_extensions = extensions

[ distinguished_name ]
organizationName = Storj

[ extensions ]
keyUsage = critical,digitalSignature,nonRepudiation,keyEncipherment,keyCertSign
basicConstraints = critical,CA:true,pathlen:1

[ signing_policy ]
organizationName = supplied

# Used to sign node certificates.
[ signing_node_req ]
keyUsage = critical,digitalSignature,keyEncipherment
extendedKeyUsage = serverAuth,clientAuth

# Used to sign client certificates.
[ signing_client_req ]
keyUsage = critical,digitalSignature,keyEncipherment
extendedKeyUsage = clientAuth
  3. Create the CA key using the openssl genrsa command:
$ openssl genrsa -out safe-directory/ca.key 2048
$ chmod 400 safe-directory/ca.key
  4. Create the CA certificate using the openssl req command:
$ openssl req -new -x509 -config ca.cnf -key safe-directory/ca.key -out certs/ca.crt -days 365 -batch
  5. Reset the database and index files:
$ rm -f index.txt serial.txt
$ touch index.txt
$ echo '01' > serial.txt
Step 2. Create the certificate and key pairs for nodes

In the following steps, replace the placeholder text in the code with the actual username and node address.

  1. Create the node.cnf file for the first node and copy the following configuration into it:
# OpenSSL node configuration file
[ req ]
prompt=no
distinguished_name = distinguished_name
req_extensions = extensions

[ distinguished_name ]
organizationName = Storj

[ extensions ]
subjectAltName = critical,DNS:<node-hostname>,DNS:<node-domain>,IP:<IP Address>
  2. Create the key for the first node using the openssl genrsa command:
$ openssl genrsa -out certs/node.key 2048
$ chmod 400 certs/node.key
  3. Create the CSR for the first node using the openssl req command:
$ openssl req -new -config node.cnf -key certs/node.key -out node.csr -batch
  4. Sign the node CSR to create the node certificate for the first node using the openssl ca command.
$ openssl ca -config ca.cnf -keyfile safe-directory/ca.key -cert certs/ca.crt -policy signing_policy -extensions signing_node_req -out certs/node.crt -outdir certs/ -in node.csr -batch
  5. Verify the values in the Subject Alternative Name field in the certificate:
$ openssl x509 -in certs/node.crt -text | grep "X509v3 Subject Alternative Name" -A 1

Sample output:

X509v3 Subject Alternative Name: critical
            DNS:localhost, DNS:node.example.io, IP Address:127.0.0.1
Summary

For each node in the deployment, repeat Step 2 and upload the CA certificate and node key and certificate to the node.

After uploading all the keys and certificates to the corresponding nodes, remove the .pem files in the certs directory. These files are unnecessary duplicates of the required .crt files.

Starting a production-grade cluster

To start a production-grade cluster, start each node in the cluster with the below minimum recommended configuration:

$ authservice run \
    --endpoint <Gateway's address> \
    --kv-backend badger:// \
    --node.id <Unique ID> \
    --node.path <A path where to store data> \
    --node.address <IP address with the port number> \
    --node.certs-dir <Directory with created certificates> \
    --node.join <List of other nodes' addresses>

Example:

$ authservice run \
    --endpoint https://gateway.storjshare.io \
    --kv-backend badger:// \
    --node.id eu1-1 \
    --node.path /where/to/store/data \
    --node.address 1.2.3.4:20004 \
    --node.certs-dir certs \
    --node.join 5.6.7.8:20004,9.10.11.12:20004

Operations

Monitoring cluster's health
Metrics

The auth database reports metrics/events prefixed with as_badgerauth_.

Logs

The information most helpful for troubleshooting is reported at the DEBUG level. However, INFO and above should be sufficient for a good overview of whether everything works correctly.

Production Owner tools

See authservice-admin for information on using a command-line tool to retrieve or update an authservice record.

Migration from PostgreSQL/CockroachDB (sqlauth backend)

It's possible to migrate from sqlauth to badgerauth using the migration backend. To do this, --kv-backend='badger://' and --node-migration.source-sql-auth-kv-backend=cockroach://... must be specified; badgerauth-specific parameters still apply.

In this mode, nodes write to both badgerauth and sqlauth on Put and read from badgerauth with a fallback to sqlauth on Get. With the migration backend specified, the --migration flag used with the run command will transfer all records that sqlauth currently holds to badgerauth before starting.

Configuration
| Parameter | Description | Default value |
| --- | --- | --- |
| node-migration.migration-select-size | Page size while performing the migration | 1000 |
| node-migration.source-sql-auth-kv-backend | Source key/value store backend URL (must be sqlauth) | |

Consider increasing page size to improve the transfer speed of records from sqlauth to badgerauth.

In a recommended setup, start two initially separate clusters:

  1. Current cluster where all nodes were restarted with migration backend but without --migration flag.
  2. Single-node, a non-production-facing cluster where the node was started with migration backend and --migration flag.

After the single-node cluster completes the migration of old records, make the nodes from the first cluster join the node from the second cluster so they can fetch records that existed in PostgreSQL/CockroachDB before the migration to the new backend started.

Ensure the first cluster no longer fetches from the second cluster and that the as_badgerauthmigration_destination_miss event rate is 0 before fully switching to badgerauth only.

Example

Assume 3 nodes in the first cluster and an additional node.

graph TD;
    Node_1-->Node_2;
    Node_1-->Node_3;
    Node_2-->Node_1;
    Node_2-->Node_3;
    Node_3-->Node_1;
    Node_3-->Node_2;

    Node_1-->Additional_Node;
    Node_2-->Additional_Node;
    Node_3-->Additional_Node;

Each of Node_{1,2,3} starts with a command similar to:

$ authservice run \
    --endpoint ... \
    --kv-backend badger:// \
    --node.id ... \
    --node.path /where/to/store/data \
    --node.address Node_...:20004 \
    --node.certs-dir certs \
    --node-migration.source-sql-auth-kv-backend=cockroach://... \
    --node.join Node_...:20004,Node_...:20004
    # Later switch to:
    # --node.join Node_...:20004,Node_...:20004,Additional_Node:20004

Start the additional node with a command similar to:

$ authservice run \
    --endpoint ... \
    --kv-backend badger:// \
    --migration \
    --node.id ... \
    --node.path /where/to/store/data \
    --node.address Additional_Node:20004 \
    --node.certs-dir certs \
    --node-migration.source-sql-auth-kv-backend=cockroach://...

Development

Schema
NodeID

Contains the local node ID, to identify it in a cluster.

Key

Name: node_id

Value

Type: [32]byte / NodeID

Clock

Contains the logical time of a node.

Key

Name: clock_value/NodeID

| Name | Type |
| --- | --- |
| NodeID | [32]byte / NodeID |
Value

Type: uint64 / Clock. Big-endian byte order.

ReplicationLogEntry

A log corresponding to every auth record, used for node replication. Note that the key contains the replication log entry itself.

Key

Name: replication_log/NodeID/Clock/KeyHash/State

| Name | Type |
| --- | --- |
| NodeID | [32]byte / NodeID |
| Clock | uint64 / Clock. Big-endian byte order. |
| KeyHash | [32]byte / KeyHash |
| State | int32 / Record_State. Big-endian byte order. |
Value

Type: nil
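A sketch of how such a key could be assembled. The literal '/' separators and the exact field serialization here are assumptions based on the documented layout; ReplicationLogEntry.Bytes in the API reference below is the authoritative implementation.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// replicationLogKey sketches the documented key layout
// replication_log/NodeID/Clock/KeyHash/State. This is illustrative,
// not the package's actual code.
func replicationLogKey(nodeID, keyHash [32]byte, clock uint64, state int32) []byte {
	var clockBytes [8]byte
	binary.BigEndian.PutUint64(clockBytes[:], clock) // Clock: big-endian uint64
	var stateBytes [4]byte
	binary.BigEndian.PutUint32(stateBytes[:], uint32(state)) // State: big-endian int32
	return bytes.Join([][]byte{
		[]byte("replication_log"),
		nodeID[:],
		clockBytes[:],
		keyHash[:],
		stateBytes[:],
	}, []byte("/"))
}

func main() {
	var id, keyHash [32]byte
	key := replicationLogKey(id, keyHash, 7, 1)
	fmt.Println(len(key)) // 95: 15 + 32 + 8 + 32 + 4 field bytes plus 4 separators
}
```

Because the value is nil, iterating keys with the replication_log prefix is enough to reconstruct the log during replication.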

Record

The auth record containing encrypted access grant, and metadata fields.

Key

Name: KeyHash

| Name | Type |
| --- | --- |
| KeyHash | [32]byte / KeyHash |
Value

Type: Record

| Name | Type |
| --- | --- |
| CreatedAtUnix | int64 |
| Public | bool |
| SatelliteAddress | string |
| MacaroonHead | []byte |
| ExpiresAtUnix | int64 |
| EncryptedSecretKey | []byte |
| EncryptedAccessGrant | []byte |
| InvalidationReason | string |
| InvalidatedAtUnix | int64 |
| State | int32 / Record_State |
Regenerating protobufs

To install dependencies, execute

make badgerauth-install-dependencies

The dependency installation target supports Linux (using apt or apt-get) and macOS (using Homebrew).

To regenerate protobufs, change the directory to pkg/auth/badgerauth/pb and run:

go generate
Testing

storj.io/gateway-mt/pkg/auth/badgerauth/badgerauthtest package contains most of the tooling necessary for running exhaustive unit tests.

TODO(artur): it might be beneficial to have a binary that starts a local cluster for running integration tests and manual testing.

Documentation

Index

Constants

This section is empty.

Variables

var (
	// ProtoError is a class of proto errors.
	ProtoError = errs.Class("proto")

	// ErrKeyAlreadyExists is an error returned when putting a key that exists.
	ErrKeyAlreadyExists = Error.New("key already exists")

	// ErrDBStartedWithDifferentNodeID is returned when a database is started with a different node id.
	ErrDBStartedWithDifferentNodeID = errs.Class("wrong node id")
)
var (

	// Error is the default error class for the badgerauth package.
	Error = errs.Class("badgerauth")

	// DialError is an error class for dial failures.
	DialError = errs.Class("dial")
)
var ClockError = errs.Class("clock")

ClockError is a class of clock errors.

var NodeIDError = errs.Class("node ID")

NodeIDError is a class of id errors.

var ReplicationLogError = errs.Class("replication log")

ReplicationLogError is a class of replication log errors.

var TLSError = errs.Class("tls")

TLSError is an error class for tls setup problems.

Functions

func IgnoreDialFailures added in v1.36.0

func IgnoreDialFailures(err error) error

IgnoreDialFailures returns nil if err contains DialError (and err otherwise).

func InsertRecord added in v1.29.0

func InsertRecord(log *zap.Logger, txn *badger.Txn, nodeID NodeID, keyHash authdb.KeyHash, record *pb.Record) error

InsertRecord inserts a record, adding a corresponding replication log entry consistent with the record's state.

InsertRecord can be used to insert on any node for any node.

Types

type Admin added in v1.31.0

type Admin struct {
	// contains filtered or unexported fields
}

Admin represents a service that allows managing database records directly.

func NewAdmin added in v1.31.0

func NewAdmin(db *DB) *Admin

NewAdmin creates a new instance of Admin.

func (*Admin) DeleteRecord added in v1.31.0

func (admin *Admin) DeleteRecord(ctx context.Context, req *pb.DeleteRecordRequest) (_ *pb.DeleteRecordResponse, err error)

DeleteRecord deletes a database record.

func (*Admin) InvalidateRecord added in v1.31.0

func (admin *Admin) InvalidateRecord(ctx context.Context, req *pb.InvalidateRecordRequest) (_ *pb.InvalidateRecordResponse, err error)

InvalidateRecord invalidates a record.

func (*Admin) UnpublishRecord added in v1.31.0

func (admin *Admin) UnpublishRecord(ctx context.Context, req *pb.UnpublishRecordRequest) (_ *pb.UnpublishRecordResponse, err error)

UnpublishRecord unpublishes a record.

type Backup added in v1.31.0

type Backup struct {
	Client    Client
	SyncCycle *sync2.Cycle
	// contains filtered or unexported fields
}

Backup represents a backup job that backs up the database.

func NewBackup added in v1.31.0

func NewBackup(log *zap.Logger, db *DB, client Client) *Backup

NewBackup returns a new Backup. Note that BadgerDB does not support opening multiple connections to the same database, so we must use the same DB connection as normal KV operations.

func (*Backup) RunOnce added in v1.31.0

func (b *Backup) RunOnce(ctx context.Context) (err error)

RunOnce performs a full backup of the database.

Each backup is split into separate prefix parts. For example:

mybucket/myprefix/mynodeid/2022/04/13/2022-04-13T03:42:07Z

type BackupConfig added in v1.31.0

type BackupConfig struct {
	Enabled         bool          `user:"true" help:"enable backups" default:"false"`
	Endpoint        string        `user:"true" help:"backup bucket endpoint hostname, e.g. s3.amazonaws.com"`
	Bucket          string        `user:"true" help:"bucket name where database backups are stored"`
	Prefix          string        `user:"true" help:"database backup object path prefix"`
	Interval        time.Duration `user:"true" help:"how often full backups are run" default:"1h"`
	AccessKeyID     string        `user:"true" help:"access key for backup bucket"`
	SecretAccessKey string        `user:"true" help:"secret key for backup bucket"`
}

BackupConfig provides options for creating a backup.

type Client added in v1.31.0

type Client interface {
	PutObject(ctx context.Context, bucketName, objectName string, reader io.Reader, objectSize int64, opts minio.PutObjectOptions) (info minio.UploadInfo, err error)
}

Client is the interface for the object store.

type Clock added in v1.26.0

type Clock uint64

Clock represents logical time on a single DB.

func ReadClock added in v1.26.0

func ReadClock(txn *badger.Txn, id NodeID) (Clock, error)

ReadClock reads the current clock value for the node.

func (Clock) Bytes added in v1.26.0

func (clock Clock) Bytes() []byte

Bytes returns a slice of bytes.

func (*Clock) SetBytes added in v1.26.0

func (clock *Clock) SetBytes(v []byte) error

SetBytes parses []byte for the clock value.

type Config added in v1.26.0

type Config struct {
	ID NodeID `user:"true" help:"unique identifier for the node" default:""`

	FirstStart bool `user:"true" help:"allow start with empty storage" devDefault:"true" releaseDefault:"false"`
	// Path is where to store data. Empty means in memory.
	Path string `user:"true" help:"path where to store data" default:""`

	Address string   `user:"true" help:"address that the node listens on" default:":20004"`
	Join    []string `user:"true" help:"comma delimited list of cluster peers" default:""`
	// CertsDir is a path to a directory for certificates for mutual
	// authentication. If empty, no certificates will be loaded, and it will be
	// impossible to connect the node to any cluster.
	CertsDir string `user:"true" help:"directory for certificates for mutual authentication"`

	// ReplicationInterval defines how often to connect and request status from
	// other nodes.
	ReplicationInterval time.Duration `user:"true" help:"how often to replicate" default:"30s" devDefault:"5s"`
	// ReplicationLimit is per node ID limit of replication response entries to
	// return.
	ReplicationLimit int `user:"true" help:"maximum entries returned in replication response" default:"1000"`
	// ConflictBackoff configures retries for conflicting transactions that may
	// occur when Node's underlying storage engine is under heavy load.
	ConflictBackoff backoff.ExponentialBackoff

	// InsecureDisableTLS allows disabling tls for testing.
	InsecureDisableTLS bool `internal:"true"`

	Backup BackupConfig
}

Config provides options for creating a Node.

Keep this in sync with badgerauthtest.setConfigDefaults.

type DB added in v1.27.0

type DB struct {
	// contains filtered or unexported fields
}

DB represents authentication storage based on BadgerDB. This implements the data-storage layer for a distributed Node.

func OpenDB added in v1.28.0

func OpenDB(log *zap.Logger, config Config) (*DB, error)

OpenDB opens the underlying storage engine for badgerauth node.

func (*DB) Close added in v1.27.0

func (db *DB) Close() error

Close closes the underlying storage engine (BadgerDB).

func (*DB) Get added in v1.27.0

func (db *DB) Get(ctx context.Context, keyHash authdb.KeyHash) (record *authdb.Record, err error)

Get retrieves the record from the storage engine. It returns nil if the key does not exist. If the record is invalid, the error contains why.

func (*DB) HealthCheck added in v1.65.0

func (db *DB) HealthCheck(ctx context.Context) (err error)

HealthCheck ensures the underlying storage engine works and returns an error otherwise.

func (*DB) Put added in v1.27.0

func (db *DB) Put(ctx context.Context, keyHash authdb.KeyHash, record *authdb.Record) error

Put is like PutAtTime, but it uses current time to store the record.

func (*DB) PutAtTime added in v1.27.0

func (db *DB) PutAtTime(ctx context.Context, keyHash authdb.KeyHash, record *authdb.Record, now time.Time) (err error)

PutAtTime stores the record at a specific time. It is an error if the key already exists.

func (*DB) UnderlyingDB added in v1.27.0

func (db *DB) UnderlyingDB() *badger.DB

UnderlyingDB returns underlying BadgerDB. This method is most useful in tests.

type Node

type Node struct {
	Backup *Backup

	SyncCycle sync2.Cycle
	// contains filtered or unexported fields
}

Node is distributed auth storage node that wraps DB with machinery to replicate records from and to other nodes.

func New added in v1.26.0

func New(log *zap.Logger, config Config) (_ *Node, err error)

New constructs new Node.

func (*Node) Address added in v1.28.0

func (node *Node) Address() string

Address returns the server address.

func (*Node) Close

func (node *Node) Close() error

Close releases underlying resources.

func (*Node) Get

func (node *Node) Get(ctx context.Context, keyHash authdb.KeyHash) (record *authdb.Record, err error)

Get returns a record from the database. If the record isn't found, we consult peer nodes to see if they have the record. This covers the case of a user putting a record onto one authservice node, but then retrieving it from another before the record has been fully synced.
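The read path described above amounts to a local-first lookup with a peer fallback. The sketch below uses plain maps to stand in for the local DB and the peers' Peek RPCs (hypothetical helper names; the real Node.Get internals may differ):

```go
package main

import "fmt"

// getWithFallback illustrates the read path: serve from local storage when
// possible, otherwise consult each peer until one has the record. Maps stand
// in for the local DB and peer Peek calls; this is not the package's code.
func getWithFallback(local map[string]string, peers []map[string]string, key string) (string, bool) {
	if v, ok := local[key]; ok {
		return v, true // fast path: record already replicated locally
	}
	for _, peer := range peers {
		if v, ok := peer[key]; ok {
			return v, true // record written to another node, not yet synced here
		}
	}
	return "", false
}

func main() {
	local := map[string]string{"key1": "record1"}
	peers := []map[string]string{{"key2": "record2"}}
	fmt.Println(getWithFallback(local, peers, "key2")) // found on a peer
}
```

This is what makes eventual consistency workable for clients: a record put on one node is readable from another even before replication catches up.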

func (*Node) HealthCheck added in v1.65.0

func (node *Node) HealthCheck(ctx context.Context) error

HealthCheck proxies DB's HealthCheck.

func (*Node) ID added in v1.28.0

func (node *Node) ID() NodeID

ID returns the configured node id.

func (*Node) Peek added in v1.36.0

func (node *Node) Peek(ctx context.Context, req *pb.PeekRequest) (_ *pb.PeekResponse, err error)

Peek allows fetching a specific record from the node.

func (*Node) Ping

func (node *Node) Ping(ctx context.Context, req *pb.PingRequest) (*pb.PingResponse, error)

Ping allows to fetch information about the node.

func (*Node) Put

func (node *Node) Put(ctx context.Context, keyHash authdb.KeyHash, record *authdb.Record) error

Put proxies DB's Put.

func (*Node) PutAtTime

func (node *Node) PutAtTime(ctx context.Context, keyHash authdb.KeyHash, record *authdb.Record, now time.Time) error

PutAtTime proxies DB's PutAtTime.

func (*Node) Replicate added in v1.28.0

func (node *Node) Replicate(ctx context.Context, req *pb.ReplicationRequest) (_ *pb.ReplicationResponse, err error)

Replicate implements a node's ability to ship its replication log/records to another node. It responds with RPC errors only.

func (*Node) Run added in v1.28.0

func (node *Node) Run(ctx context.Context) error

Run runs the server and the associated servers.

func (*Node) TestingPeers added in v1.28.0

func (node *Node) TestingPeers(ctx context.Context) []*Peer

TestingPeers allows to access the peers for testing.

func (*Node) TestingSetJoin added in v1.28.0

func (node *Node) TestingSetJoin(addresses []string)

TestingSetJoin sets peer nodes to join to.

func (*Node) UnderlyingDB added in v1.28.0

func (node *Node) UnderlyingDB() *DB

UnderlyingDB returns underlying DB. This method is most useful in tests.

type NodeID added in v1.26.0

type NodeID [32]byte

NodeID is a unique id for BadgerDB node.

func (NodeID) Bytes added in v1.26.0

func (id NodeID) Bytes() []byte

Bytes returns the bytes for nodeID.

func (*NodeID) Set added in v1.28.0

func (id *NodeID) Set(v string) error

Set implements flag.Value interface.

func (*NodeID) SetBytes added in v1.26.0

func (id *NodeID) SetBytes(v []byte) error

SetBytes sets the node id from bytes.

func (NodeID) String added in v1.28.0

func (id NodeID) String() string

String returns NodeID as readable text.

func (NodeID) Type added in v1.28.0

func (id NodeID) Type() string

Type implements pflag.Value.

type Peer added in v1.28.0

type Peer struct {
	// contains filtered or unexported fields
}

Peer represents a node peer replication logic.

func NewPeer added in v1.28.0

func NewPeer(node *Node, address string) *Peer

NewPeer returns a replication peer.

func (*Peer) Peek added in v1.36.0

func (peer *Peer) Peek(ctx context.Context, keyHash authdb.KeyHash) (record *pb.Record, err error)

Peek returns a record from the peer.

func (*Peer) Status added in v1.28.0

func (peer *Peer) Status() PeerStatus

Status returns a snapshot of the peer status.

func (*Peer) Sync added in v1.28.0

func (peer *Peer) Sync(ctx context.Context) (err error)

Sync runs the synchronization step once.

type PeerStatus added in v1.28.0

type PeerStatus struct {
	Address string
	NodeID  NodeID

	LastUpdated time.Time
	LastWasUp   bool
	LastError   error

	Clock Clock
}

PeerStatus contains last known peer status.

type ReplicationLogEntry added in v1.27.0

type ReplicationLogEntry struct {
	ID      NodeID
	Clock   Clock
	KeyHash authdb.KeyHash
	State   pb.Record_State
}

ReplicationLogEntry represents replication log entry.

Key layout reference: https://github.com/storj/gateway-mt/blob/3ef75f412a50118d9d910e1b372e126e6ffb7503/docs/blueprints/new-auth-database.md#replication-log-entry

func (ReplicationLogEntry) Bytes added in v1.27.0

func (e ReplicationLogEntry) Bytes() []byte

Bytes returns a slice of bytes.

func (*ReplicationLogEntry) SetBytes added in v1.27.0

func (e *ReplicationLogEntry) SetBytes(entry []byte) error

SetBytes parses entry as ReplicationLogEntry and sets entry's value to result.

func (ReplicationLogEntry) ToBadgerEntry added in v1.27.0

func (e ReplicationLogEntry) ToBadgerEntry() *badger.Entry

ToBadgerEntry constructs new *badger.Entry from e.

type TLSOptions added in v1.28.0

type TLSOptions struct {
	// CertsDir defines a folder for loading the certificates.
	//
	// The filenames follow this convention:
	//  - node.crt, node.key: define certificate and private key
	//  -             ca.crt: defines certificate authority for other peers
	CertsDir string
}

TLSOptions contains configuration for tls.

func (TLSOptions) Load added in v1.28.0

func (opts TLSOptions) Load() (*tls.Config, error)

Load loads the certificates and configuration specified by the options.

Directories

| Path | Synopsis |
| --- | --- |
| badgerauthtest | Package badgerauthtest is roughly inspired by the design of the storj/satellite/metabase/metabasetest package. |
| pb | Package pb includes protobufs for the badgerauth package. |
