Documentation ¶
Overview ¶
Package tessera provides an implementation of a tile-based logging framework.
Index ¶
- Constants
- Variables
- func NewAppender(ctx context.Context, d Driver, opts *AppendOptions) (*Appender, func(ctx context.Context) error, LogReader, error)
- func NewCertificateTransparencyAppender(a *Appender) func(context.Context, *ctonly.Entry) IndexFuture
- type AddFn
- type Antispam
- type AppendOptions
- func (o AppendOptions) BatchMaxAge() time.Duration
- func (o AppendOptions) BatchMaxSize() uint
- func (o AppendOptions) CheckpointInterval() time.Duration
- func (o AppendOptions) CheckpointPublisher(lr LogReader, httpClient *http.Client) func(context.Context, uint64, []byte) ([]byte, error)
- func (o AppendOptions) EntriesPath() func(uint64, uint8) string
- func (o AppendOptions) GarbageCollectionInterval() time.Duration
- func (o AppendOptions) PushbackMaxOutstanding() uint
- func (o *AppendOptions) WithAntispam(inMemEntries uint, as Antispam) *AppendOptions
- func (o *AppendOptions) WithBatching(maxSize uint, maxAge time.Duration) *AppendOptions
- func (o *AppendOptions) WithCTLayout() *AppendOptions
- func (o *AppendOptions) WithCheckpointInterval(interval time.Duration) *AppendOptions
- func (o *AppendOptions) WithCheckpointSigner(s note.Signer, additionalSigners ...note.Signer) *AppendOptions
- func (o *AppendOptions) WithGarbageCollectionInterval(interval time.Duration) *AppendOptions
- func (o *AppendOptions) WithPushback(maxOutstanding uint) *AppendOptions
- func (o *AppendOptions) WithWitnesses(witnesses WitnessGroup, opts *WitnessOptions) *AppendOptions
- type Appender
- type Driver
- type Entry
- type Follower
- type Index
- type IndexFuture
- type LogReader
- type MigrationOptions
- type MigrationTarget
- type PublicationAwaiter
- type Witness
- type WitnessGroup
- type WitnessOptions
Constants ¶
const (
	// DefaultBatchMaxSize is used by storage implementations if no WithBatching option is provided when instantiating it.
	DefaultBatchMaxSize = 256
	// DefaultBatchMaxAge is used by storage implementations if no WithBatching option is provided when instantiating it.
	DefaultBatchMaxAge = 250 * time.Millisecond
	// DefaultCheckpointInterval is used by storage implementations if no WithCheckpointInterval option is provided when instantiating it.
	DefaultCheckpointInterval = 10 * time.Second
	// DefaultPushbackMaxOutstanding is used by storage implementations if no WithPushback option is provided when instantiating it.
	DefaultPushbackMaxOutstanding = 4096
	// DefaultGarbageCollectionInterval is the default value used if no WithGarbageCollectionInterval option is provided.
	DefaultGarbageCollectionInterval = time.Minute
)
Variables ¶
var ErrPushback = errors.New("pushback")
ErrPushback is returned by underlying storage implementations when a new entry cannot be accepted due to overload in the system. This could be because there are too many entries with indices assigned but which have not yet been integrated into the tree, or it could be because the antispam mechanism is not able to keep up with recently added entries.
Personalities encountering this error should apply back-pressure to the source of new entries in an appropriate manner (e.g. for HTTP services, return a 503 with a Retry-After header).
Personalities should check for this error using `errors.Is(e, ErrPushback)`.
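As an illustrative sketch only, an HTTP personality might surface pushback like this, using a PublicationAwaiter (described further down this page) to resolve the returned future. The handler shape, the Retry-After value, and the use of a NewEntry constructor are assumptions for the sketch rather than part of this API:

func addHandler(w http.ResponseWriter, r *http.Request, add tessera.AddFn, awaiter *tessera.PublicationAwaiter) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// tessera.NewEntry is assumed here as the Entry constructor; see the Entry type below.
	idx, _, err := awaiter.Await(r.Context(), add(r.Context(), tessera.NewEntry(body)))
	if errors.Is(err, tessera.ErrPushback) {
		// Apply back-pressure to the source of new entries.
		w.Header().Set("Retry-After", "10")
		http.Error(w, "overloaded, retry later", http.StatusServiceUnavailable)
		return
	}
	if err != nil {
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "%d\n", idx.Index)
}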
Functions ¶
func NewAppender ¶
func NewAppender(ctx context.Context, d Driver, opts *AppendOptions) (*Appender, func(ctx context.Context) error, LogReader, error)
NewAppender returns an Appender, which allows a personality to incrementally append new leaves to the log and to read from it.
The return values are the Appender for adding new entries, a shutdown function, a log reader, and an error if any of the objects couldn't be constructed.
Shutdown ensures that all calls to Add that have returned a value will be resolved. Any futures returned by _this appender_ which resolve to an index will be integrated and have a checkpoint that commits to them published if this returns successfully. After this returns, any calls to Add will fail.
The context passed into this function will be referenced by any background tasks that are started in the Appender. The correct process for shutting down an Appender cleanly is to first call the shutdown function that is returned, and then cancel the context. Cancelling the context without calling shutdown first may mean that some entries added by this appender aren't in the log when the process exits.
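A minimal sketch of constructing and cleanly shutting down an Appender, assuming the package is imported as tessera (the module path may differ between versions) alongside golang.org/x/mod/sumdb/note, and that a driver has been obtained from one of the storage implementations listed in the Directories section:

func runAppender(ctx context.Context, driver tessera.Driver, signer note.Signer) error {
	opts := tessera.NewAppendOptions().WithCheckpointSigner(signer)
	appender, shutdown, reader, err := tessera.NewAppender(ctx, driver, opts)
	if err != nil {
		return err
	}
	_ = appender.Add // an AddFn, used to append new entries
	_ = reader       // a LogReader, giving read-only access to the log

	// Shut down cleanly: call shutdown first, then cancel the context owning the background tasks.
	return shutdown(ctx)
}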
func NewCertificateTransparencyAppender ¶
func NewCertificateTransparencyAppender(a *Appender) func(context.Context, *ctonly.Entry) IndexFuture
NewCertificateTransparencyAppender returns a function which knows how to add a CT-specific entry type to the log.
This entry point MUST ONLY be used for CT logs participating in the CT ecosystem. It should not be used as the basis for any other/new transparency application, as this protocol: a) embodies some techniques which are not considered to be best practice (it does this to retain backwards-compatibility with RFC 6962), and b) is not compatible with the https://c2sp.org/tlog-tiles API, which we _very strongly_ encourage you to use instead.
Users of this MUST NOT call `Add` on the underlying Appender directly.
Returns a future, which resolves to the assigned index in the log, or an error.
Types ¶
type AddFn ¶
type AddFn func(ctx context.Context, entry *Entry) IndexFuture
AddFn adds a new entry to be sequenced by the storage implementation.
This function should quickly return an IndexFuture, which can be called to resolve to the index **durably** assigned to the new entry (or an error).
Implementations MUST NOT allow the future to resolve to an index value unless/until it has been durably committed by the storage.
Callers MUST NOT assume that an entry has been accepted or durably stored until they have successfully resolved the future.
Once the future resolves and returns an index, the entry can be considered to have been durably sequenced and will be preserved even in the event that the process terminates.
Once an entry is sequenced, the storage implementation MUST integrate it into the tree soon (how long this is expected to take is left unspecified, but as a guideline it should happen within single digit seconds). Until the entry is integrated and published, clients of the log will not be able to verifiably access this value.
Personalities which require blocking until the entry is integrated (e.g. because they wish to return an inclusion proof) may use the PublicationAwaiter to wrap the call to this method.
type Antispam ¶
type Antispam interface {
	// Decorator must return a function which knows how to decorate an Appender's Add function in order
	// to return an index previously assigned to an entry with the same identity hash, if one exists, or
	// delegate to the next Add function in the chain otherwise.
	Decorator() func(AddFn) AddFn

	// Follower should return a structure which will populate the anti-spam index by tailing the contents
	// of the log, using the provided function to turn entry bundles into identity hashes.
	Follower(func(entryBundle []byte) ([][]byte, error)) Follower
}
Antispam describes the contract that an antispam implementation must meet in order to be used via the WithAntispam option below.
type AppendOptions ¶
type AppendOptions struct {
// contains filtered or unexported fields
}
AppendOptions holds settings for all storage implementations.
func NewAppendOptions ¶
func NewAppendOptions() *AppendOptions
func (AppendOptions) BatchMaxAge ¶
func (o AppendOptions) BatchMaxAge() time.Duration
func (AppendOptions) BatchMaxSize ¶
func (o AppendOptions) BatchMaxSize() uint
func (AppendOptions) CheckpointInterval ¶
func (o AppendOptions) CheckpointInterval() time.Duration
func (AppendOptions) CheckpointPublisher ¶
func (o AppendOptions) CheckpointPublisher(lr LogReader, httpClient *http.Client) func(context.Context, uint64, []byte) ([]byte, error)
CheckpointPublisher returns a function which should be used to create, sign, and potentially witness a new checkpoint.
func (AppendOptions) EntriesPath ¶
func (o AppendOptions) EntriesPath() func(uint64, uint8) string
func (AppendOptions) GarbageCollectionInterval ¶
func (o AppendOptions) GarbageCollectionInterval() time.Duration
func (AppendOptions) PushbackMaxOutstanding ¶
func (o AppendOptions) PushbackMaxOutstanding() uint
func (*AppendOptions) WithAntispam ¶
func (o *AppendOptions) WithAntispam(inMemEntries uint, as Antispam) *AppendOptions
func (*AppendOptions) WithBatching ¶
func (o *AppendOptions) WithBatching(maxSize uint, maxAge time.Duration) *AppendOptions
WithBatching configures the batching behaviour of leaves being sequenced. A batch will be allowed to grow in memory until either:
- the number of entries in the batch reaches maxSize
- the first entry in the batch has reached maxAge
At this point the batch will be sent to the sequencer.
Configuring these parameters allows the personality to tune for the desired balance of sequencing latency against cost. In general, larger batches allow for lower cost of operation, while more frequent batches reduce the amount of time required for entries to be included in the log.
If this option isn't provided, storage implementations will use the DefaultBatchMaxSize and DefaultBatchMaxAge consts above.
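For example, a personality which can tolerate slightly higher sequencing latency in exchange for larger, cheaper batches might configure something like the following (the values are illustrative only):

opts := tessera.NewAppendOptions().
	WithBatching(1024, 2*time.Second). // flush at 1024 entries, or 2s after the first entry arrived
	WithPushback(8192)                 // tolerate more in-flight entries before returning ErrPushback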
func (*AppendOptions) WithCTLayout ¶
func (o *AppendOptions) WithCTLayout() *AppendOptions
WithCTLayout instructs the underlying storage to use a Static CT API compatible scheme for layout.
func (*AppendOptions) WithCheckpointInterval ¶
func (o *AppendOptions) WithCheckpointInterval(interval time.Duration) *AppendOptions
WithCheckpointInterval configures the frequency at which Tessera will attempt to create & publish a new checkpoint.
Well behaved clients of the log will only "see" newly sequenced entries once a new checkpoint is published, so it's important to set that value such that it works well with your ecosystem.
Regularly publishing new checkpoints:
- helps show that the log is "live", even if no entries are being added.
- enables clients of the log to reason about how frequently they need to have their view of the log refreshed, which in turn helps reduce work/load across the ecosystem.
Note that this option probably only makes sense for long-lived applications (e.g. HTTP servers).
If this option isn't provided, storage implementations will use the DefaultCheckpointInterval const above.
func (*AppendOptions) WithCheckpointSigner ¶
func (o *AppendOptions) WithCheckpointSigner(s note.Signer, additionalSigners ...note.Signer) *AppendOptions
WithCheckpointSigner is an option for setting the note signer and verifier to use when creating and parsing checkpoints. This option is mandatory for creating logs where the checkpoint is signed locally, e.g. in the Appender mode. This does not need to be provided where the storage will be used to mirror other logs.
A primary signer must be provided: this is the "canonical" signing identity which should be used when creating new checkpoints.
Zero or more additional signers may also be provided. This enables cases like:
- a rolling key rotation, where checkpoints are signed by both the old and new keys for some period of time,
- using different signature schemes for different audiences, etc.
When providing additional signers, their names MUST be identical to the primary signer name, and this name will be used as the checkpoint Origin line.
Checkpoints signed by these signer(s) will be standard checkpoints as defined by https://c2sp.org/tlog-checkpoint.
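As a sketch, signers can be created from note-formatted secret keys using golang.org/x/mod/sumdb/note; the key strings below are placeholders, and the rotation scenario is just one example of using an additional signer:

func rotationOpts(newKey, oldKey string) (*tessera.AppendOptions, error) {
	// Keys are note-formatted secret keys, e.g. "PRIVATE+KEY+...".
	newSigner, err := note.NewSigner(newKey)
	if err != nil {
		return nil, err
	}
	oldSigner, err := note.NewSigner(oldKey)
	if err != nil {
		return nil, err
	}
	// During a rolling rotation both identities sign each checkpoint; their names must be identical.
	return tessera.NewAppendOptions().WithCheckpointSigner(newSigner, oldSigner), nil
}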
func (*AppendOptions) WithGarbageCollectionInterval ¶
func (o *AppendOptions) WithGarbageCollectionInterval(interval time.Duration) *AppendOptions
WithGarbageCollectionInterval configures the interval between scans which remove obsolete partial tiles and entry bundles.
Setting to zero disables garbage collection.
func (*AppendOptions) WithPushback ¶
func (o *AppendOptions) WithPushback(maxOutstanding uint) *AppendOptions
WithPushback allows configuration of when the storage should start pushing back on add requests.
maxOutstanding is the number of "in-flight" add requests - i.e. the number of entries with sequence numbers assigned, but which are not yet integrated into the log.
func (*AppendOptions) WithWitnesses ¶
func (o *AppendOptions) WithWitnesses(witnesses WitnessGroup, opts *WitnessOptions) *AppendOptions
WithWitnesses configures the set of witnesses that Tessera will contact in order to counter-sign a checkpoint before publishing it. A request will be sent to every witness referenced by the group using the URLs method. The checkpoint will be accepted for publishing when a sufficient number of witnesses to Satisfy the group have responded.
If this method is not called, then the default empty WitnessGroup will be used, which contacts zero witnesses and requires zero witnesses in order to publish.
type Appender ¶
type Appender struct {
Add AddFn
}
Appender allows personalities access to the lifecycle methods associated with logs in sequencing mode. It currently exposes only the Add function, but other methods are likely to be added, such as a Shutdown method (#341).
type Driver ¶
type Driver any
Driver is the implementation-specific part of Tessera. It has no methods as it is not for public use.
type Entry ¶
type Entry struct {
// contains filtered or unexported fields
}
Entry represents an entry in a log.
func (Entry) Identity ¶
Identity returns an identity which may be used to de-duplicate entries as they are being added to the log.
func (Entry) Index ¶
Index returns the index assigned to the entry in the log, or nil if no index has been assigned.
func (Entry) LeafHash ¶
LeafHash is the Merkle leaf hash which will be used for this entry in the log. Note that in almost all cases, this should be the RFC6962 definition of a leaf hash.
func (*Entry) MarshalBundleData ¶
MarshalBundleData returns this entry's data in a format ready to be appended to an EntryBundle.
Note that MarshalBundleData _may_ be called multiple times, potentially with different values for index (e.g. if there's a failure in the storage when trying to persist the assignment), so index should not be considered final until the storage Add method has returned successfully with the durably assigned index.
type Follower ¶
type Follower interface {
	// Name returns a human readable name for this follower.
	Name() string

	// Follow should be implemented so as to visit entries in the log in order, using the provided
	// LogReader to access the entry bundles which contain them.
	//
	// Implementations should keep track of their progress such that they can pick up where they left off
	// if e.g. the binary is restarted.
	Follow(context.Context, LogReader)

	// EntriesProcessed reports the progress of the follower, returning the total number of log entries
	// successfully seen/processed.
	EntriesProcessed(context.Context) (uint64, error)
}
Follower describes the contract of an entity which tracks the contents of the local log.
Currently, this is only used by anti-spam.
type Index ¶
type Index struct {
	// Index is the location in the log to which a particular entry has been assigned.
	Index uint64
	// IsDup is true if Index represents a previously assigned index for an identical entry.
	IsDup bool
}
Index represents a durably assigned index for some entry.
type IndexFuture ¶
IndexFuture is the signature of a function which can return an assigned index or error.
Implementations of this func are likely to be "futures", or a promise to return this data at some point in the future, and as such will block when called if the data isn't yet available.
type LogReader ¶
type LogReader interface {
	// ReadCheckpoint returns the latest checkpoint available.
	// If no checkpoint is available then os.ErrNotExist should be returned.
	ReadCheckpoint(ctx context.Context) ([]byte, error)

	// ReadTile returns the raw marshalled tile at the given coordinates, if it exists.
	// The expected usage for this method is to derive the parameters from a tree size
	// that has been committed to by a checkpoint returned by this log. Whenever such a
	// tree size is used, this method will behave as per the https://c2sp.org/tlog-tiles
	// spec for the /tile/ path.
	//
	// If callers pass in parameters that are not implied by a published tree size, then
	// implementations _may_ act differently from one another, but all will act in ways
	// that are allowed by the spec. For example, if the only published tree size has been
	// for size 2, then asking for a partial tile of 1 may lead to some implementations
	// returning not found, some may return a tile with 1 leaf, and some may return a tile
	// with more leaves.
	ReadTile(ctx context.Context, level, index uint64, p uint8) ([]byte, error)

	// ReadEntryBundle returns the raw marshalled leaf bundle at the given coordinates, if
	// it exists.
	// The expected usage and corresponding behaviours are similar to ReadTile.
	ReadEntryBundle(ctx context.Context, index uint64, p uint8) ([]byte, error)

	// NextIndex returns the first as-yet unassigned index.
	//
	// In a quiescent log, this will be the same as the checkpoint size. In a log with entries actively
	// being added, this number will be higher since it will take sequenced but not-yet-integrated/not-yet-published
	// entries into account.
	NextIndex(ctx context.Context) (uint64, error)

	// IntegratedSize returns the current size of the integrated tree.
	//
	// This tree will have in place all the static resources the returned size implies, but
	// there may not yet be a checkpoint for this size signed, witnessed, or published.
	//
	// It's ONLY safe to use this value for processes internal to the operation of the log (e.g.
	// populating antispam data structures); it MUST NOT be used as a substitute for
	// reading the checkpoint when only data which has been publicly committed to by the
	// log should be used. If in doubt, use ReadCheckpoint instead.
	IntegratedSize(ctx context.Context) (uint64, error)
}
LogReader provides read-only access to the log.
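A short sketch of using the LogReader returned by NewAppender, for example to inspect the latest published checkpoint and the integrated tree size; error handling is minimal for brevity:

func inspect(ctx context.Context, reader tessera.LogReader) error {
	cp, err := reader.ReadCheckpoint(ctx) // latest published checkpoint, or os.ErrNotExist if none yet
	if err != nil {
		return err
	}
	fmt.Printf("checkpoint:\n%s\n", cp)

	size, err := reader.IntegratedSize(ctx) // internal-use only; not necessarily published yet
	if err != nil {
		return err
	}
	fmt.Printf("integrated size: %d\n", size)
	return nil
}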
type MigrationOptions ¶
type MigrationOptions struct {
// contains filtered or unexported fields
}
MigrationOptions holds migration lifecycle settings for all storage implementations.
func NewMigrationOptions ¶
func NewMigrationOptions() *MigrationOptions
func (MigrationOptions) EntriesPath ¶
func (o MigrationOptions) EntriesPath() func(uint64, uint8) string
func (*MigrationOptions) LeafHasher ¶
func (o *MigrationOptions) LeafHasher() func([]byte) ([][]byte, error)
func (*MigrationOptions) WithAntispam ¶
func (o *MigrationOptions) WithAntispam(as Antispam) *MigrationOptions
WithAntispam configures the migration target to *populate* the provided antispam storage using the data being migrated into the target tree.
Note that since the tree is being _migrated_, the resulting target tree must match the structure of the source tree and so no attempt is made to reject/deduplicate entries.
func (*MigrationOptions) WithCTLayout ¶
func (o *MigrationOptions) WithCTLayout() *MigrationOptions
WithCTLayout instructs the underlying storage to use a Static CT API compatible scheme for layout.
type MigrationTarget ¶
type MigrationTarget struct {
// contains filtered or unexported fields
}
MigrationTarget handles the process of migrating/importing a source log into a Tessera instance.
func NewMigrationTarget ¶
func NewMigrationTarget(ctx context.Context, d Driver, opts *MigrationOptions) (*MigrationTarget, error)
NewMigrationTarget returns a MigrationTarget, which allows a personality to "import" a C2SP tlog-tiles or static-ct compliant log into a Tessera instance.
func (*MigrationTarget) Migrate ¶
func (mt *MigrationTarget) Migrate(ctx context.Context, numWorkers uint, sourceSize uint64, sourceRoot []byte, getEntries client.EntryBundleFetcherFunc) error
Migrate performs the work of importing a source log into the local Tessera instance.
Any entry bundles implied by the provided source log size which are not already present in the local log will be fetched using the provided getEntries function, and stored by the underlying driver. A background process will continuously attempt to integrate these bundles into the local tree.
An error will be returned if there is an unrecoverable problem encountered during the migration process, or if, once all entries have been copied and integrated into the local tree, the local root hash does not match the provided sourceRoot.
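A sketch of driving a migration, assuming a storage-specific driver and a source fetcher are already available; construction of the fetcher (which must satisfy client.EntryBundleFetcherFunc) is omitted, and the worker count is illustrative:

func migrateLog(ctx context.Context, d tessera.Driver, sourceSize uint64, sourceRoot []byte, fetch client.EntryBundleFetcherFunc) error {
	target, err := tessera.NewMigrationTarget(ctx, d, tessera.NewMigrationOptions())
	if err != nil {
		return err
	}
	// 4 workers is an arbitrary choice for the sketch; tune to the source log and network.
	return target.Migrate(ctx, 4, sourceSize, sourceRoot, fetch)
}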
type PublicationAwaiter ¶
type PublicationAwaiter struct {
// contains filtered or unexported fields
}
PublicationAwaiter allows client threads to block until a leaf is published. This means it has been assigned a sequence number, has been integrated into the tree, and a checkpoint committing to it has been published. A single long-lived PublicationAwaiter instance should be reused for all requests in the application code as there is some overhead to each one; the core of a PublicationAwaiter is a poll loop that will fetch checkpoints whenever it has clients waiting.
The expected call pattern is:
i, cp, err := awaiter.Await(ctx, storage.Add(myLeaf))
When used this way, it requires very little code at the point of use to block until the new leaf is integrated into the tree.
func NewPublicationAwaiter ¶
func NewPublicationAwaiter(ctx context.Context, readCheckpoint func(ctx context.Context) ([]byte, error), pollPeriod time.Duration) *PublicationAwaiter
NewPublicationAwaiter provides a PublicationAwaiter that can be cancelled using the provided context. The PublicationAwaiter will poll every `pollPeriod` to fetch checkpoints using the `readCheckpoint` function.
func (*PublicationAwaiter) Await ¶
func (a *PublicationAwaiter) Await(ctx context.Context, future IndexFuture) (Index, []byte, error)
Await blocks until the IndexFuture is resolved, and this new index has been integrated into the log, i.e. the log has made a checkpoint available that commits to this new index. When this happens, Await returns the index at which the leaf has been added, and a checkpoint that commits to this index.
This operation can be aborted early by cancelling the context. In this event, or in the event that there is an error getting a valid checkpoint, an error will be returned from this method.
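Putting this together, a sketch of creating a single long-lived awaiter from the LogReader returned by NewAppender and using it to wrap Add; the one-second poll period is illustrative:

func addAndWait(ctx context.Context, appender *tessera.Appender, reader tessera.LogReader, entry *tessera.Entry) error {
	// In real code the awaiter should be created once and reused for all requests.
	awaiter := tessera.NewPublicationAwaiter(ctx, reader.ReadCheckpoint, time.Second)
	idx, cp, err := awaiter.Await(ctx, appender.Add(ctx, entry))
	if err != nil {
		return err
	}
	fmt.Printf("entry sequenced at index %d, committed to by checkpoint:\n%s\n", idx.Index, cp)
	return nil
}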
type Witness ¶
Witness represents a single witness that can be reached in order to perform a witnessing operation. The URLs() method returns the URL where it can be reached for witnessing, and the Satisfied method provides a predicate to check whether this witness has signed a checkpoint.
func NewWitness ¶
NewWitness returns a Witness given a verifier key and the root URL for where this witness can be reached.
func (Witness) Endpoints ¶
Endpoints returns the details required for updating a witness and checking the response. The returned result is a map from the URL that should be used to update the witness with a new checkpoint, to the verifier used to check that the response is well formed.
type WitnessGroup ¶
type WitnessGroup struct {
	Components []policyComponent
	N          int
}
WitnessGroup defines a group of witnesses, and a threshold of signatures that must be met for this group to be satisfied. Witnesses within a group should be fungible, e.g. all of the Armored Witness devices form a logical group, and N should be picked to represent a threshold of the quorum. For some users this will be a simple majority, but other strategies are available. N must be <= len(WitnessKeys).
func NewWitnessGroup ¶
func NewWitnessGroup(n int, children ...policyComponent) WitnessGroup
NewWitnessGroup creates a grouping of Witness or WitnessGroup with a configurable threshold of these sub-components that need to be satisfied in order for this group to be satisfied.
The threshold should only be set to less than the number of sub-components if these are considered fungible.
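For illustration, a policy requiring countersignatures from any 2 of 3 fungible witnesses might be built and applied like this; the Witness values are assumed to have been created with NewWitness (construction omitted):

func witnessedOpts(signer note.Signer, w1, w2, w3 tessera.Witness) *tessera.AppendOptions {
	group := tessera.NewWitnessGroup(2, w1, w2, w3) // satisfied once any 2 of the 3 have countersigned
	return tessera.NewAppendOptions().
		WithCheckpointSigner(signer).
		WithWitnesses(group, &tessera.WitnessOptions{
			FailOpen: false, // do not publish checkpoints for which the policy could not be met
		})
}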
func (WitnessGroup) Endpoints ¶
func (wg WitnessGroup) Endpoints() map[string]note.Verifier
Endpoints returns the details required for updating a witness and checking the response. The returned result is a map from the URL that should be used to update the witness with a new checkpoint, to the verifier used to check that the response is well formed.
func (WitnessGroup) Satisfied ¶
func (wg WitnessGroup) Satisfied(cp []byte) bool
Satisfied returns true if the checkpoint provided has sufficient signatures from the witnesses in this group to satisfy the threshold. This will return false if there are insufficient signatures, and also if the checkpoint cannot be read as a valid note. It is up to the caller to ensure that the input value represents a valid note.
The implementation of this requires every witness in the group to verify the checkpoint, which is O(N). If this is called every time a witness returns a checkpoint then this algorithm is O(N^2). To support large N, this may require some rewriting in order to maintain performance.
type WitnessOptions ¶
type WitnessOptions struct {
	// FailOpen controls whether a checkpoint, for which the witness policy was unable to be met,
	// should still be published.
	//
	// This setting is intended only for facilitating early "non-blocking" adoption of witnessing,
	// and will be disabled and/or removed in the future.
	FailOpen bool
}
WitnessOptions contains extra optional configuration for how Tessera should use/interact with a user-provided WitnessGroup policy.
Source Files ¶
Directories ¶
Path | Synopsis
---|---
api | Package api contains the tiles definitions from the [tlog-tiles API].
api/layout | Package layout contains routines for specifying the path layout of Tessera logs, which is really to say that it provides functions to calculate paths used by the [tlog-tiles API].
client | Package client provides client support for interacting with logs that use the [tlog-tiles API].
cmd |
cmd/conformance/aws | aws is a simple personality for running conformance/compliance/performance tests and showing how to use the Tessera AWS storage implementation.
cmd/conformance/gcp | gcp is a simple personality for running conformance/compliance/performance tests and showing how to use the Tessera GCP storage implementation.
cmd/conformance/mysql | mysql is a simple personality for running conformance/compliance/performance tests and showing how to use the Tessera MySQL storage implementation.
cmd/conformance/posix | posix runs a web server that allows new entries to be POSTed to a tlog-tiles log stored on a posix filesystem.
cmd/examples/posix-oneshot | posix-oneshot is a command line tool for adding entries to a local tlog-tiles log stored on a posix filesystem.
cmd/experimental/migrate/aws | aws-migrate is a command-line tool for migrating data from a tlog-tiles compliant log into a Tessera log instance hosted on AWS.
cmd/experimental/migrate/gcp | gcp-migrate is a command-line tool for migrating data from a tlog-tiles compliant log into a Tessera log instance.
cmd/experimental/migrate/mysql | mysql-migrate is a command-line tool for migrating data from a tlog-tiles compliant log into a Tessera log instance.
cmd/experimental/migrate/posix | posix-migrate is a command-line tool for migrating data from a tlog-tiles compliant log into a Tessera log instance.
cmd/experimental/mirror/internal | Package mirror provides support for the infrastructure-specific mirror tools.
cmd/experimental/mirror/posix | mirror/posix is a command-line tool for mirroring a tlog-tiles compliant log into a POSIX filesystem.
cmd/fsck | fsck is a command-line tool for checking the integrity of a tlog-tiles based log.
ctonly | Package ctonly has support for the CT Tiles API.
internal |
internal/hammer | hammer is a tool to load test a Tessera log.
internal/migrate | Package migrate contains internal implementations for migration.
internal/parse | Package parse contains internal methods for parsing data structures quickly, if unsafely.
internal/witness | Package witness contains the implementation for sending out a checkpoint to witnesses and retrieving sufficient signatures to satisfy a policy.
storage |
storage/aws | Package aws contains an AWS-based storage implementation for Tessera.
storage/aws/antispam | Package aws contains an AWS-based antispam implementation for Tessera.
storage/gcp | Package gcp contains a GCP-based storage implementation for Tessera.
storage/gcp/antispam | Package gcp contains a GCP-based antispam implementation for Tessera.
storage/internal | Package storage provides implementations and shared components for tessera storage backends.
storage/mysql | Package mysql contains a MySQL-based storage implementation for Tessera.
storage/posix/antispam | Package badger provides a Tessera persistent antispam driver based on BadgerDB (https://github.com/hypermodeinc/badger), a high-performance pure-go DB with KV support.