Trillian Tessera

Trillian Tessera is a Go library for building tile-based transparency logs (tlogs). It is the logical successor to the approach Trillian v1 takes in building and operating logs.

The implementation and its APIs bake in current best practices based on the lessons learned over the past decade of building and operating transparency logs in production environments and at scale.

Tessera was introduced at the Transparency.Dev summit in October 2024. Watch Introducing Trillian Tessera for all the details, but here's a summary of the high-level goals:

  • tlog-tiles API and storage
  • Support for both cloud and on-premises infrastructure
  • Make it easy to build and deploy new transparency logs on supported infrastructure
    • Library instead of microservice architecture
    • No additional services to manage
    • Lower TCO for operators compared with Trillian v1
  • Fast sequencing and integration of entries
  • Optional functionality which can be enabled for those ecosystems/logs which need it (only pay the cost for what you need):
    • "Best-effort" de-duplication of entries
    • Synchronous integration
  • Broadly similar write-throughput and write-availability, and potentially far higher read-throughput and read-availability compared to Trillian v1 (dependent on underlying infrastructure)
  • Enable building of arbitrary log personalities, including support for the peculiarities of a Static CT API compliant log.

The main non-goal is to support transparency logs using anything other than the tlog-tiles API. While it is possible to deploy a custom personality in front of Tessera that adapts the tlog-tiles API into any other API, this strategy will lose a lot of the read scaling that Tessera is designed for.

Status

Tessera is under active development, with an alpha 2 release available now. Users of GCP, AWS, MySQL, and POSIX are welcome to try the relevant Getting Started guide.

Roadmap

Beta is planned for Q2 2025, with production readiness around mid-2025.

  1. Drivers for GCP, AWS, MySQL, and POSIX
  2. tlog-tiles API support
  3. Example code and terraform scripts for easy onboarding
  4. Stable API ⚠️
  5. Data migration between releases
  6. Data migration between drivers
  7. Witness support
  8. Monitoring and metrics
  9. Production ready
  10. Mirrored logs (#576)
  11. Preordered logs (#575)
  12. Trillian v1 to Tessera migration (#577)
  N. Fancy features (to be expanded upon later)

The current API is unlikely to change in any significant way; however, the API is subject to minor breaking changes until we tag 1.0.

What’s happening to Trillian v1?

Trillian v1 is still in use in production environments by multiple organisations in multiple ecosystems, and is likely to remain so for the mid-term.

New ecosystems, or existing ecosystems looking to evolve, should strongly consider planning a migration to Tessera and adopting the patterns it encourages.

[!Tip] To achieve the full benefits of Tessera, logs must use the tlog-tiles API.

Concepts

This section introduces concepts and terms that will be used throughout the user guide.

Sequencing

When data is added to a log, it is first stored in memory for some period (this can be controlled via the batching options). If the process dies in this state, the entry will be lost.

Once a batch of entries is processed by the sequencer, the new data will transition from a volatile state to one where it is durably assigned an index. If the process dies in this state, the entry will be safe, though it will not be available through the read API of the log until the leaf has been Integrated. Once an index number has been issued to a leaf, no other data will ever be issued the same index number. All index numbers are contiguous and start from 0.

[!IMPORTANT] Within a batch, there is no guarantee about which order index numbers will be assigned. The only way to ensure that sequential calls to Add are given sequential indices is by blocking until a sequencing batch is completed. This can be achieved by configuring a batch size of 1, though this will make sequencing expensive!
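
As a concrete illustration, here is a minimal sketch of tuning this behaviour via the WithBatching option on the AppendOptions described later in this guide (the values are illustrative, and `signer` is the note.Signer covered in Setup below):

	// Sketch: let a batch grow to 512 entries in memory, or flush it once
	// its oldest entry is 500ms old, whichever happens first.
	opts := tessera.NewAppendOptions().
		WithCheckpointSigner(signer).
		WithBatching(512, 500*time.Millisecond)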

Integration

Integration is a background process that runs once a Tessera lifecycle object has been created. This process takes sequenced entries and merges them into the log. Once this process has been completed, a new entry will:

  • Be available via the read API at the index that was returned from sequencing
  • Have Merkle tree hashes that commit to this data being included in the tree

Publishing

Publishing is a background process that creates a new Checkpoint for the latest tree. This background process runs periodically (configurable via WithCheckpointInterval) and performs the following steps:

  1. Create a new Checkpoint and sign it with the signer provided by WithCheckpointSigner
  2. Contact witnesses and collect enough countersignatures to satisfy any witness policy configured by WithWitnesses
  3. If the witness policy is satisfied, make this new Checkpoint publicly available

An entry is considered published once it is committed to by a published Checkpoint (i.e. a published Checkpoint's size is larger than the entry's assigned index). Due to the nature of append-only logs, all Checkpoints issued after this point will also commit to inclusion of this entry.
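
For example, an ecosystem that wants new entries to become visible quickly might shorten the publishing interval from the default; a sketch, where the 5-second value is illustrative and `signer` is the note.Signer covered in Setup below:

	// Sketch: attempt to create & publish a new Checkpoint every 5 seconds,
	// rather than waiting for the DefaultCheckpointInterval.
	opts := tessera.NewAppendOptions().
		WithCheckpointSigner(signer).
		WithCheckpointInterval(5 * time.Second)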

Usage

Getting Started

The best place to start is the codelab. This will walk you through setting up your first log, writing some entries to it via HTTP, and inspecting the contents.

Take a look at the example personalities in the /cmd/ directory:

  • posix: example of operating a log backed by a local filesystem
    • This example runs an HTTP web server that takes arbitrary data and adds it to a file-based log.
  • mysql: example of operating a log that uses MySQL
    • This example is easiest deployed via docker compose, which allows for easy setup and teardown.
  • gcp: example of operating a log running in GCP.
  • aws: example of operating a log running on AWS.
  • posix-oneshot: example of a command line tool to add entries to a log stored on the local filesystem
    • This example is not a long-lived process; running the command integrates entries into the log which lives only as files.

The main.go files for each of these example personalities aim to strike a balance between simplicity and demonstrating best practices for using Tessera. Please raise issues against the repo, or chat to us in Slack, if you have ideas for making the examples more accessible!

Writing Personalities
Introduction

Tessera is a library written in Go. It is designed to efficiently serve logs that allow read access via the tlog-tiles API. The code you write that calls Tessera is referred to as a personality, because it tailors the generic library to your ecosystem.

Before starting to write your own personality, it is strongly recommended that you have familiarized yourself with the provided personalities referenced in Getting Started. When writing your Tessera personality, the first decision you need to make is which of the native drivers to use:

The easiest drivers to operate and to scale are the cloud implementations: GCP and AWS. These are the recommended choice for the majority of users running in production.

If you aren't using a cloud provider, then your options are MySQL and POSIX:

  • POSIX is the simplest to get started with as it needs little in the way of extra infrastructure, and if you already serve static files as part of your business/project this could be a good fit.
  • Alternatively, if you are used to operating user-facing applications backed by an RDBMS, then MySQL could be a natural fit.

To get a sense of the rough performance you can expect from the different backends, take a look at docs/performance.md.

Setup

Once you've picked a storage driver, you can start writing your personality! You'll need to import the Tessera library:

# This imports the library at main.
# Replace @main with the latest release tag to get a stable release.
go get github.com/transparency-dev/trillian-tessera@main
Constructing the Appender

Import the main tessera package, and the driver for the storage backend you want to use:

	tessera "github.com/transparency-dev/trillian-tessera"

	// Choose one!
	"github.com/transparency-dev/trillian-tessera/storage/posix"
	// "github.com/transparency-dev/trillian-tessera/storage/aws"
	// "github.com/transparency-dev/trillian-tessera/storage/gcp"
	// "github.com/transparency-dev/trillian-tessera/storage/mysql"

Now you'll need to instantiate the lifecycle object for the native driver you are using.

By far the most common way to operate logs is in an append-only manner, and the rest of this guide will discuss this mode. For lifecycle states other than Appender mode, take a look at Lifecycles below.

Here's an example of creating an Appender for the POSIX driver:

	driver, _ := posix.New(ctx, "/tmp/mylog")
	signer := createSigner()

	appender, shutdown, reader, err := tessera.NewAppender(
		ctx, driver, tessera.NewAppendOptions().WithCheckpointSigner(signer))

See the documentation for each driver implementation to understand the parameters that each takes.
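
The createSigner helper above is not part of Tessera; here is one possible sketch using the golang.org/x/mod/sumdb/note package (in a real deployment you would load a persisted private key rather than generating a fresh one on each run):

	import (
		"crypto/rand"

		"golang.org/x/mod/sumdb/note"
	)

	// createSigner returns a note.Signer suitable for WithCheckpointSigner.
	// NOTE: generating a new key on every start is for illustration only.
	func createSigner() note.Signer {
		skey, _, err := note.GenerateKey(rand.Reader, "example.com/log")
		if err != nil {
			panic(err)
		}
		s, err := note.NewSigner(skey)
		if err != nil {
			panic(err)
		}
		return s
	}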

The final part of configuring Tessera is to set up the additional features that you want to use. These optional libraries can be used to provide common log behaviours. See Features after reading the rest of this section for more details.

Writing to the Log

Now you should have a Tessera instance configured for your environment, with the correct features set up. Time for the fun part: writing to the log!

	appender, shutdown, reader, err := tessera.NewAppender(
		ctx, driver, tessera.NewAppendOptions().WithCheckpointSigner(signer))
	if err != nil {
		panic(err)
	}

	// Add returns a future; calling it blocks until the entry has been sequenced.
	idx, err := appender.Add(ctx, tessera.NewEntry(data))()

The AppendOptions allow Tessera behaviour to be tuned. Take a look at the methods named With* on the AppendOptions struct in the root package, e.g. WithBatching, to see the available options and how they should be used.

Writing to the log follows this flow:

  1. Call Add with a new entry created with the data to be added as a leaf in the log.
    • This method returns a future of the form func() (Index, error).
  2. Call this future function, which will block until the data passed into Add has been sequenced
    • On success, an index number is durably assigned and returned
    • On failure, the error is returned

Once an index has been returned, the new data is sequenced, but not necessarily integrated into the log.
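
The future can also be held and resolved later, allowing the personality to do other work while sequencing happens. A sketch, including the pushback handling described in the ErrPushback documentation further down (the enclosing function and error handling are illustrative):

	future := appender.Add(ctx, tessera.NewEntry(data))

	// ... the personality can do other work here ...

	idx, err := future() // Blocks until the entry is durably sequenced.
	if err != nil {
		if errors.Is(err, tessera.ErrPushback) {
			// The log is overloaded: apply back-pressure to the entry
			// source, e.g. return HTTP 503 with a Retry-After header.
		}
		return err
	}
	fmt.Printf("entry durably sequenced at index %d\n", idx.Index)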

As discussed above in Integration, sequenced entries will be asynchronously integrated into the log and be made available via the read API. Some personalities may need to block until this has been performed, e.g. because they will provide the requester with an inclusion proof, which requires integration. Such personalities are recommended to use Synchronous Publication to perform this blocking.

Reading from the Log

Data that has been written to the log needs to be made available for clients and verifiers. Tessera makes the log readable via the tlog-tiles API. In the case of AWS and GCP, the data to be served is written to object storage and served directly by the cloud provider. The log operator only needs to ensure that these object storage instances are publicly readable, and set up a URL to point to them.

In the case of MySQL and POSIX, the log operator will need to take more steps to make the data available. POSIX writes out the files exactly as per the API spec, so the log operator can serve these via an HTTP File Server.

MySQL is the odd one out in that it requires personality code to handle read traffic. See the example personalities written for MySQL to see how this Go web server should be configured.

Features

Antispam

In some scenarios, particularly where logs are publicly writable such as Certificate Transparency, it's possible for logs to be asked, whether maliciously or accidentally, to add entries they already contain. Generally, this is undesirable, and so Tessera provides an optional mechanism to try to detect and ignore duplicate entries on a best-effort basis.

Logs that do not allow public submissions directly to the log may want to operate without deduplication, instead relying on the personality to never generate duplicates. This can allow for significantly cheaper operation and faster write throughput.

The antispam mechanism consists of two layers which sit in front of the underlying Add implementation of the storage:

  1. The first layer is an InMemory cache which keeps track of a configurable number of recently-added entries. If a recently-seen entry is spotted by the same application instance, this layer will short-circuit the addition of the duplicate, and instead return an index previously assigned to this entry. Otherwise the requested entry is passed on to the second layer.
  2. The second layer is a Persistent index of a hash of the entry to its assigned position in the log. Similarly to the first layer, this second layer will look for a record in its stored data which matches the incoming entry, and if such a record exists, it will short-circuit the addition of the duplicate entry and return a previous version's assigned position in the log.

These layers are configured via the WithAntispam methods on AppendOptions and MigrationOptions.
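
As a sketch, enabling only the in-memory layer might look like the following (passing nil for the persistent layer is assumed here to disable it; see the storage-specific antispam packages, e.g. the BadgerDB-based one under storage/posix/antispam, for persistent implementations):

	// Sketch: keep track of the 256 most recently-added entries in memory,
	// with no persistent antispam layer.
	opts := tessera.NewAppendOptions().
		WithCheckpointSigner(signer).
		WithAntispam(256, nil)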

[!Tip] Persistent antispam is fairly expensive in terms of storage-compute, so should only be used where it is actually necessary.

[!Note] Tessera's antispam mechanism is best effort; there is no guarantee that all duplicate entries will be suppressed. This is a trade-off; fully-atomic "strong" deduplication is extremely expensive in terms of throughput and compute costs, and would limit Tessera to using only transactional storage backends.

Witnessing

Logs are required to be append-only data structures. This property can be verified by witnesses, and signatures from witnesses can be provided in the published checkpoint to increase confidence for users of the log.

Personalities can configure Tessera with options that specify witnesses compatible with the C2SP Witness Protocol. Configuring the witnesses is done by creating a top-level WitnessGroup that contains either sub WitnessGroups or Witnesses. Each Witness is configured with a URL at which the witness can be requested to make witnessing operations via the C2SP Witness Protocol, and a Verifier for the key that it must sign with. WitnessGroups are configured with their sub-components, and a number of these components that must be satisfied in order for the group to be satisfied.

These primitives allow arbitrarily complex witness policies to be specified.

Once a top-level WitnessGroup is configured, it is passed in to the Appender lifecycle options using AppendOptions#WithWitnesses. If this method is not called then no witnessing will be configured.
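
A minimal sketch of wiring this together, using the NewWitness and NewWitnessGroup constructors documented below (the verifier keys and URLs are placeholders, and error handling is elided for brevity):

	// Sketch: a policy requiring a countersignature from 1 of the 2
	// configured witnesses.
	u1, _ := url.Parse("https://witness1.example.com/")
	w1, _ := tessera.NewWitness("witness1+abcd0123+...", u1)
	u2, _ := url.Parse("https://witness2.example.com/")
	w2, _ := tessera.NewWitness("witness2+ef456789+...", u2)
	group := tessera.NewWitnessGroup(1, w1, w2)

	opts := tessera.NewAppendOptions().
		WithCheckpointSigner(signer).
		WithWitnesses(group)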

[!Note] If the policy cannot be satisfied then no checkpoint will be published. It is up to the log operator to ensure that a satisfiable policy is configured, and that the requested publishing rate is acceptable to the configured witnesses.

Synchronous Publication

Synchronous Publication is provided by tessera.PublicationAwaiter. This allows applications built with Tessera to block until leaves passed via calls to Add() are committed to via a public checkpoint.

[!Tip] This is useful if e.g. your application needs to return an inclusion proof in response to a request to add an entry to the log.
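
A sketch of how a personality might use this, reusing the reader returned by tessera.NewAppender (the one-second poll period is illustrative):

	awaiter := tessera.NewPublicationAwaiter(ctx, reader.ReadCheckpoint, time.Second)

	// Await blocks until a published checkpoint commits to the new entry,
	// then returns the assigned index and that checkpoint.
	idx, checkpoint, err := awaiter.Await(ctx, appender.Add(ctx, tessera.NewEntry(data)))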

Lifecycles

Appender

This is the most common lifecycle mode. Appender allows the application to add leaves, which will be assigned positions in the log contiguous to any entries the log has already committed to.

This mode is instantiated via tessera.NewAppender, and configured using the AppendOptions returned by tessera.NewAppendOptions.

This is described above in Constructing the Appender.

See more details in the Lifecycle Design: Appender.

Migration Target

This mode is used to migrate a log from one location to another.

This is instantiated via tessera.NewMigrationTarget, and configured using the MigrationOptions returned by tessera.NewMigrationOptions.

[!Tip] This mode enables the migration of logs between different Tessera storage backends, e.g. you may wish to switch serving infrastructure because:

  • You're migrating between/to/from cloud providers for some reason.
  • You're "freezing" your log, and want to move it to a cheap read-only location.

You can also use this mode to migrate a tlog-tiles compliant log into Tessera.

Binaries for migrating into each of the storage implementations can be found at ./cmd/experimental/migrate/. These binaries take the URL of a remote tiled log, and copy it into the target location. They ought to be sufficient for most use-cases. Users who need to write their own migration binary should use the provided binaries as a reference.

See more details in the Lifecycle Design: Migration.

Freezing a Log

Freezing a log prevents new writes to the log, but still allows read access. We recommend that operators allow all pending sequenced entries to be integrated, and all integrated entries to be published via a Checkpoint before proceeding. Once all pending entries are published, the log is now quiescent, as described in Lifecycle Design: Quiescent.

To ensure all pending entries are published, keep an instance object for the current lifecycle state in a running process, but disable writes to this at the personality level. For example, a personality that takes HTTP requests from the Internet and calls Appender.Add should keep a process running with an Appender, but disable any code paths that lead to Add being invoked (e.g. by flipping a flag that changes this behaviour). The instantiated Appender allows its background processes to keep running, ensuring all entries are sequenced, integrated, and published.

Determining when this is complete can be done by inspecting the databases or via the OpenTelemetry metrics which instrument this code; once the next-available sequence number and published checkpoint size have converged and remain stable, the log is in a quiescent state.
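
A rough sketch of such a convergence check using the LogReader API documented below (this assumes the checkpoint follows the c2sp.org/tlog-checkpoint format, in which the second line is the decimal tree size; a real check should also observe stability over time):

	// quiescent reports whether every assigned index is committed to by the
	// latest published checkpoint.
	// Assumed imports: context, fmt, strconv, strings.
	func quiescent(ctx context.Context, r tessera.LogReader) (bool, error) {
		next, err := r.NextIndex(ctx)
		if err != nil {
			return false, err
		}
		cp, err := r.ReadCheckpoint(ctx)
		if err != nil {
			return false, err
		}
		// A tlog-checkpoint's second line is the decimal tree size.
		lines := strings.SplitN(string(cp), "\n", 3)
		if len(lines) < 2 {
			return false, fmt.Errorf("malformed checkpoint")
		}
		size, err := strconv.ParseUint(lines[1], 10, 64)
		if err != nil {
			return false, err
		}
		return size == next, nil
	}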

A quiescent log using GCP, AWS, or POSIX that is now permanently read-only can be made cheaper to operate. Such a log no longer needs any binaries running Tessera code. Any databases created for this log (i.e. the sequencing tables, or antispam) can be deleted. The read path can be served directly from the storage buckets (for GCP, AWS) or via a standard HTTP file server (for POSIX).

A log using MySQL must continue to run a personality in order to serve the read path, and thus cannot benefit from the same degree of cost savings when frozen.

Deleting a Log

Deleting a log is generally performed after Freezing a Log.

Deleting a GCP, AWS, or POSIX log that has already been frozen just requires deleting the storage bucket or files from disk.

Deleting a MySQL log can be done by turning down the personality binaries, and then deleting the database.

Sharding a Log

A common way to deploy logs is to run multiple logs in parallel, each of which accepts a distinct subset of entries. For example, CT shards logs temporally, based on the expiry date of the certificate.

Tessera currently has no special support for sharding logs. The recommended way to instantiate a new shard of a log is simply to create a new log as described above. This requires the full stack to be instantiated, including:

  • any DB instances
  • a personality binary for each log

#589 tracks adding more elegant support for sharing resources across sharded logs. Please upvote that issue if you would like us to prioritize it.

Contributing

See CONTRIBUTING.md for details.

License

This repo is licensed under the Apache 2.0 license, see LICENSE for details.

Contact

Acknowledgements

Tessera builds upon the hard work, experience, and lessons from many many folks involved in transparency ecosystems over the years.

Documentation

Overview

Package tessera provides an implementation of a tile-based logging framework.

Index

Constants

const (
	// DefaultBatchMaxSize is used by storage implementations if no WithBatching option is provided when instantiating it.
	DefaultBatchMaxSize = 256
	// DefaultBatchMaxAge is used by storage implementations if no WithBatching option is provided when instantiating it.
	DefaultBatchMaxAge = 250 * time.Millisecond
	// DefaultCheckpointInterval is used by storage implementations if no WithCheckpointInterval option is provided when instantiating it.
	DefaultCheckpointInterval = 10 * time.Second
	// DefaultPushbackMaxOutstanding is used by storage implementations if no WithPushback option is provided when instantiating it.
	DefaultPushbackMaxOutstanding = 4096
)

Variables

var ErrNoMoreEntries = errors.New("no more entries")

ErrNoMoreEntries is a sentinel error returned by StreamEntries when no more entries will be returned by calls to the next function.

var ErrPushback = errors.New("pushback")

ErrPushback is returned by underlying storage implementations when a new entry cannot be accepted due to overload in the system. This could be because there are too many entries with indices assigned but which have not yet been integrated into the tree, or it could be because the antispam mechanism is not able to keep up with recently added entries.

Personalities encountering this error should apply back-pressure to the source of new entries in an appropriate manner (e.g. for HTTP services, return a 503 with a Retry-After header).

Personalities should check for this error using `errors.Is(e, ErrPushback)`.
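
For example, an HTTP-based personality might translate pushback into a 503 like this (a sketch; the handler wiring is illustrative and `appender` is assumed to be a *tessera.Appender in scope):

	// Assumed imports: errors, fmt, io, net/http.
	func addHandler(w http.ResponseWriter, r *http.Request) {
		data, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		idx, err := appender.Add(r.Context(), tessera.NewEntry(data))()
		if errors.Is(err, tessera.ErrPushback) {
			// Apply back-pressure to the source of new entries.
			w.Header().Set("Retry-After", "5")
			http.Error(w, "pushback, retry later", http.StatusServiceUnavailable)
			return
		}
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintf(w, "%d", idx.Index)
	}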

Functions

func NewAppender added in v0.1.1

func NewAppender(ctx context.Context, d Driver, opts *AppendOptions) (*Appender, func(ctx context.Context) error, LogReader, error)

NewAppender returns an Appender, which allows a personality to incrementally append new leaves to the log and to read from it.

The return values are the Appender for adding new entries, a shutdown function, a log reader, and an error if any of the objects couldn't be constructed.

Shutdown ensures that all calls to Add that have returned a value will be resolved. Any futures returned by _this appender_ which resolve to an index will be integrated and have a checkpoint that commits to them published if this returns successfully. After this returns, any calls to Add will fail.

The context passed into this function will be referenced by any background tasks that are started in the Appender. The correct process for shutting down an Appender cleanly is to first call the shutdown function that is returned, and then cancel the context. Cancelling the context without calling shutdown first may mean that some entries added by this appender aren't in the log when the process exits.

func NewCertificateTransparencyAppender added in v0.1.1

func NewCertificateTransparencyAppender(a *Appender) func(context.Context, *ctonly.Entry) IndexFuture

NewCertificateTransparencyAppender returns a function which knows how to add a CT-specific entry type to the log.

This entry point MUST ONLY be used for CT logs participating in the CT ecosystem. It should not be used as the basis for any other/new transparency application as this protocol: a) embodies some techniques which are not considered to be best practice (it does this to retain backwards compatibility with RFC6962) b) is not compatible with the https://c2sp.org/tlog-tiles API which we _very strongly_ encourage you to use instead.

Users of this MUST NOT call `Add` on the underlying Appender directly.

Returns a future, which resolves to the assigned index in the log, or an error.

Types

type AddFn added in v0.1.1

type AddFn func(ctx context.Context, entry *Entry) IndexFuture

Add adds a new entry to be sequenced. This method quickly returns an IndexFuture, which will return the index assigned to the new leaf. Until this index is obtained from the future, the leaf is not durably added to the log, and terminating the process may lead to this leaf being lost. Once the future resolves and returns an index, the leaf is durably sequenced and will be preserved even if the process terminates.

Once a leaf is sequenced, it will be integrated into the tree soon (generally single digit seconds). Until it is integrated and published, clients of the log will not be able to verifiably access this value. Personalities that require blocking until the leaf is integrated can use the PublicationAwaiter to wrap the call to this method.

type Antispam added in v0.1.1

type Antispam interface {
	// Decorator must return a function which knows how to decorate an Appender's Add function in order
	// to return an index previously assigned to an entry with the same identity hash, if one exists, or
	// delegate to the next Add function in the chain otherwise.
	Decorator() func(AddFn) AddFn
	// Follower should return a structure which will populate the anti-spam index by tailing the contents
	// of the log, using the provided function to turn entry bundles into identity hashes.
	Follower(func(entryBundle []byte) ([][]byte, error)) Follower
}

Antispam describes the contract that an antispam implementation must meet in order to be used via the WithAntispam option below.

type AppendOptions added in v0.1.1

type AppendOptions struct {
	// contains filtered or unexported fields
}

AppendOptions holds settings for all storage implementations.

func NewAppendOptions added in v0.1.1

func NewAppendOptions() *AppendOptions

func (AppendOptions) BatchMaxAge added in v0.1.1

func (o AppendOptions) BatchMaxAge() time.Duration

func (AppendOptions) BatchMaxSize added in v0.1.1

func (o AppendOptions) BatchMaxSize() uint

func (AppendOptions) CheckpointInterval added in v0.1.1

func (o AppendOptions) CheckpointInterval() time.Duration

func (AppendOptions) CheckpointPublisher added in v0.1.2

func (o AppendOptions) CheckpointPublisher(lr LogReader, httpClient *http.Client) func(context.Context, uint64, []byte) ([]byte, error)

CheckpointPublisher returns a function which should be used to create, sign, and potentially witness a new checkpoint.

func (AppendOptions) EntriesPath added in v0.1.1

func (o AppendOptions) EntriesPath() func(uint64, uint8) string

func (AppendOptions) PushbackMaxOutstanding added in v0.1.1

func (o AppendOptions) PushbackMaxOutstanding() uint

func (*AppendOptions) WithAntispam added in v0.1.1

func (o *AppendOptions) WithAntispam(inMemEntries uint, as Antispam) *AppendOptions

func (*AppendOptions) WithBatching added in v0.1.1

func (o *AppendOptions) WithBatching(maxSize uint, maxAge time.Duration) *AppendOptions

WithBatching configures the batching behaviour of leaves being sequenced. A batch will be allowed to grow in memory until either:

  • the number of entries in the batch reaches maxSize
  • the first entry in the batch has reached maxAge

At this point the batch will be sent to the sequencer.

Configuring these parameters allows the personality to tune for the desired balance of sequencing latency with cost. In general, larger batches allow for lower cost of operation, while more frequent batches reduce the amount of time required for entries to be included in the log.

If this option isn't provided, storage implementations will use the DefaultBatchMaxSize and DefaultBatchMaxAge consts above.

func (*AppendOptions) WithCTLayout added in v0.1.1

func (o *AppendOptions) WithCTLayout() *AppendOptions

WithCTLayout instructs the underlying storage to use a Static CT API compatible scheme for layout.

func (*AppendOptions) WithCheckpointInterval added in v0.1.1

func (o *AppendOptions) WithCheckpointInterval(interval time.Duration) *AppendOptions

WithCheckpointInterval configures the frequency at which Tessera will attempt to create & publish a new checkpoint.

Well behaved clients of the log will only "see" newly sequenced entries once a new checkpoint is published, so it's important to set that value such that it works well with your ecosystem.

Regularly publishing new checkpoints:

  • helps show that the log is "live", even if no entries are being added.
  • enables clients of the log to reason about how frequently they need to have their view of the log refreshed, which in turn helps reduce work/load across the ecosystem.

Note that this option probably only makes sense for long-lived applications (e.g. HTTP servers).

If this option isn't provided, storage implementations will use the DefaultCheckpointInterval const above.

func (*AppendOptions) WithCheckpointSigner added in v0.1.1

func (o *AppendOptions) WithCheckpointSigner(s note.Signer, additionalSigners ...note.Signer) *AppendOptions

WithCheckpointSigner is an option for setting the note signer and verifier to use when creating and parsing checkpoints. This option is mandatory for creating logs where the checkpoint is signed locally, e.g. in the Appender mode. This does not need to be provided where the storage will be used to mirror other logs.

A primary signer must be provided; this is the "canonical" signing identity which should be used when creating new checkpoints.

Zero or more additional signers may also be provided. This enables cases like:

  • a rolling key rotation, where checkpoints are signed by both the old and new keys for some period of time,
  • using different signature schemes for different audiences, etc.

When providing additional signers, their names MUST be identical to the primary signer name, and this name will be used as the checkpoint Origin line.

Checkpoints signed by these signer(s) will be standard checkpoints as defined by https://c2sp.org/tlog-checkpoint.
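
For example, during a rolling key rotation a log might sign with both keys for a period; a sketch, where newSigner and oldSigner are assumed to be note.Signers sharing the same name:

	// Checkpoints will carry signatures from both the new (primary) and
	// old (additional) keys until the rotation completes.
	opts := tessera.NewAppendOptions().
		WithCheckpointSigner(newSigner, oldSigner)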

func (*AppendOptions) WithPushback added in v0.1.1

func (o *AppendOptions) WithPushback(maxOutstanding uint) *AppendOptions

WithPushback allows configuration of when the storage should start pushing back on add requests.

maxOutstanding is the number of "in-flight" add requests - i.e. the number of entries with sequence numbers assigned, but which are not yet integrated into the log.

func (*AppendOptions) WithWitnesses added in v0.1.1

func (o *AppendOptions) WithWitnesses(witnesses WitnessGroup) *AppendOptions

WithWitnesses configures the set of witnesses that Tessera will contact in order to counter-sign a checkpoint before publishing it. A request will be sent to every witness referenced by the group's Endpoints method. The checkpoint will be accepted for publishing when a sufficient number of witnesses to Satisfy the group have responded.

If this method is not called, then the default empty WitnessGroup will be used, which contacts zero witnesses and requires zero witnesses in order to publish.

func (AppendOptions) Witnesses added in v0.1.1

func (o AppendOptions) Witnesses() WitnessGroup

type Appender added in v0.1.1

type Appender struct {
	Add AddFn
}

Appender allows personalities access to the lifecycle methods associated with logs in sequencing mode. This currently has only a single method, but other methods are likely to be added, such as a Shutdown method for #341.

type Driver added in v0.1.1

type Driver any

Driver is the implementation-specific part of Tessera. It has no methods, as it is not for public use.

type Entry

type Entry struct {
	// contains filtered or unexported fields
}

Entry represents an entry in a log.

func NewEntry

func NewEntry(data []byte) *Entry

NewEntry creates a new Entry object with leaf data.

func (Entry) Data

func (e Entry) Data() []byte

Data returns the raw entry bytes which will form the entry in the log.

func (Entry) Identity

func (e Entry) Identity() []byte

Identity returns an identity which may be used to de-duplicate entries as they are being added to the log.

func (Entry) Index

func (e Entry) Index() *uint64

Index returns the index assigned to the entry in the log, or nil if no index has been assigned.

func (Entry) LeafHash

func (e Entry) LeafHash() []byte

LeafHash is the Merkle leaf hash which will be used for this entry in the log. Note that in almost all cases, this should be the RFC6962 definition of a leaf hash.

func (*Entry) MarshalBundleData

func (e *Entry) MarshalBundleData(index uint64) []byte

MarshalBundleData returns this entry's data in a format ready to be appended to an EntryBundle.

Note that MarshalBundleData _may_ be called multiple times, potentially with different values for index (e.g. if there's a failure in the storage when trying to persist the assignment), so index should not be considered final until the storage Add method has returned successfully with the durably assigned index.

type Follower added in v0.1.2

type Follower interface {
	// Name returns a human readable name for this follower.
	Name() string

	// Follow should be implemented so as to visit entries in the log in order, using the provided
	// LogReader to access the entry bundles which contain them.
	//
	// Implementations should keep track of their progress such that they can pick-up where they left off
	// if e.g. the binary is restarted.
	Follow(context.Context, LogReader)

	// EntriesProcessed reports the progress of the follower, returning the total number of log entries
	// successfully seen/processed.
	EntriesProcessed(context.Context) (uint64, error)
}

Follower describes the contract of something which is required to track the contents of the local log.

type Index added in v0.1.2

type Index struct {
	// Index is the location in the log to which a particular entry has been assigned.
	Index uint64
	// IsDup is true if Index represents a previously assigned index for an identical entry.
	IsDup bool
}

Index represents a durably assigned index for some entry.

type IndexFuture

type IndexFuture func() (Index, error)

IndexFuture is the signature of a function which can return an assigned index or error.

Implementations of this func are likely to be "futures", or a promise to return this data at some point in the future, and as such will block when called if the data isn't yet available.

type LogReader added in v0.1.1

type LogReader interface {
	// ReadCheckpoint returns the latest checkpoint available.
	// If no checkpoint is available then os.ErrNotExist should be returned.
	ReadCheckpoint(ctx context.Context) ([]byte, error)

	// ReadTile returns the raw marshalled tile at the given coordinates, if it exists.
	// The expected usage for this method is to derive the parameters from a tree size
	// that has been committed to by a checkpoint returned by this log. Whenever such a
	// tree size is used, this method will behave as per the https://c2sp.org/tlog-tiles
	// spec for the /tile/ path.
	//
	// If callers pass in parameters that are not implied by a published tree size, then
	// implementations _may_ act differently from one another, but all will act in ways
	// that are allowed by the spec. For example, if the only published tree size has been
	// for size 2, then asking for a partial tile of 1 may lead to some implementations
	// returning not found, some may return a tile with 1 leaf, and some may return a tile
	// with more leaves.
	ReadTile(ctx context.Context, level, index uint64, p uint8) ([]byte, error)

	// ReadEntryBundle returns the raw marshalled leaf bundle at the given coordinates, if
	// it exists.
	// The expected usage and corresponding behaviours are similar to ReadTile.
	ReadEntryBundle(ctx context.Context, index uint64, p uint8) ([]byte, error)

	// IntegratedSize returns the current size of the integrated tree.
	//
	// This tree will have in place all the static resources the returned size implies, but
	// there may not yet be a checkpoint for this size signed, witnessed, or published.
	//
	// It's ONLY safe to use this value for processes internal to the operation of the log (e.g.
	// populating antispam data structures); it MUST NOT be used as a substitute for
	// reading the checkpoint when only data which has been publicly committed to by the
	// log should be used. If in doubt, use ReadCheckpoint instead.
	IntegratedSize(ctx context.Context) (uint64, error)

	// NextIndex returns the first as-yet unassigned index.
	//
	// In a quiescent log, this will be the same as the checkpoint size. In a log with entries actively
	// being added, this number will be higher since it will take sequenced but not-yet-integrated/not-yet-published
	// entries into account.
	NextIndex(ctx context.Context) (uint64, error)

	// StreamEntries() returns functions `next` and `cancel` which act like a pull iterator for
	// consecutive entry bundles, starting with the entry bundle which contains the requested entry
	// index.
	//
	// Each call to `next` will return raw entry bundle bytes along with a RangeInfo struct which
	// contains information on which entries within that bundle are to be considered valid.
	//
	// next will hang if it has reached the extent of the current tree, and return once either
	// the tree has grown and more entries are available, or cancel was called.
	//
	// next will cease iterating if either:
	//   - it produces an error (e.g. via the underlying calls to the log storage)
	//   - the returned cancel function is called
	// and will continue to return an error if called again after either of these cases.
	StreamEntries(ctx context.Context, fromEntryIdx uint64) (next func() (layout.RangeInfo, []byte, error), cancel func())
}

LogReader provides read-only access to the log.
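
A sketch of consuming StreamEntries as the pull iterator described above (`reader` is assumed to be a LogReader, and layout.RangeInfo comes from the api/layout package):

	next, cancel := reader.StreamEntries(ctx, 0)
	defer cancel()
	for {
		ri, bundle, err := next()
		if err != nil {
			break // e.g. the underlying storage failed, or cancel was called.
		}
		// ri describes which entries within bundle are to be considered valid.
		_, _ = ri, bundle
	}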

type MigrationOptions added in v0.1.1

type MigrationOptions struct {
	// contains filtered or unexported fields
}

MigrationOptions holds migration lifecycle settings for all storage implementations.

func NewMigrationOptions added in v0.1.1

func NewMigrationOptions() *MigrationOptions

func (MigrationOptions) EntriesPath added in v0.1.1

func (o MigrationOptions) EntriesPath() func(uint64, uint8) string

func (*MigrationOptions) LeafHasher added in v0.1.1

func (o *MigrationOptions) LeafHasher() func([]byte) ([][]byte, error)

func (*MigrationOptions) WithAntispam added in v0.1.1

func (o *MigrationOptions) WithAntispam(as Antispam) *MigrationOptions

WithAntispam configures the migration target to *populate* the provided antispam storage using the data being migrated into the target tree.

Note that since the tree is being _migrated_, the resulting target tree must match the structure of the source tree and so no attempt is made to reject/deduplicate entries.

func (*MigrationOptions) WithCTLayout added in v0.1.1

func (o *MigrationOptions) WithCTLayout() *MigrationOptions

WithCTLayout instructs the underlying storage to use a Static CT API compatible scheme for layout.

type MigrationTarget added in v0.1.1

type MigrationTarget struct {
	// contains filtered or unexported fields
}

MigrationTarget handles the process of migrating/importing a source log into a Tessera instance.

func NewMigrationTarget added in v0.1.1

func NewMigrationTarget(ctx context.Context, d Driver, opts *MigrationOptions) (*MigrationTarget, error)

NewMigrationTarget returns a MigrationTarget, which allows a personality to "import" a C2SP tlog-tiles or static-ct compliant log into a Tessera instance.

func (*MigrationTarget) Migrate added in v0.1.2

func (mt *MigrationTarget) Migrate(ctx context.Context, numWorkers uint, sourceSize uint64, sourceRoot []byte, getEntries client.EntryBundleFetcherFunc) error

Migrate performs the work of importing a source log into the local Tessera instance.

Any entry bundles implied by the provided source log size which are not already present in the local log will be fetched using the provided getEntries function, and stored by the underlying driver. A background process will continuously attempt to integrate these bundles into the local tree.

An error will be returned if there is an unrecoverable problem encountered during the migration process, or if, once all entries have been copied and integrated into the local tree, the local root hash does not match the provided sourceRoot.

type MigrationWriter added in v0.1.2

type MigrationWriter interface {
	// SetEntryBundle stores the provided serialised entry bundle at the location implied by the provided
	// entry bundle index and partial size.
	//
	// Bundles may be set in any order (not just consecutively), and the implementation should integrate
	// them into the local tree in the most efficient way possible.
	//
	// Writes should be idempotent; repeated calls to set the same bundle with the same data should not
	// return an error.
	SetEntryBundle(ctx context.Context, idx uint64, partial uint8, bundle []byte) error
	// AwaitIntegration should block until the local integrated tree has grown to the provided size,
	// and should return the locally calculated root hash derived from the integration of the contents of
	// entry bundles set using SetEntryBundle above.
	AwaitIntegration(ctx context.Context, size uint64) ([]byte, error)
	// IntegratedSize returns the current size of the locally integrated log.
	IntegratedSize(ctx context.Context) (uint64, error)
}

type PublicationAwaiter added in v0.1.2

type PublicationAwaiter struct {
	// contains filtered or unexported fields
}

PublicationAwaiter allows client threads to block until a leaf is published. This means it has been assigned a sequence number, has been integrated into the tree, and a checkpoint committing to it has been published. A single long-lived PublicationAwaiter instance should be reused for all requests in the application code as there is some overhead to each one; the core of a PublicationAwaiter is a poll loop that will fetch checkpoints whenever it has clients waiting.

The expected call pattern is:

i, cp, err := awaiter.Await(ctx, storage.Add(myLeaf))

When used this way, it requires very little code at the point of use to block until the new leaf is integrated into the tree.

func NewPublicationAwaiter added in v0.1.2

func NewPublicationAwaiter(ctx context.Context, readCheckpoint func(ctx context.Context) ([]byte, error), pollPeriod time.Duration) *PublicationAwaiter

NewPublicationAwaiter provides a PublicationAwaiter that can be cancelled using the provided context. The PublicationAwaiter will poll every `pollPeriod` to fetch checkpoints using the `readCheckpoint` function.

func (*PublicationAwaiter) Await added in v0.1.2

func (a *PublicationAwaiter) Await(ctx context.Context, future IndexFuture) (Index, []byte, error)

Await blocks until the IndexFuture is resolved, and this new index has been integrated into the log, i.e. the log has made a checkpoint available that commits to this new index. When this happens, Await returns the index at which the leaf has been added, and a checkpoint that commits to this index.

This operation can be aborted early by cancelling the context. In this event, or in the event that there is an error getting a valid checkpoint, an error will be returned from this method.

type UnbundlerFunc added in v0.1.1

type UnbundlerFunc func(entryBundle []byte) ([][]byte, error)

UnbundlerFunc is a function which knows how to turn a serialised entry bundle into a slice of []byte representing each of the entries within the bundle.

type Witness added in v0.1.1

type Witness struct {
	Key note.Verifier
	Url string
}

Witness represents a single witness that can be reached in order to perform a witnessing operation. The Endpoints() method returns the URL at which it can be reached for witnessing, and the Satisfied method provides a predicate to check whether this witness has signed a checkpoint.

func NewWitness added in v0.1.1

func NewWitness(vkey string, witnessRoot *url.URL) (Witness, error)

NewWitness returns a Witness given a verifier key and the root URL for where this witness can be reached.

func (Witness) Endpoints added in v0.1.1

func (w Witness) Endpoints() map[string]note.Verifier

Endpoints returns the details required for updating a witness and checking the response. The returned result is a map from the URL that should be used to update the witness with a new checkpoint, to the verifier used to check that the response is well formed.

func (Witness) Satisfied added in v0.1.1

func (w Witness) Satisfied(cp []byte) bool

Satisfied returns true if the checkpoint provided is signed by this witness. This will return false if there is no signature, and also if the checkpoint cannot be read as a valid note. It is up to the caller to ensure that the input value represents a valid note.

type WitnessGroup added in v0.1.1

type WitnessGroup struct {
	Components []policyComponent
	N          int
}

WitnessGroup defines a group of witnesses, and a threshold of signatures that must be met for this group to be satisfied. Witnesses within a group should be fungible, e.g. all of the Armored Witness devices form a logical group, and N should be picked to represent a threshold of the quorum. For some users this will be a simple majority, but other strategies are available. N must be <= len(Components).

func NewWitnessGroup added in v0.1.1

func NewWitnessGroup(n int, children ...policyComponent) WitnessGroup

NewWitnessGroup creates a grouping of Witness or WitnessGroup with a configurable threshold of these sub-components that need to be satisfied in order for this group to be satisfied.

The threshold should only be set to less than the number of sub-components if these are considered fungible.

func (WitnessGroup) Endpoints added in v0.1.1

func (wg WitnessGroup) Endpoints() map[string]note.Verifier

Endpoints returns the details required for updating a witness and checking the response. The returned result is a map from the URL that should be used to update the witness with a new checkpoint, to the verifier used to check that the response is well formed.

func (WitnessGroup) Satisfied added in v0.1.1

func (wg WitnessGroup) Satisfied(cp []byte) bool

Satisfied returns true if the checkpoint provided has sufficient signatures from the witnesses in this group to satisfy the threshold. This will return false if there are insufficient signatures, and also if the checkpoint cannot be read as a valid note. It is up to the caller to ensure that the input value represents a valid note.

The implementation of this requires every witness in the group to verify the checkpoint, which is O(N). If this is called every time a witness returns a checkpoint then this algorithm is O(N^2). To support large N, this may require some rewriting in order to maintain performance.

Directories

Path Synopsis
api
Package api contains the tiles definitions from the [tlog-tiles API].
api/layout
Package layout contains routines for specifying the path layout of Tessera logs, which is really to say that it provides functions to calculate paths used by the [tlog-tiles API].
client
Package client provides client support for interacting with logs that use the [tlog-tiles API].
cmd/conformance/aws
aws is a simple personality allowing you to run conformance/compliance/performance tests and showing how to use the Tessera AWS storage implementation.
cmd/conformance/gcp
gcp is a simple personality allowing you to run conformance/compliance/performance tests and showing how to use the Tessera GCP storage implementation.
cmd/conformance/mysql
mysql is a simple personality allowing you to run conformance/compliance/performance tests and showing how to use the Tessera MySQL storage implementation.
cmd/conformance/posix
posix runs a web server that allows new entries to be POSTed to a tlog-tiles log stored on a posix filesystem.
cmd/examples/posix-oneshot
posix-oneshot is a command line tool for adding entries to a local tlog-tiles log stored on a posix filesystem.
cmd/experimental/migrate/aws
aws-migrate is a command-line tool for migrating data from a tlog-tiles compliant log into a Tessera log instance hosted on AWS.
cmd/experimental/migrate/gcp
gcp-migrate is a command-line tool for migrating data from a tlog-tiles compliant log into a Tessera log instance.
cmd/experimental/migrate/mysql
mysql-migrate is a command-line tool for migrating data from a tlog-tiles compliant log into a Tessera log instance.
cmd/experimental/migrate/posix
posix-migrate is a command-line tool for migrating data from a tlog-tiles compliant log into a Tessera log instance.
ctonly
Package ctonly has support for CT Tiles API.
internal/hammer
hammer is a tool to load test a Tessera log.
internal/parse
Package parse contains internal methods for parsing data structures quickly, if unsafely.
internal/witness
Package witness contains the implementation for sending out a checkpoint to witnesses and retrieving sufficient signatures to satisfy a policy.
storage/aws
Package aws contains an AWS-based storage implementation for Tessera.
storage/aws/antispam
Package aws contains an AWS-based antispam implementation for Tessera.
storage/gcp
Package gcp contains a GCP-based storage implementation for Tessera.
storage/gcp/antispam
Package gcp contains a GCP-based antispam implementation for Tessera.
storage/internal
Package storage provides implementations and shared components for tessera storage backends.
storage/mysql
Package mysql contains a MySQL-based storage implementation for Tessera.
storage/posix/antispam
Package badger provides a Tessera persistent antispam driver based on BadgerDB (https://github.com/hypermodeinc/badger), a high-performance pure-go DB with KV support.
