Documentation

Overview

Package layer2 is a library for building nostr event stores out of two separate data storage systems, primarily a size-limited cache backed by a larger store. This caching strategy lets a relay scale, providing access to an event store to more users more quickly.
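As a sketch of the intended usage, a Backend can be assembled from any two store.I implementations and then initialised. The import paths, the function name, and the parameter values below are illustrative assumptions, not part of this package's API:

    import (
        "sync"
        "time"

        // Placeholder import paths; substitute the actual module paths for
        // the context, store and layer2 packages.
        "example.com/nostr/context"
        "example.com/nostr/layer2"
        "example.com/nostr/store"
    )

    // newTwoLevel wires a size-limited L1 cache over a large, authoritative
    // L2. Any two store.I implementations will do; the badgerbadger test
    // package uses two ratel instances.
    func newTwoLevel(ctx context.T, cache, archive store.I, path string) (*layer2.Backend, error) {
        b := &layer2.Backend{
            Ctx: ctx,
            WG:  &sync.WaitGroup{},
            // L1 holds a subset of events and is expected to prune itself.
            L1: cache,
            // L2 is the authoritative, non-purging store.
            L2: archive,
            // PollFrequency and PollOverlap only matter when the L2 is
            // shared between several relays; the values here are arbitrary.
            PollFrequency: 5 * time.Second,
            PollOverlap:   3,
        }
        if err := b.Init(path); err != nil {
            return nil, err
        }
        return b, nil
    }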
Index
- type Backend
- func (b *Backend) Close() (err error)
- func (b *Backend) DeleteEvent(c context.T, ev *eventid.T, noTombstone ...bool) (err error)
- func (b *Backend) Export(c context.T, w io.Writer, pubkeys ...[]byte)
- func (b *Backend) Import(r io.Reader)
- func (b *Backend) Init(path string) (err error)
- func (b *Backend) Nuke() (err error)
- func (b *Backend) Path() (s string)
- func (b *Backend) QueryEvents(c context.T, f *filter.T) (evs event.Ts, err error)
- func (b *Backend) SaveEvent(c context.T, ev *event.T) (err error)
- func (b *Backend) Sync() (err error)
Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Backend
    type Backend struct {
        Ctx context.T
        WG  *sync.WaitGroup

        // L1 will store its state/configuration in path/layer1
        L1 store.I

        // L2 will store its state/configuration in path/layer2
        L2 store.I

        // PollFrequency is how often the L2 is queried for recent events.
        // This is only relevant for shared layer2 stores, and will not apply
        // for layer2 implementations that are just two separate data store
        // systems on the same server.
        PollFrequency time.Duration

        // PollOverlap is the multiple of the PollFrequency within which
        // polling the L2 is done to ensure any slow synchrony on the L2 is
        // covered (2-4 usually).
        PollOverlap int

        // EventSignal triggers when the L1 saves a new event from the L2.
        //
        // The caller is responsible for populating this so that a signal can
        // pass to all peers sharing the same L2 and enable cross-cluster
        // subscription delivery.
        EventSignal event.C

        // contains filtered or unexported fields
    }
Backend is a two-level nostr event store. The first level is assumed to hold a subset of the events that the second level has. This is a mechanism for sharding nostr event data across multiple relays, which can then act as failovers for each other, or as shards by geography or subject matter.
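A hedged sketch of consuming EventSignal in a cluster sharing one L2, given a Backend b as above. It assumes event.C is a channel carrying newly cached events (an inference from the field's description) and that deliver is a hypothetical function forwarding an event to matching subscriptions:

    // The caller populates EventSignal so that events arriving via the
    // shared L2 can be reported as they are saved into the L1.
    b.EventSignal = make(event.C)
    go func() {
        for ev := range b.EventSignal {
            // ev was just saved into the L1 after being found in the L2;
            // forward it to this relay's live subscriptions.
            deliver(ev) // hypothetical delivery function
        }
    }()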
func (*Backend) DeleteEvent

DeleteEvent deletes an event from both the layer1 and the layer2.
func (*Backend) Export

Export events from the layer2, which is assumed to be the most authoritative (and largest) store of events available to the relay.
func (*Backend) Import

Import events into the layer2; if the events come up in searches, they will be propagated down to the layer1.
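As a usage sketch, events can be moved between two Backends by exporting into a buffer and importing on the other side. The function name is hypothetical, the bytes package is assumed imported, and exporting everything by passing no pubkeys is an assumption:

    // migrate copies events from src's layer2 into dst. Export with no
    // pubkey arguments is assumed to write all events.
    func migrate(c context.T, src, dst *layer2.Backend) {
        var buf bytes.Buffer
        src.Export(c, &buf) // export from the authoritative L2
        // Import into the destination; searches will later promote hits
        // into dst's layer1.
        dst.Import(&buf)
    }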
func (*Backend) Init

Init sets up a layer2.Backend, configuring both event stores, the polling frequency, and similar parameters.
func (*Backend) Nuke

Nuke wipes both of the event stores in parallel and returns when both are complete.
func (*Backend) QueryEvents

QueryEvents processes a filter.T search on the event store. Events found only in the second level are saved into the first level, so that they become available from the first layer the next time they match.
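A brief sketch of the read-through behaviour this implies; only the Backend calls come from this package, and the surrounding function is hypothetical:

    // saveAndQuery stores an incoming event and then runs a filter against
    // the combined store.
    func saveAndQuery(c context.T, b *layer2.Backend, ev *event.T, f *filter.T) (event.Ts, error) {
        if err := b.SaveEvent(c, ev); err != nil {
            return nil, err
        }
        // Any matches that only existed in the L2 are saved into the L1 by
        // QueryEvents, so repeating the same filter hits the cache alone.
        return b.QueryEvents(c, f)
    }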
Directories

Path | Synopsis
---|---
badgerbadger | Package badgerbadger is a test of the layer 2 that uses two instances of the ratel event store, meant for testing the layer 2 protocol with two tiers of the database: a size-limited cache and a large non-purging store.
tester | Package main is a tester for a layer2 database scheme with one ratel DB with cache and the second without, testing the maintenance of the cache utilization and the second level being accessed to fetch events that have been pruned out of the cache.