ipld

package module
v0.9.0
Published: Mar 15, 2021 License: MIT Imports: 6 Imported by: 538

README

go-ipld-prime

go-ipld-prime is an implementation of the IPLD spec interfaces, batteries-included codec implementations of IPLD for CBOR and JSON, and tooling for basic operations on IPLD objects (traversals, etc).

API

The API is split into several packages based on the responsibility of the code. The most central interfaces are in the base package, but you'll certainly need to import additional packages to get concrete implementations into action.

Roughly speaking, the core package interfaces are all about the IPLD Data Model; the codec/* packages contain functions for parsing serial data into the IPLD Data Model, and converting Data Model content back into serial formats; the traversal package is an example of higher-order functions on the Data Model; concrete ipld.Node implementations ready to use can be found in packages in the node/* directory; and several additional packages contain advanced features such as IPLD Schemas.

(Because the codecs, as well as higher-order features like traversals, are implemented in a separate package from the core interfaces or any of the Node implementations, you can be sure they're not doing any funky "magic" -- all this stuff will work the same if you want to write your own extensions, whether for new Node implementations or new codecs, or new higher-order functions!)

  • github.com/ipld/go-ipld-prime -- imported as just ipld -- contains the core interfaces for IPLD. The most important interfaces are Node, NodeBuilder, Path, and Link.
  • github.com/ipld/go-ipld-prime/node/basic -- imported as basicnode -- provides concrete implementations of Node and NodeBuilder which work for any kind of data.
  • github.com/ipld/go-ipld-prime/traversal -- contains higher-order functions for traversing graphs of data easily.
  • github.com/ipld/go-ipld-prime/traversal/selector -- contains selectors, which are sort of like regexps, but for trees and graphs of IPLD data!
  • github.com/ipld/go-ipld-prime/codec -- parent package of all the codec implementations!
  • github.com/ipld/go-ipld-prime/codec/dagcbor -- implementations of marshalling and unmarshalling as CBOR (a fast, binary serialization format).
  • github.com/ipld/go-ipld-prime/codec/dagjson -- implementations of marshalling and unmarshalling as JSON (a popular human readable format).
  • github.com/ipld/go-ipld-prime/linking/cid -- imported as cidlink -- provides concrete implementations of Link as a CID. Also, the multicodec registry.
  • github.com/ipld/go-ipld-prime/schema -- contains the schema.Type and schema.TypedNode interface declarations, which represent IPLD Schema type information.
  • github.com/ipld/go-ipld-prime/node/typed -- provides concrete implementations of schema.TypedNode which decorate a basic Node at runtime to have additional features described by IPLD Schemas.
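
As a quick orientation to how these packages fit together, here's a minimal sketch that assembles a map node with basicnode and serializes it with the dagjson codec. (This is illustrative, not normative; check each package's docs for exact signatures in your version.)

package main

import (
	"os"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	basicnode "github.com/ipld/go-ipld-prime/node/basic"
)

func main() {
	// Assemble the map {"hello": "world"} using the any-kind node implementation.
	nb := basicnode.Prototype.Any.NewBuilder()
	ma, err := nb.BeginMap(1)
	if err != nil {
		panic(err)
	}
	va, err := ma.AssembleEntry("hello")
	if err != nil {
		panic(err)
	}
	if err := va.AssignString("world"); err != nil {
		panic(err)
	}
	if err := ma.Finish(); err != nil {
		panic(err)
	}
	n := nb.Build()

	// Encode the finished node as DAG-JSON to stdout.
	if err := dagjson.Encoder(n, os.Stdout); err != nil {
		panic(err)
	}
}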

Other IPLD Libraries

The IPLD specifications are designed to be language-agnostic. Many implementations exist in a variety of languages.

For overall behaviors and specifications, refer to the specs repo: https://github.com/ipld/specs/

distinctions from go-ipld-interface & go-ipld-cbor

This library ("go ipld prime") is the current head of development for golang IPLD, and we recommend new developments in golang be done using this library as the basis.

However, several other libraries exist in golang for working with IPLD data. Most of these predate go-ipld-prime and no longer receive active development, but since they do support a lot of other software, you may continue to see them around for a while. go-ipld-prime is generally serially compatible with these -- just like it is with IPLD libraries in other languages.

In terms of programmatic API and features, go-ipld-prime is a clean take on the IPLD interfaces, and chose to address several design decisions very differently than the older generation of libraries:

  • The Node interfaces map cleanly to the IPLD Data Model;
  • Many features known to be legacy are dropped;
  • The Link implementations are purely CIDs (no "name" nor "size" properties);
  • The Path implementations are provided in the same box;
  • The JSON and CBOR implementations are provided in the same box;
  • Several odd dependencies on blockstore and other interfaces that were closely coupled with IPFS are replaced by simpler, less-coupled interfaces;
  • New features like IPLD Selectors are only available from go-ipld-prime;
  • New features like ADLs (Advanced Data Layouts), which provide features like transparent sharding and indexing for large data, are only available from go-ipld-prime;
  • Declarative transformations can be applied to IPLD data (defined in terms of the IPLD Data Model) using go-ipld-prime;
  • and many other small refinements.

In particular, the clean and direct mapping of "Node" to concepts in the IPLD Data Model ensures a much more consistent set of rules when working with go-ipld-prime data, regardless of which codecs are involved. (Codec-specific embellishments and edge-cases were common in the previous generation of libraries.) This clarity is also what provides the basis for features like Selectors, ADLs, and operations such as declarative transformations.

Many of these changes had been discussed for the other IPLD codebases as well, but we chose clean break v2 as a more viable project-management path. Both go-ipld-prime and these legacy libraries can co-exist on the same import path, and both refer to the same kinds of serial data. Projects wishing to migrate can do so smoothly and at their leisure.

We now consider many of the earlier golang IPLD libraries to be de facto deprecated, and you should expect new features here, rather than in those libraries. (Those libraries still won't be going away anytime soon, but we really don't recommend new construction on them.)

unixfsv1

Be advised that faculties for dealing with unixfsv1 data are still limited. You can find some tools for dealing with dag-pb (the underlying codec) in the ipld/go-codec-dagpb repo, and there are also some tools retrofitting some of unixfsv1's other features to be perceivable using an ADL in the ipfs/go-unixfsnode repo... however, a "some assembly required" advisory may still be in effect; check the readmes in those repos for details on what they support.

Change Policy

The go-ipld-prime library is already usable. We are also still in development, and may still change things.

A changelog can be found at CHANGELOG.md.

Using a commit hash to pin versions precisely when depending on this library is advisable (as it is with any other).

We may sometimes tag releases, but it's just as acceptable to track commits on master without the indirection.

The following are all norms you can expect of changes to this codebase:

  • The master branch will not be force-pushed.
    • (exceptional circumstances may exist, but such exceptions will only be considered valid for about as long after push as the "$N-second-rule" about dropped food).
    • Therefore, commit hashes on master are gold to link against.
  • All other branches will be force-pushed.
    • Therefore, commit hashes not reachable from the master branch are inadvisable to link against.
  • If it's on master, it's understood to be good, in as much as we can tell.
  • Development proceeds -- both starting from and ending on -- the master branch.
    • There are no other long-running supported-but-not-master branches.
    • The existence of tags at any particular commit does not indicate that we will consider starting a long-running and supported diverged branch from that point, nor start doing backports, etc.
  • All changes are presumed breaking until proven otherwise; and we don't have the time and attention budget at this point for doing the "proven otherwise".
    • All consumers updating their libraries should run their own compiler, linking, and test suites before assuming the update applies cleanly -- as is good practice regardless.
    • Any idea of semver indicating more or less breakage should be treated as a street vendor selling potions of levitation -- it's likely best disregarded.

None of this is to say we'll go breaking things willy-nilly for fun; but it is to say:

  • Staying close to master is always better than not staying close to master;
  • and trust your compiler and your tests rather than tea-leaf patterns in a tag string.

Version Names

When a tag is made, version number steps in go-ipld-prime advance as follows:

  1. the number bumps when the lead maintainer says it does.
  2. even numbers should be easy upgrades; odd numbers may change things.
  3. the version will start with v0. until further notice.

This is WarpVer.

These version numbers are provided as hints about what to expect, but ultimately, you should always invoke your compiler and your tests to tell you about compatibility.

Updating

Read the CHANGELOG.

Really, read it. We put exact migration instructions in there, as much as possible. Even outright scripts, when feasible.

An even-number release tag is usually made very shortly before an odd number tag, so if you're cautious about absorbing changes, you should update to the even number first, run all your tests, and then upgrade to the odd number. Usually the step to the even number should go off without a hitch, but if you do get problems from advancing to an even number tag, A) you can be pretty sure it's a bug, and B) you didn't have to edit a bunch of code before finding that out.

Documentation

Overview

go-ipld-prime is a series of go interfaces for manipulating IPLD data.

See https://github.com/ipld/specs for more information about the basics of "What is IPLD?".

See https://github.com/ipld/go-ipld-prime/tree/master/doc/README.md for more documentation about go-ipld-prime's architecture and usage.

Here in the godoc, the first couple of types to look at should be:

  • Node
  • NodeBuilder (and NodeAssembler)

These types provide a generic description of the data model.

If working with linked data (data which is split into multiple trees of Nodes, loaded separately, and connected by some kind of "link" reference), the next types you should look at are:

  • Link
  • LinkPrototype
  • LinkSystem (and its BlockReadOpener and BlockWriteOpener hooks)

Link and LinkPrototype are interfaces, and the block openers are function types. There are several implementations you can choose; we've provided some in subpackages, or you can bring your own.

Particularly interesting subpackages include:

  • node/* -- various Node + NodeBuilder implementations
  • node/basic -- the first Node implementation you should try
  • codec/* -- functions for serializing and deserializing Nodes
  • linking/* -- various Link + LinkPrototype implementations
  • traversal -- functions for walking Node graphs (including automatic link loading) and visiting them
  • must -- helpful functions for streamlining error handling
  • fluent -- alternative Node interfaces that flip errors to panics
  • schema -- interfaces for working with IPLD Schemas and Nodes which use Schema types and constraints

Note that since interfaces in this package are the core of the library, choices made here maximize correctness and performance -- these choices are *not* always the choices that would maximize ergonomics. (Ergonomics can come on top; performance generally can't.) You can check out the 'must' or 'fluent' packages for more ergonomics; 'traversal' provides some ergonomics features for certain uses; any use of schemas with codegen tooling will provide more ergonomic options; or you can make your own function decorators that do what *you* need.
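
For instance, the map assembly from the earlier sketch shrinks considerably with fluent, which panics instead of returning errors (a sketch, assuming the fluent and node/basic packages are imported):

n := fluent.MustBuildMap(basicnode.Prototype.Any, 1, func(ma fluent.MapAssembler) {
	ma.AssembleEntry("hello").AssignString("world")
})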

Index

Constants

This section is empty.

Variables

var (
	KindSet_Recursive = KindSet{Kind_Map, Kind_List}
	KindSet_Scalar    = KindSet{Kind_Null, Kind_Bool, Kind_Int, Kind_Float, Kind_String, Kind_Bytes, Kind_Link}

	KindSet_JustMap    = KindSet{Kind_Map}
	KindSet_JustList   = KindSet{Kind_List}
	KindSet_JustNull   = KindSet{Kind_Null}
	KindSet_JustBool   = KindSet{Kind_Bool}
	KindSet_JustInt    = KindSet{Kind_Int}
	KindSet_JustFloat  = KindSet{Kind_Float}
	KindSet_JustString = KindSet{Kind_String}
	KindSet_JustBytes  = KindSet{Kind_Bytes}
	KindSet_JustLink   = KindSet{Kind_Link}
)

Functions

This section is empty.

Types

type ADL added in v0.9.0

type ADL interface {
	Node

	// Substrate returns the underlying Data Model node, which can be used
	// to encode an ADL's raw layout.
	Substrate() Node
}

ADL represents an Advanced Data Layout, a special kind of Node which implements custom logic while still behaving like an IPLD node.

For more details, see the docs at https://github.com/ipld/specs/blob/master/schemas/authoring-guide.md.

type BlockReadOpener added in v0.9.0

type BlockReadOpener func(LinkContext, Link) (io.Reader, error)

BlockReadOpener defines the shape of a function used to open a reader for a block of data.

In a content-addressed system, the Link parameter should be the only determiner of what block body is returned.

The LinkContext may be zero, or may be used to carry extra information: it may be used to carry info which hints at different storage pools; it may be used to carry authentication data; etc. (Any such behaviors are something that a BlockReadOpener implementation will need to document at a higher detail level than this interface specifies. In this interface, we can only note that it is possible to pass such information opaquely via the LinkContext or by attachments to the general-purpose Context it contains.) The LinkContext should not affect the block body returned, however; at most it should only affect data availability (e.g. whether any block body is returned, versus an error).

Reads are cancellable by cancelling the LinkContext.Context.

Other parts of the IPLD library suite (such as the traversal package, and all its functions) will typically take a Context as a parameter or piece of config from the caller, and will pass that down through the LinkContext, meaning this can be used to carry information as well as cancellation control all the way through the system.

BlockReadOpener is typically not used directly, but is instead composed in a LinkSystem and used via the methods of LinkSystem. LinkSystem methods will helpfully handle the entire process of opening block readers, verifying the hash of the data stream, and applying a Decoder to build Nodes -- all as one step.

BlockReadOpener implementations are not required to validate that the contents which will be streamed out of the reader actually match the hash in the Link parameter before returning. (This is something that the LinkSystem composition will handle if you're using it.)

Some implementations of BlockWriteOpener and BlockReadOpener may be found in the storage package. Applications are also free to write their own.
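
To illustrate the shape only, a BlockReadOpener backed by a plain in-memory map might look like the following sketch (the `blocks` map is hypothetical, not part of this package; imports of bytes, fmt, io, and the ipld root package assumed):

var blocks = map[string][]byte{}

var readOpener ipld.BlockReadOpener = func(lctx ipld.LinkContext, lnk ipld.Link) (io.Reader, error) {
	data, ok := blocks[lnk.String()]
	if !ok {
		return nil, fmt.Errorf("block not found: %s", lnk)
	}
	return bytes.NewReader(data), nil
}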

type BlockWriteCommitter added in v0.9.0

type BlockWriteCommitter func(Link) error

BlockWriteCommitter defines the shape of a function which, together with BlockWriteOpener, handles the writing and "committing" of a write to a content-addressable storage system.

BlockWriteCommitter is a function which will be called at the end of a write process. It should flush any buffers and close the io.Writer which was made available earlier from the BlockWriteOpener call that also returned this BlockWriteCommitter.

BlockWriteCommitter takes a Link parameter. This Link is expected to be a reasonable hash of the content, so that the BlockWriteCommitter can use this to commit the data to storage in a content-addressable fashion. See the documentation of BlockWriteOpener for more description of this and an example of how this is likely to be reduced to practice.

type BlockWriteOpener added in v0.9.0

type BlockWriteOpener func(LinkContext) (io.Writer, BlockWriteCommitter, error)

BlockWriteOpener defines the shape of a function used to open a writer into which data can be streamed, and which will eventually be "committed". Committing is done using the BlockWriteCommitter returned by the BlockWriteOpener; it finishes the write, and requires stating the Link which should identify this data for future reading.

The LinkContext may be zero, or may be used to carry extra information: it may be used to carry info which hints at different storage pools; it may be used to carry authentication data; etc.

Writes are cancellable by cancelling the LinkContext.Context.

Other parts of the IPLD library suite (such as the traversal package, and all its functions) will typically take a Context as a parameter or piece of config from the caller, and will pass that down through the LinkContext, meaning this can be used to carry information as well as cancellation control all the way through the system.

BlockWriteOpener is typically not used directly, but is instead composed in a LinkSystem and used via the methods of LinkSystem. LinkSystem methods will helpfully handle the entire process of traversing a Node tree, encoding this data, hashing it, streaming it to the writer, and committing it -- all as one step.

BlockWriteOpener implementations are expected to start writing their content immediately, and later, the returned BlockWriteCommitter should also be able to expect that the Link which it is given is a reasonable hash of the content. (To give an example of how this might be efficiently implemented: One might imagine that if implementing a disk storage mechanism, the io.Writer returned from a BlockWriteOpener will be writing a new tempfile, and when the BlockWriteCommitter is called, it will flush the writes and then use a rename operation to place the tempfile in a permanent path based on the Link.)

Some implementations of BlockWriteOpener and BlockReadOpener may be found in the storage package. Applications are also free to write their own.
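
Continuing the hypothetical in-memory `blocks` map from the BlockReadOpener sketch above, a matching BlockWriteOpener can buffer the stream and let the committer file the bytes under the final Link:

var writeOpener ipld.BlockWriteOpener = func(lctx ipld.LinkContext) (io.Writer, ipld.BlockWriteCommitter, error) {
	buf := &bytes.Buffer{}
	commit := func(lnk ipld.Link) error {
		blocks[lnk.String()] = buf.Bytes() // the hash is now known; commit the data.
		return nil
	}
	return buf, commit, nil
}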

type Decoder added in v0.9.0

type Decoder func(NodeAssembler, io.Reader) error

Decoder defines the shape of a function which produces a Node tree by reading serialized data from an io.Reader. (Decoder doesn't itself return a Node directly, but rather takes a NodeAssembler as an argument, because this allows the caller more control over the Node implementation, as well as some control over allocations.)

The dual of Decoder is an Encoder, which takes a Node and emits its data in a serialized form into an io.Writer. Typically, Decoder and Encoder functions will be found in pairs, and will be expected to be able to round-trip each other's data.

Decoder functions can be used directly. Decoder functions are also often used via a LinkSystem when working with content-addressed storage. LinkSystem methods will helpfully handle the entire process of opening block readers, verifying the hash of the data stream, and applying a Decoder to build Nodes -- all as one step.

A Decoder works with Nodes. If you have a native golang structure, and want to populate it with data using a Decoder, you'll need to either get a NodeAssembler which proxies data into that structure directly, or assemble a Node as intermediate storage and copy the data to the native structure as a separate step.

It may be useful to understand "multicodecs" when working with Decoders. See the documentation on the Encoder function interface for more discussion of multicodecs, the multicodec table, and how this is typically connected to linking.
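
Used directly, a Decoder is a single call; here's a sketch using dagjson.Decoder and a basicnode builder (imports of strings, codec/dagjson, and node/basic assumed):

nb := basicnode.Prototype.Any.NewBuilder()
if err := dagjson.Decoder(nb, strings.NewReader(`{"hello":"world"}`)); err != nil {
	panic(err)
}
n := nb.Build() // n now holds the decoded data as a Node.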

type Encoder added in v0.9.0

type Encoder func(Node, io.Writer) error

Encoder defines the shape of a function which traverses a Node tree and emits its data in a serialized form into an io.Writer.

The dual of Encoder is a Decoder, which takes a NodeAssembler and fills it with deserialized data consumed from an io.Reader. Typically, Decoder and Encoder functions will be found in pairs, and will be expected to be able to round-trip each other's data.

Encoder functions can be used directly. Encoder functions are also often used via a LinkSystem when working with content-addressed storage. LinkSystem methods will helpfully handle the entire process of traversing a Node tree, encoding this data, hashing it, streaming it to the writer, and committing it -- all as one step.

An Encoder works with Nodes. If you have a native golang structure, and want to serialize it using an Encoder, you'll need to figure out how to transform that golang structure into an ipld.Node tree first.

It may be useful to understand "multicodecs" when working with Encoders. In IPLD, a system called "multicodecs" is typically used to describe encoding formats. A "multicodec indicator" is a number which describes an encoding; the Link implementations used in IPLD (CIDs) store a multicodec indicator in the Link; and in this library, a multicodec registry exists in the `codec` package, and can be used to associate a multicodec indicator number with an Encoder function. The default EncoderChooser in a LinkSystem will use this multicodec registry to select Encoder functions. However, you can construct a LinkSystem that uses any EncoderChooser you want. It is also possible to have and use Encoder functions that aren't registered as a multicodec at all... we just recommend being cautious of this, because it may make your data less recognizable when working with other systems that use multicodec indicators as part of their communication.
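
Used directly, an Encoder is likewise a single call; a sketch of re-encoding a Node `n` from earlier as DAG-CBOR (codec/dagcbor and bytes imports assumed):

var buf bytes.Buffer
if err := dagcbor.Encoder(n, &buf); err != nil {
	panic(err)
}
// buf now contains the CBOR serialization of n.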

type ErrCannotBeNull added in v0.0.3

type ErrCannotBeNull struct{} // Review: arguably either ErrInvalidKindForNodePrototype.

type ErrHashMismatch added in v0.9.0

type ErrHashMismatch struct {
	Actual   Link
	Expected Link
}

ErrHashMismatch is the error returned when loading data and verifying its hash and finding that the loaded data doesn't re-hash to the expected value. It is typically seen returned by functions like LinkSystem.Load or LinkSystem.Fill.

func (ErrHashMismatch) Error added in v0.9.0

func (e ErrHashMismatch) Error() string

type ErrInvalidKey added in v0.0.2

type ErrInvalidKey struct {
	// TypeName will indicate the named type of a node the function was called on.
	TypeName string

	// Key is the key that was rejected.
	Key Node

	// Reason, if set, may provide details (for example, the reason a key couldn't be converted to a type).
	// If absent, it'll be presumed "no such field".
	// ErrUnmatchable may show up as a reason for typed maps with complex keys.
	Reason error
}

ErrInvalidKey indicates a key is invalid for some reason.

This is only possible for typed nodes; specifically, it may show up when handling struct types, or maps with interesting key types. (Other kinds of key invalidity that happen for untyped maps fall under ErrRepeatedMapKey or ErrWrongKind.) (Union types use ErrInvalidUnionDiscriminant instead of ErrInvalidKey, even when their representation strategy is maplike.)

func (ErrInvalidKey) Error added in v0.0.2

func (e ErrInvalidKey) Error() string

type ErrInvalidSegmentForList added in v0.4.0

type ErrInvalidSegmentForList struct {
	// TypeName may indicate the named type of a node the function was called on,
	// or be empty string if working on untyped data.
	TypeName string

	// TroubleSegment is the segment we couldn't use.
	TroubleSegment PathSegment

	// Reason may explain more about why the PathSegment couldn't be used;
	// in practice, it's probably a 'strconv.NumError'.
	Reason error
}

ErrInvalidSegmentForList is returned when using Node.LookupBySegment and the given PathSegment can't be applied to a list because it's unparsable as a number.

func (ErrInvalidSegmentForList) Error added in v0.4.0

func (e ErrInvalidSegmentForList) Error() string

type ErrInvalidUnionDiscriminant added in v0.0.3

type ErrInvalidUnionDiscriminant struct{} // only possible for typed nodes -- specifically, union types.

type ErrIteratorOverread

type ErrIteratorOverread struct{}

ErrIteratorOverread is returned when calling 'Next' on a MapIterator or ListIterator when it is already done.

func (ErrIteratorOverread) Error

func (e ErrIteratorOverread) Error() string

type ErrLinkingSetup added in v0.9.0

type ErrLinkingSetup struct {
	Detail string // Perhaps an enum here as well, which states which internal function was to blame?
	Cause  error
}

ErrLinkingSetup is returned by methods on LinkSystem when some part of the system is not set up correctly, or when one of the components refuses to handle a Link or LinkPrototype given. (It is not yielded for errors from the storage nor codec systems once they've started; those errors rise without interference.)

func (ErrLinkingSetup) Error added in v0.9.0

func (e ErrLinkingSetup) Error() string

func (ErrLinkingSetup) Unwrap added in v0.9.0

func (e ErrLinkingSetup) Unwrap() error

type ErrListOverrun added in v0.0.3

type ErrListOverrun struct{} // only possible for typed nodes -- specifically, struct types with list (aka tuple) representations.

type ErrMissingRequiredField added in v0.0.3

type ErrMissingRequiredField struct {
	Missing []string
}

ErrMissingRequiredField is returned when calling 'Finish' on a NodeAssembler for a Struct that has not had all required fields set.

func (ErrMissingRequiredField) Error added in v0.6.0

func (e ErrMissingRequiredField) Error() string

type ErrNotExists

type ErrNotExists struct {
	Segment PathSegment
}

ErrNotExists may be returned from the lookup functions of the Node interface to indicate a missing value.

Note that schema.ErrNoSuchField is another type of error which sometimes occurs in similar places as ErrNotExists. ErrNoSuchField is preferred when handling data with constraints provided by a schema that mean that a field can *never* exist (as differentiated from a map key which is simply absent in some data).

func (ErrNotExists) Error

func (e ErrNotExists) Error() string

type ErrRepeatedMapKey added in v0.0.3

type ErrRepeatedMapKey struct {
	Key Node
}

ErrRepeatedMapKey is an error indicating that a key was inserted into a map that already contains that key.

This error may be returned by any methods that add data to a map -- any of the methods on a NodeAssembler that was yielded by MapAssembler.AssembleKey(), or from the MapAssembler.AssembleEntry() method.

func (ErrRepeatedMapKey) Error added in v0.0.3

func (e ErrRepeatedMapKey) Error() string

type ErrUnmatchable added in v0.4.0

type ErrUnmatchable struct {
	// TypeName will indicate the named type of a node the function was called on.
	TypeName string

	// Reason must always be present.  ErrUnmatchable doesn't say much otherwise.
	Reason error
}

ErrUnmatchable is the error raised when processing data with IPLD Schemas and finding data which cannot be matched into the schema. It will be returned by NodeAssemblers and NodeBuilders when they are fed unmatchable data. As a result, it will also often be seen returned from unmarshalling when unmarshalling into schema-constrained NodeAssemblers.

ErrUnmatchable provides the name of the type in the schema that data couldn't be matched to, and wraps another error as the more detailed reason.

func (ErrUnmatchable) Error added in v0.4.0

func (e ErrUnmatchable) Error() string

func (ErrUnmatchable) Reasonf added in v0.9.0

func (e ErrUnmatchable) Reasonf(format string, a ...interface{}) ErrUnmatchable

Reasonf returns a new ErrUnmatchable with a Reason field set to the Errorf of the arguments. It's a helper function for creating untyped error reasons without importing the fmt package.

type ErrWrongKind

type ErrWrongKind struct {
	// TypeName may optionally indicate the named type of a node the function
	// was called on (if the node was typed!), or, may be the empty string.
	TypeName string

	// MethodName is literally the string for the operation attempted, e.g.
	// "AsString".
	//
	// For methods on nodebuilders, we say e.g. "NodeBuilder.CreateMap".
	MethodName string

	// AppropriateKind describes which Kinds the erroring method would
	// make sense for.
	AppropriateKind KindSet

	// ActualKind describes the Kind of the node the method was called on.
	//
	// In the case of typed nodes, this will typically refer to the 'natural'
	// data-model kind for such a type (e.g., structs will say 'map' here).
	ActualKind Kind
}

ErrWrongKind may be returned from functions on the Node interface when a method is invoked which doesn't make sense for the Kind that node concretely contains.

For example, calling AsString on a map will return ErrWrongKind. Calling Lookup on an int will similarly return ErrWrongKind.

func (ErrWrongKind) Error

func (e ErrWrongKind) Error() string

type Kind added in v0.7.0

type Kind uint8

Kind represents the primitive kind in the IPLD data model. All of these kinds map directly onto serializable data.

Note that Kind contains the concept of "map", but not "struct" or "object" -- those are concepts that could be introduced in a type system layer, but are *not* present in the data model layer, and therefore they aren't included in the Kind enum.

const (
	Kind_Invalid Kind = 0
	Kind_Map     Kind = '{'
	Kind_List    Kind = '['
	Kind_Null    Kind = '0'
	Kind_Bool    Kind = 'b'
	Kind_Int     Kind = 'i'
	Kind_Float   Kind = 'f'
	Kind_String  Kind = 's'
	Kind_Bytes   Kind = 'x'
	Kind_Link    Kind = '/'
)

func (Kind) String added in v0.7.0

func (k Kind) String() string

type KindSet added in v0.7.0

type KindSet []Kind

KindSet is a type with a few enumerated consts that are commonly used (mostly, in error messages).

func (KindSet) Contains added in v0.7.0

func (x KindSet) Contains(e Kind) bool

func (KindSet) String added in v0.7.0

func (x KindSet) String() string

type Link

type Link interface {
	// Prototype should return a LinkPrototype which carries the information
	// to make more Link values similar to this one (but with different hashes).
	Prototype() LinkPrototype

	// String should return a reasonably human-readable debug-friendly representation of the Link.
	// There is no contract that requires that the string be able to be parsed back into a Link value,
	// but the string should be unique (e.g. not elide any parts of the hash).
	String() string
}

Link is a special kind of value in IPLD which can be "loaded" to access more nodes.

Nodes can be a Link: "link" is one of the kinds in the IPLD Data Model; and accordingly there is an `ipld.Kind_Link` enum value, and Node has an `AsLink` method.

Links are considered a scalar value in the IPLD Data Model, but when "loaded", the result can be any other IPLD kind: maps, lists, strings, etc.

Link is an interface in the go-ipld-prime implementation, but the most common instantiation of it comes from the `linking/cid` package, and represents CIDs (see https://github.com/multiformats/cid).

The Link interface says very little by itself; it's generally necessary to use type assertions to unpack more specific forms of data. The only real contract is that the Link must be able to return a LinkPrototype, which must be able to produce new Link values of a similar form. (In practice: if you're familiar with CIDs: Link.Prototype is analogous to cid.Prefix.)

The traversal package contains powerful features for walking through large graphs of Nodes while automatically loading and traversing links as the walk goes.

Note that the Link interface should typically be inhabited by a struct or string, as opposed to a pointer. This is because Link is often desirable to be able to use as a golang map key, and in that context, pointers would not result in the desired behavior.
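
For example, code that needs the underlying CID can use a type assertion, sketched here for a Link value `lnk` using the linking/cid package (imported as cidlink):

if cl, ok := lnk.(cidlink.Link); ok {
	_ = cl.Cid // the underlying github.com/ipfs/go-cid value.
}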

type LinkContext

type LinkContext struct {
	// Ctx is the familiar golang Context pattern.
	// Use this for cancellation, or attaching additional info
	// (for example, perhaps to pass auth tokens through to the storage functions).
	Ctx context.Context

	// Path where the link was encountered.  May be zero.
	//
	// Functions in the traversal package will set this automatically.
	LinkPath Path

	// When traversing data or encoding: the Node containing the link --
	// it may have additional type info, etc, that can be accessed.
	// When building / decoding: not present.
	//
	// Functions in the traversal package will set this automatically.
	LinkNode Node

	// When building data or decoding: the NodeAssembler that will be receiving the link --
	// it may have additional type info, etc, that can be accessed.
	// When traversing / encoding: not present.
	//
	// Functions in the traversal package will set this automatically.
	LinkNodeAssembler NodeAssembler

	// Parent of the LinkNode.  May be zero.
	//
	// Functions in the traversal package will set this automatically.
	ParentNode Node
}

LinkContext is a structure carrying ancillary information that may be used while loading or storing data -- see its usage in BlockReadOpener, BlockWriteOpener, and in the methods on LinkSystem which handle loading and storing data.

A zero value for LinkContext is generally acceptable in any functions that use it. In this case, any operations that need a Context will use Context.Background (thus being uncancellable) and simply have no additional information to work with.

type LinkPrototype added in v0.9.0

type LinkPrototype interface {
	// BuildLink should return a new Link value based on the given hashsum.
	// The hashsum argument should typically be a value returned from a
	// https://golang.org/pkg/hash/#Hash.Sum call.
	//
	// The hashsum reference must not be retained (the caller is free to reuse it).
	BuildLink(hashsum []byte) Link
}

LinkPrototype encapsulates any implementation details and parameters necessary for creating a Link, except for the hash result itself.

LinkPrototype, like Link, is an interface in go-ipld-prime, but the most common instantiation of it comes from the `linking/cid` package, and represents CIDs (see https://github.com/multiformats/cid). If using CIDs as an implementation, LinkPrototype will encapsulate information like multihashType, multicodecType, and cidVersion, for example. (LinkPrototype is analogous to cid.Prefix.)

type LinkSystem added in v0.9.0

type LinkSystem struct {
	EncoderChooser     func(LinkPrototype) (Encoder, error)
	DecoderChooser     func(Link) (Decoder, error)
	HasherChooser      func(LinkPrototype) (hash.Hash, error)
	StorageWriteOpener BlockWriteOpener
	StorageReadOpener  BlockReadOpener
}

LinkSystem is a struct that composes all the individual functions needed to load and store content addressed data using IPLD -- encoding functions, hashing functions, and storage connections -- and then offers the operations a user wants -- Store and Load -- as methods.

Typically, the functions which are fields of LinkSystem are not used directly by users (except to set them, when creating the LinkSystem), and it's the higher level operations such as Store and Load that user code then calls.

The most typical way to get a LinkSystem is from the linking/cid package, which has a factory function called DefaultLinkSystem. The LinkSystem returned by that function will be based on CIDs, and use the multicodec registry and multihash registry to select encodings and hashing mechanisms. The BlockWriteOpener and BlockReadOpener must still be provided by the user; otherwise, only the ComputeLink method will work.

Some implementations of BlockWriteOpener and BlockReadOpener may be found in the storage package. Applications are also free to write their own. Custom wrapping of BlockWriteOpener and BlockReadOpener are also common, and may be reasonable if one wants to build application features that are block-aware.
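
Putting these pieces together, here's a hedged sketch that wires a LinkSystem to the hypothetical in-memory `blocks` openers from earlier, then round-trips a node `n` through Store and Load (imports of linking/cid as cidlink, github.com/ipfs/go-cid, and node/basic assumed; 0x0129 is the dag-json multicodec indicator and 0x13 is sha2-512):

lsys := cidlink.DefaultLinkSystem()
lsys.StorageWriteOpener = writeOpener // the in-memory openers sketched earlier.
lsys.StorageReadOpener = readOpener

lp := cidlink.LinkPrototype{Prefix: cid.Prefix{
	Version:  1,      // usually '1' for new data.
	Codec:    0x0129, // dag-json multicodec indicator.
	MhType:   0x13,   // sha2-512 multihash indicator.
	MhLength: 64,     // sha2-512 digest length.
}}

lnk, err := lsys.Store(ipld.LinkContext{}, lp, n)
if err != nil {
	panic(err)
}
n2, err := lsys.Load(ipld.LinkContext{}, lnk, basicnode.Prototype.Any)
if err != nil {
	panic(err)
}
_ = n2 // n2 has the same content as n.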

func (*LinkSystem) ComputeLink added in v0.9.0

func (lsys *LinkSystem) ComputeLink(lp LinkPrototype, n Node) (Link, error)

ComputeLink returns a Link for the given data, but doesn't do anything else (e.g. it doesn't try to store any of the serial-form data anywhere else).

func (*LinkSystem) Fill added in v0.9.0

func (lsys *LinkSystem) Fill(lnkCtx LinkContext, lnk Link, na NodeAssembler) error

func (*LinkSystem) Load added in v0.9.0

func (lsys *LinkSystem) Load(lnkCtx LinkContext, lnk Link, np NodePrototype) (Node, error)

func (*LinkSystem) MustComputeLink added in v0.9.0

func (lsys *LinkSystem) MustComputeLink(lp LinkPrototype, n Node) Link

func (*LinkSystem) MustFill added in v0.9.0

func (lsys *LinkSystem) MustFill(lnkCtx LinkContext, lnk Link, na NodeAssembler)

func (*LinkSystem) MustLoad added in v0.9.0

func (lsys *LinkSystem) MustLoad(lnkCtx LinkContext, lnk Link, np NodePrototype) Node

func (*LinkSystem) MustStore added in v0.9.0

func (lsys *LinkSystem) MustStore(lnkCtx LinkContext, lp LinkPrototype, n Node) Link

func (*LinkSystem) Store added in v0.9.0

func (lsys *LinkSystem) Store(lnkCtx LinkContext, lp LinkPrototype, n Node) (Link, error)

type ListAssembler added in v0.0.3

type ListAssembler interface {
	AssembleValue() NodeAssembler

	Finish() error

	// ValuePrototype returns a NodePrototype that knows how to build values this list can contain.
	//
	// You often don't need this (because you should be able to
	// just feed data and check errors), but it's here.
	//
	// ValuePrototype, much like the matching method on the MapAssembler interface,
	// requires a parameter specifying the index in the list in order to say
	// what NodePrototype will be acceptable as a value at that position.
	// For many lists (and *all* lists which operate exclusively at the Data Model level),
	// this will return the same NodePrototype regardless of the value of 'idx';
	// the only time this value will vary is when operating with a Schema,
	// and handling the representation NodeAssembler for a struct type with
	// a representation of a list kind.
	// If you know you are operating in a situation that won't have varying
	// NodePrototypes, it is acceptable to call `ValuePrototype(0)` and use the
	// resulting NodePrototype for all reasoning.
	ValuePrototype(idx int64) NodePrototype
}

type ListIterator

type ListIterator interface {
	// Next returns the next index and value.
	//
	// An error value can also be returned at any step: in the case of advanced
	// data structures with incremental loading, it's possible to encounter
	// cancellation or I/O errors at any point in iteration.
	// If an error is returned, the value may be nil.
	Next() (idx int64, value Node, err error)

	// Done returns false as long as there's at least one more entry to iterate.
	// When Done returns true, iteration can stop.
	//
	// Note when implementing iterators for advanced data layouts (e.g. more than
	// one chunk of backing data, which is loaded incrementally): if your
	// implementation does any I/O during the Done method, and it encounters
	// an error, it must return 'false', so that the following Next call
	// has an opportunity to return the error.
	Done() bool
}

ListIterator is an interface for traversing list nodes. Sequential calls to Next() will yield index-value pairs; Done() describes whether iteration should continue.

A loop which iterates from 0 to Node.Length is a valid alternative to using a ListIterator.

type MapAssembler added in v0.0.3

type MapAssembler interface {
	AssembleKey() NodeAssembler   // must be followed by call to AssembleValue.
	AssembleValue() NodeAssembler // must be called immediately after AssembleKey.

	AssembleEntry(k string) (NodeAssembler, error) // shortcut combining AssembleKey and AssembleValue into one step; valid when the key is a string kind.

	Finish() error

	// KeyPrototype returns a NodePrototype that knows how to build keys of a type this map uses.
	//
	// You often don't need this (because you should be able to
	// just feed data and check errors), but it's here.
	//
	// For all Data Model maps, this will answer with a basic concept of "string".
	// For Schema typed maps, this may answer with a more complex type (potentially even a struct type).
	KeyPrototype() NodePrototype

	// ValuePrototype returns a NodePrototype that knows how to build values this map can contain.
	//
	// You often don't need this (because you should be able to
	// just feed data and check errors), but it's here.
	//
	// ValuePrototype requires a parameter describing the key in order to say what
	// NodePrototype will be acceptable as a value for that key, because when using
	// struct types (or union types) from the Schemas system, they behave as maps
	// but have different acceptable types for each field (or member, for unions).
	// For plain maps (that is, not structs or unions masquerading as maps),
	// the empty string can be used as a parameter, and the returned NodePrototype
	// can be assumed applicable for all values.
	// Using an empty string for a struct or union will return nil,
	// as will using any string which isn't a field or member of those types.
	//
	// (Design note: a string is sufficient for the parameter here rather than
	// a full Node, because the only cases where the value types vary are also
	// cases where the keys may not be complex.)
	ValuePrototype(k string) NodePrototype
}

MapAssembler assembles a map node! (You guessed it.)

Methods on MapAssembler must be called in a valid order: assemble a key, then assemble a value, then loop as long as desired; when finished, call 'Finish'.

Incorrect order invocations will panic. Calling AssembleKey twice in a row will panic; calling AssembleValue before finishing using the NodeAssembler from AssembleKey will panic; calling AssembleValue twice in a row will panic; etc.

Note that the NodeAssembler yielded from AssembleKey has additional behavior: if the node assembled there matches a key already present in the map, that assembler will emit the error!

type MapIterator

type MapIterator interface {
	// Next returns the next key-value pair.
	//
	// An error value can also be returned at any step: in the case of advanced
	// data structures with incremental loading, it's possible to encounter
	// cancellation or I/O errors at any point in iteration.
	// If an error is returned, the key and value may be nil.
	Next() (key Node, value Node, err error)

	// Done returns false as long as there's at least one more entry to iterate.
	// When Done returns true, iteration can stop.
	//
	// Note when implementing iterators for advanced data layouts (e.g. more than
	// one chunk of backing data, which is loaded incrementally): if your
	// implementation does any I/O during the Done method, and it encounters
	// an error, it must return 'false', so that the following Next call
	// has an opportunity to return the error.
	Done() bool
}

MapIterator is an interface for traversing map nodes. Sequential calls to Next() will yield key-value pairs; Done() describes whether iteration should continue.

Iteration order is defined to be stable: two separate MapIterator created to iterate the same Node will yield the same key-value pairs in the same order. The order itself may be defined by the Node implementation: some Nodes may retain insertion order, and some may return iterators which always yield data in sorted order, for example.
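
A typical iteration loop, as a sketch (assuming a map Node `n`):

itr := n.MapIterator()
for !itr.Done() {
	k, v, err := itr.Next()
	if err != nil {
		panic(err) // e.g. I/O errors from incrementally-loaded data.
	}
	_, _ = k, v // ... use the key and value nodes ...
}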

type Node

type Node interface {
	// Kind returns a value from the Kind enum describing what the
	// essential serializable kind of this node is (map, list, integer, etc).
	// Most other handling of a node requires first switching upon the kind.
	Kind() Kind

	// LookupByString looks up a child object in this node and returns it.
	// The returned Node may be any of the Kind:
	// a primitive (string, int64, etc), a map, a list, or a link.
	//
	// If the Kind of this Node is not Kind_Map, a nil node and an error
	// will be returned.
	//
	// If the key does not exist, a nil node and an error will be returned.
	LookupByString(key string) (Node, error)

	// LookupByNode is the equivalent of LookupByString, but takes a reified Node
	// as a parameter instead of a plain string.
	// This mechanism is useful if working with typed maps (if the key types
	// have constraints, and you already have a reified `schema.TypedNode` value,
	// using that value can save parsing and validation costs);
	// and may simply be convenient if you already have a Node value in hand.
	//
	// (When writing generic functions over Node, a good rule of thumb is:
	// when handling a map, check for `schema.TypedNode`, and in this case prefer
	// the LookupByNode(Node) method; otherwise, favor LookupByString; typically
	// implementations will have their fastest paths thusly.)
	LookupByNode(key Node) (Node, error)

	// LookupByIndex is the equivalent of LookupByString but for indexing into a list.
	// As with LookupByString, the returned Node may be any of the Kind:
	// a primitive (string, int64, etc), a map, a list, or a link.
	//
	// If the Kind of this Node is not Kind_List, a nil node and an error
	// will be returned.
	//
	// If idx is out of range, a nil node and an error will be returned.
	LookupByIndex(idx int64) (Node, error)

	// LookupBySegment will act as either LookupByString or LookupByIndex,
	// whichever is contextually appropriate.
	//
	// Using LookupBySegment may imply an "atoi" conversion if used on a list node,
	// or an "itoa" conversion if used on a map node.  If an "atoi" conversion
	// takes place, it may error, and this method may return that error.
	LookupBySegment(seg PathSegment) (Node, error)

	// MapIterator returns an iterator which yields key-value pairs
	// traversing the node.
	// If the node kind is anything other than a map, nil will be returned.
	//
	// The iterator will yield every entry in the map; that is, it
	// can be expected that itr.Next will be called node.Length times
	// before itr.Done becomes true.
	MapIterator() MapIterator

	// ListIterator returns an iterator which yields index-value pairs
	// traversing the node.
	// If the node kind is anything other than a list, nil will be returned.
	//
	// The iterator will yield every entry in the list; that is, it
	// can be expected that itr.Next will be called node.Length times
	// before itr.Done becomes true.
	ListIterator() ListIterator

	// Length returns the length of a list, or the number of entries in a map,
	// or -1 if the node is not of list nor map kind.
	Length() int64

	// Absent nodes are returned when traversing a struct field that is
	// defined by a schema but unset in the data.  (Absent nodes are not
	// possible otherwise; you'll only see them from `schema.TypedNode`.)
	// The absent flag is necessary so iterating over structs can
	// unambiguously make the distinction between values that are
	// present-and-null versus values that are absent.
	//
	// Absent nodes respond to `Kind()` as `ipld.Kind_Null`,
	// for lack of any better descriptive value; you should therefore
	// always check IsAbsent rather than just a switch on kind
	// when it may be important to handle absent values distinctly.
	IsAbsent() bool

	IsNull() bool
	AsBool() (bool, error)
	AsInt() (int64, error)
	AsFloat() (float64, error)
	AsString() (string, error)
	AsBytes() ([]byte, error)
	AsLink() (Link, error)

	// Prototype returns a NodePrototype which can describe some properties of this node's implementation,
	// and also be used to get a NodeBuilder,
	// which can be use to create new nodes with the same implementation as this one.
	//
	// For typed nodes, the NodePrototype will also implement schema.Type.
	//
	// For Advanced Data Layouts, the NodePrototype will encapsulate any additional
	// parameters and configuration of the ADL, and will also (usually)
	// implement NodePrototypeSupportingAmend.
	//
	// Calling this method should not cause an allocation.
	Prototype() NodePrototype
}

Node represents a value in IPLD. Any point in a tree of data is a node: scalar values (like int64, string, etc) are nodes, and so are recursive values (like map and list).

Nodes and kinds are described in the IPLD specs at https://github.com/ipld/specs/blob/master/data-model-layer/data-model.md .

Methods on the Node interface cover the superset of all possible methods for all possible kinds -- but some methods only make sense for particular kinds, and thus will only make sense to call on values of the appropriate kind. (For example, 'Length' on an integer doesn't make sense, and 'AsInt' on a map certainly doesn't work either!) Use the Kind method to find out the kind of value before calling kind-specific methods. Individual method documentation states which kinds the method is valid for. (If you're familiar with the stdlib reflect package, you'll find the design of the Node interface very comparable to 'reflect.Value'.)

The Node interface is read-only. All of the methods on the interface are for examining values, and implementations should be immutable. The companion interface, NodeBuilder, provides the matching writable methods, and should be used to create a (thence immutable) Node.

Keeping Node immutable and separating mutation into NodeBuilder makes it possible to perform caching (or rather, memoization, since there's no such thing as cache invalidation for immutable systems) of computed properties of Node; use copy-on-write algorithms for memory efficiency; and to generally build pleasant APIs. Many library functions will rely on the immutability of Node (e.g., assuming that pointer-equal nodes do not change in value over time), so any user-defined Node implementations should be careful to uphold the immutability contract.

There are many different concrete types which implement Node. The primary purpose of various node implementations is to organize memory in the program in different ways -- some in-memory layouts may be more optimal for some programs than others, and changing the Node (and NodeBuilder) implementations lets the programmer choose.

For concrete implementations of Node, check out the "./node/" folder, and the packages within it. "node/basic" should probably be your first start; the Node and NodeBuilder implementations in that package work for any data. Other packages are optimized for specific use-cases. Codegen tools can also be used to produce concrete implementations of Node; these may be specific to certain data, but still conform to the Node interface for interoperability and to support higher-level functions.

Nodes may also be *typed* -- see the 'schema' package and `schema.TypedNode` interface, which extends the Node interface with additional methods. Typed nodes have additional constraints and behaviors: for example, they may be a "struct" and have a specific type/structure to what data you can put inside them, but still behave as a regular Node in all ways this interface specifies (so you can traverse typed nodes, etc, without any additional special effort).
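
Handling a Node of unknown kind typically begins with a switch on Kind; a brief sketch covering a few kinds (fmt import assumed):

switch n.Kind() {
case ipld.Kind_String:
	s, _ := n.AsString()
	fmt.Println("string node:", s)
case ipld.Kind_Map:
	fmt.Println("map node with", n.Length(), "entries")
case ipld.Kind_Link:
	lnk, _ := n.AsLink()
	fmt.Println("link node:", lnk)
}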

var Absent Node = absentNode{}
var Null Node = nullNode{}

type NodeAssembler added in v0.0.3

type NodeAssembler interface {
	BeginMap(sizeHint int64) (MapAssembler, error)
	BeginList(sizeHint int64) (ListAssembler, error)
	AssignNull() error
	AssignBool(bool) error
	AssignInt(int64) error
	AssignFloat(float64) error
	AssignString(string) error
	AssignBytes([]byte) error
	AssignLink(Link) error

	AssignNode(Node) error // if you already have a completely constructed subtree, this method puts the whole thing in place at once.

	// Prototype returns a NodePrototype describing what kind of value we're assembling.
	//
	// You often don't need this (because you should be able to
	// just feed data and check errors), but it's here.
	//
	// Using `this.Prototype().NewBuilder()` to produce a new `Node`,
	// then giving that node to `this.AssignNode(n)` should always work.
	// (Note that this is not necessarily an _exclusive_ statement on what
	// sort of values will be accepted by `this.AssignNode(n)`.)
	Prototype() NodePrototype
}

NodeAssembler is the interface that describes all the ways we can set values in a node that's under construction.

To create a Node, you should start with a NodeBuilder (which contains a superset of the NodeAssembler methods, and can return the finished Node from its `Build` method).

Why do both this and the NodeBuilder interface exist? When creating trees of nodes, recursion works over the NodeAssembler interface. This is important to efficient library internals, because avoiding the requirement to be able to return a Node at any random point in the process relieves internals from needing to implement 'freeze' features. (This is useful in turn because implementing those 'freeze' features in a language without first-class/compile-time support for them (as golang is) would tend to push complexity and costs to execution time; we'd rather not.)

type NodeBuilder

type NodeBuilder interface {
	NodeAssembler

	// Build returns the new value after all other assembly has been completed.
	//
	// A method on the NodeAssembler that finishes assembly of the data must
	// be called first (e.g., any of the "Assign*" methods, or "Finish" if
	// the assembly was for a map or a list); that finishing method still has
	// all responsibility for validating the assembled data and returning
	// any errors from that process.
	// (Correspondingly, there is no error return from this method.)
	Build() Node

	// Resets the builder.  It can hereafter be used again.
	// Reusing a NodeBuilder can reduce allocations and improve performance.
	//
	// Only call this if you're going to reuse the builder.
	// (Otherwise, it's unnecessary, and may cause an unwanted allocation).
	Reset()
}
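
A sketch of reusing a builder via Reset, which can reduce allocations in hot loops (using the basicnode string prototype for brevity):

nb := basicnode.Prototype.String.NewBuilder()
for _, s := range []string{"a", "b"} {
	if err := nb.AssignString(s); err != nil {
		panic(err)
	}
	n := nb.Build() // n is immutable from here on.
	_ = n
	nb.Reset() // ready for the next assembly.
}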

type NodePrototype added in v0.5.0

type NodePrototype interface {
	// NewBuilder returns a NodeBuilder that can be used to create a new Node.
	//
	// Note that calling NewBuilder often performs an allocation
	// (while in contrast, getting a NodePrototype typically does not!) --
	// this may be consequential when writing high performance code.
	NewBuilder() NodeBuilder
}

NodePrototype describes a node implementation (all Nodes have a NodePrototype), and a NodePrototype can always be used to get a NodeBuilder.

A NodePrototype may also provide other information about implementation; such information is specific to this library ("prototype" isn't a concept you'll find in the IPLD Specifications), and is usually provided through feature-detection interfaces (for example, see NodePrototypeSupportingAmend).

Generic algorithms for working with IPLD Nodes make use of NodePrototype to get builders for new nodes when creating data, and can also use the feature-detection interfaces to help decide what kind of operations will be optimal to use on a given node implementation.

Note that NodePrototype is not the same as schema.Type. NodePrototype is a (golang-specific!) way to reflect upon the implementation and in-memory layout of some IPLD data. schema.Type is information about how a group of nodes is related in a schema (if they have one!) and the rules that the type mandates the node must follow. (Every node must have a prototype; but schema types are an optional feature.)

type NodePrototypeSupportingAmend added in v0.5.0

type NodePrototypeSupportingAmend interface {
	AmendingBuilder(base Node) NodeBuilder
}

NodePrototypeSupportingAmend is a feature-detection interface that can be used on a NodePrototype to see if it's possible to build new nodes of this style while sharing some internal data in a copy-on-write way.

For example, Nodes using an Advanced Data Layout will typically support this behavior, and since ADLs are often used for handling large volumes of data, detecting and using this feature can result in significant performance savings.

type Path

type Path struct {
	// contains filtered or unexported fields
}

Path describes a series of steps across a tree or DAG of Node, where each segment in the path is a map key or list index (literally, Path is a slice of PathSegment values). Path is used in describing progress in a traversal; and can also be used as an instruction for traversing from one Node to another. Path values will also often be encountered as part of error messages.

(Note that Paths are useful as an instruction for traversing from *one* Node to *one* other Node; to do a walk from one Node and visit *several* Nodes based on some sort of pattern, look to IPLD Selectors, and the 'traversal/selector' package in this project.)

Path values are always relative. Observe how 'traversal.Focus' requires both a Node and a Path argument -- where to start, and where to go, respectively. Similarly, error values which include a Path will be speaking in reference to the "starting Node" in whatever context they arose from.

The canonical form of a Path is as a list of PathSegment. Each PathSegment is a string; by convention, the string should be in UTF-8 encoding and use NFC normalization, but all operations will regard the string as its constituent eight-bit bytes.

There are no illegal or magical characters in IPLD Paths (in particular, do not mistake them for UNIX system paths). IPLD Paths can only go down: that is, each segment must traverse one node. There is no ".." which means "go up"; and there is no "." which means "stay here". IPLD Paths have no magic behavior around characters such as "~". IPLD Paths do not have a concept of "globs" nor behave specially for a path segment string of "*" (but you may wish to see 'Selectors' for globbing-like features that traverse over IPLD data).

An empty string is a valid PathSegment. (This leads to some unfortunate complications when wishing to represent paths in a simple string format; however, consider that maps do exist in serialized data in the wild where an empty string is used as the key: it is important we be able to correctly describe and address this!)

A string containing "/" (or even being simply "/"!) is a valid PathSegment. (As with empty strings, this is unfortunate (in particular, because it very much doesn't match up well with expectations popularized by UNIX-like filesystems); but, as with empty strings, maps which contain such a key certainly exist, and it is important that we be able to regard them!)

A string starting, ending, or otherwise containing the NUL (\x00) byte is also a valid PathSegment. This follows from the rule of "a string is regarded as its constituent eight-bit bytes": an all-zero byte is not exceptional. In golang, this doesn't pose particular difficulty, but note this would be of marked concern for languages which have "C-style nul-terminated strings".

For an IPLD Path to be represented as a string, an encoding system including escaping is necessary. At present, there is not a single canonical specification for such an escaping; we expect to decide one in the future, but this is not yet settled and done. (This implementation has a 'String' method, but it contains caveats and may be ambiguous for some content. This may be fixed in the future.)
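
To make the "where to start, where to go" relationship described above concrete, here is a sketch using traversal.Focus (assuming this package is imported as ipld, and 'root' is some already-built map Node containing a "friends" list):

	err := traversal.Focus(root, ipld.ParsePath("friends/0/name"), func(prog traversal.Progress, n ipld.Node) error {
		name, err := n.AsString()
		if err != nil {
			return err
		}
		fmt.Println(name, "found at", prog.Path) // prog.Path is relative to 'root'.
		return nil
	})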

func NewPath added in v0.0.2

func NewPath(segments []PathSegment) Path

NewPath returns a Path composed of the given segments.

This constructor function does a defensive copy, in case your segments slice should mutate in the future. (Use NewPathNocopy if this is a performance concern, and you're sure you know what you're doing.)
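
A small sketch of the copy semantics (assuming this package is imported as ipld):

	segs := []ipld.PathSegment{
		ipld.PathSegmentOfString("foo"),
		ipld.PathSegmentOfString("bar"),
	}
	p := ipld.NewPath(segs)
	segs[0] = ipld.PathSegmentOfString("changed") // no effect on p: the slice was copied.
	fmt.Println(p)                                // "foo/bar"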

func NewPathNocopy added in v0.0.2

func NewPathNocopy(segments []PathSegment) Path

NewPathNocopy is identical to NewPath but trusts that the segments slice you provide will not be mutated.

func ParsePath

func ParsePath(pth string) Path

ParsePath converts a string to an IPLD Path, doing a basic parsing of the string using "/" as a delimiter to produce a segmented Path. This is a handy, but not a general-purpose nor spec-compliant (!), way to create a Path: it cannot represent all valid paths.

Multiple subsequent "/" characters will be silently collapsed. E.g., `"foo///bar"` will be treated equivalently to `"foo/bar"`. Prefixed and suffixed extraneous "/" characters are also discarded. This makes this constructor incapable of handling some possible Path values (specifically: paths with empty segments cannot be created with this constructor).

There is no escaping mechanism used by this function. This makes this constructor incapable of handling some possible Path values (specifically, a path segment containing "/" cannot be created, because it will always be interpreted as a segment separator).

No other "cleaning" of the path occurs. See the documentation of the Path struct; in particular, note that ".." does not mean "go up", nor does "." mean "stay here" -- correspondingly, there isn't anything to "clean" in the same sense as 'filepath.Clean' from the standard library filesystem path packages would.

If the provided string contains unprintable characters, or non-UTF-8 or non-NFC-canonicalized bytes, no remark will be made about this, and those bytes will remain part of the PathSegments in the resulting Path.
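
For example (a sketch):

	p := ipld.ParsePath("/foo///bar/")
	fmt.Println(p.Len()) // 2 -- leading, trailing, and repeated slashes were discarded.
	fmt.Println(p)       // "foo/bar"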

func (Path) AppendSegment

func (p Path) AppendSegment(ps PathSegment) Path

AppendSegment is as per Join, but a shortcut when appending a single segment.

func (Path) AppendSegmentString added in v0.0.2

func (p Path) AppendSegmentString(ps string) Path

AppendSegmentString is as per Join, but a shortcut when appending single segments using strings.

func (Path) Join

func (p Path) Join(p2 Path) Path

Join creates a new path composed of the concatenation of this and the given path's segments.
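
A sketch showing that Join and the Append* shortcuts compose without mutating the receiver:

	base := ipld.ParsePath("a/b")
	joined := base.Join(ipld.ParsePath("c/d"))
	appended := base.AppendSegmentString("c").AppendSegmentString("d")
	fmt.Println(joined)                               // "a/b/c/d"
	fmt.Println(joined.String() == appended.String()) // true
	fmt.Println(base)                                 // still "a/b": Join returned a new Path.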

func (Path) Last added in v0.7.0

func (p Path) Last() PathSegment

Last returns the trailing segment of the path.

func (Path) Len added in v0.7.0

func (p Path) Len() int

Len returns the number of segments in this path.

Zero segments means the path refers to "the current node". One segment means it refers to a child of the current node; etc.

func (Path) Parent

func (p Path) Parent() Path

Parent returns a path with the last of its segments popped off (or the zero path if it's already empty).

func (Path) Segments

func (p Path) Segments() []PathSegment

Segments returns a slice of the path segment strings.

It is not lawful to mutate nor append the returned slice.

func (Path) Shift added in v0.7.0

func (p Path) Shift() (PathSegment, Path)

Shift returns the first segment of the path together with the remaining path after that first segment. If applied to a zero-length path, it returns an empty segment and the same zero-length path.
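
Shift makes it natural to consume a path step by step, as a traversal loop might (a sketch):

	p := ipld.ParsePath("a/b/c")
	for p.Len() > 0 {
		var seg ipld.PathSegment
		seg, p = p.Shift()
		fmt.Println("step:", seg.String())
	}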

func (Path) String

func (p Path) String() string

String representation of a Path is simply the join of each segment with '/'. It does not include a leading nor trailing slash.

This is a handy, but not a general-purpose nor spec-compliant (!), way to reduce a Path to a string. There is no escaping mechanism used by this function, and as a result, not all possible valid Path values (such as those with empty segments or with segments containing "/") can be encoded unambiguously. For Path values containing these problematic segments, ParsePath applied to the string returned from this function may return a nonequal Path value.

No escaping for unprintable characters is provided. No guarantee is made that the resulting string is UTF-8 or NFC-canonicalized unless all the constituent PathSegments had those properties.
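
A sketch of the round-trip hazard noted above:

	p := ipld.NewPath([]ipld.PathSegment{ipld.PathSegmentOfString("a/b")})
	fmt.Println(p.Len())                          // 1
	fmt.Println(ipld.ParsePath(p.String()).Len()) // 2 -- the "/" inside the segment was reinterpreted as a separator.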

func (Path) Truncate

func (p Path) Truncate(i int) Path

Truncate returns a path with only as many segments remaining as requested.

type PathSegment added in v0.0.2

type PathSegment struct {
	// contains filtered or unexported fields
}

PathSegment can describe either a key in a map, or an index in a list.

Create a PathSegment via either ParsePathSegment, PathSegmentOfString, or PathSegmentOfInt; or, via one of the constructors of Path, which will implicitly create PathSegment internally. Using PathSegment's natural zero value directly is discouraged (it will act like ParsePathSegment("0"), which is likely not what you'd expect).

Path segments are "stringly typed" -- they may be interpreted as either strings or ints depending on context. A path segment of "123" will be used as a string when traversing a node of map kind; and it will be converted to an integer when traversing a node of list kind. (If a path segment string cannot be parsed to an int when traversing a node of list kind, then traversal will error.) It is not possible to ask which kind (string or integer) a PathSegment is, because that is not defined -- this is *only* interpreted contextually.

Internally, PathSegment will store either a string or an integer, depending on how it was constructed, and will automatically convert to the other on request. (This means if two pieces of code communicate using PathSegment, one producing ints and the other expecting ints, then they will work together efficiently.) PathSegment in a Path produced by ParsePath generally have all strings internally, because there is no distinction possible when parsing a Path string (and attempting to pre-parse all strings into ints "just in case" would waste time in almost all cases).

Be cautious of attempting to use PathSegment as a map key! Due to the implementation detail of internal storage, it's possible for PathSegment values which are "equal" per PathSegment.Equal's definition to still be unequal in the eyes of golang's native maps. You should probably use the string values of the PathSegment as map keys. (This has the additional bonus of hitting a special fastpath that the golang built-in maps have specifically for plain string keys.)
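
A sketch of the recommended map-key usage:

	seen := map[string]bool{} // keyed by the string form, not by PathSegment itself.
	seen[ipld.PathSegmentOfInt(2).String()] = true
	fmt.Println(seen[ipld.PathSegmentOfString("2").String()]) // true -- both yield the key "2".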

func ParsePathSegment added in v0.0.2

func ParsePathSegment(s string) PathSegment

ParsePathSegment parses a string into a PathSegment, handling any escaping if present. (Note: there is currently no escaping specified for PathSegments, so this is currently functionally equivalent to PathSegmentOfString.)

func PathSegmentOfInt added in v0.0.2

func PathSegmentOfInt(i int64) PathSegment

PathSegmentOfInt boxes an int into a PathSegment.

func PathSegmentOfString added in v0.0.2

func PathSegmentOfString(s string) PathSegment

PathSegmentOfString boxes a string into a PathSegment. It does not attempt to parse any escaping; use ParsePathSegment for that.

func (PathSegment) Equals added in v0.0.2

func (x PathSegment) Equals(o PathSegment) bool

Equals checks if two PathSegment values are equal.

Because PathSegment is "stringly typed", this comparison does not regard if one of the segments is stored as a string and one is stored as an int; if string values of two segments are equal, they are "equal" overall. In other words, `PathSegmentOfInt(2).Equals(PathSegmentOfString("2")) == true`! (You should still typically prefer this method over converting two segments to string and comparing those, because even though that may be functionally correct, this method will be faster if they're both ints internally.)
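
Concretely (a sketch):

	fmt.Println(ipld.PathSegmentOfInt(2).Equals(ipld.PathSegmentOfString("2")))       // true
	fmt.Println(ipld.PathSegmentOfString("02").Equals(ipld.PathSegmentOfString("2"))) // false -- compared as strings, "02" != "2".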

func (PathSegment) Index added in v0.0.2

func (ps PathSegment) Index() (int64, error)

Index returns the PathSegment as an integer, or returns an error if the segment is a string that can't be parsed as an int.
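
For example (a sketch):

	if i, err := ipld.PathSegmentOfString("42").Index(); err == nil {
		fmt.Println(i) // 42
	}
	if _, err := ipld.PathSegmentOfString("foo").Index(); err != nil {
		fmt.Println("not usable as a list index:", err)
	}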

func (PathSegment) String added in v0.0.2

func (ps PathSegment) String() string

String returns the PathSegment as a string.

Directories

Path / Synopsis

_rsrch
adl
rot13adl
	rot13adl is a demo ADL -- its purpose is to show what an ADL and its public interface can look like.
dagjson2
	Several groups of exported symbols are available at different levels of abstraction: - You might just want the multicodec registration! Then never deal with this package directly again.
jst
	"jst" -- JSON Table -- is a format that's parsable as JSON, while sprucing up the display to humans using the non-significant whitespace cleverly.
raw
	Package raw implements IPLD's raw codec, which simply writes and reads a Node which can be represented as bytes.
fluent
	The fluent package offers helper utilities for using NodeAssembler more tersely by providing an interface that handles all errors for you, and allows use of closures for any recursive assembly so that creating trees of data results in indentation for legibility.
qp
	qp is similar to fluent/quip, but with a bit more magic.
quip
	quip is a package of quick ipld patterns.
linking
cid
must
	Package 'must' provides another alternative to the 'fluent' package, providing many helpful functions for wrapping methods with multiple returns into a single return (converting errors into panics).
node
	The 'node' package gathers various general purpose Node implementations; the first one you should jump to is 'node/basic'.
tests/corpus
	The corpus package exports some values useful for building tests and benchmarks.
dmt
storage
	Storage contains some simple implementations for the ipld.BlockReadOpener and ipld.BlockWriteOpener interfaces, which are typically used by composition in a LinkSystem.
bsadapter Module
bsrvadapter Module
dsadapter Module
traversal
	This package provides functional utilities for traversing and transforming IPLD nodes.
