ipld

package module
v0.21.0
Published: Aug 10, 2023 License: MIT Imports: 13 Imported by: 538

README

go-ipld-prime

go-ipld-prime is an implementation of the IPLD spec interfaces, a batteries-included set of codec implementations of IPLD for CBOR and JSON, and tooling for basic operations on IPLD objects (traversals, etc).

API

The API is split into several packages based on the responsibility of the code. The most central interfaces are in the base package, but you'll certainly need to import additional packages to get concrete implementations into action.

Roughly speaking, the core package interfaces are all about the IPLD Data Model; the codec/* packages contain functions for parsing serial data into the IPLD Data Model, and converting Data Model content back into serial formats; the traversal package is an example of higher-order functions on the Data Model; concrete ipld.Node implementations ready to use can be found in packages in the node/* directory; and several additional packages contain advanced features such as IPLD Schemas.

(Because the codecs, as well as higher-order features like traversals, are implemented in a separate package from the core interfaces or any of the Node implementations, you can be sure they're not doing any funky "magic" -- all this stuff will work the same if you want to write your own extensions, whether for new Node implementations or new codecs, or new higher-order functions!)

  • github.com/ipld/go-ipld-prime -- imported as just ipld -- contains the core interfaces for IPLD. The most important interfaces are Node, NodeBuilder, Path, and Link.
  • github.com/ipld/go-ipld-prime/node/basicnode -- provides concrete implementations of Node and NodeBuilder which work for any kind of data, using unstructured memory.
  • github.com/ipld/go-ipld-prime/node/bindnode -- provides concrete implementations of Node and NodeBuilder which store data in native golang structures, interacting with it via reflection. Also supports IPLD Schemas!
  • github.com/ipld/go-ipld-prime/traversal -- contains higher-order functions for traversing graphs of data easily.
  • github.com/ipld/go-ipld-prime/traversal/selector -- contains selectors, which are sort of like regexps, but for trees and graphs of IPLD data!
  • github.com/ipld/go-ipld-prime/codec -- parent package of all the codec implementations!
  • github.com/ipld/go-ipld-prime/codec/dagcbor -- implementations of marshalling and unmarshalling as CBOR (a fast, binary serialization format).
  • github.com/ipld/go-ipld-prime/codec/dagjson -- implementations of marshalling and unmarshalling as JSON (a popular human readable format).
  • github.com/ipld/go-ipld-prime/linking/cid -- imported as cidlink -- provides concrete implementations of Link as a CID. Also, the multicodec registry.
  • github.com/ipld/go-ipld-prime/schema -- contains the schema.Type and schema.TypedNode interface declarations, which represent IPLD Schema type information.
  • github.com/ipld/go-ipld-prime/node/typed -- provides concrete implementations of schema.TypedNode which decorate a basic Node at runtime to have additional features described by IPLD Schemas.

Getting Started

Let's say you want to create some data programmatically, and then serialize it, or save it as blocks.

You've got a ton of different options, depending on which golang conventions you want to use.

Once you've got a Node full of data, you can serialize it:

https://pkg.go.dev/github.com/ipld/go-ipld-prime#example-package-CreateDataAndMarshal

But probably you want to do more than that; probably you want to store this data as a block, and get a CID that links back to it. For this you use LinkSystem:

https://pkg.go.dev/github.com/ipld/go-ipld-prime/linking#example-LinkSystem.Store

Hopefully these pointers give you some useful getting-started focal points. The API docs should help from here on out. We also highly recommend scanning the godocs for other pieces of example code, in various packages!

Let us know in issues, chat, or other community spaces if you need more help, or have suggestions on how we can improve the getting-started experiences!

Other IPLD Libraries

The IPLD specifications are designed to be language-agnostic. Many implementations exist in a variety of languages.

For overall behaviors and specifications, refer to the IPLD website, or its source, in IPLD meta repo:

  • https://ipld.io/
  • https://github.com/ipld/ipld/ You should find specs in the specs/ dir there, human-friendly docs in the docs/ dir, and information about why things are designed the way they are mostly in the design/ directories.

There are also pages in the IPLD website specifically about golang IPLD libraries, and your alternatives: https://ipld.io/libraries/golang/

distinctions from go-ipld-interface & go-ipld-cbor

This library ("go ipld prime") is the current head of development for golang IPLD, and we recommend new developments in golang be done using this library as the basis.

However, several other libraries exist in golang for working with IPLD data. Most of these predate go-ipld-prime and no longer receive active development, but since they do support a lot of other software, you may continue to see them around for a while. go-ipld-prime is generally serially compatible with these -- just like it is with IPLD libraries in other languages.

In terms of programmatic API and features, go-ipld-prime is a clean take on the IPLD interfaces, and chose to address several design decisions very differently than the older generation of libraries did:

  • The Node interfaces map cleanly to the IPLD Data Model;
  • Many features known to be legacy are dropped;
  • The Link implementations are purely CIDs (no "name" nor "size" properties);
  • The Path implementations are provided in the same box;
  • The JSON and CBOR implementations are provided in the same box;
  • Several odd dependencies on blockstore and other interfaces that were closely coupled with IPFS are replaced by simpler, less-coupled interfaces;
  • New features like IPLD Selectors are only available from go-ipld-prime;
  • New features like ADLs (Advanced Data Layouts), which provide features like transparent sharding and indexing for large data, are only available from go-ipld-prime;
  • Declarative transformations can be applied to IPLD data (defined in terms of the IPLD Data Model) using go-ipld-prime;
  • and many other small refinements.

In particular, the clean and direct mapping of "Node" to concepts in the IPLD Data Model ensures a much more consistent set of rules when working with go-ipld-prime data, regardless of which codecs are involved. (Codec-specific embellishments and edge-cases were common in the previous generation of libraries.) This clarity is also what provides the basis for features like Selectors, ADLs, and operations such as declarative transformations.

Many of these changes had been discussed for the other IPLD codebases as well, but we chose a clean-break v2 as a more viable project-management path. Both go-ipld-prime and these legacy libraries can co-exist on the same import path, and both refer to the same kinds of serial data. Projects wishing to migrate can do so smoothly and at their leisure.

We now consider many of the earlier golang IPLD libraries to be de facto deprecated, and you should expect new features here, rather than in those libraries. (Those libraries still won't be going away anytime soon, but we really don't recommend new construction on them.)

migrating

For recommendations on where to start when migrating: see README_migrationGuide. That document will provide examples of which old concepts and API names map to which new APIs, and should help set you on the right track.

unixfsv1

Lots of people who hear about IPLD have heard about it through IPFS. IPFS has IPLD-native APIs, but IPFS also makes heavy use of a specific system called "UnixFSv1", so people often wonder if UnixFSv1 is supported in IPLD libraries.

The answer is "yes" -- but it's not part of the core.

UnixFSv1 is now treated as an ADL, and a go-ipld-prime compatible implementation can be found in the ipfs/go-unixfsnode repo.

Additionally, the codec used in UnixFSv1 -- dag-pb -- can be found implemented in the ipld/go-codec-dagpb repo.

A "some assembly required" advisory may still be in effect for these pieces; check the readmes in those repos for details on what they support.

The move to making UnixFSv1 a non-core system has been an arduous retrofit. However, framing it as an ADL also provides many advantages:

  • it demonstrates that ADLs as a plugin system work, and others can develop new systems in this pattern!
  • it has made pathing over UnixFSv1 much more standard and well-defined
  • this standardization means systems like Selectors work naturally over UnixFSv1...
  • ... which in turn means anything using them (ex: CAR export; graphsync; etc) can very easily be asked to produce a merkle-proof for a path over UnixFSv1 data, without requiring the querier to know about the internals. Whew!

We hope users and developers alike will find value in how these systems are now layered.

Change Policy

The go-ipld-prime library is ready to use, and we value stability highly.

We make releases periodically. However, using a commit hash to pin versions precisely when depending on this library is also perfectly acceptable. (Only commit hashes on the master branch can be expected to persist, however; depending on a commit hash in a branch is not recommended. See development branches.)

We maintain a CHANGELOG! Please read it, when updating!

We do make reasonable attempts to minimize the degree of changes to the library which will create "breaking change" experiences for downstream consumers, and we do document these in the changelog (often, even with specific migration instructions). However, we do also still recommend running your own compile and test suites as a matter of course after updating.

You can help make developing this library easier by staying up-to-date as a downstream consumer! When we do discover a need for API changes, we typically try to introduce the new API first, and do at least one release tag in which the old API is deprecated (but not yet removed). We will all be able to develop software faster, together, as an ecosystem, if libraries can keep reasonably closely up-to-date with the most recent tags.

Version Names

When a tag is made, version number steps in go-ipld-prime advance as follows:

  1. the number bumps when the lead maintainer says it does.
  2. even numbers should be easy upgrades; odd numbers may change things.
  3. the version will start with v0. until further notice.

This is WarpVer.

These version numbers are provided as hints about what to expect, but ultimately, you should always invoke your compiler and your tests to tell you about compatibility, as well as read the changelog.

Updating

Read the CHANGELOG.

Really, read it. We put exact migration instructions in there, as much as possible. Even outright scripts, when feasible.

An even-number release tag is usually made very shortly before an odd number tag, so if you're cautious about absorbing changes, you should update to the even number first, run all your tests, and then upgrade to the odd number. Usually the step to the even number should go off without a hitch, but if you do get problems from advancing to an even number tag, A) you can be pretty sure it's a bug, and B) you didn't have to edit a bunch of code before finding that out.

Development branches

The following are norms you can expect of changes to this codebase, and the treatment of branches:

  • The master branch will not be force-pushed.
    • (exceptional circumstances may exist, but such exceptions will only be considered valid for about as long after push as the "$N-second-rule" about dropped food).
    • Therefore, commit hashes on master are gold to link against.
  • All other branches can be force-pushed.
    • Therefore, commit hashes not reachable from the master branch are inadvisable to link against.
  • If it's on master, it's understood to be good, in as much as we can tell.
    • Changes and features don't get merged until their tests pass!
    • Packages of "alpha" developmental status may exist, and be more subject to change than other more finalized parts of the repo, but their self-tests will at least pass.
  • Development proceeds -- both starting from and ending on -- the master branch.
    • There are no other long-running supported-but-not-master branches.
    • The existence of tags at any particular commit do not indicate that we will consider starting a long running and supported diverged branch from that point, nor start doing backports, etc.

Documentation

Overview

go-ipld-prime is a series of go interfaces for manipulating IPLD data.

See https://ipld.io/ for more information about the basics of "What is IPLD?".

Here in the godoc, the first couple of types to look at should be:

  • Node
  • NodeBuilder and NodeAssembler
  • NodePrototype.

These types provide a generic description of the data model.

A Node is a piece of IPLD data which can be inspected. A NodeAssembler is used to create Nodes. (A NodeBuilder is just like a NodeAssembler, but additionally allocates memory, whereas a NodeAssembler just fills up memory; using these carefully allows construction of very efficient code.)

Different NodePrototypes can be used to describe Nodes which follow certain logical rules (e.g., we use these as part of implementing Schemas), and can also be used so that programs can use different memory layouts for different data (which can be useful for constructing efficient programs when data has known shape for which we can use specific or compacted memory layouts).

If working with linked data (data which is split into multiple trees of Nodes, loaded separately, and connected by some kind of "link" reference), the next types you should look at are:

  • LinkSystem
  • ... and its fields.

The most typical use of LinkSystem is to use the linking/cid package to get a LinkSystem that works with CIDs:

lsys := cidlink.DefaultLinkSystem()

... and then assign the StorageWriteOpener and StorageReadOpener fields in order to control where data is stored to and read from. Methods on the LinkSystem then provide the functions typically used to get data in and out of Nodes so you can work with it.

This root package gathers some of the most important ease-of-use functions all in one place, but mostly aliases out to features originally found in other, more specific sub-packages. (If you're interested in keeping your binary sizes small, and don't use some of the features of this library, you'll probably want to look into using the relevant sub-packages directly.)

Particularly interesting subpackages include:

  • datamodel -- the most essential interfaces for describing data live here, describing Node, NodePrototype, NodeBuilder, Link, and Path.
  • node/* -- various Node + NodeBuilder implementations.
  • node/basicnode -- the first Node implementation you should try.
  • codec/* -- functions for serializing and deserializing Nodes.
  • linking -- the LinkSystem, which is a facade to all data loading and storing and hashing.
  • linking/* -- ways to bind concrete Link implementations (namely, the linking/cidlink package, which connects the go-cid library to our datamodel.Link interface).
  • traversal -- functions for walking Node graphs (including automatic link loading) and visiting them programmatically.
  • traversal/selector -- functions for working with IPLD Selectors, which are a language-agnostic declarative format for describing graph walks.
  • fluent/* -- various options for making datamodel Node and NodeBuilder easier to work with.
  • schema -- interfaces for working with IPLD Schemas, which can bring constraints and validation systems to otherwise schemaless and unstructured IPLD data.
  • adl/* -- examples of creating and using Advanced Data Layouts (in short, custom Node implementations) to do complex data structures transparently within the IPLD Data Model.
Example (CreateDataAndMarshal)

Example_createDataAndMarshal shows how you can feed data into a NodeBuilder, and also how to then hand that to an Encoder.

Often you'll encode implicitly through a LinkSystem.Store call instead, but you can do it directly, too.

package main

import (
	"os"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	np := basicnode.Prototype.Any // Pick a prototype: this is how we decide what implementation will store the in-memory data.
	nb := np.NewBuilder()         // Create a builder.
	ma, _ := nb.BeginMap(2)       // Begin assembling a map.
	ma.AssembleKey().AssignString("hey")
	ma.AssembleValue().AssignString("it works!")
	ma.AssembleKey().AssignString("yes")
	ma.AssembleValue().AssignBool(true)
	ma.Finish()     // Call 'Finish' on the map assembly to let it know no more data is coming.
	n := nb.Build() // Call 'Build' to get the resulting Node.  (It's immutable!)

	dagjson.Encode(n, os.Stdout)

}
Output:

{"hey":"it works!","yes":true}
Example (GoValueWithSchema)

Example_goValueWithSchema shows how to combine a Go value with an IPLD schema, which can then be used as an IPLD node.

For more examples and documentation, see the node/bindnode package.

type Person struct {
	Name    string
	Age     int
	Friends []string
}

ts, err := ipld.LoadSchemaBytes([]byte(`
		type Person struct {
			name    String
			age     Int
			friends [String]
		} representation tuple
	`))
if err != nil {
	panic(err)
}
schemaType := ts.TypeByName("Person")
person := &Person{Name: "Alice", Age: 34, Friends: []string{"Bob"}}
node := bindnode.Wrap(person, schemaType)

fmt.Printf("%#v\n", person)
dagjson.Encode(node.Representation(), os.Stdout)
Output:

&ipld_test.Person{Name:"Alice", Age:34, Friends:[]string{"Bob"}}
["Alice",34,["Bob"]]
Example (Marshal)
type Foobar struct {
	Foo string
	Bar string
}
encoded, err := ipld.Marshal(json.Encode, &Foobar{"wow", "whee"}, nil)
fmt.Printf("error: %v\n", err)
fmt.Printf("data: %s\n", string(encoded))
Output:

error: <nil>
data: {
	"Foo": "wow",
	"Bar": "whee"
}
Example (UnmarshalData)

Example_unmarshalData shows how you can use a Decoder and a NodeBuilder (or NodePrototype) together to do unmarshalling.

Often you'll do this implicitly through a LinkSystem.Load call instead, but you can do it directly, too.

package main

import (
	"fmt"
	"strings"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	serial := strings.NewReader(`{"hey":"it works!","yes": true}`)

	np := basicnode.Prototype.Any // Pick a style for the in-memory data.
	nb := np.NewBuilder()         // Create a builder.
	dagjson.Decode(nb, serial)    // Hand the builder to decoding -- decoding will fill it in!
	n := nb.Build()               // Call 'Build' to get the resulting Node.  (It's immutable!)

	fmt.Printf("the data decoded was a %s kind\n", n.Kind())
	fmt.Printf("the length of the node is %d\n", n.Length())

}
Output:

the data decoded was a map kind
the length of the node is 2
Example (Unmarshal_withSchema)
typesys := schema.MustTypeSystem(
	schema.SpawnStruct("Foobar",
		[]schema.StructField{
			schema.SpawnStructField("foo", "String", false, false),
			schema.SpawnStructField("bar", "String", false, false),
		},
		schema.SpawnStructRepresentationMap(nil),
	),
	schema.SpawnString("String"),
)

type Foobar struct {
	Foo string
	Bar string
}
serial := []byte(`{"foo":"wow","bar":"whee"}`)
foobar := Foobar{}
n, err := ipld.Unmarshal(serial, json.Decode, &foobar, typesys.TypeByName("Foobar"))
fmt.Printf("error: %v\n", err)
fmt.Printf("go struct: %v\n", foobar)
fmt.Printf("node kind and length: %s, %d\n", n.Kind(), n.Length())
fmt.Printf("node lookup 'foo': %q\n", must.String(must.Node(n.LookupByString("foo"))))
Output:

error: <nil>
go struct: {wow whee}
node kind and length: map, 2
node lookup 'foo': "wow"

Index

Examples

Constants

const (
	Kind_Invalid = datamodel.Kind_Invalid
	Kind_Map     = datamodel.Kind_Map
	Kind_List    = datamodel.Kind_List
	Kind_Null    = datamodel.Kind_Null
	Kind_Bool    = datamodel.Kind_Bool
	Kind_Int     = datamodel.Kind_Int
	Kind_Float   = datamodel.Kind_Float
	Kind_String  = datamodel.Kind_String
	Kind_Bytes   = datamodel.Kind_Bytes
	Kind_Link    = datamodel.Kind_Link
)

Variables

var (
	Null   = datamodel.Null
	Absent = datamodel.Absent
)
var (
	KindSet_Recursive  = datamodel.KindSet_Recursive
	KindSet_Scalar     = datamodel.KindSet_Scalar
	KindSet_JustMap    = datamodel.KindSet_JustMap
	KindSet_JustList   = datamodel.KindSet_JustList
	KindSet_JustNull   = datamodel.KindSet_JustNull
	KindSet_JustBool   = datamodel.KindSet_JustBool
	KindSet_JustInt    = datamodel.KindSet_JustInt
	KindSet_JustFloat  = datamodel.KindSet_JustFloat
	KindSet_JustString = datamodel.KindSet_JustString
	KindSet_JustBytes  = datamodel.KindSet_JustBytes
	KindSet_JustLink   = datamodel.KindSet_JustLink
)

Future: These aliases for the `KindSet_*` values may be dropped someday. I don't think they're very important to have cluttering up namespace here. They're included for a brief transitional period, largely for the sake of codegen things which have referred to them, but may disappear in the future.

Functions

func DeepEqual added in v0.10.0

func DeepEqual(x, y Node) bool

DeepEqual reports whether x and y are "deeply equal" as IPLD nodes. This is similar to reflect.DeepEqual, but based around the Node interface.

This is exactly equivalent to the datamodel.DeepEqual function.

func Encode added in v0.12.1

func Encode(n Node, encFn Encoder) ([]byte, error)

Encode serializes the given Node using the given Encoder function, returning the serialized data or an error.

The exact result data will depend on the node content and on the encoder function, but for example, using a json codec on a node with kind map will produce a result starting in `{`, etc.

Encode will automatically switch to encoding the representation form of the Node, if it discovers the Node matches the schema.TypedNode interface. This is probably what you want, in most cases; if this is not desired, you can use the underlying functions directly (just look at the source of this function for an example of how!).

If you would like this operation, but applied directly to a golang type instead of a Node, look to the Marshal function.

func EncodeStreaming added in v0.12.1

func EncodeStreaming(wr io.Writer, n Node, encFn Encoder) error

EncodeStreaming is like Encode, but emits output to an io.Writer.

func LoadSchema added in v0.14.0

func LoadSchema(name string, r io.Reader) (*schema.TypeSystem, error)

LoadSchema parses an IPLD Schema in its DSL form and compiles its types into a standalone TypeSystem.

Example
ts, err := ipld.LoadSchema("sample.ipldsch", strings.NewReader(`
		type Root struct {
			foo Int
			bar nullable String
		}
		`))
if err != nil {
	panic(err)
}
typeRoot := ts.TypeByName("Root").(*schema.TypeStruct)
for _, field := range typeRoot.Fields() {
	fmt.Printf("field name=%q nullable=%t type=%v\n",
		field.Name(), field.IsNullable(), field.Type().Name())
}
Output:

field name="foo" nullable=false type=Int
field name="bar" nullable=true type=String

func LoadSchemaBytes added in v0.14.0

func LoadSchemaBytes(src []byte) (*schema.TypeSystem, error)

LoadSchemaBytes is a shortcut for LoadSchema for the common case where the schema is available as a buffer or a string, such as via go:embed.

func LoadSchemaFile added in v0.14.0

func LoadSchemaFile(path string) (*schema.TypeSystem, error)

LoadSchemaFile is a shortcut for LoadSchema for the common case where the schema is a file on disk.

func Marshal added in v0.12.1

func Marshal(encFn Encoder, bind interface{}, typ schema.Type) ([]byte, error)

Marshal accepts a pointer to a Go value and an IPLD schema type, and encodes the representation form of that data (which may be configured with the schema!) using the given Encoder function.

Marshal uses the node/bindnode subsystem. See the documentation in that package for more details about its workings. Please note that this subsystem is relatively experimental at this time.

The schema.Type parameter is optional, and can be nil. If given, it controls what kind of schema.Type (and what kind of representation strategy!) to use when processing the data. If absent, a default schema.Type will be inferred based on the golang type (so, a struct in go will be inferred to have a schema with a similar struct, and the default representation strategy (e.g. map), etc). Note that not all features of IPLD Schemas can be inferred from golang types alone. For example, to use union types, the schema parameter will be required. Similarly, to use most kinds of non-default representation strategy, the schema parameter is needed in order to convey that intention.

func MarshalStreaming added in v0.12.1

func MarshalStreaming(wr io.Writer, encFn Encoder, bind interface{}, typ schema.Type) error

MarshalStreaming is like Marshal, but emits output to an io.Writer.

Types

type ADL added in v0.9.0

type ADL = adl.ADL

type BlockReadOpener added in v0.9.0

type BlockReadOpener = linking.BlockReadOpener

type BlockWriteCommitter added in v0.9.0

type BlockWriteCommitter = linking.BlockWriteCommitter

type BlockWriteOpener added in v0.9.0

type BlockWriteOpener = linking.BlockWriteOpener

type Decoder added in v0.9.0

type Decoder = codec.Decoder

type Encoder added in v0.9.0

type Encoder = codec.Encoder

type ErrHashMismatch added in v0.9.0

type ErrHashMismatch = linking.ErrHashMismatch

Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type ErrInvalidKey added in v0.0.2

type ErrInvalidKey = schema.ErrInvalidKey

Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type ErrInvalidSegmentForList added in v0.4.0

type ErrInvalidSegmentForList = datamodel.ErrInvalidSegmentForList

Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type ErrIteratorOverread

type ErrIteratorOverread = datamodel.ErrIteratorOverread

Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type ErrMissingRequiredField added in v0.0.3

type ErrMissingRequiredField = schema.ErrMissingRequiredField

Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type ErrNotExists

type ErrNotExists = datamodel.ErrNotExists

Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type ErrRepeatedMapKey added in v0.0.3

type ErrRepeatedMapKey = datamodel.ErrRepeatedMapKey

Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type ErrWrongKind

type ErrWrongKind = datamodel.ErrWrongKind

Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type Kind added in v0.7.0

type Kind = datamodel.Kind
type Link = datamodel.Link

type LinkContext

type LinkContext = linking.LinkContext

type LinkPrototype added in v0.9.0

type LinkPrototype = datamodel.LinkPrototype

type LinkSystem added in v0.9.0

type LinkSystem = linking.LinkSystem

type ListAssembler added in v0.0.3

type ListAssembler = datamodel.ListAssembler

type ListIterator

type ListIterator = datamodel.ListIterator

type MapAssembler added in v0.0.3

type MapAssembler = datamodel.MapAssembler

type MapIterator

type MapIterator = datamodel.MapIterator

type Node

type Node = datamodel.Node

func Decode added in v0.12.1

func Decode(b []byte, decFn Decoder) (Node, error)

Decode parses the given bytes into a Node using the given Decoder function, returning a new Node or an error.

The new Node that is returned will be the implementation from the node/basicnode package. This implementation of Node will work for storing any kind of data, but note that because it is general, it is also not necessarily optimized. If you want more control over what kind of Node implementation (and thus memory layout) is used, or want to use features like IPLD Schemas (which can be engaged by using a schema.TypedPrototype), then look to the DecodeUsingPrototype family of functions, which accept more parameters in order to give you that kind of control.

If you would like this operation, but applied directly to a golang type instead of a Node, look to the Unmarshal function.

func DecodeStreaming added in v0.12.1

func DecodeStreaming(r io.Reader, decFn Decoder) (Node, error)

DecodeStreaming is like Decode, but works on an io.Reader for input.

func DecodeStreamingUsingPrototype added in v0.12.1

func DecodeStreamingUsingPrototype(r io.Reader, decFn Decoder, np NodePrototype) (Node, error)

DecodeStreamingUsingPrototype is like DecodeUsingPrototype, but works on an io.Reader for input.

func DecodeUsingPrototype added in v0.12.1

func DecodeUsingPrototype(b []byte, decFn Decoder, np NodePrototype) (Node, error)

DecodeUsingPrototype is like Decode, but with a NodePrototype parameter, which gives you control over the Node type you'll receive, and thus control over the memory layout, and ability to use advanced features like schemas. (Decode is simply this function, but hardcoded to use basicnode.Prototype.Any.)

DecodeUsingPrototype internally creates a NodeBuilder, and throws it away when done. If building a high performance system, and creating data of the same shape repeatedly, you may wish to use NodeBuilder directly, so that you can control and avoid these allocations.

For symmetry with the behavior of Encode, DecodeUsingPrototype will automatically switch to using the representation form of the node for decoding if it discovers the NodePrototype matches the schema.TypedPrototype interface. This is probably what you want, in most cases; if this is not desired, you can use the underlying functions directly (just look at the source of this function for an example of how!).

func Unmarshal added in v0.12.1

func Unmarshal(b []byte, decFn Decoder, bind interface{}, typ schema.Type) (Node, error)

Unmarshal accepts a pointer to a Go value and an IPLD schema type, and fills the value with data by decoding into it with the given Decoder function.

Unmarshal uses the node/bindnode subsystem. See the documentation in that package for more details about its workings. Please note that this subsystem is relatively experimental at this time.

The schema.Type parameter is optional, and can be nil. If given, it controls what kind of schema.Type (and what kind of representation strategy!) to use when processing the data. If absent, a default schema.Type will be inferred based on the golang type (so, a struct in go will be inferred to have a schema with a similar struct, and the default representation strategy (e.g. map), etc). Note that not all features of IPLD Schemas can be inferred from golang types alone. For example, to use union types, the schema parameter will be required. Similarly, to use most kinds of non-default representation strategy, the schema parameter is needed in order to convey that intention.

In contrast to some other unmarshal conventions common in golang, notice that we also return a Node value. This Node points to the same data as the value you handed in as the bind parameter, while making it available to read, iterate, and handle as an ipld datamodel.Node. If you don't need that interface, or intend to re-bind it later, you can discard that value.

The 'bind' parameter may be nil. In that case, the type of the nil is still used to infer what kind of value to return, and a Node will still be returned based on that type. bindnode.Unwrap can be used on that Node and will still return something of the same golang type as the typed nil that was given as the 'bind' parameter.

func UnmarshalStreaming added in v0.12.1

func UnmarshalStreaming(r io.Reader, decFn Decoder, bind interface{}, typ schema.Type) (Node, error)

UnmarshalStreaming is like Unmarshal, but works on an io.Reader for input.

type NodeAssembler added in v0.0.3

type NodeAssembler = datamodel.NodeAssembler

type NodeBuilder

type NodeBuilder = datamodel.NodeBuilder

type NodePrototype added in v0.5.0

type NodePrototype = datamodel.NodePrototype

type NodeReifier added in v0.10.0

type NodeReifier = linking.NodeReifier

type Path

type Path = datamodel.Path

func NewPath added in v0.0.2

func NewPath(segments []PathSegment) Path

NewPath is an alias for datamodel.NewPath.

Pathing is a concept defined in the data model layer of IPLD.

func ParsePath

func ParsePath(pth string) Path

ParsePath is an alias for datamodel.ParsePath.

Pathing is a concept defined in the data model layer of IPLD.

type PathSegment added in v0.0.2

type PathSegment = datamodel.PathSegment

func ParsePathSegment added in v0.0.2

func ParsePathSegment(s string) PathSegment

ParsePathSegment is an alias for datamodel.ParsePathSegment.

Pathing is a concept defined in the data model layer of IPLD.

func PathSegmentOfInt added in v0.0.2

func PathSegmentOfInt(i int64) PathSegment

PathSegmentOfInt is an alias for datamodel.PathSegmentOfInt.

Pathing is a concept defined in the data model layer of IPLD.

func PathSegmentOfString added in v0.0.2

func PathSegmentOfString(s string) PathSegment

PathSegmentOfString is an alias for datamodel.PathSegmentOfString.

Pathing is a concept defined in the data model layer of IPLD.

Directories

- adl/rot13adl: rot13adl is a demo ADL -- its purpose is to show what an ADL and its public interface can look like.
- codec/dagcbor: The dagcbor package provides a DAG-CBOR codec implementation.
- codec/raw: Package raw implements IPLD's raw codec, which simply writes and reads a Node which can be represented as bytes.
- datamodel: The datamodel package defines the most essential interfaces for describing IPLD Data -- such as Node, NodePrototype, NodeBuilder, Link, and Path.
- fluent: The fluent package offers helper utilities for using NodeAssembler more tersely by providing an interface that handles all errors for you, and allows use of closures for any recursive assembly so that creating trees of data results in indentation for legibility.
- fluent/qp: qp helps to quickly build IPLD nodes.
- linking/cid
- must: Package 'must' provides another alternative to the 'fluent' package, providing many helpful functions for wrapping methods with multiple returns into a single return (converting errors into panics).
- node: The 'node' package gathers various general purpose Node implementations; the first one you should jump to is 'node/basicnode'.
- node/basic: This is a transitional package: please move your references to `node/basicnode`.
- node/bindnode: Package bindnode provides a datamodel.Node implementation via Go reflection.
- node/tests/corpus: The corpus package exports some values useful for building tests and benchmarks.
- printer: Printer provides features for printing out IPLD nodes and their contained data in a human-readable diagnostic format.
- schema/dmt: Package schema/dmt contains types and functions for dealing with the data model form of IPLD Schemas.
- schema/dsl
- storage: The storage package contains interfaces for storage systems, and functions for using them.
- storage/sharding: This package contains several useful readymade sharding functions, which should plug nicely into most storage implementations.
- storage/bsadapter (Module)
- storage/bsrvadapter (Module)
- storage/dsadapter (Module)
- traversal: Package traversal provides functional utilities for traversing and transforming IPLD graphs.
- traversal/patch: Package patch provides an implementation of the IPLD Patch specification.
- traversal/selector/parse: selectorparse package contains some helpful functions for parsing the serial form of Selectors.
