dataset

package module v0.0.5

Published: Oct 30, 2017 License: BSD-3-Clause Imports: 40 Imported by: 1

README
dataset

dataset is a small collection of command line tools for working with JSON documents stored as collections. These include basic storage actions (e.g. CRUD operations, filtering and extraction) as well as indexing, searching and even web hosting. A project goal of dataset is to "play nice" with shell scripts and other Unix tools (e.g. it respects standard in, out and error with minimal side effects). This means it is easily scriptable via the Bash shell or interpreted languages like Python.

dataset is also a Go package for managing JSON documents and their attachments on disc or in cloud storage (e.g. Amazon S3, Google Cloud Storage). The command line utilities exercise this package extensively.

The inspiration for creating dataset was the desire to process metadata as JSON document collections using Unix shell utilities and pipelines. While it has grown in capabilities, that remains a core use case.

dataset organizes JSON documents by unique names in collections. Collections are represented as an index into a series of buckets. The buckets are subdirectories (or paths under cloud storage services) holding individual JSON documents and their attachments. A JSON document is assigned to a bucket (and the bucket generated if necessary) automatically when the document is added to the collection. The assignment to buckets is round robin, determined by the order of addition. This avoids having too many documents assigned to a single path (e.g. some Unix systems limit how many documents can be held in a single directory). It also means you can list and manipulate the JSON documents directly with common Unix commands like ls, find, grep or their cloud counterparts.
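As a rough sketch (illustrative only, not dataset's actual code), round-robin bucket assignment amounts to indexing by the current document count modulo the number of buckets:

```go
package main

import "fmt"

// pickBucket sketches round-robin assignment: the n-th document added
// goes into the bucket at index n modulo the bucket count, so documents
// spread evenly across buckets in order of addition.
func pickBucket(buckets []string, docCount int) string {
	return buckets[docCount%len(buckets)]
}

func main() {
	buckets := []string{"aa", "ab", "ba", "bb"}
	// Adding five documents cycles through all buckets and wraps around.
	for i := 0; i < 5; i++ {
		fmt.Printf("doc %d -> %s\n", i, pickBucket(buckets, i))
	}
}
```

With four buckets, the fifth document wraps back to the first bucket, which is what keeps any single directory from growing without bound.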

Limitations of dataset

dataset has many limitations; some are listed below:

  • it is not a real-time data store
  • it is not a repository management system
  • it is not a general purpose multiuser database system

Operations

The basic operations supported by dataset are listed below, organized by collection and JSON document level.

Collection Level
  • Create a collection
  • List the JSON document ids in a collection
  • Create named lists of JSON document ids (aka select lists)
  • Read back a named list of JSON document ids
  • Delete a named list of JSON document ids
  • Import JSON documents from rows of a CSV file or Google Sheets
  • Filter JSON documents and return a list of matching ids
  • Extract Unique JSON attribute values from a collection
JSON Document level
  • Create a JSON document in a collection
  • Update a JSON document in a collection
  • Read back a JSON document in a collection
  • Delete a JSON document in a collection
  • Join a JSON document with other documents in a collection

Additionally

  • Attach a file to a JSON document in a collection
  • List the files attached to a JSON document in a collection
  • Update a file attached to a JSON document in a collection
  • Delete one or more attached files of a JSON document in a collection

Examples

Common operations using the dataset command line tool

  • create collection
  • add a JSON document to a collection
  • read a JSON document
  • update a JSON document
  • delete a JSON document
    # Create a collection "mystuff" inside the directory called demo
    dataset init demo/mystuff
    # if successful an expression to export the collection name is shown
    export DATASET=demo/mystuff

    # Create a JSON document 
    dataset create freda.json '{"name":"freda","email":"freda@inverness.example.org"}'
    # If successful then you should see an OK or an error message

    # Read a JSON document
    dataset read freda.json

    # Path to JSON document
    dataset path freda.json

    # Update a JSON document
    dataset update freda.json '{"name":"freda","email":"freda@zbs.example.org"}'
    # If successful then you should see an OK or an error message

    # List the keys in the collection
    dataset keys

    # Filter for the name "freda"
    dataset filter '(eq .name "freda")'

    # Join freda-profile.json with "freda" adding unique key/value pairs
    dataset join update freda freda-profile.json

    # Join freda-profile.json overwriting common key/values and adding unique key/value pairs
    # from freda-profile.json
    dataset join overwrite freda freda-profile.json

    # Delete a JSON document
    dataset delete freda.json

    # To remove the collection just use the Unix shell command
    # /bin/rm -fR demo/mystuff

Releases

Compiled versions are provided for Linux (amd64), Mac OS X (amd64), Windows 10 (amd64) and Raspbian (ARM7). See https://github.com/caltechlibrary/dataset/releases.

Documentation

Overview

Package dataset is a go package for managing JSON documents stored on disc

Author R. S. Doiel, <rsdoiel@library.caltech.edu>

Copyright (c) 2017, Caltech All rights not granted herein are expressly reserved by Caltech.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


Package dataset provides a common approach to storing JSON documents and related attachments systematically and predictably on the file system. The driving use case behind dataset is creating a unified approach to harvesting metadata from the various heterogeneous systems used in Caltech Library (e.g. EPrints, ArchivesSpace, Islandora and outside APIs like ORCID, CrossRef, OCLC). This suggests that dataset as a Go package and command line tool may have more general applications where a database system might be more than you need and ad-hoc collections on disc may be inconvenient to evolve as your data exploration takes you in different directions of analysis. The dataset command line tool is intended to be easy to script both in Bash as well as more featureful languages like Python.

Dataset is not a good choice if you need a fast key/value store or actual database features. It doesn't support multiple users, record locking or a query language interface. It targets the sweet spot between ad-hoc JSON document storage on disc and needing a more complete system like Couch, Solr, or Fedora4 for managing JSON docs.

Use case

Caltech Library has many repository, catalog and record management systems (e.g. EPrints, Invenio, ArchivesSpace, Islandora). It is common practice to harvest data from these systems for analysis or processing. Harvested records typically come in XML or JSON format. JSON has proven a flexible way to work with the data and, in our more modern tools, is the common format we use to move data around. We needed a way to standardize how we stored these JSON records for intermediate processing, to let us use the growing ecosystem of JSON-related tooling available under POSIX/Unix compatible systems.

Approach to file system layout

+ /dataset (directory on file system)

  • collection (directory on file system)
    • collection.json - metadata about the collection
      • maps the filename of each JSON document stored to a bucket in the collection
      • e.g. the file "mydocs.json" stored in bucket "aa" would have a keymap entry of {"mydocs.json": "aa"}
    • keys.json - a list of keys in the collection (it is the default select list)
    • BUCKETS - a sequence of alphabet names for buckets holding JSON documents and their attachments
      • buckets keep common commands like ls, tree, etc. usable when the document count is high
    • SELECT_LIST.json - a JSON document holding an array of keys
      • the default select list is "keys"; it is not mutable by Push, Pop, Shift and Unshift
      • select lists cannot be named "keys" or "collection"

BUCKETS are names without intrinsic meaning, normally using alphabetic characters. A dataset defined with four buckets might look like aa, ab, ba, bb. These directories will contain the JSON documents and a tar file if a document has attachments.
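The naming scheme above (e.g. ("ab", 2) giving aa, ab, ba, bb) can be sketched in plain Go. This bucketNames function is an illustration of what GenerateBucketNames presumably computes, all fixed-length strings over a given alphabet, not the package's actual implementation:

```go
package main

import "fmt"

// bucketNames returns every string of the requested length drawn from
// the given alphabet, built by repeatedly extending each prefix with
// every alphabet character.
func bucketNames(alphabet string, length int) []string {
	names := []string{""}
	for i := 0; i < length; i++ {
		var next []string
		for _, prefix := range names {
			for _, c := range alphabet {
				next = append(next, prefix+string(c))
			}
		}
		names = next
	}
	return names
}

func main() {
	fmt.Println(bucketNames("ab", 2)) // [aa ab ba bb]
}
```

A longer alphabet or length grows the count as len(alphabet)^length, which is how a collection keeps per-directory document counts low.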

Operations

+ Collection level

  • Create (collection) - creates or opens collection structure on disc, creates collection.json and keys.json if new
  • Open (collection) - opens an existing collections and reads collection.json into memory
  • Close (collection) - writes changes to collection.json to disc if dirty
  • Delete (collection) - removes a collection from disc
  • Keys (collection) - list of keys in the collection
  • Select (collection) - returns the requested select list, creating the list if it does not exist and appending keys if provided
  • Clear (collection) - Removes a select list from a collection and disc
  • Lists (collection) - returns the names of the available select lists

+ JSON document level

  • Create (JSON document) - saves a new JSON blob or overwrites an existing one on disc with the given blob name, updates keys.json if needed
  • Read (JSON document) - finds the JSON document in the buckets and returns the JSON document contents
  • Update (JSON document) - updates an existing blob on disc (record must already exist)
  • Delete (JSON document) - removes a JSON blob from disc
  • Path (JSON document) - returns the path to the JSON document

+ Select list level

  • First (select list) - returns the value of the first key in the select list (non-destructively)
  • Last (select list) - returns the value of the last key in the select list (non-destructively)
  • Rest (select list) - returns values of all keys in the select list except the first (non-destructively)
  • List (select list) - returns values of all keys in the select list (non-destructively)
  • Length (select list) - returns the number of keys in a select list
  • Push (select list) - appends one or more keys to an existing select list
  • Pop (select list) - returns the last key in select list and removes it
  • Unshift (select list) - inserts one or more new keys at the beginning of the select list
  • Shift (select list) - returns the first key in a select list and removes it
  • Sort (select list) - orders the select list's keys in ascending or descending alphabetical order
  • Reverse (select list) - flips the order of the keys in the select list
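The select list operations above map naturally onto a slice of keys. This hypothetical SelectList type only illustrates the Push/Pop/Shift/Unshift semantics; it is not dataset's implementation:

```go
package main

import "fmt"

// SelectList sketches a named list of keys with stack/queue style
// operations, mirroring the operation names described above.
type SelectList struct {
	Keys []string
}

// Push appends one or more keys to the end of the list.
func (s *SelectList) Push(keys ...string) { s.Keys = append(s.Keys, keys...) }

// Pop removes and returns the last key ("" if the list is empty).
func (s *SelectList) Pop() string {
	if len(s.Keys) == 0 {
		return ""
	}
	last := s.Keys[len(s.Keys)-1]
	s.Keys = s.Keys[:len(s.Keys)-1]
	return last
}

// Unshift inserts one or more keys at the beginning of the list.
func (s *SelectList) Unshift(keys ...string) { s.Keys = append(keys, s.Keys...) }

// Shift removes and returns the first key ("" if the list is empty).
func (s *SelectList) Shift() string {
	if len(s.Keys) == 0 {
		return ""
	}
	first := s.Keys[0]
	s.Keys = s.Keys[1:]
	return first
}

func main() {
	l := &SelectList{}
	l.Push("a", "b")
	l.Unshift("z")
	fmt.Println(l.Shift(), l.Pop(), l.Keys) // z b [a]
}
```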

Example

Common operations using the *dataset* command line tool

  • create collection
  • add a JSON document to a collection
  • read a JSON document
  • update a JSON document
  • delete a JSON document

Example Bash script usage

# Create a collection "mystuff" inside the directory called demo
dataset init demo/mystuff
# if successful an expression to export the collection name is shown
export DATASET=demo/mystuff

# Create a JSON document
dataset create freda.json '{"name":"freda","email":"freda@inverness.example.org"}'
# If successful then you should see an OK or an error message

# Read a JSON document
dataset read freda.json

# Path to JSON document
dataset path freda.json

# Update a JSON document
dataset update freda.json '{"name":"freda","email":"freda@zbs.example.org"}'
# If successful then you should see an OK or an error message

# List the keys in the collection
dataset keys

# Delete a JSON document
dataset delete freda.json

# To remove the collection just use the Unix shell command
# /bin/rm -fR demo/mystuff

Common operations shown in Golang

  • create collection
  • add a JSON document to a collection
  • read a JSON document
  • update a JSON document
  • delete a JSON document

Example Go code

// Create a collection "mystuff" inside the directory called demo
collection, err := dataset.Create("demo/mystuff", dataset.GenerateBucketNames("ab", 2))
if err != nil {
    log.Fatalf("%s", err)
}
defer collection.Close()
// Create a JSON document
docName := "freda.json"
document := map[string]string{"name":"freda","email":"freda@inverness.example.org"}
if err := collection.Create(docName, document); err != nil {
    log.Fatalf("%s", err)
}
// Attach an image file to freda.json in the collection
buf, err := ioutil.ReadFile("images/freda.png")
if err != nil {
    log.Fatalf("%s", err)
}
if err := collection.Attach("freda", &dataset.Attachment{Name: "images/freda.png", Body: buf}); err != nil {
    log.Fatalf("%s", err)
}
// Read a JSON document
if err := collection.Read(docName, document); err != nil {
    log.Fatalf("%s", err)
}
// Update a JSON document
document["email"] = "freda@zbs.example.org"
if err := collection.Update(docName, document); err != nil {
    log.Fatalf("%s", err)
}
// Delete a JSON document
if err := collection.Delete(docName); err != nil {
    log.Fatalf("%s", err)
}

Working with attachments in Go

    collection, err := dataset.Open("dataset/mystuff")
    if err != nil {
        log.Fatalf("%s", err)
    }
    defer collection.Close()

    // Add a helloworld.txt file to the freda.json record as an attachment.
    if err := collection.Attach("freda", &dataset.Attachment{Name: "docs/helloworld.txt", Body: []byte("Hello World!!!!")}); err != nil {
        log.Fatalf("%s", err)
    }

    // Attach additional files from the filesystem by their relative file paths
    if err := collection.AttachFiles("freda", "docs/presentation-article.pdf", "docs/charts-and-figures.zip", "docs/transcript.fdx"); err != nil {
        log.Fatalf("%s", err)
    }

    // List the attached files for freda.json
    if filenames, err := collection.Attachments("freda"); err != nil {
        log.Fatalf("%s", err)
    } else {
        fmt.Printf("%s\n", strings.Join(filenames, "\n"))
    }

    // Get an array of attachments (reads content into memory as an array of Attachment structs)
    allAttachments, err := collection.GetAttached("freda")
    if err != nil {
        log.Fatalf("%s", err)
    }
    fmt.Printf("all attachments: %+v\n", allAttachments)

    // Get two attachments, docs/transcript.fdx and docs/helloworld.txt
    twoAttachments, _ := collection.GetAttached("freda", "docs/transcript.fdx", "docs/helloworld.txt")
    fmt.Printf("two attachments: %+v\n", twoAttachments)

    // Get attached files, writing them out to disc relative to your working directory
    if err := collection.GetAttachedFiles("freda"); err != nil {
        log.Fatalf("%s", err)
    }

    // Get two selected attached files, writing them out to disc relative to your working directory
    if err := collection.GetAttachedFiles("freda", "docs/transcript.fdx", "docs/helloworld.txt"); err != nil {
        log.Fatalf("%s", err)
    }

    // Remove docs/transcript.fdx and docs/helloworld.txt from freda.json attachments
    if err := collection.Detach("freda", "docs/transcript.fdx", "docs/helloworld.txt"); err != nil {
        log.Fatalf("%s", err)
    }

    // Remove all attached files from freda.json
    if err := collection.Detach("freda"); err != nil {
        log.Fatalf("%s", err)
    }


Index

Constants

const (
	// Version of the dataset package
	Version = "v0.0.5-dev"

	// License is the formatted license text for dataset package based command line tools
	License = `` /* 1530-byte string literal not displayed */

	DefaultAlphabet = `abcdefghijklmnopqrstuvwxyz`

	ASC  = iota
	DESC = iota
)

Variables

This section is empty.

Functions

func Analyzer added in v0.0.3

func Analyzer(collectionName string) error

Analyzer checks a collection for problems

  • checks if collection.json exists and is valid
  • checks if keys.json exists and is valid
  • checks the version of the collection against the version of the dataset tool running
  • compares keys.json with the k/v pairs in collection.keymap
  • checks if all collection.buckets exist
  • checks for unaccounted for buckets
  • checks if all keys in collection.keymap exist
  • checks for unaccounted for keys in buckets
  • checks for keys in multiple buckets and reports duplicate record modified times

func CSVFormatter added in v0.0.3

func CSVFormatter(out io.Writer, results *bleve.SearchResult, colNames []string, skipHeaderRow bool) error

CSVFormatter writes out CSV representation using encoding/csv

func Delete

func Delete(name string) error

Delete an entire collection

func Find

func Find(out io.Writer, idxAlias bleve.IndexAlias, queryStrings []string, options map[string]string) (*bleve.SearchResult, error)

Find takes a Bleve index name and query string, opens the index, and writes the results to the os.File provided. The function returns an error if there are problems.

func Formatter added in v0.0.3

func Formatter(out io.Writer, results *bleve.SearchResult, tmpl *template.Template, tName string, pageData map[string]string) error

Formatter writes out a format based on the specified template name merging any additional pageData provided

func GenerateBucketNames

func GenerateBucketNames(alphabet string, length int) []string

GenerateBucketNames provides a list of permutations of requested length to use as bucket names

func JSONFormatter added in v0.0.3

func JSONFormatter(out io.Writer, results *bleve.SearchResult) error

JSONFormatter writes out JSON representation using encoding/json

func OpenIndexes added in v0.0.3

func OpenIndexes(indexNames []string) (bleve.IndexAlias, []string, error)

OpenIndexes opens a list of index names and returns an index alias, a combined list of fields and error

func Repair added in v0.0.3

func Repair(collectionName string) error

Repair will take a collection name and attempt to recreate valid collection.json and keys.json files from content in discovered buckets and json documents

Types

type Attachment

type Attachment struct {
	// Name is the filename and path to be used inside the generated tar file
	Name string
	// Body is a byte array for storing the content associated with Name
	Body []byte
}

Attachment is a structure for holding non-JSON content you wish to store alongside a JSON document in a collection
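Since the documentation states that attachments are stored in a tar file, the packing step can be sketched with the standard library's archive/tar. This is an illustration under that assumption, not dataset's actual code:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// Attachment mirrors the struct above: a name plus raw content bytes.
type Attachment struct {
	Name string
	Body []byte
}

// writeAttachments packs the given attachments into an in-memory tar
// stream, one tar entry per attachment, and returns the tar bytes.
func writeAttachments(attachments []*Attachment) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	for _, a := range attachments {
		hdr := &tar.Header{Name: a.Name, Mode: 0644, Size: int64(len(a.Body))}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(a.Body); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := writeAttachments([]*Attachment{
		{Name: "helloworld.txt", Body: []byte("Hello World!!!!")},
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("tar archive of %d bytes\n", len(data))
}
```

Keeping all of a document's attachments in one tar file means each bucket holds at most two filesystem entries per document (the JSON blob and its tarball).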

type Collection

type Collection struct {
	// Version of collection being stored
	Version string `json:"verison"`
	// Name of collection
	Name string `json:"name"`
	// Buckets is a list of bucket names used by collection
	Buckets []string `json:"buckets"`
	// KeyMap holds the document name to bucket map for the collection
	KeyMap map[string]string `json:"keymap"`
	// Store holds the storage system information (e.g. local disc, S3, GS)
	// and related methods for interacting with it
	Store *storage.Store `json:"-"`
	// FullPath is the fully qualified path on disc or URI to S3 or GS bucket
	FullPath string `json:"-"`
}

Collection is the container holding buckets which in turn hold JSON docs

func Create

func Create(name string, bucketNames []string) (*Collection, error)

Create - creates a new collection structure on disc; the name should be filesystem friendly

func Open

func Open(name string) (*Collection, error)

Open reads in a collection's metadata and returns a new collection structure and err

func (*Collection) Attach

func (c *Collection) Attach(name string, attachments ...*Attachment) error

Attach adds a non-JSON document to a JSON document in the collection. Attachments are stored in a tar file; if the tar file exists then attachment(s) are appended to it.

func (*Collection) AttachFiles

func (c *Collection) AttachFiles(name string, fileNames ...string) error

AttachFiles attaches non-JSON documents to a JSON document in the collection. Attachments are stored in a tar file; if the tar file exists then attachment(s) are appended to it.

func (*Collection) Attachments

func (c *Collection) Attachments(name string) ([]string, error)

Attachments returns a list of files in the attached tarball for a given name in the collection

func (*Collection) Close

func (c *Collection) Close() error

Close closes a collection, writing the updated keys to disc

func (*Collection) Create

func (c *Collection) Create(name string, data interface{}) error

Create makes a JSON doc from an interface{} and adds it to a collection; if there is a problem it returns an error. The name must be unique.

func (*Collection) CreateAsJSON

func (c *Collection) CreateAsJSON(name string, src []byte) error

CreateAsJSON adds or replaces a JSON doc in a collection; if there is a problem it returns an error. The name must be unique (treated like a key in a key/value store).

func (*Collection) Delete

func (c *Collection) Delete(name string) error

Delete removes a JSON doc from a collection

func (*Collection) Detach

func (c *Collection) Detach(name string, filterNames ...string) error

Detach a non-JSON document from a JSON document in the collection.

func (*Collection) DocPath

func (c *Collection) DocPath(name string) (string, error)

DocPath returns a full path to a key or an error if not found

func (*Collection) ExportCSV added in v0.0.3

func (c *Collection) ExportCSV(fp io.Writer, filterExpr string, dotPaths []string, colNames []string, verboseLog bool) (int, error)

ExportCSV takes a writer and iterates over the collection's records, exporting them as rows of a CSV file

func (*Collection) Extract added in v0.0.3

func (c *Collection) Extract(filterExpr string, dotPath string) ([]string, error)

Extract takes a collection, a filter and a dot path and returns a list of unique values. E.g. in a collection of article records, extracting the ORCID ids which are values in an authors field.
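A simplified sketch of the extraction idea, limited to a single top-level field rather than full dot paths (extractUnique is a hypothetical helper, not part of dataset):

```go
package main

import (
	"fmt"
	"sort"
)

// extractUnique collects each record's string value at the named field
// and returns the sorted set of distinct values found.
func extractUnique(records []map[string]interface{}, field string) []string {
	seen := map[string]bool{}
	for _, rec := range records {
		if v, ok := rec[field].(string); ok {
			seen[v] = true
		}
	}
	var values []string
	for v := range seen {
		values = append(values, v)
	}
	sort.Strings(values)
	return values
}

func main() {
	records := []map[string]interface{}{
		{"orcid": "0000-0002-1825-0097"},
		{"orcid": "0000-0002-1825-0097"},
		{"orcid": "0000-0003-1527-0030"},
	}
	// Duplicate values collapse to one entry each.
	fmt.Println(extractUnique(records, "orcid"))
}
```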

func (*Collection) GetAttached

func (c *Collection) GetAttached(name string, filterNames ...string) ([]Attachment, error)

GetAttached returns an Attachment array or an error. If no filterNames are provided then all attachments are returned.

func (*Collection) GetAttachedFiles

func (c *Collection) GetAttachedFiles(name string, filterNames ...string) error

GetAttachedFiles returns an error if encountered; its side effect is to write the files to the destination directory. If no filterNames are provided then all attachments are written out.

func (*Collection) HasKey added in v0.0.3

func (c *Collection) HasKey(key string) bool

HasKey returns true if key is in collection's KeyMap, false otherwise

func (*Collection) ImportCSV added in v0.0.3

func (c *Collection) ImportCSV(buf io.Reader, skipHeaderRow bool, idCol int, useUUID bool, verboseLog bool) (int, error)

ImportCSV takes a reader and iterates over the rows, importing them as JSON records into dataset.
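The kind of transformation an import like this performs can be sketched with encoding/csv (csvToRecords is a hypothetical helper, not dataset's API): the header row supplies attribute names, each following row becomes a record, and one column serves as the record key.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// csvToRecords parses CSV text, using the header row for attribute
// names and column idCol of each data row as the record key.
func csvToRecords(src string, idCol int) (map[string]map[string]string, error) {
	r := csv.NewReader(strings.NewReader(src))
	rows, err := r.ReadAll()
	if err != nil {
		return nil, err
	}
	header, records := rows[0], map[string]map[string]string{}
	for _, row := range rows[1:] {
		rec := map[string]string{}
		for i, v := range row {
			rec[header[i]] = v
		}
		records[row[idCol]] = rec
	}
	return records, nil
}

func main() {
	records, err := csvToRecords("name,email\nfreda,freda@inverness.example.org\n", 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(records["freda"]["email"]) // freda@inverness.example.org
}
```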

func (*Collection) ImportTable added in v0.0.4

func (c *Collection) ImportTable(table [][]string, skipHeaderRow bool, idCol int, useUUID bool, verboseLog bool) (int, error)

ImportTable takes a [][]string and iterates over the rows, importing them as JSON records into dataset.

func (*Collection) Indexer

func (c *Collection) Indexer(idxName string, idxMapName string, batchSize int, keys []string) error

Indexer ingests all the records of a collection applying the definition creating or updating a Bleve index. Returns an error.

func (*Collection) Keys

func (c *Collection) Keys() []string

Keys returns a list of keys in a collection

func (*Collection) Read

func (c *Collection) Read(name string, data interface{}) error

Read finds the record in a collection and updates the data interface provided; the name must exist or an error is returned

func (*Collection) ReadAsJSON

func (c *Collection) ReadAsJSON(name string) ([]byte, error)

ReadAsJSON finds the record in the collection and returns the JSON source

func (*Collection) Update

func (c *Collection) Update(name string, data interface{}) error

Update updates a JSON doc in a collection from the provided data interface (note: the JSON doc must exist or an error is returned)

func (*Collection) UpdateAsJSON

func (c *Collection) UpdateAsJSON(name string, src []byte) error

UpdateAsJSON takes a JSON doc and writes it to a collection (note: the record must exist or an error is returned)

Directories

Path Synopsis
analyzers
cmds
dataset
dataset is a command line utility to manage content stored in a dataset collection.
dsfind
dsfind is a command line utility that will search one or more Blevesearch indexes created by dsindexer.
dsindexer
dsindexer creates Blevesearch indexes for a dataset collection.
dsws
dsws.go - A web server/service for hosting dataset search and related static pages.
gsheets.go is a part of the dataset package written to allow import/export of records to/from dataset collections.
