dataset

package module
v0.0.67 Latest
Warning

This package is not in the latest version of its module.

Published: Jul 15, 2019 License: BSD-3-Clause Imports: 25 Imported by: 0

README

dataset DOI

dataset is a command line tool, Go package, and an experimental C shared library for working with JSON objects as collections. Collections can be stored on disc or in Cloud Storage. JSON objects are stored in collections as plain UTF-8 text. This means the objects can be accessed with common Unix text processing tools as well as most programming languages. dataset is also available as a Python package, see py_dataset

The dataset command line tool supports common data management operations such as initialization of collections and creation, reading, updating and deleting of JSON objects in a collection. Some of its enhanced features include the ability to generate data frames as well as the ability to import, export and synchronize JSON objects to and from CSV files and Google Sheets (experimental).

dataset is written in the Go programming language. It can be used as a Go package by other Go based software. Go supports generating C shared libraries. By compiling the Go source you can create a libdataset C shared library. The C shared library is currently being used by the Digital Library Development Group in Caltech Library from Python 3.7 (see py_dataset). This approach looks promising if you need support from other programming languages (e.g. Julia can call shared libraries easily with a ccall function).

See getting-started-with-dataset.md for a tour and tutorial. Included are examples using both the command line and Python via py_dataset.

Design choices

dataset isn't a database or a replacement for repository systems. It is guided by the idea that you should be able to work with text files, the JSON object documents, with standard Unix text utilities. It is intended to be simple to use with minimal setup (e.g. dataset init mycollection.ds would create a new collection called 'mycollection.ds'). It is built around a few abstractions: dataset stores JSON objects in collections; collections are folders containing the JSON object documents and any attachments; and a collection.json file describes the mapping of keys to folder locations. dataset takes minimal system resources and keeps all content, except JSON object attachments, in plain UTF-8 text. Attachments are stored using the venerable "tar" archive format.

The choice of plain UTF-8 and tar balls is intended to help future-proof reading dataset collections. Care has been taken to keep dataset simple enough and lightweight enough that it will run on a machine as small as a Raspberry Pi while being equally comfortable on a more resource-rich server or desktop environment. It should be easy to do alternative implementations in any language that has good string handling, JSON support and memory management.

Workflows

A typical library processing pattern is to write a "harvester" which stores its results in a dataset collection, something that transforms or aggregates the harvested content, and then a final rendering program that prepares the data for the web. The harvesters are typically written in Python or as simple Bash scripts storing the results in a dataset collection. Depending on performance needs, our transform and aggregate stages are written in either Python or Go, and our final rendering stages are typically written in Python or as simple Bash scripts.

Features

dataset supports:

  • basic storage actions (create, read, update and delete)
  • listing of collection keys (including filtering and sorting)
  • import/export of CSV files and Google Sheets
  • reshaping data by performing simple object joins
  • creating data grids and frames from collections based on key lists and dot paths into stored JSON objects

You can work with dataset collections via the command line tool, via Go using the dataset package, or in Python 3.7 using the py_dataset Python package. dataset is useful for general data science applications which need intermediate JSON object management but not a full-blown database.

Limitations of dataset

dataset has many limitations; some are listed below:

  • it is not a multi-process, multi-user data store (it's files on "disc" without locking)
  • it is not a replacement for a repository management system
  • it is not a general purpose database system
  • it does not supply version control on collections or objects

Explore dataset through A Shell Example, Getting Started with Dataset, How To guides, topics and Documentation.

Releases

Compiled versions are provided for Linux (amd64), Mac OS X (amd64), Windows 10 (amd64) and Raspbian (ARM7). See https://github.com/caltechlibrary/dataset/releases.

You can use dataset from Python via the py_dataset package.

Documentation

Overview

Package dataset includes the operations needed for processing collections of JSON documents and their attachments.

Authors R. S. Doiel, <rsdoiel@library.caltech.edu> and Tom Morrel, <tmorrell@library.caltech.edu>

Copyright (c) 2019, Caltech All rights not granted herein are expressly reserved by Caltech.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Package dataset provides a common approach for storing JSON object documents on local disc, on S3 and Google Cloud Storage. It is intended as a single user system for intermediate processing of JSON content for analysis or batch processing. It is not a database management system (if you need a JSON database system I would suggest looking at Couchdb, Mongo and Redis as a starting point).

The approach dataset takes is to store JSON documents in a pairtree structure under the collection folder. The keys are the JSON document names. JSON documents (and possibly their attachments) are then stored based on that assignment in the pairtree. Conversely the collection.json document is used to find and retrieve documents from the collection. The layout of the metadata is as follows

+ Collection - a directory

  • Collection/collection.json - metadata for retrieval
  • Collection/[Pairtree] - holds individual JSON docs and attachments

A key feature of dataset is to be Posix shell friendly. This has led to storing the JSON documents in a directory structure that standard Posix tooling can traverse. It has also meant that the JSON documents themselves remain on "disc" as plain text. This has facilitated integration with many other applications, programming languages and systems.

Attachments are non-JSON documents explicitly "attached" that share the same pairtree path but are placed in a sub directory called "_". If the document name is "Jane.Doe.json" and the attachment is photo.jpg the JSON document is "pairtree/Ja/ne/.D/oe/Jane.Doe.json" and the photo is in "pairtree/Ja/ne/.D/oe/_/photo.jpg".
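A pairtree path is derived from successive two-character slices of the key. The following is a minimal sketch of that idea, not the package's real implementation (which also has to handle multi-byte characters):

```go
package main

import (
	"fmt"
	"strings"
)

// pairPath splits an ASCII key into two-character path segments,
// the core idea behind a pairtree layout. A trailing odd character
// becomes its own one-character segment.
func pairPath(key string) string {
	var parts []string
	for i := 0; i < len(key); i += 2 {
		end := i + 2
		if end > len(key) {
			end = len(key)
		}
		parts = append(parts, key[i:end])
	}
	return strings.Join(parts, "/")
}

func main() {
	fmt.Println(pairPath("Jane.Doe")) // Ja/ne/.D/oe
}
```

The shallow, predictable directory fan-out is what lets standard Posix tooling (find, grep, etc.) traverse a collection directly.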

Additional operations beside storing and reading JSON documents are also supported. These include creating lists (arrays) of JSON documents from a list of keys, listing keys in the collection, counting documents in the collection, indexing and searching by indexes.

The primary use case driving the development of dataset is harvesting API content for library systems (e.g. EPrints, Invenio, ArchivesSpace, ORCID, CrossRef, OCLC). The harvesting needed to be done in such a way as to leverage existing Posix tooling (e.g. grep, sed, etc) for processing and analysis.

Initial use case:

Caltech Library has many repository, catalog and record management systems (e.g. EPrints, Invenio, ArchivesSpace, Islandora). It is common practice to harvest data from these systems for analysis or processing. Harvested records typically come in XML or JSON format. JSON has proven a flexible way of working with the data and, in our more modern tools, is the common format we use to move data around. We needed a way to standardize how we store these JSON records for intermediate processing, allowing us to use the growing ecosystem of JSON-related tooling available under Posix/Unix compatible systems.

Index

Constants

const (
	// Version of the dataset package
	Version = `v0.0.67`

	// License is a formatted form for dataset package based command line tools
	License = `` /* 1530-byte string literal not displayed */

	// Sort directions
	ASC  = iota
	DESC = iota

	// Pairtree is the only supported file layout
	PAIRTREE_LAYOUT = iota
)

Variables

This section is empty.

Functions

func Analyzer added in v0.0.3

func Analyzer(collectionName string) error

Analyzer checks the collection version and either calls bucketAnalyzer or pairtreeAnalyzer as appropriate.

func Delete

func Delete(name string) error

Delete an entire collection

func IsCollection added in v0.0.45

func IsCollection(p string) bool

IsCollection checks to see if a given path contains a collection.json file

func Repair added in v0.0.3

func Repair(collectionName string) error

Repair takes a collection name and calls wither bucketRepair or pairtreeRepair as appropriate.

Types

type Attachment

type Attachment struct {
	// Name is the filename and path to be used inside the generated tar file
	Name string `json:"name"`

	// Content is a byte array for storing the content associated with Name
	// NOTE: It is NOT written out in the Attachment metadata, hence json:"-".
	Content []byte `json:"-"`

	// Size remains to help us migrate pre v0.0.61 collections.
	// It should reflect the last size added.
	Size int64 `json:"size"`

	// Sizes is the sizes associated with the version being attached
	Sizes map[string]int64 `json:"sizes"`

	// Current holds the semver to the last added version
	Version string `json:"version"`

	// Checksums, currently implemented as MD5 checksums.
	// You should have one checksum per attached version.
	Checksums map[string]string `json:"checksums"`

	// HRef points at last attached version of the attached document, e.g. v0.0.0/photo.png
	// If you moved an object out of the pairtree it should be a URL.
	HRef string `json:"href"`

	// VersionHRefs is a map to all versions of the attached document
	// {
	//    "v0.0.0": "... /photo.png",
	//    "v0.0.1": "... /photo.png",
	//    "v0.0.2": "... /photo.png"
	// }
	VersionHRefs map[string]string `json:"version_hrefs"`

	// Created a date string in RFC3339 format
	Created string `json:"created"`

	// Modified a date string in RFC3339 format
	Modified string `json:"modified"`

	// Metadata is a map for application specific metadata about attachments.
	Metadata map[string]interface{} `json:"metadata,omitempty"`
}

Attachment is a structure for holding non-JSON content you wish to store alongside a JSON document in a collection

type Collection

type Collection struct {
	// DatasetVersion of the collection
	DatasetVersion string `json:"dataset_version"`

	// Name (filename) of collection
	Name string `json:"name"`

	// KeyMap holds the document key to path in the collection
	KeyMap map[string]string `json:"keymap"`

	// Store holds the storage system information (e.g. local disc, S3, GS)
	// and related methods for interacting with it
	Store *storage.Store `json:"-"`

	// FrameMap maps frame names to the rel path of the frame defined in the collection
	FrameMap map[string]string `json:"frames"`

	// Created is the date/time the init command was run in
	// RFC1123 format.
	Created string `json:"created,omitempty"`

	// Version of collection being stored in semver notation
	Version string `json:"version,omitempty"`

	// Contact info
	Contact string `json:"contact,omitempty"`

	// CodeMeta is a relative path or URL to a Code Meta
	// JSON document for the collection.  Often it'll be
	// in the collection's root and have the value "codemeta.json"
	// but also may be stored someplace else. It should be
	// an empty string if the codemeta.json file has not been
	// created.
	CodeMeta string `json:"codemeta,omitempty"`

	// Who is the person(s)/organization(s) that created the collection
	Who []string `json:"who,omitempty"`
	// What - description of collection
	What string `json:"what,omitempty"`
	// When - date associated with collection (e.g. 2019,
	// 2019-10, 2019-10-02), should map to an approx date like in
	// archival work.
	When string `json:"when,omitempty"`
	// Where - location (e.g. URL, address) of collection
	Where string `json:"where,omitempty"`
	// contains filtered or unexported fields
}

Collection is the container holding a pairtree containing JSON docs

func InitCollection added in v0.0.8

func InitCollection(name string) (*Collection, error)

InitCollection - creates a new collection with default alphabet and names of length 2.

func Open

func Open(name string) (*Collection, error)

Open reads in a collection's metadata and returns a new collection structure and an error value

func (*Collection) AttachFile added in v0.0.33

func (c *Collection) AttachFile(keyName, semver string, fullName string) error

AttachFile is for attaching a single non-JSON document to a dataset record. It will replace ANY existing attached content with the same semver and basename.

func (*Collection) AttachFiles

func (c *Collection) AttachFiles(keyName string, semver string, fileNames ...string) error

AttachFiles attaches non-JSON documents to a JSON document in the collection. Attachments are stored in a tar file; if the tar file exists then the attachment(s) are appended to it.

func (*Collection) AttachStream added in v0.0.63

func (c *Collection) AttachStream(keyName, semver, fullName string, buf io.Reader) error

AttachStream is for attaching a non-JSON file buffer (via an io.Reader).

func (*Collection) Attachments

func (c *Collection) Attachments(keyName string) ([]string, error)

Attachments returns a list of files and their sizes attached for a key name in the collection

func (*Collection) Clone added in v0.0.39

func (c *Collection) Clone(cloneName string, keys []string, verbose bool) error

Clone copies the current collection records into a newly initialized collection given a list of keys and new collection name. Returns an error value if there is a problem. Clone does NOT copy attachments, only the JSON records.

func (*Collection) CloneSample added in v0.0.39

func (c *Collection) CloneSample(trainingCollectionName string, testCollectionName string, keys []string, sampleSize int, verbose bool) error

CloneSample takes the current collection, a sample size, a training collection name and a test collection name. The training collection will be created and receive a random sample of the records from the current collection based on the sample size provided. Sample size must be greater than zero and less than the total number of records in the current collection.

If the test collection name is not an empty string it will be created and any records not in the training collection will be cloned from the current collection into the test collection.

func (*Collection) Close

func (c *Collection) Close() error

Close closes a collection, writing the updated keys to disc

func (*Collection) Create

func (c *Collection) Create(name string, data map[string]interface{}) error

Create makes a JSON doc from a map[string]interface{} and adds it to a collection; the name must be unique or an error is returned. The document must be a JSON object (not an array).

func (*Collection) CreateJSON added in v0.0.33

func (c *Collection) CreateJSON(key string, src []byte) error

CreateJSON adds a JSON doc to a collection, if a problem occurs it returns an error

func (*Collection) Delete

func (c *Collection) Delete(name string) error

Delete removes a JSON doc from a collection

func (*Collection) DeleteFrame added in v0.0.41

func (c *Collection) DeleteFrame(name string) error

DeleteFrame removes a frame from a collection, returns an error if frame can't be deleted.

func (*Collection) DocPath

func (c *Collection) DocPath(name string) (string, error)

DocPath returns a full path to a key or an error if not found

func (*Collection) ExportCSV added in v0.0.3

func (c *Collection) ExportCSV(fp io.Writer, eout io.Writer, f *DataFrame, verboseLog bool) (int, error)

ExportCSV takes a writer and frame and iterates over the objects generating rows and exports them as a CSV file

func (*Collection) ExportTable added in v0.0.47

func (c *Collection) ExportTable(eout io.Writer, f *DataFrame, verboseLog bool) (int, [][]interface{}, error)

ExportTable takes a frame and iterates over the objects generating rows, returning them as a table ([][]interface{})

func (*Collection) Frame added in v0.0.41

func (c *Collection) Frame(name string, keys []string, dotPaths []string, labels []string, verbose bool) (*DataFrame, error)

Frame takes a set of collection keys, dot paths and labels, builds an ObjectList and assembles metadata, returning a new CollectionFrame and error. Frames are associated with the collection and can be re-generated. If the lengths of labels and dotPaths do not match an error will be returned. If the frame already exists the definition is NOT UPDATED and the existing frame is returned. If you need to update a frame use Reframe().

func (*Collection) Frames added in v0.0.41

func (c *Collection) Frames() []string

Frames retrieves a list of available frames associated with a collection

func (*Collection) GetAttachedFiles

func (c *Collection) GetAttachedFiles(keyName string, semver string, filterNames ...string) error

GetAttachedFiles writes the attached file(s) to the current working directory as a side effect. If no filterNames are provided then all attachments are written out. An error value is always returned.

func (*Collection) Grid added in v0.0.41

func (c *Collection) Grid(keys []string, dotPaths []string, verbose bool) ([][]interface{}, error)

Grid takes a set of collection keys and builds a grid (a 2D array of cells) from the array of keys and dot paths provided
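The dot path resolution that grids and frames rely on can be sketched as a walk over a decoded JSON object. This is an illustration of the idea only, not the package's actual dot path engine (which also supports array indexes):

```go
package main

import (
	"fmt"
	"strings"
)

// resolve walks a decoded JSON object following a dot path like
// ".name.family", returning nil when any segment is missing.
func resolve(obj map[string]interface{}, dotPath string) interface{} {
	var cur interface{} = obj
	for _, part := range strings.Split(strings.TrimPrefix(dotPath, "."), ".") {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return nil
		}
		cur = m[part]
	}
	return cur
}

func main() {
	obj := map[string]interface{}{
		"name": map[string]interface{}{"family": "Doe", "given": "Jane"},
	}
	fmt.Println(resolve(obj, ".name.family")) // Doe
}
```

A grid is then just this resolution applied per key and per dot path, producing one cell per (key, path) pair.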

func (*Collection) HasFrame added in v0.0.47

func (c *Collection) HasFrame(name string) bool

HasFrame checks to see if a frame is already defined.

func (*Collection) HasKey added in v0.0.3

func (c *Collection) HasKey(key string) bool

HasKey returns true if key is in collection's KeyMap, false otherwise

func (*Collection) ImportCSV added in v0.0.3

func (c *Collection) ImportCSV(buf io.Reader, idCol int, skipHeaderRow bool, overwrite bool, verboseLog bool) (int, error)

ImportCSV takes a reader and iterates over the rows and imports them as JSON records into dataset. BUG: returns lines processed; should probably return number of rows imported
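The row-to-object mapping ImportCSV describes can be sketched with the standard library: header cells become attribute names and the id column supplies the object key. This is an illustrative sketch, not dataset's implementation, and the function and column names are hypothetical:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// rowsToObjects turns CSV text into key -> object maps, using the
// header row for attribute names and column idCol for the object key.
func rowsToObjects(src string, idCol int) (map[string]map[string]interface{}, error) {
	rows, err := csv.NewReader(strings.NewReader(src)).ReadAll()
	if err != nil {
		return nil, err
	}
	header, objects := rows[0], map[string]map[string]interface{}{}
	for _, row := range rows[1:] {
		obj := map[string]interface{}{}
		for i, cell := range row {
			obj[header[i]] = cell
		}
		objects[row[idCol]] = obj
	}
	return objects, nil
}

func main() {
	objects, _ := rowsToObjects("id,name\n1,Jane\n2,Tom\n", 0)
	fmt.Println(objects["1"]["name"]) // Jane
}
```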

func (*Collection) ImportTable added in v0.0.4

func (c *Collection) ImportTable(table [][]interface{}, idCol int, useHeaderRow bool, overwrite, verboseLog bool) (int, error)

ImportTable takes a [][]interface{} and iterates over the rows and imports them as JSON records into dataset.

func (*Collection) Join added in v0.0.47

func (c *Collection) Join(key string, obj map[string]interface{}, overwrite bool) error

Join takes a key, a map[string]interface{} and an overwrite bool and merges the map with an existing JSON object in the collection. BUG: This is a naive join, it assumes the keys in the object are top level properties.
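The naive, top-level-only merge described above can be sketched as follows; the record values are hypothetical:

```go
package main

import "fmt"

// join merges src's top-level properties into dst. When overwrite is
// false, existing keys in dst are kept; nested objects are replaced
// wholesale rather than merged, which is what makes the join "naive".
func join(dst, src map[string]interface{}, overwrite bool) {
	for k, v := range src {
		if _, exists := dst[k]; exists && !overwrite {
			continue
		}
		dst[k] = v
	}
}

func main() {
	rec := map[string]interface{}{"name": "Jane", "dept": "Library"}
	join(rec, map[string]interface{}{"name": "J. Doe", "orcid": "0000-0000-0000-0001"}, false)
	fmt.Println(rec["name"], rec["orcid"]) // Jane 0000-0000-0000-0001
}
```

With overwrite true the existing "name" would be replaced instead of kept.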

func (*Collection) KeyFilter added in v0.0.33

func (c *Collection) KeyFilter(keyList []string, filterExpr string) ([]string, error)

KeyFilter takes a list of keys and filter expression and returns the list of keys passing through the filter or an error

func (*Collection) KeySortByExpression added in v0.0.33

func (c *Collection) KeySortByExpression(keys []string, expr string) ([]string, error)

KeySortByExpression takes an array of keys and a sort expression and returns a sorted list of keys.

func (*Collection) Keys

func (c *Collection) Keys() []string

Keys returns a list of keys in a collection

func (*Collection) Length added in v0.0.6

func (c *Collection) Length() int

Length returns the number of keys in a collection

func (*Collection) MergeFromTable added in v0.0.47

func (c *Collection) MergeFromTable(frameName string, table [][]interface{}, overwrite bool, verbose bool) error

MergeFromTable - uses a DataFrame associated in the collection to map columns from a table into JSON object attributes saving the JSON object in the collection. If overwrite is true then JSON objects for matching keys will be updated, if false only new objects will be added to collection. Returns an error value

func (*Collection) MergeIntoTable added in v0.0.47

func (c *Collection) MergeIntoTable(frameName string, table [][]interface{}, overwrite bool, verbose bool) ([][]interface{}, error)

MergeIntoTable - uses a DataFrame associated with the collection to map attributes into a table, appending new content and optionally overwriting existing content for rows with matching ids. Returns a new table (i.e. [][]interface{}) or an error.

func (*Collection) ObjectList added in v0.0.61

func (c *Collection) ObjectList(keys []string, dotPaths []string, labels []string, verbose bool) ([]map[string]interface{}, error)

ObjectList (on a collection) takes a set of collection keys and builds an array of objects (i.e. map[string]interface{}) from the array of keys, dot paths and labels provided.
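
The dot path/label pairing ObjectList relies on can be illustrated with a minimal resolver. The sketch below handles only single-segment paths and is a hypothetical simplification of the package's dot path support.

```go
package main

import "fmt"

// resolve looks up a single-segment dot path (e.g. ".title") in a record.
// The real package supports deeper paths; this only strips the leading dot.
func resolve(record map[string]interface{}, dotPath string) interface{} {
	if len(dotPath) > 0 && dotPath[0] == '.' {
		dotPath = dotPath[1:]
	}
	return record[dotPath]
}

// objectList builds one output object per record, renaming each dot path's
// value to the label at the same index.
func objectList(records []map[string]interface{}, dotPaths, labels []string) []map[string]interface{} {
	out := []map[string]interface{}{}
	for _, rec := range records {
		obj := map[string]interface{}{}
		for i, p := range dotPaths {
			obj[labels[i]] = resolve(rec, p)
		}
		out = append(out, obj)
	}
	return out
}

func main() {
	records := []map[string]interface{}{
		{"title": "First", "pub_year": 2018},
	}
	objs := objectList(records, []string{".title", ".pub_year"}, []string{"title", "year"})
	fmt.Println(objs[0]["year"]) // 2018
}
```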

func (*Collection) Prune added in v0.0.33

func (c *Collection) Prune(keyName string, semver string, filterNames ...string) error

Prune removes a non-JSON document (e.g. an attachment) from a JSON document in the collection.

func (*Collection) Read

func (c *Collection) Read(name string, data map[string]interface{}, cleanObject bool) error

Read finds the record in a collection and updates the data map provided. The name must exist in the collection or an error is returned.

func (*Collection) ReadJSON added in v0.0.33

func (c *Collection) ReadJSON(name string) ([]byte, error)

ReadJSON finds the record in the collection and returns the JSON source.

func (*Collection) Reframe added in v0.0.41

func (c *Collection) Reframe(name string, keys []string, verbose bool) error

Reframe will regenerate the contents of a frame based on the current records in a collection. If a list of keys is supplied then the regenerated frame will be based on the new set of keys provided.

func (*Collection) SaveFrame added in v0.0.47

func (c *Collection) SaveFrame(name string, f *DataFrame) error

SaveFrame saves a frame in a collection or returns an error

func (*Collection) SaveMetadata added in v0.0.48

func (c *Collection) SaveMetadata() error

SaveMetadata writes the collection's metadata to c.Store and c.workPath

func (*Collection) Update

func (c *Collection) Update(name string, data map[string]interface{}) error

Update updates a JSON doc in a collection from the provided data map (note: the JSON doc must exist or an error is returned).

func (*Collection) UpdateJSON added in v0.0.33

func (c *Collection) UpdateJSON(name string, src []byte) error

UpdateJSON updates a JSON doc in a collection; returns an error if there is a problem.

type DataFrame added in v0.0.41

type DataFrame struct {
	// Explicit at creation
	Name string `json:"frame_name"`

	// CollectionName holds the name of the collection the frame was generated from. In theory you could
	// define a frame in one collection and use its results in another. A DataFrame can be rendered as a JSON
	// document.
	CollectionName string `json:"collection_name"`

	// DotPaths is a slice holding the definitions of what each Object attribute's data source is.
	DotPaths []string `json:"dot_paths"`

	// NOTE: Keys should hold the same values as column zero of the grid.
	// Keys controls the order of rows in a grid when reframing.
	Keys []string `json:"keys"`

	// NOTE: ObjectList is a replacement for Grid. This representation more
	// closely mirrors data frames as used in Python and R. It also can help
	// avoid an inner loop on iteration because we don't need to track a relative
	// index to get a "cell" value from a column heading.
	ObjectList []map[string]interface{} `json:"object_list"`

	// Created is the date the frame is originally generated and defined
	Created time.Time `json:"created"`

	// Updated is the date the frame is updated (e.g. reframed)
	Updated time.Time `json:"updated,omitempty"`

	// AllKeys is a flag used to define a frame as operating over an entire collection;
	// this allows for a simpler update. NOTE: this value affects how Reframe works.
	AllKeys bool `json:"use_all_keys"`

	// FilterExpr is the expression used to filter a collection's keys to determine
	// how a frame is "reframed". It is generally faster to create your key list outside
	// the frame but that approach has the disadvantage of not persisting with the frame.
	// NOTE: this value affects how Reframe works.
	FilterExpr string `json:"filter_expr,omitempty"`

	// SortExpr holds the sort expression so it persists with the frame. Often you can
	// get a faster sort outside the frame but that comes at a disadvantage of not being
	// persisted with the frame. NOTE: this value affects how Reframe works.
	SortExpr string `json:"sort_expr,omitempty"`
	// SampleSize holds the size used when a frame is intended to be a sample. It is used
	// when regenerating the sample. NOTE: this value affects how Reframe works.
	SampleSize int `json:"sample_size"`

	// Labels are derived from the DotPaths provided but can be replaced without changing
	// the dotpaths. Typically this is used to surface a deeper dotpath's value as something more
	// useful in the frame's context (e.g. first_title from an array of titles might be labeled "title")
	Labels []string `json:"labels,omitempty"`
}

DataFrame is the basic structure holding a list of objects as well as the definition of the list (so you can regenerate an updated list from a changed collection). It persists with the collection.

func (*DataFrame) Grid added in v0.0.41

func (f *DataFrame) Grid(includeHeaderRow bool) [][]interface{}

Grid returns a Grid representation of a DataFrame's ObjectList
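
The ObjectList-to-grid flattening Grid performs can be sketched as follows. The column order here comes from a caller-supplied label list, which is an assumption for illustration; the real method derives columns from the frame's definition.

```go
package main

import "fmt"

// toGrid flattens a list of objects into rows, one column per label, with an
// optional header row. Missing attributes become nil cells.
func toGrid(objects []map[string]interface{}, labels []string, includeHeaderRow bool) [][]interface{} {
	grid := [][]interface{}{}
	if includeHeaderRow {
		header := make([]interface{}, len(labels))
		for i, l := range labels {
			header[i] = l
		}
		grid = append(grid, header)
	}
	for _, obj := range objects {
		row := make([]interface{}, len(labels))
		for i, l := range labels {
			row[i] = obj[l] // nil when the attribute is absent
		}
		grid = append(grid, row)
	}
	return grid
}

func main() {
	objects := []map[string]interface{}{
		{"key": "001", "title": "First"},
		{"key": "002"},
	}
	grid := toGrid(objects, []string{"key", "title"}, true)
	fmt.Println(len(grid), grid[2][1]) // 3 <nil>
}
```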

func (*DataFrame) Objects added in v0.0.64

func (f *DataFrame) Objects() []map[string]interface{}

Objects returns a copy of DataFrame.ObjectList (array of map[string]interface{})

func (*DataFrame) String added in v0.0.41

func (f *DataFrame) String() string

String renders the data structure DataFrame as JSON to a string

type Err added in v0.0.62

type Err struct {
	Msg string
}

Err holds Semver's error messages

func (*Err) Error added in v0.0.62

func (err *Err) Error() string

type KeyValue added in v0.0.7

type KeyValue struct {
	// JSON Record ID in collection
	ID string
	// The value of the field to be sorted from record
	Value interface{}
}

type KeyValues added in v0.0.7

type KeyValues []KeyValue

func (KeyValues) Len added in v0.0.7

func (a KeyValues) Len() int

func (KeyValues) Less added in v0.0.7

func (a KeyValues) Less(i, j int) bool

func (KeyValues) Swap added in v0.0.7

func (a KeyValues) Swap(i, j int)

type Semver added in v0.0.62

type Semver struct {
	// Major version number (required, must be an integer as string)
	Major string `json:"major"`
	// Minor version number (required, must be an integer as string)
	Minor string `json:"minor"`
	// Patch level (optional, must be an integer as string)
	Patch string `json:"patch,omitempty"`
	// Suffix string, (optional, any string)
	Suffix string `json:"suffix,omitempty"`
}

Semver holds the information to generate a semver string

func ParseSemver added in v0.0.62

func ParseSemver(src []byte) (*Semver, error)

ParseSemver takes a byte slice and returns a version struct, and an error value.
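
A minimal parser in the same spirit as ParseSemver can be built from the standard library. The sketch below accepts MAJOR.MINOR or MAJOR.MINOR.PATCH with an optional "v" prefix and validates that each field is an integer as a string; it is illustrative only, not the package's implementation (suffix handling is omitted).

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type semver struct {
	Major, Minor, Patch string // integer values kept as strings
}

// parseSemver validates and splits a version string such as "v0.0.67".
func parseSemver(src string) (*semver, error) {
	s := strings.TrimPrefix(strings.TrimSpace(src), "v")
	parts := strings.Split(s, ".")
	if len(parts) < 2 || len(parts) > 3 {
		return nil, fmt.Errorf("%q is not a semver", src)
	}
	for _, p := range parts {
		if _, err := strconv.Atoi(p); err != nil {
			return nil, fmt.Errorf("%q is not a semver", src)
		}
	}
	v := &semver{Major: parts[0], Minor: parts[1]}
	if len(parts) == 3 {
		v.Patch = parts[2]
	}
	return v, nil
}

func main() {
	v, err := parseSemver("v0.0.67")
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Major, v.Minor, v.Patch) // 0 0 67
}
```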

func (*Semver) IncMajor added in v0.0.64

func (sv *Semver) IncMajor() error

IncMajor increments the major version number and zeros the minor and patch values. Returns an error if the increment fails.

func (*Semver) IncMinor added in v0.0.64

func (sv *Semver) IncMinor() error

IncMinor increments the minor version number and zeros the patch level. Returns an error if the increment fails.

func (*Semver) IncPatch added in v0.0.64

func (sv *Semver) IncPatch() error

IncPatch increments the patch level if it is numeric or returns an error.

func (*Semver) String added in v0.0.62

func (v *Semver) String() string

func (*Semver) ToJSON added in v0.0.62

func (v *Semver) ToJSON() []byte

ToJSON takes a version struct and returns JSON as a byte slice

Directories

Path Synopsis
cmd
dataset
dataset is a command line tool, Go package, shared library and Python package for working with JSON objects as collections on disc, in an S3 bucket or in Cloud Storage @Author R. S. Doiel, <rsdoiel@library.caltech.edu> Copyright (c) 2018, Caltech All rights not granted herein are expressly reserved by Caltech.
gsheetaccess
This is based on Google demo code to access the Google Sheets API.
gsheets.go is a part of the dataset package written to allow import/export of records to/from dataset collections.
tbl.go provides some utility functions to move data into/out of one- and two-dimensional string slices.
