restic

package module
v0.1.0
Published: Aug 21, 2015 License: BSD-2-Clause Imports: 24 Imported by: 0

README


Restic Design Principles

Restic is a program that does backups right and was designed with the following principles in mind:

  • Easy: Doing backups should be a frictionless process, otherwise you might be tempted to skip it. Restic should be easy to configure and use, so that, in the event of data loss, you can just restore your data. Likewise, restoring data should not be complicated.

  • Fast: Backing up your data with restic should only be limited by your network or hard disk bandwidth so that you can back up your files every day. Nobody does backups if it takes too much time. Restoring backups should only transfer data that is needed for the files that are to be restored, so that this process is also fast.

  • Verifiable: Much more important than backup is restore, so restic enables you to easily verify that all data can be restored.

  • Secure: Restic uses cryptography to guarantee confidentiality and integrity of your data. The location where the backup data is stored is assumed not to be a trusted environment (e.g. a shared space where others like system administrators are able to access your backups). Restic is built to secure your data against such attackers.

  • Efficient: With the growth of data, additional snapshots should only take the storage of the actual increment. Moreover, duplicate data should be de-duplicated before it is actually written to the storage back end to save precious backup space.

Build restic

Install Go (at least version 1.3), then run go run build.go; afterwards you'll find the binary in the current directory:

$ go run build.go

$ ./restic --help
Usage:
  restic [OPTIONS] <command>

Application Options:
  -r, --repo= Repository directory to backup to/restore from

Help Options:
  -h, --help  Show this help message

Available commands:
  backup     save file/directory
  cache      manage cache
  cat        dump something
  check      check the repository
  find       find a file/directory
  init       create repository
  key        manage keys
  list       lists data
  ls         list files
  restore    restore a snapshot
  snapshots  show snapshots
  unlock     remove locks
  version    display version

A short demo recording can be found here: asciicast

Compatibility

Backward compatibility for backups is important so that our users are always able to restore saved data. Therefore restic follows Semantic Versioning to clearly define which versions are compatible. The repository and data structures contained therein are considered the "Public API" in the sense of Semantic Versioning.

We guarantee backward compatibility of all repositories within one major version; as long as we do not increment the major version, data can be read and restored. We strive to be fully backward compatible to all prior versions.

Contribute and Documentation

Contributions are welcome! More information, including a description of the development environment, can be found in CONTRIBUTING.md. A document describing the design of restic and the data structures stored on the backend is contained in doc/Design.md.

Contact

If you discover a bug or find something surprising, please feel free to open a GitHub issue. If you would like to chat about restic, there is also the IRC channel #restic on irc.freenode.net. Or just write me an email :)

Important: If you discover something that you believe to be a possible critical security problem, please do not open a GitHub issue but send an email directly to alexander@bumpern.de. If possible, please encrypt your email using PGP (0xD3F7A907).

Talks

The following talks will be or have been given about restic:

License

Restic is licensed under "BSD 2-Clause License". You can find the complete text in the file LICENSE.

Documentation

Overview

Package restic is the top level package for the restic backup program; please see https://github.com/restic/restic for more information.

This package exposes the main components needed to create and restore a backup, as well as supporting functionality such as a local cache of objects.
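
As a rough orientation, the sketch below shows how these components fit together for a backup followed by a restore. It is a minimal sketch, not the canonical restic workflow: it assumes an already opened *repository.Repository (obtaining one is the job of the repository sub-package, which is not documented on this page), the import paths are inferred from the module layout, and passing nil for the optional *Progress and parent ID is an assumption.

package example

import (
	"log"

	"github.com/restic/restic"            // this package
	"github.com/restic/restic/repository" // assumed import path of the repository sub-package
)

// backupAndRestore archives the given paths into repo and then restores
// the resulting snapshot below dst.
func backupAndRestore(repo *repository.Repository, paths []string, dst string) error {
	arch := restic.NewArchiver(repo)

	// Create a snapshot of paths; a nil parent ID means there is no previous
	// snapshot to compare against, a nil progress means no reporting.
	sn, id, err := arch.Snapshot(nil, paths, nil)
	if err != nil {
		return err
	}
	log.Printf("created snapshot %v", sn)

	// Restore the snapshot we just created.
	res, err := restic.NewRestorer(repo, id)
	if err != nil {
		return err
	}
	return res.RestoreTo(dst)
}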

Index

Constants

This section is empty.

Variables

var (
	ErrNodeNotFound      = errors.New("named node not found")
	ErrNodeAlreadyInTree = errors.New("node already present")
)

Functions

func FindSnapshot

func FindSnapshot(repo *repository.Repository, s string) (backend.ID, error)

FindSnapshot takes a string and tries to find a snapshot whose ID matches the string as closely as possible.
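
For instance, a user-supplied (possibly abbreviated) snapshot ID could be resolved and the snapshot printed like this; a small sketch assuming an already opened repository, with import paths inferred from the module layout:

package example

import (
	"fmt"

	"github.com/restic/restic"
	"github.com/restic/restic/repository"
)

// printSnapshot resolves the snapshot ID prefix s and prints the snapshot.
func printSnapshot(repo *repository.Repository, s string) error {
	id, err := restic.FindSnapshot(repo, s)
	if err != nil {
		return fmt.Errorf("no matching snapshot for %q: %v", s, err)
	}

	sn, err := restic.LoadSnapshot(repo, id)
	if err != nil {
		return err
	}

	fmt.Println(sn) // Snapshot implements fmt.Stringer
	return nil
}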

func IsAlreadyLocked

func IsAlreadyLocked(err error) bool

IsAlreadyLocked returns true iff err is an instance of ErrAlreadyLocked.

func RemoveAllLocks

func RemoveAllLocks(repo *repository.Repository) error

RemoveAllLocks removes all locks forcefully.

func RemoveStaleLocks

func RemoveStaleLocks(repo *repository.Repository) error

RemoveStaleLocks deletes all locks detected as stale from the repository.

func WalkTree

func WalkTree(repo *repository.Repository, id backend.ID, done chan struct{}, jobCh chan<- WalkTreeJob)

WalkTree walks the tree specified by id recursively and sends a job for each file and directory it finds. When the channel done is closed, processing stops.
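
A sketch of driving WalkTree from a consumer. It assumes an already opened repository, a loaded snapshot whose Tree field is set, and that WalkTree closes the job channel when it has finished (which this page does not state explicitly); import paths are inferred from the module layout.

package example

import (
	"fmt"
	"log"

	"github.com/restic/restic"
	"github.com/restic/restic/repository"
)

// listSnapshotContent walks the tree referenced by sn and prints the path
// of every file and directory found.
func listSnapshotContent(repo *repository.Repository, sn *restic.Snapshot) {
	done := make(chan struct{})
	defer close(done) // closing done would stop a still-running walk

	jobCh := make(chan restic.WalkTreeJob)
	go restic.WalkTree(repo, *sn.Tree, done, jobCh)

	for job := range jobCh { // assumes jobCh is closed by WalkTree when done
		if job.Error != nil {
			log.Printf("error at %v: %v", job.Path, job.Error)
			continue
		}
		fmt.Println(job.Path)
	}
}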

Types

type Archiver

type Archiver struct {
	Error        func(dir string, fi os.FileInfo, err error) error
	SelectFilter pipe.SelectFunc
	Excludes     []string
	// contains filtered or unexported fields
}

Archiver is used to backup a set of directories.

func NewArchiver

func NewArchiver(repo *repository.Repository) *Archiver

NewArchiver returns a new archiver.

func (*Archiver) Save

func (arch *Archiver) Save(t pack.BlobType, id backend.ID, length uint, rd io.Reader) error

Save stores a blob read from rd in the repository.

func (*Archiver) SaveFile

func (arch *Archiver) SaveFile(p *Progress, node *Node) error

SaveFile stores the content of the file on the backend as a Blob by calling Save for each chunk.

func (*Archiver) SaveTreeJSON

func (arch *Archiver) SaveTreeJSON(item interface{}) (backend.ID, error)

SaveTreeJSON stores a tree in the repository.

func (*Archiver) Snapshot

func (arch *Archiver) Snapshot(p *Progress, paths []string, parentID *backend.ID) (*Snapshot, backend.ID, error)

Snapshot creates a snapshot of the given paths. If parentID is set, this is used to compare the files to the ones archived at the time this snapshot was taken.

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache is used to locally cache items from a repository.

func NewCache

func NewCache(repo *repository.Repository, cacheDir string) (*Cache, error)

NewCache returns a new cache at cacheDir. If it is the empty string, the default cache location is chosen.

func (*Cache) Clear

func (c *Cache) Clear(repo *repository.Repository) error

Clear removes information from the cache that isn't present in the repository any more.

func (*Cache) Has

func (c *Cache) Has(t backend.Type, subtype string, id backend.ID) (bool, error)

Has checks if the local cache has the id.

func (*Cache) Load

func (c *Cache) Load(t backend.Type, subtype string, id backend.ID) (io.ReadCloser, error)

Load returns information from the cache. The returned io.ReadCloser must be closed by the caller.

func (*Cache) Store

func (c *Cache) Store(t backend.Type, subtype string, id backend.ID) (io.WriteCloser, error)

Store returns an io.WriteCloser that is used to save new information to the cache. The returned io.WriteCloser must be closed by the caller after all data has been written.
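
A sketch of the store/has/load cycle. The concrete backend.Type value, subtype and data are left to the caller, the empty cacheDir selects the default location as documented above, and import paths are inferred from the module layout.

package example

import (
	"errors"
	"io"
	"io/ioutil"

	"github.com/restic/restic"
	"github.com/restic/restic/backend"
	"github.com/restic/restic/repository"
)

// cacheRoundTrip writes data for id to the local cache, checks that the
// cache now has it, and reads it back.
func cacheRoundTrip(repo *repository.Repository, t backend.Type, subtype string, id backend.ID, data []byte) error {
	c, err := restic.NewCache(repo, "") // "" selects the default cache location
	if err != nil {
		return err
	}

	wr, err := c.Store(t, subtype, id)
	if err != nil {
		return err
	}
	if _, err := wr.Write(data); err != nil {
		wr.Close()
		return err
	}
	if err := wr.Close(); err != nil {
		return err
	}

	ok, err := c.Has(t, subtype, id)
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("item missing from cache directly after storing it")
	}

	rd, err := c.Load(t, subtype, id)
	if err != nil {
		return err
	}
	defer rd.Close()

	_, err = io.Copy(ioutil.Discard, rd)
	return err
}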

type ErrAlreadyLocked

type ErrAlreadyLocked struct {
	// contains filtered or unexported fields
}

ErrAlreadyLocked is returned when NewLock or NewExclusiveLock are unable to acquire the desired lock.

func (ErrAlreadyLocked) Error

func (e ErrAlreadyLocked) Error() string

type Lock

type Lock struct {
	Time      time.Time `json:"time"`
	Exclusive bool      `json:"exclusive"`
	Hostname  string    `json:"hostname"`
	Username  string    `json:"username"`
	PID       int       `json:"pid"`
	UID       uint32    `json:"uid,omitempty"`
	GID       uint32    `json:"gid,omitempty"`
	// contains filtered or unexported fields
}

Lock represents a process locking the repository for an operation.

There are two types of locks: exclusive and non-exclusive. There may be many different non-exclusive locks, but at most one exclusive lock, which can only be acquired while no non-exclusive lock is held.

A lock must be refreshed regularly so that it is not considered stale; this is done by calling Refresh at regular intervals.

func LoadLock

func LoadLock(repo *repository.Repository, id backend.ID) (*Lock, error)

LoadLock loads and deserializes a lock from a repository.

func NewExclusiveLock

func NewExclusiveLock(repo *repository.Repository) (*Lock, error)

NewExclusiveLock returns a new, exclusive lock for the repository. If another lock (normal or exclusive) is already held by another process, ErrAlreadyLocked is returned.

func NewLock

func NewLock(repo *repository.Repository) (*Lock, error)

NewLock returns a new, non-exclusive lock for the repository. If an exclusive lock is already held by another process, ErrAlreadyLocked is returned.
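
A sketch of running an operation under a non-exclusive lock, refreshing it in the background so it does not go stale. The five-minute refresh interval is an arbitrary choice well below the 30-minute staleness threshold mentioned under Stale; import paths are inferred from the module layout.

package example

import (
	"fmt"
	"log"
	"time"

	"github.com/restic/restic"
	"github.com/restic/restic/repository"
)

// withLock runs fn while holding a non-exclusive lock on the repository.
func withLock(repo *repository.Repository, fn func() error) error {
	lock, err := restic.NewLock(repo)
	if err != nil {
		if restic.IsAlreadyLocked(err) {
			return fmt.Errorf("repository is exclusively locked by another process: %v", err)
		}
		return err
	}
	defer lock.Unlock()

	// Refresh the lock periodically so it is not considered stale.
	stop := make(chan struct{})
	defer close(stop)
	go func() {
		ticker := time.NewTicker(5 * time.Minute)
		defer ticker.Stop()
		for {
			select {
			case <-stop:
				return
			case <-ticker.C:
				if err := lock.Refresh(); err != nil {
					log.Printf("refreshing lock failed: %v", err)
				}
			}
		}
	}()

	return fn()
}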

func (*Lock) Refresh

func (l *Lock) Refresh() error

Refresh refreshes the lock by creating a new file in the backend with a new timestamp. Afterwards the old lock is removed.

func (*Lock) Stale

func (l *Lock) Stale() bool

Stale returns true if the lock is stale. A lock is stale if the timestamp is older than 30 minutes or if it was created on the current machine and the process isn't alive any more.

func (Lock) String

func (l Lock) String() string

func (*Lock) Unlock

func (l *Lock) Unlock() error

Unlock removes the lock from the repository.

type Node

type Node struct {
	Name       string       `json:"name"`
	Type       string       `json:"type"`
	Mode       os.FileMode  `json:"mode,omitempty"`
	ModTime    time.Time    `json:"mtime,omitempty"`
	AccessTime time.Time    `json:"atime,omitempty"`
	ChangeTime time.Time    `json:"ctime,omitempty"`
	UID        uint32       `json:"uid"`
	GID        uint32       `json:"gid"`
	User       string       `json:"user,omitempty"`
	Group      string       `json:"group,omitempty"`
	Inode      uint64       `json:"inode,omitempty"`
	Size       uint64       `json:"size,omitempty"`
	Links      uint64       `json:"links,omitempty"`
	LinkTarget string       `json:"linktarget,omitempty"`
	Device     uint64       `json:"device,omitempty"`
	Content    []backend.ID `json:"content"`
	Subtree    *backend.ID  `json:"subtree,omitempty"`

	Error string `json:"error,omitempty"`
	// contains filtered or unexported fields
}

Node is a file, directory or other item in a backup.

func NodeFromFileInfo

func NodeFromFileInfo(path string, fi os.FileInfo) (*Node, error)

NodeFromFileInfo returns a new node from the given path and FileInfo.
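
For example, a Node for a single filesystem entry could be built like this; os.Lstat is used so that symlinks are described rather than followed (a small sketch, with the import path inferred from the module layout):

package example

import (
	"os"

	"github.com/restic/restic"
)

// nodeForPath returns a Node describing the filesystem entry at path.
func nodeForPath(path string) (*restic.Node, error) {
	fi, err := os.Lstat(path)
	if err != nil {
		return nil, err
	}
	return restic.NodeFromFileInfo(path, fi)
}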

func (*Node) CreateAt

func (node *Node) CreateAt(path string, repo *repository.Repository) error

CreateAt creates the node at the given path and restores all the metadata.

func (Node) Equals

func (node Node) Equals(other Node) bool

func (Node) MarshalJSON

func (node Node) MarshalJSON() ([]byte, error)

func (*Node) OpenForReading

func (node *Node) OpenForReading() (*os.File, error)

func (Node) RestoreTimestamps

func (node Node) RestoreTimestamps(path string) error

func (Node) String

func (node Node) String() string

func (Node) Tree

func (node Node) Tree() *Tree

func (*Node) UnmarshalJSON

func (node *Node) UnmarshalJSON(data []byte) error

type Progress

type Progress struct {
	OnStart  func()
	OnUpdate ProgressFunc
	OnDone   ProgressFunc
	// contains filtered or unexported fields
}

func NewProgress

func NewProgress(d time.Duration) *Progress

NewProgress returns a new progress reporter. When Start() is called, the function OnStart is executed once. Afterwards the function OnUpdate is called when new data arrives, or at least once per interval d. The function OnDone is called when Done() is called. These functions are called synchronously and can use shared state.
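
A sketch of a simple reporter that prints accumulated statistics at most once per second; such a value can then be handed to functions accepting a *Progress, for example Scan or Archiver.Snapshot. The import path is inferred from the module layout.

package example

import (
	"fmt"
	"time"

	"github.com/restic/restic"
)

// newPrintingProgress returns a Progress that writes a status line on each
// update and a summary when the operation is done.
func newPrintingProgress() *restic.Progress {
	p := restic.NewProgress(time.Second)

	p.OnUpdate = func(s restic.Stat, runtime time.Duration, ticker bool) {
		fmt.Printf("\r[%s] %s", runtime, s) // Stat implements fmt.Stringer
	}

	p.OnDone = func(s restic.Stat, runtime time.Duration, ticker bool) {
		fmt.Printf("\ndone after %s: %s\n", runtime, s)
	}

	return p
}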

func (*Progress) Done

func (p *Progress) Done()

Done closes the progress report.

func (*Progress) Report

func (p *Progress) Report(s Stat)

Report adds the statistics from s to the current state and tries to report the accumulated statistics via the feedback channel.

func (*Progress) Reset

func (p *Progress) Reset()

Reset resets all statistic counters to zero.

func (*Progress) Start

func (p *Progress) Start()

Start resets and runs the progress reporter.

type ProgressFunc

type ProgressFunc func(s Stat, runtime time.Duration, ticker bool)

type Restorer

type Restorer struct {
	Error        func(dir string, node *Node, err error) error
	SelectFilter func(item string, dstpath string, node *Node) bool
	// contains filtered or unexported fields
}

Restorer is used to restore a snapshot to a directory.

func NewRestorer

func NewRestorer(repo *repository.Repository, id backend.ID) (*Restorer, error)

NewRestorer creates a restorer preloaded with the content from the snapshot id.

func (*Restorer) RestoreTo

func (res *Restorer) RestoreTo(dir string) error

RestoreTo creates the directories and files in the snapshot below dir. Before an item is created, res.SelectFilter is called.
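
A sketch of a selective restore, assuming an already opened repository. The keep predicate stands in for whatever selection logic an application needs, returning nil from the Error callback is assumed to let the restore continue, and import paths are inferred from the module layout.

package example

import (
	"log"

	"github.com/restic/restic"
	"github.com/restic/restic/backend"
	"github.com/restic/restic/repository"
)

// restoreSelected restores snapshot id below dst, skipping every item whose
// destination path is rejected by keep.
func restoreSelected(repo *repository.Repository, id backend.ID, dst string, keep func(dstpath string) bool) error {
	res, err := restic.NewRestorer(repo, id)
	if err != nil {
		return err
	}

	// SelectFilter decides per item whether it is restored.
	res.SelectFilter = func(item, dstpath string, node *restic.Node) bool {
		return keep(dstpath)
	}

	// Log restore errors; returning nil is assumed to continue the restore.
	res.Error = func(dir string, node *restic.Node, err error) error {
		log.Printf("error restoring %v: %v", dir, err)
		return nil
	}

	return res.RestoreTo(dst)
}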

func (*Restorer) Snapshot

func (res *Restorer) Snapshot() *Snapshot

Snapshot returns the snapshot this restorer is configured to use.

type Snapshot

type Snapshot struct {
	Time     time.Time   `json:"time"`
	Parent   *backend.ID `json:"parent,omitempty"`
	Tree     *backend.ID `json:"tree"`
	Paths    []string    `json:"paths"`
	Hostname string      `json:"hostname,omitempty"`
	Username string      `json:"username,omitempty"`
	UID      uint32      `json:"uid,omitempty"`
	GID      uint32      `json:"gid,omitempty"`
	Excludes []string    `json:"excludes,omitempty"`
	// contains filtered or unexported fields
}

func LoadSnapshot

func LoadSnapshot(repo *repository.Repository, id backend.ID) (*Snapshot, error)

func NewSnapshot

func NewSnapshot(paths []string) (*Snapshot, error)

func (Snapshot) ID

func (sn Snapshot) ID() *backend.ID

func (Snapshot) String

func (sn Snapshot) String() string

type Stat

type Stat struct {
	Files  uint64
	Dirs   uint64
	Bytes  uint64
	Trees  uint64
	Blobs  uint64
	Errors uint64
}

func Scan

func Scan(dirs []string, filter pipe.SelectFunc, p *Progress) (Stat, error)

Scan traverses the dirs to collect Stat information while emitting progress information with p.
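
For instance, Scan can be combined with a Progress like the one sketched above. The pipe.SelectFunc filter is taken as a parameter here because its exact shape is defined in the pipe sub-package and not shown on this page; import paths are inferred from the module layout.

package example

import (
	"fmt"

	"github.com/restic/restic"
	"github.com/restic/restic/pipe"
)

// countItems walks dirs and prints the totals collected by Scan, reporting
// intermediate numbers through p.
func countItems(dirs []string, filter pipe.SelectFunc, p *restic.Progress) error {
	stat, err := restic.Scan(dirs, filter, p)
	if err != nil {
		return err
	}

	fmt.Printf("%d files, %d dirs, %d bytes\n", stat.Files, stat.Dirs, stat.Bytes)
	return nil
}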

func (*Stat) Add

func (s *Stat) Add(other Stat)

Add accumulates other into s.

func (Stat) String

func (s Stat) String() string

type Tree

type Tree struct {
	Nodes []*Node `json:"nodes"`
}

func LoadTree

func LoadTree(repo *repository.Repository, id backend.ID) (*Tree, error)

func NewTree

func NewTree() *Tree

func (Tree) Equals

func (t Tree) Equals(other *Tree) bool

Equals returns true if t and other have exactly the same nodes.

func (Tree) Find

func (t Tree) Find(name string) (*Node, error)

func (*Tree) Insert

func (t *Tree) Insert(node *Node) error

func (Tree) String

func (t Tree) String() string

func (Tree) Subtrees

func (t Tree) Subtrees() (trees backend.IDs)

Subtrees returns a slice of all subtree IDs of the tree.
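
A sketch of building a tree in memory and querying it; the nodes themselves could come from NodeFromFileInfo as shown above, and import paths are inferred from the module layout.

package example

import (
	"github.com/restic/restic"
	"github.com/restic/restic/backend"
)

// buildTree inserts nodes into a fresh tree, looks the first one up again
// by name and returns the IDs of all subtrees referenced by the tree.
func buildTree(nodes []*restic.Node) (*restic.Tree, backend.IDs, error) {
	t := restic.NewTree()

	for _, n := range nodes {
		if err := t.Insert(n); err != nil {
			// e.g. restic.ErrNodeAlreadyInTree for a duplicate node
			return nil, nil, err
		}
	}

	if len(nodes) > 0 {
		if _, err := t.Find(nodes[0].Name); err != nil {
			// a missing name yields restic.ErrNodeNotFound
			return nil, nil, err
		}
	}

	return t, t.Subtrees(), nil
}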

type WalkTreeJob

type WalkTreeJob struct {
	Path  string
	Error error

	Node *Node
	Tree *Tree
}

Directories

Path Synopsis

Godeps
_workspace/src/bazil.org/fuse
    Package fuse enables writing FUSE file systems on Linux, OS X, and FreeBSD.
_workspace/src/bazil.org/fuse/examples/clockfs
    Clockfs implements a file system with the current time in a file.
_workspace/src/bazil.org/fuse/examples/hellofs
    Hellofs implements a simple "hello world" file system.
_workspace/src/bazil.org/fuse/fs/bench
    Package bench contains benchmarks.
_workspace/src/bazil.org/fuse/syscallx
    Package syscallx provides wrappers that make syscalls on various platforms more interoperable.
_workspace/src/github.com/jessevdk/go-flags
    Package flags provides an extensive command line option parser.
_workspace/src/github.com/juju/errors
    The juju/errors package provides an easy way to annotate errors without losing the original error context.
_workspace/src/github.com/kr/fs
    Package fs provides filesystem-related functions.
_workspace/src/github.com/mitchellh/goamz/aws
    goamz - Go packages to interact with the Amazon Web Services.
_workspace/src/github.com/pkg/sftp
    Package sftp implements the SSH File Transfer Protocol as described in https://filezilla-project.org/specs/draft-ietf-secsh-filexfer-02.txt
_workspace/src/github.com/pkg/sftp/examples/buffered-read-benchmark
    buffered-read-benchmark benchmarks the performance of reading from /dev/zero on the server to a []byte on the client via io.Copy.
_workspace/src/github.com/pkg/sftp/examples/buffered-write-benchmark
    buffered-write-benchmark benchmarks the performance of writing a single large []byte on the client to /dev/null on the server via io.Copy.
_workspace/src/github.com/pkg/sftp/examples/streaming-read-benchmark
    streaming-read-benchmark benchmarks the performance of reading from /dev/zero on the server to /dev/null on the client via io.Copy.
_workspace/src/github.com/pkg/sftp/examples/streaming-write-benchmark
    streaming-write-benchmark benchmarks the performance of writing from /dev/zero on the client to /dev/null on the server via io.Copy.
_workspace/src/github.com/restic/chunker
    Package chunker implements Content Defined Chunking (CDC) based on a rolling Rabin Checksum.
_workspace/src/github.com/vaughan0/go-ini
    Package ini provides functions for parsing INI configuration files.
_workspace/src/golang.org/x/crypto/pbkdf2
    Package pbkdf2 implements the key derivation function PBKDF2 as defined in RFC 2898 / PKCS #5 v2.0.
_workspace/src/golang.org/x/crypto/poly1305
    Package poly1305 implements the Poly1305 one-time message authentication code as specified in http://cr.yp.to/mac/poly1305-20050329.pdf.
_workspace/src/golang.org/x/crypto/scrypt
    Package scrypt implements the scrypt key derivation function as defined in Colin Percival's paper "Stronger Key Derivation via Sequential Memory-Hard Functions" (http://www.tarsnap.com/scrypt/scrypt.pdf).
_workspace/src/golang.org/x/crypto/ssh
    Package ssh implements an SSH client and server.
_workspace/src/golang.org/x/crypto/ssh/agent
    Package agent implements a client to an ssh-agent daemon.
_workspace/src/golang.org/x/crypto/ssh/terminal
    Package terminal provides support functions for dealing with terminals, as commonly found on UNIX systems.
_workspace/src/golang.org/x/crypto/ssh/test
    This package contains integration tests for the golang.org/x/crypto/ssh package.
_workspace/src/golang.org/x/net/context
    Package context defines the Context type, which carries deadlines, cancelation signals, and other request-scoped values across API boundaries and between processes.
backend
    Package backend provides local and remote storage for restic repositories.
backend/local
    Package local implements repository storage in a local directory.
backend/s3
backend/sftp
    Package sftp implements repository storage in a directory on a remote server via the sftp protocol.
cmd/restic
    This package contains the code for the restic executable.
crypto
    Package crypto provides all cryptographic operations needed in restic.
debug
    Package debug provides an infrastructure for logging debug information and breakpoints.
filter
    Package filter implements filters for files similar to filepath.Glob, but in contrast to filepath.Glob a pattern may specify directories.
pack
    Package pack provides functions for combining and parsing pack files.
pipe
    Package pipe implements walking a directory in a deterministic order.
repository
    Package repository implements a restic repository on top of a backend.
    Package test_helper provides helper functions for writing tests for restic.
