OmniStorage


OmniStorage is a unified storage abstraction layer for Go, inspired by rclone. It provides a single interface for reading and writing to various storage backends with composable layers for compression and record framing.

Full Documentation | API Reference

Features

  • Single interface for multiple storage backends (local files, S3, cloud drives, etc.)
  • Composable layers for compression (gzip, zstd) and formatting (NDJSON)
  • Sync engine for file synchronization between backends (like rclone sync)
  • Extended interface for metadata, server-side copy/move, and capability discovery
  • Backend registration allowing external packages to implement backends

Installation

go get github.com/grokify/omnistorage

Quick Start

Basic Read/Write
package main

import (
    "context"
    "io"
    "log"

    "github.com/grokify/omnistorage/backend/file"
)

func main() {
    ctx := context.Background()

    // Create a file backend
    backend := file.New(file.Config{Root: "/data"})
    defer backend.Close()

    // Write a file
    w, err := backend.NewWriter(ctx, "hello.txt")
    if err != nil {
        log.Fatal(err)
    }
    w.Write([]byte("Hello, World!"))
    w.Close()

    // Read it back
    r, err := backend.NewReader(ctx, "hello.txt")
    if err != nil {
        log.Fatal(err)
    }
    data, _ := io.ReadAll(r)
    r.Close()

    log.Println(string(data)) // "Hello, World!"
}
With Compression
import (
    "github.com/grokify/omnistorage/backend/file"
    "github.com/grokify/omnistorage/compress/gzip"
)

// Write compressed data
fileWriter, _ := backend.NewWriter(ctx, "data.txt.gz")
gzipWriter, _ := gzip.NewWriter(fileWriter)
gzipWriter.Write([]byte("compressed content"))
gzipWriter.Close()

// Read compressed data
fileReader, _ := backend.NewReader(ctx, "data.txt.gz")
gzipReader, _ := gzip.NewReader(fileReader)
data, _ := io.ReadAll(gzipReader)
gzipReader.Close()
With NDJSON Format
import (
    "github.com/grokify/omnistorage/backend/file"
    "github.com/grokify/omnistorage/format/ndjson"
)

// Write NDJSON records
w, _ := backend.NewWriter(ctx, "records.ndjson")
ndjsonWriter := ndjson.NewWriter(w)
ndjsonWriter.Write([]byte(`{"id":1,"name":"alice"}`))
ndjsonWriter.Write([]byte(`{"id":2,"name":"bob"}`))
ndjsonWriter.Close()

// Read NDJSON records
r, _ := backend.NewReader(ctx, "records.ndjson")
ndjsonReader := ndjson.NewReader(r)
for {
    record, err := ndjsonReader.Read()
    if err == io.EOF {
        break
    }
    if err != nil {
        log.Fatal(err)
    }
    log.Println(string(record))
}
ndjsonReader.Close()
Using the Registry
import "github.com/grokify/omnistorage"

// Open backend by name
backend, _ := omnistorage.Open("file", map[string]string{
    "root": "/data",
})
defer backend.Close()

// List registered backends
backends := omnistorage.Backends() // ["file", "memory", "s3"]

Backends

File Backend

Local filesystem storage.

import "github.com/grokify/omnistorage/backend/file"

backend := file.New(file.Config{
    Root: "/data",  // Base directory for all operations
})
Memory Backend

In-memory storage for testing.

import "github.com/grokify/omnistorage/backend/memory"

backend := memory.New()
S3 Backend

S3-compatible storage (AWS S3, Cloudflare R2, MinIO, Wasabi, etc.).

import "github.com/grokify/omnistorage/backend/s3"

// AWS S3
backend, _ := s3.New(s3.Config{
    Bucket: "my-bucket",
    Region: "us-east-1",
})

// Cloudflare R2
backend, _ := s3.New(s3.Config{
    Bucket:   "my-bucket",
    Endpoint: "https://<account_id>.r2.cloudflarestorage.com",
    Region:   "auto",
})

// MinIO (local)
backend, _ := s3.New(s3.Config{
    Bucket:       "my-bucket",
    Endpoint:     "http://localhost:9000",
    UsePathStyle: true,
    DisableSSL:   true,
})

// From environment variables
backend, _ := s3.New(s3.ConfigFromEnv())
Google Drive

Google Drive is in a separate repository to keep the core lightweight.

go get github.com/grokify/omnistorage-google
import "github.com/grokify/omnistorage-google/backend/drive"

backend, _ := drive.New(drive.Config{
    CredentialsFile: "credentials.json",
    RootFolder:      "My App Data",
})

See omnistorage-google for details.

Sync Operations

The sync package provides rclone-like file synchronization.

Sync (Mirror)

Make destination match source, including deletes.

import "github.com/grokify/omnistorage/sync"

result, err := sync.Sync(ctx, srcBackend, dstBackend, "data/", "backup/", sync.Options{
    DeleteExtra: true,  // Delete files in dst not in src
    DryRun:      false,
})
fmt.Printf("Copied: %d, Updated: %d, Deleted: %d\n",
    result.Copied, result.Updated, result.Deleted)
Copy

Copy files without deleting extras.

// Copy a directory
result, _ := sync.Copy(ctx, src, dst, "data/", "backup/", sync.Options{})

// Copy a single file
err := sync.CopyFile(ctx, src, dst, "file.txt", "file_copy.txt")

// Copy with progress
result, _ := sync.CopyWithProgress(ctx, src, dst, "data/", "backup/",
    func(file string, bytes int64) {
        fmt.Printf("Copying %s: %d bytes\n", file, bytes)
    })
Bisync (Bidirectional Sync)

Two-way synchronization with conflict resolution.

import "github.com/grokify/omnistorage/sync"

result, err := sync.Bisync(ctx, backend1, backend2, "folder1/", "folder2/", sync.BisyncOptions{
    ConflictStrategy: sync.ConflictNewerWins,  // Newer file wins conflicts
    DryRun:           false,
})
fmt.Printf("Copied to path1: %d, Copied to path2: %d, Conflicts: %d\n",
    result.CopiedToPath1, result.CopiedToPath2, len(result.Conflicts))

Conflict resolution strategies:

  • ConflictNewerWins - Newer file overwrites older (default)
  • ConflictLargerWins - Larger file overwrites smaller
  • ConflictSourceWins - First backend (backend1) always wins
  • ConflictDestWins - Second backend (backend2) always wins
  • ConflictKeepBoth - Keep both files with conflict suffix
  • ConflictSkip - Skip conflicting files
  • ConflictError - Record as error, don't resolve
Check (Verify)

Verify files match between backends.

// Simple check
inSync, _ := sync.Verify(ctx, src, dst, "data/", "backup/", sync.Options{})

// Detailed check
result, _ := sync.Check(ctx, src, dst, "data/", "backup/", sync.Options{})
fmt.Printf("Match: %d, Differ: %d, SrcOnly: %d, DstOnly: %d\n",
    len(result.Match), len(result.Differ), len(result.SrcOnly), len(result.DstOnly))

// Human-readable report
report, _ := sync.VerifyAndReport(ctx, src, dst, "data/", "backup/", sync.Options{})
fmt.Println(report)
Options
sync.Options{
    DeleteExtra:    true,   // Delete extra files in destination
    DryRun:         true,   // Report changes without making them
    Checksum:       true,   // Compare by checksum (slower but accurate)
    SizeOnly:       true,   // Compare by size only (fast)
    IgnoreExisting: true,   // Skip files that exist in destination
    MaxErrors:      10,     // Stop after N errors (0 = stop on first)
    Concurrency:    4,      // Concurrent transfers
    Progress: func(p sync.Progress) {
        fmt.Printf("%s: %d/%d files\n", p.Phase, p.FilesTransferred, p.TotalFiles)
    },
}
Logging

Sync operations support structured logging via *slog.Logger.

import (
    "log/slog"
    "os"
    "github.com/grokify/omnistorage/sync"
)

// With custom logger
result, _ := sync.Sync(ctx, src, dst, "data/", "backup/", sync.Options{
    Logger: slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
        Level: slog.LevelDebug,
    })),
})

// Output includes:
// - Sync start/complete with summary
// - File scan progress
// - Copy/delete operations (at debug level)
// - Errors with context

When no logger is provided, a null logger is used (no output).

Extended Interface

Backends may implement ExtendedBackend for additional capabilities.

// Check if backend supports extended operations
if ext, ok := omnistorage.AsExtended(backend); ok {
    // Get file metadata
    info, _ := ext.Stat(ctx, "file.txt")
    fmt.Printf("Size: %d, Modified: %s\n", info.Size(), info.ModTime())

    // Server-side copy (no download/upload)
    if ext.Features().Copy {
        ext.Copy(ctx, "source.txt", "dest.txt")
    }

    // Server-side move
    if ext.Features().Move {
        ext.Move(ctx, "old.txt", "new.txt")
    }

    // Directory operations
    ext.Mkdir(ctx, "new-folder")
    ext.Rmdir(ctx, "empty-folder")
}
Feature Discovery
features := ext.Features()
if features.Copy {
    // Backend supports server-side copy
}
if features.Move {
    // Backend supports server-side move
}

Compression

Gzip
import "github.com/grokify/omnistorage/compress/gzip"

// Write
gzWriter, _ := gzip.NewWriter(writer)
gzWriter.Write(data)
gzWriter.Close()

// Read
gzReader, _ := gzip.NewReader(reader)
data, _ := io.ReadAll(gzReader)
gzReader.Close()
Zstandard
import "github.com/grokify/omnistorage/compress/zstd"

// Write
zstdWriter, _ := zstd.NewWriter(writer)
zstdWriter.Write(data)
zstdWriter.Close()

// Read
zstdReader, _ := zstd.NewReader(reader)
data, _ := io.ReadAll(zstdReader)
zstdReader.Close()

Interfaces

Backend

The core interface for all storage backends.

type Backend interface {
    NewWriter(ctx context.Context, path string, opts ...WriterOption) (io.WriteCloser, error)
    NewReader(ctx context.Context, path string, opts ...ReaderOption) (io.ReadCloser, error)
    Exists(ctx context.Context, path string) (bool, error)
    Delete(ctx context.Context, path string) error
    List(ctx context.Context, prefix string) ([]string, error)
    Close() error
}
ExtendedBackend

Extended interface for metadata and server-side operations.

type ExtendedBackend interface {
    Backend
    Stat(ctx context.Context, path string) (ObjectInfo, error)
    Mkdir(ctx context.Context, path string) error
    Rmdir(ctx context.Context, path string) error
    Copy(ctx context.Context, src, dst string) error
    Move(ctx context.Context, src, dst string) error
    Features() Features
}
RecordWriter / RecordReader

For streaming record-oriented data (logs, events, NDJSON).

type RecordWriter interface {
    Write(data []byte) error
    Flush() error
    Close() error
}

type RecordReader interface {
    Read() ([]byte, error)
    Close() error
}

Implementing a Backend

External packages can implement and register backends.

package mybackend

import "github.com/grokify/omnistorage"

func init() {
    omnistorage.Register("mybackend", func(config map[string]string) (omnistorage.Backend, error) {
        return New(ConfigFromMap(config))
    })
}

type Backend struct { /* ... */ }

func (b *Backend) NewWriter(ctx context.Context, path string, opts ...omnistorage.WriterOption) (io.WriteCloser, error) {
    // Implementation
}

// ... implement other Backend methods
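
To make the skeleton concrete, here is a sketch of the remaining methods over an in-memory map. It assumes Backend holds mu sync.Mutex and data map[string][]byte fields and uses the standard sort, strings, and sync packages; it is illustrative only, not the package's actual memory backend.

func (b *Backend) Exists(ctx context.Context, path string) (bool, error) {
    b.mu.Lock()
    defer b.mu.Unlock()
    _, ok := b.data[path]
    return ok, nil
}

func (b *Backend) Delete(ctx context.Context, path string) error {
    b.mu.Lock()
    defer b.mu.Unlock()
    delete(b.data, path) // deleting a missing path is a no-op, keeping Delete idempotent
    return nil
}

func (b *Backend) List(ctx context.Context, prefix string) ([]string, error) {
    b.mu.Lock()
    defer b.mu.Unlock()
    paths := []string{}
    for p := range b.data {
        if strings.HasPrefix(p, prefix) {
            paths = append(paths, p)
        }
    }
    sort.Strings(paths) // deterministic output; ordering is not required by the interface
    return paths, nil
}

func (b *Backend) Close() error { return nil }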

Related Projects

  • omnistorage-google - Google Drive and GCS backends
  • rclone - Inspiration for backend coverage and sync capabilities
  • go-cloud - Google's portable cloud APIs
  • afero - Filesystem abstraction

Roadmap

See ROADMAP.md for planned and already-completed features, including:

  • Additional cloud backends (GCS, Azure, Dropbox, OneDrive)
  • Filtering system (glob patterns, size/age filters) (implemented)
  • Transfer controls (bandwidth limiting, parallel transfers) (implemented)
  • Bidirectional sync with conflict resolution (implemented)
  • Structured logging via slog (implemented)
  • Security features (credential management, signed URLs)
  • CLI tool

Contributing

Contributions are welcome! Priority areas:

  1. New backends - Follow backend/file as a template
  2. Tests - Especially integration tests with real services
  3. Documentation - Examples, guides, GoDoc improvements
  4. Bug fixes - Issues labeled good first issue

License

MIT License - see LICENSE for details.

Documentation

Overview

Package omnistorage provides a unified storage abstraction layer for Go.

It supports multiple storage backends (local files, S3, GCS, etc.) through a common interface, with composable layers for compression and record framing.

Basic usage:

backend := file.New(file.Config{Root: "/data"})
raw, _ := backend.NewWriter(ctx, "logs/app.ndjson")
w := ndjson.NewWriter(raw)
w.Write([]byte(`{"msg":"hello"}`))
w.Close()

Index

Constants

This section is empty.

Variables

var (
	// ErrNotFound is returned when a path does not exist.
	ErrNotFound = errors.New("omnistorage: not found")

	// ErrAlreadyExists is returned when attempting to create a path that already exists,
	// if the backend does not support overwriting.
	ErrAlreadyExists = errors.New("omnistorage: already exists")

	// ErrPermissionDenied is returned when access to a path is denied.
	ErrPermissionDenied = errors.New("omnistorage: permission denied")

	// ErrBackendClosed is returned when operating on a closed backend.
	ErrBackendClosed = errors.New("omnistorage: backend closed")

	// ErrWriterClosed is returned when writing to a closed writer.
	ErrWriterClosed = errors.New("omnistorage: writer closed")

	// ErrReaderClosed is returned when reading from a closed reader.
	ErrReaderClosed = errors.New("omnistorage: reader closed")

	// ErrInvalidPath is returned when a path is invalid (e.g., contains forbidden characters).
	ErrInvalidPath = errors.New("omnistorage: invalid path")

	// ErrNotSupported is returned when an operation is not supported by the backend.
	ErrNotSupported = errors.New("omnistorage: operation not supported")

	// ErrUnknownBackend is returned by Open when the backend name is not registered.
	ErrUnknownBackend = errors.New("omnistorage: unknown backend")
)

Common errors returned by omnistorage backends and utilities.
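
Because these sentinels are plain errors.New values, they work with errors.Is, and the Is* helper predicates below wrap the common checks. A minimal sketch of tolerating a missing object, assuming backend is an open Backend:

r, err := backend.NewReader(ctx, "maybe-missing.txt")
if err != nil {
    if omnistorage.IsNotFound(err) { // true when the error indicates a missing path
        log.Println("object does not exist; using defaults")
        return
    }
    log.Fatal(err)
}
defer r.Close()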

Functions

func Backends

func Backends() []string

Backends returns a sorted list of registered backend names.

func CopyPath

func CopyPath(ctx context.Context, srcBackend Backend, srcPath string, dstBackend Backend, dstPath string, opts ...WriterOption) error

CopyPath copies an object from src to dst, potentially across different backends. This is a client-side copy that streams data through the caller.

If srcBackend and dstBackend are the same ExtendedBackend with Copy support, consider using ExtendedBackend.Copy() for server-side copy instead.

Options can be passed to configure the destination writer.

func CopyPathWithHash

func CopyPathWithHash(ctx context.Context, srcBackend Backend, srcPath string, dstBackend Backend, dstPath string, hashType HashType, opts ...WriterOption) (string, error)

CopyPathWithHash copies an object and verifies the content hash. Returns the computed hash of the copied data. Returns an error if the copy fails or if hash computation fails.
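
For example, a verified cross-backend copy (a sketch; src and dst are assumed to be open backends and the paths are illustrative):

sum, err := omnistorage.CopyPathWithHash(ctx, src, "data.bin", dst, "backup/data.bin", omnistorage.HashSHA256)
if err != nil {
    log.Fatal(err)
}
log.Printf("copied, sha256=%s", sum) // hex-encoded hash of the transferred bytes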

func HashBytes

func HashBytes(data []byte, t HashType) string

HashBytes computes the hash of a byte slice. Returns the hex-encoded hash string.

func HashBytesFromSum

func HashBytesFromSum(sum []byte) string

HashBytesFromSum converts a hash sum to hex string.

func HashReader

func HashReader(r io.Reader, t HashType) (string, error)

HashReader computes the hash of data from a reader. Returns the hex-encoded hash string.
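
A short sketch of both hashing helpers (the object name is illustrative):

// Hash an in-memory value.
sum := omnistorage.HashBytes([]byte("hello"), omnistorage.HashSHA256)

// Hash a stored object by streaming it through a reader.
r, err := backend.NewReader(ctx, "large.bin")
if err != nil {
    log.Fatal(err)
}
defer r.Close()
streamSum, err := omnistorage.HashReader(r, omnistorage.HashSHA256)
if err != nil {
    log.Fatal(err)
}
log.Println(sum, streamSum)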

func IsNotFound

func IsNotFound(err error) bool

IsNotFound returns true if the error indicates a path was not found.

func IsNotSupported

func IsNotSupported(err error) bool

IsNotSupported returns true if the error indicates an unsupported operation.

func IsPermissionDenied

func IsPermissionDenied(err error) bool

IsPermissionDenied returns true if the error indicates permission was denied.

func IsRegistered

func IsRegistered(name string) bool

IsRegistered returns true if a backend with the given name is registered.

func MovePath

func MovePath(ctx context.Context, srcBackend Backend, srcPath string, dstBackend Backend, dstPath string, opts ...WriterOption) error

MovePath moves an object from src to dst by copying then deleting. This works across different backends.

If srcBackend and dstBackend are the same ExtendedBackend with Move support, consider using ExtendedBackend.Move() for server-side move instead.

Options can be passed to configure the destination writer.

func NewHash

func NewHash(t HashType) hash.Hash

NewHash creates a new hash.Hash for the given hash type. Returns nil if the hash type is not supported.

func Register

func Register(name string, factory BackendFactory)

Register registers a backend factory under the given name. It is typically called from init() in backend packages.

Register panics if:

  • factory is nil
  • a backend with the same name is already registered

Example:

func init() {
    omnistorage.Register("mybackend", New)
}

func SmartCopy

func SmartCopy(ctx context.Context, srcBackend Backend, srcPath string, dstBackend Backend, dstPath string, opts ...WriterOption) error

SmartCopy attempts server-side copy first, falling back to client-side copy. Use this when you want the best performance but need a guaranteed fallback.

If both backends are the same ExtendedBackend with Copy support, uses server-side copy. Otherwise, falls back to CopyPath.

func SmartMove

func SmartMove(ctx context.Context, srcBackend Backend, srcPath string, dstBackend Backend, dstPath string, opts ...WriterOption) error

SmartMove attempts server-side move first, falling back to copy-then-delete. Use this when you want the best performance but need a guaranteed fallback.

If both backends are the same ExtendedBackend with Move support, uses server-side move. Otherwise, falls back to MovePath (copy then delete).
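
A sketch of both helpers (src and dst are assumed to be open backends; paths are illustrative):

// Copies server-side when both sides are the same ExtendedBackend with
// Copy support; otherwise streams through the client.
if err := omnistorage.SmartCopy(ctx, src, "a.txt", dst, "b.txt"); err != nil {
    log.Fatal(err)
}

// Moves server-side when supported; otherwise copies then deletes.
if err := omnistorage.SmartMove(ctx, src, "old/a.txt", src, "new/a.txt"); err != nil {
    log.Fatal(err)
}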

func Unregister

func Unregister(name string) bool

Unregister removes a registered backend. This is primarily useful for testing. Returns true if the backend was registered, false otherwise.
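
Because Register panics on duplicate names, tests that register fixtures should unregister them on cleanup. A sketch using the memory backend as a stand-in factory (the name "fake" is illustrative, and it assumes memory.New satisfies Backend):

import (
    "testing"

    "github.com/grokify/omnistorage"
    "github.com/grokify/omnistorage/backend/memory"
)

func TestOpenFake(t *testing.T) {
    omnistorage.Register("fake", func(config map[string]string) (omnistorage.Backend, error) {
        return memory.New(), nil
    })
    t.Cleanup(func() { omnistorage.Unregister("fake") })

    b, err := omnistorage.Open("fake", nil)
    if err != nil {
        t.Fatal(err)
    }
    defer b.Close()
}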

Types

type Backend

type Backend interface {
	// NewWriter creates a writer for the given path/key.
	// The returned writer must be closed after use to ensure
	// all data is flushed and resources are released.
	//
	// The path format depends on the backend:
	//   - File backend: relative path from root
	//   - S3 backend: object key
	//   - etc.
	//
	// The context is used for the initial creation; the returned writer
	// may have its own context or timeout handling.
	NewWriter(ctx context.Context, path string, opts ...WriterOption) (io.WriteCloser, error)

	// NewReader creates a reader for the given path/key.
	// Returns ErrNotFound if the path does not exist.
	// The returned reader must be closed after use.
	//
	// The context is used for the initial creation; the returned reader
	// may have its own context or timeout handling.
	NewReader(ctx context.Context, path string, opts ...ReaderOption) (io.ReadCloser, error)

	// Exists checks if a path exists.
	Exists(ctx context.Context, path string) (bool, error)

	// Delete removes a path.
	// Returns nil if the path does not exist (idempotent).
	Delete(ctx context.Context, path string) error

	// List lists paths with the given prefix.
	// Returns an empty slice if no paths match.
	// The returned paths are relative to the backend root.
	List(ctx context.Context, prefix string) ([]string, error)

	// Close releases any resources held by the backend.
	// After Close, all other methods return ErrBackendClosed.
	Close() error
}

Backend represents a storage backend (S3, GCS, local file, etc.). Implementations handle raw byte transport to/from storage.

Backends are safe for concurrent use by multiple goroutines. All methods accept a context.Context for cancellation and timeouts.

func Open

func Open(name string, config map[string]string) (Backend, error)

Open opens a backend by name with the given configuration. The config map is passed directly to the backend's factory function.

Open returns ErrUnknownBackend if no backend with the given name is registered.

Example:

backend, err := omnistorage.Open("s3", map[string]string{
    "bucket": "my-bucket",
    "region": "us-west-2",
})

type BackendFactory

type BackendFactory func(config map[string]string) (Backend, error)

BackendFactory creates a Backend from configuration. The config map contains backend-specific configuration keys.

type BasicObjectInfo

type BasicObjectInfo struct {
	ObjectPath        string
	ObjectSize        int64
	ObjectModTime     time.Time
	ObjectIsDir       bool
	ObjectContentType string
	ObjectHashes      map[HashType]string
	ObjectMetadata    map[string]string
}

BasicObjectInfo is a simple implementation of ObjectInfo. Use this when creating ObjectInfo instances in backend implementations.
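
For instance, a file-based backend's Stat might populate it like this (a sketch; the root field and the error mapping are assumptions, not the package's file backend):

func (b *Backend) Stat(ctx context.Context, path string) (omnistorage.ObjectInfo, error) {
    fi, err := os.Stat(filepath.Join(b.root, path))
    if err != nil {
        if os.IsNotExist(err) {
            return nil, omnistorage.ErrNotFound
        }
        return nil, err
    }
    return &omnistorage.BasicObjectInfo{
        ObjectPath:    path,
        ObjectSize:    fi.Size(),
        ObjectModTime: fi.ModTime(),
        ObjectIsDir:   fi.IsDir(),
    }, nil
}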

func (*BasicObjectInfo) ContentType

func (o *BasicObjectInfo) ContentType() string

ContentType returns the MIME type of the object.

func (*BasicObjectInfo) Hash

func (o *BasicObjectInfo) Hash(t HashType) string

Hash returns the object's hash for the given hash type.

func (*BasicObjectInfo) IsDir

func (o *BasicObjectInfo) IsDir() bool

IsDir returns true if this object represents a directory.

func (*BasicObjectInfo) Metadata

func (o *BasicObjectInfo) Metadata() map[string]string

Metadata returns the object's custom metadata.

func (*BasicObjectInfo) ModTime

func (o *BasicObjectInfo) ModTime() time.Time

ModTime returns the object's last modification time.

func (*BasicObjectInfo) Path

func (o *BasicObjectInfo) Path() string

Path returns the object's path.

func (*BasicObjectInfo) Size

func (o *BasicObjectInfo) Size() int64

Size returns the object's size in bytes.

type ExtendedBackend

type ExtendedBackend interface {
	Backend

	// Stat returns metadata about an object.
	// Returns ErrNotFound if the path does not exist.
	// Returns ErrNotSupported if the backend doesn't support Stat.
	Stat(ctx context.Context, path string) (ObjectInfo, error)

	// Mkdir creates a directory at the given path.
	// Creates parent directories as needed (like mkdir -p).
	// Returns nil if the directory already exists.
	// Returns ErrNotSupported for backends that don't need directories (S3, GCS).
	Mkdir(ctx context.Context, path string) error

	// Rmdir removes an empty directory.
	// Returns ErrNotFound if the directory does not exist.
	// Returns an error if the directory is not empty.
	// Returns ErrNotSupported for backends that don't need directories.
	Rmdir(ctx context.Context, path string) error

	// Copy copies an object from src to dst within the same backend.
	// This is a server-side copy when supported (no data transfer through client).
	// Returns ErrNotFound if src does not exist.
	// Returns ErrNotSupported if server-side copy is not available.
	// Use CopyPath() helper for a fallback that works with any backend.
	Copy(ctx context.Context, src, dst string) error

	// Move moves/renames an object from src to dst within the same backend.
	// This is a server-side move when supported (no data transfer).
	// Returns ErrNotFound if src does not exist.
	// Returns ErrNotSupported if server-side move is not available.
	// Use MovePath() helper for a fallback that works with any backend.
	Move(ctx context.Context, src, dst string) error

	// Features returns the capabilities of this backend.
	// Use this to check which operations are supported before calling them,
	// or to select optimal code paths.
	Features() Features
}

ExtendedBackend extends Backend with additional operations for metadata access, directory management, and server-side operations.

Not all backends support all operations. Use Features() to check which operations are available, or check for specific errors.

Applications that only need basic read/write operations should use the Backend interface for broader compatibility.

func AsExtended

func AsExtended(b Backend) (ExtendedBackend, bool)

AsExtended attempts to convert a Backend to ExtendedBackend. Returns the ExtendedBackend and true if the backend supports extended operations. Returns nil and false otherwise.

func MustExtended

func MustExtended(b Backend) ExtendedBackend

MustExtended converts a Backend to ExtendedBackend or panics. Use this when you know the backend supports extended operations.

type Features

type Features struct {
	// Copy indicates the backend supports server-side copy.
	// When true, Copy() is efficient (no data transfer through client).
	// When false, use CopyPath() helper which streams through the client.
	Copy bool

	// Move indicates the backend supports server-side move/rename.
	// When true, Move() is efficient (no data transfer).
	// When false, use MovePath() helper which copies then deletes.
	Move bool

	// Mkdir indicates the backend supports creating directories.
	// Object stores (S3, GCS) typically don't need this (directories are implicit).
	// Filesystems and some cloud drives require explicit directory creation.
	Mkdir bool

	// Rmdir indicates the backend supports removing directories.
	Rmdir bool

	// Stat indicates the backend supports getting object metadata.
	// When true, Stat() returns size, modtime, hashes, etc.
	Stat bool

	// Hashes lists the hash types supported by this backend.
	// Use this to determine which hash to request from ObjectInfo.Hash().
	Hashes []HashType

	// CanStream indicates the backend supports streaming writes.
	// When true, data can be written incrementally.
	// When false, the entire content must be buffered before upload.
	CanStream bool

	// ServerSideEncryption indicates the backend supports server-side encryption.
	ServerSideEncryption bool

	// Versioning indicates the backend supports object versioning.
	Versioning bool

	// RangeRead indicates the backend supports reading byte ranges.
	// When true, WithOffset() and WithLimit() are efficient.
	RangeRead bool

	// ListPrefix indicates the backend supports efficient prefix listing.
	// When true, List() with a prefix is efficient.
	// When false, the entire tree may be scanned.
	ListPrefix bool

	// SetModTime indicates the backend supports setting modification time.
	// When true, modification time can be preserved during copy/sync.
	SetModTime bool

	// CustomMetadata indicates the backend supports custom metadata.
	// When true, arbitrary key-value metadata can be stored with objects.
	CustomMetadata bool
}

Features describes the capabilities of a backend. Use this to check what operations are supported before calling them, or to select optimal code paths.

func (Features) CommonHash

func (f Features) CommonHash(other Features) HashType

CommonHash returns a hash type supported by both feature sets. Returns HashNone if no common hash type exists.

func (Features) PreferredHash

func (f Features) PreferredHash() HashType

PreferredHash returns the preferred hash type for this backend. Returns HashNone if no hashes are supported. Preference order: SHA256, SHA1, MD5, CRC32C.

func (Features) SupportsHash

func (f Features) SupportsHash(t HashType) bool

SupportsHash returns true if the backend supports the given hash type.
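
Together these support hash negotiation between two backends before a verified transfer. A sketch (assumes src and dst are open backends that implement ExtendedBackend, and that the path exists on both):

srcExt := omnistorage.MustExtended(src)
dstExt := omnistorage.MustExtended(dst)

h := srcExt.Features().CommonHash(dstExt.Features())
if h == omnistorage.HashNone {
    // No shared hash type; fall back to size/modtime comparison.
} else {
    si, _ := srcExt.Stat(ctx, "data.bin")
    di, _ := dstExt.Stat(ctx, "data.bin")
    if si.Hash(h) != "" && si.Hash(h) == di.Hash(h) {
        // Contents match on the shared hash type; the transfer can be skipped.
    }
}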

type HashSet

type HashSet map[HashType]string

HashSet holds multiple hash values for an object.

func (HashSet) Equal

func (hs HashSet) Equal(other HashSet) bool

Equal compares two hash sets for equality on common hash types. Returns true if at least one common hash type matches. Returns false if no common hash types exist or if any common hash differs.

func (HashSet) Get

func (hs HashSet) Get(t HashType) string

Get returns the hash value for the given type, or empty string if not present.

func (HashSet) Has

func (hs HashSet) Has(t HashType) bool

Has returns true if the hash set contains the given hash type.

func (HashSet) Set

func (hs HashSet) Set(t HashType, value string)

Set sets a hash value.
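
A small usage sketch (the values shown are the MD5 and SHA-1 of the empty input, for illustration):

a := omnistorage.HashSet{}
a.Set(omnistorage.HashMD5, "d41d8cd98f00b204e9800998ecf8427e")

b := omnistorage.HashSet{
    omnistorage.HashMD5:  "d41d8cd98f00b204e9800998ecf8427e",
    omnistorage.HashSHA1: "da39a3ee5e6b4b0d3255bfef95601890afd80709",
}

_ = a.Equal(b)                  // true: the one common type (MD5) matches
_ = a.Has(omnistorage.HashSHA1) // false: only MD5 was set on a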

type HashType

type HashType string

HashType represents a hash algorithm used for content verification.

const (
	// HashNone indicates no hash.
	HashNone HashType = ""

	// HashMD5 is the MD5 hash algorithm.
	// Supported by: S3, GCS, Azure, most backends.
	HashMD5 HashType = "md5"

	// HashSHA1 is the SHA-1 hash algorithm.
	// Supported by: GCS, Dropbox.
	HashSHA1 HashType = "sha1"

	// HashSHA256 is the SHA-256 hash algorithm.
	// Supported by: S3 (as x-amz-checksum-sha256).
	HashSHA256 HashType = "sha256"

	// HashCRC32C is the CRC32C checksum.
	// Supported by: GCS.
	HashCRC32C HashType = "crc32c"
)

func SupportedHashes

func SupportedHashes() []HashType

SupportedHashes returns all supported hash types.

func (HashType) String

func (h HashType) String() string

String returns the string representation of the hash type.

type ObjectInfo

type ObjectInfo interface {
	// Path returns the object's path relative to the backend root.
	Path() string

	// Size returns the object's size in bytes.
	// Returns -1 if the size is unknown.
	Size() int64

	// ModTime returns the object's last modification time.
	// Returns zero time if unknown.
	ModTime() time.Time

	// IsDir returns true if this object represents a directory.
	IsDir() bool

	// ContentType returns the MIME type of the object.
	// Returns empty string if unknown.
	ContentType() string

	// Hash returns the object's hash for the given hash type.
	// Returns empty string if the hash type is not available.
	// Use Features().Hashes to check which hash types are supported.
	Hash(HashType) string

	// Metadata returns custom metadata associated with the object.
	// Returns nil if no custom metadata or backend doesn't support it.
	// Use Features().CustomMetadata to check if supported.
	Metadata() map[string]string
}

ObjectInfo provides metadata about a stored object. Not all backends support all fields; check Features() for capabilities.

type ReaderConfig

type ReaderConfig struct {
	// BufferSize is the buffer size in bytes.
	// 0 means use the backend's default.
	BufferSize int

	// Offset is the byte offset to start reading from.
	// Not all backends support this.
	Offset int64

	// Limit is the maximum number of bytes to read.
	// 0 means no limit.
	Limit int64
}

ReaderConfig holds configuration for creating a reader.

func ApplyReaderOptions

func ApplyReaderOptions(opts ...ReaderOption) *ReaderConfig

ApplyReaderOptions applies options to a ReaderConfig.

type ReaderOption

type ReaderOption func(*ReaderConfig)

ReaderOption configures a reader created by Backend.NewReader.

func WithLimit

func WithLimit(limit int64) ReaderOption

WithLimit sets the maximum number of bytes to read.

func WithOffset

func WithOffset(offset int64) ReaderOption

WithOffset sets the byte offset to start reading from.

func WithReaderBufferSize

func WithReaderBufferSize(size int) ReaderOption

WithReaderBufferSize sets the buffer size for the reader.
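
Combined, the offset and limit options express a byte-range read. A sketch (check Features().RangeRead to know whether the backend serves ranges efficiently):

// Read bytes [1024, 1024+4096) of an object.
r, err := backend.NewReader(ctx, "big.bin",
    omnistorage.WithOffset(1024),
    omnistorage.WithLimit(4096),
)
if err != nil {
    log.Fatal(err)
}
defer r.Close()
chunk, _ := io.ReadAll(r) // at most 4096 bytes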

type RecordReader

type RecordReader interface {
	// Read reads the next record.
	// Returns io.EOF when no more records are available.
	// The returned slice is valid until the next call to Read.
	Read() ([]byte, error)

	// Close releases any resources held by the reader.
	Close() error
}

RecordReader reads framed records from an underlying reader. Implementations handle record parsing (newlines, length-prefix, etc.).

type RecordWriter

type RecordWriter interface {
	// Write writes a single record.
	// The record should not contain the delimiter (e.g., no trailing newline for NDJSON).
	// Implementations may buffer writes; call Flush to ensure data is written.
	Write(data []byte) error

	// Flush flushes any buffered data to the underlying writer.
	Flush() error

	// Close flushes any remaining data and closes the writer.
	// After Close, Write and Flush return errors.
	Close() error
}

RecordWriter writes framed records (byte slices) to an underlying writer. Implementations handle record delimiting (newlines, length-prefix, etc.).

RecordWriter is useful for streaming record-oriented data like logs, events, or NDJSON documents.
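
NDJSON is one implementation; any framing scheme works. A minimal sketch of a length-prefixed RecordWriter that frames each record with a 4-byte big-endian length (unbuffered, so Flush is a no-op; this is illustrative, not a package API):

import (
    "encoding/binary"
    "io"
)

// lengthPrefixWriter writes a 4-byte big-endian length before each record.
type lengthPrefixWriter struct{ w io.WriteCloser }

func (lw *lengthPrefixWriter) Write(record []byte) error {
    var hdr [4]byte
    binary.BigEndian.PutUint32(hdr[:], uint32(len(record)))
    if _, err := lw.w.Write(hdr[:]); err != nil {
        return err
    }
    _, err := lw.w.Write(record)
    return err
}

// Flush is a no-op because this sketch does not buffer.
func (lw *lengthPrefixWriter) Flush() error { return nil }

func (lw *lengthPrefixWriter) Close() error { return lw.w.Close() }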

type WriterConfig

type WriterConfig struct {
	// BufferSize is the buffer size in bytes.
	// 0 means use the backend's default.
	BufferSize int

	// ContentType is a MIME type hint for the content.
	// Some backends (S3, HTTP) use this for Content-Type headers.
	ContentType string

	// Metadata is backend-specific metadata.
	// For S3, these become object metadata.
	// For file backend, this is ignored.
	Metadata map[string]string
}

WriterConfig holds configuration for creating a writer.

func ApplyWriterOptions

func ApplyWriterOptions(opts ...WriterOption) *WriterConfig

ApplyWriterOptions applies options to a WriterConfig.

type WriterOption

type WriterOption func(*WriterConfig)

WriterOption configures a writer created by Backend.NewWriter.

func WithBufferSize

func WithBufferSize(size int) WriterOption

WithBufferSize sets the buffer size for the writer.

func WithContentType

func WithContentType(contentType string) WriterOption

WithContentType sets the content type hint.

func WithMetadata

func WithMetadata(metadata map[string]string) WriterOption

WithMetadata sets backend-specific metadata.
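
On the caller side, these options are passed to NewWriter; inside a backend implementation, ApplyWriterOptions collects them into a WriterConfig. A sketch of both halves (the path and metadata keys are illustrative):

// Caller side: attach a content type and custom metadata.
w, err := backend.NewWriter(ctx, "report.json",
    omnistorage.WithContentType("application/json"),
    omnistorage.WithMetadata(map[string]string{"generator": "nightly-job"}),
)
if err != nil {
    log.Fatal(err)
}
defer w.Close()

// Backend side: collect the options into a WriterConfig.
cfg := omnistorage.ApplyWriterOptions(opts...)
if cfg.ContentType != "" {
    // Use it for a Content-Type header, object metadata, etc.
}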

Directories

Path    Synopsis
backend
    channel    Package channel provides a Go channel-based backend for omnistorage.
    file       Package file provides a local filesystem backend for omnistorage.
    memory     Package memory provides an in-memory backend for omnistorage.
    s3         Package s3 provides an S3-compatible backend for omnistorage.
    sftp       Package sftp provides an SFTP backend for omnistorage.
compress
    gzip       Package gzip provides gzip compression support for omnistorage.
    zstd       Package zstd provides Zstandard compression support for omnistorage.
format
    ndjson     Package ndjson provides NDJSON (newline-delimited JSON) format support for omnistorage.
multi          Package multi provides fan-out writing to multiple backends simultaneously.
sync           Package sync provides file synchronization between omnistorage backends.
    filter     Package filter provides file filtering for sync operations.
