Documentation ¶
Overview ¶
Package s2 is a lightweight object storage abstraction with multiple backends and an embeddable S3-compatible server.
s2 provides a single Storage interface that all backends implement, and a small set of value types (Object, Metadata, Config) for moving data through it. The same interface drives a local-filesystem backend (osfs), an in-memory backend (memfs), and an AWS S3 / S3-compatible backend (s3). Backends register themselves via blank import:
import (
    _ "github.com/mojatter/s2/fs" // osfs + memfs
    _ "github.com/mojatter/s2/s3" // AWS S3
)

strg, err := s2.NewStorage(ctx, s2.Config{Type: s2.TypeOSFS, Root: "/var/data"})
Errors ¶
Operations that report a missing object wrap the sentinel ErrNotExist. Detect them with errors.Is:
if _, err := strg.Get(ctx, "foo.txt"); errors.Is(err, s2.ErrNotExist) {
    // not found
}
NewStorage wraps ErrUnknownType when the requested backend has not been registered.
Concurrency ¶
All Storage implementations shipped with s2 are safe for concurrent use by multiple goroutines. Methods that mutate state (Put, Delete, PutMetadata, ...) are independently atomic per object: a single Put is either fully visible to subsequent reads or not visible at all. Multiple concurrent Puts to the same object resolve to one of the writes — there is no defined ordering — but no torn writes are exposed.
PutMetadata is NOT atomic with Put. Calling Put followed by PutMetadata leaves a window during which the object exists with whatever metadata Put itself wrote.
Atomicity per backend ¶
The following table summarizes the atomicity guarantees of each operation across the bundled backends.
Operation         | osfs                  | memfs                 | s3
------------------|-----------------------|-----------------------|-----------------------
Put               | atomic (temp+rename)  | atomic                | atomic (server-side)
Copy              | atomic per dst        | atomic per dst        | atomic (server-side)
Move (via [Move]) | atomic (rename)       | non-atomic            | non-atomic
Delete            | atomic                | atomic                | atomic
DeleteRecursive   | best-effort, partial  | best-effort, partial  | best-effort, partial
"Best-effort, partial" means the operation deletes objects one at a time (or in pages, for s3) and may leave some objects behind on error.
Stability ¶
s2 follows semantic versioning. Until v1.0.0 the public API may change between minor versions; breaking changes are documented in the release notes. After v1.0.0, the package will only break compatibility on a major-version bump.
Index ¶
- Variables
- func Move(ctx context.Context, s Storage, src, dst string) error
- func RegisterNewStorageFunc(t Type, fn NewStorageFunc)
- func UnregisterNewStorageFunc(t Type)
- type Config
- type ListOptions
- type ListResult
- type Metadata
- type Mover
- type NewStorageFunc
- type Object
- type ObjectOption
- type S3Config
- type SignedURLMethod
- type SignedURLOptions
- type Storage
- type Type
Examples ¶
- Move
- NewStorage
Constants ¶
This section is empty.
Variables ¶
var ErrNotExist = errors.New("s2: object not exist")
ErrNotExist is returned when an operation targets an object that does not exist. Backends wrap this with the missing object's name via fmt.Errorf, so callers should detect it with errors.Is rather than direct equality:
if errors.Is(err, s2.ErrNotExist) {
    // handle missing object
}
var ErrUnknownType = errors.New("s2: unknown storage type")
ErrUnknownType is returned by NewStorage when no plugin is registered for the requested Type. Detect with errors.Is:
if errors.Is(err, s2.ErrUnknownType) {
    // unknown backend
}
Functions ¶
func Move ¶ added in v0.2.0
func Move(ctx context.Context, s Storage, src, dst string) error
Move moves src to dst on s. If s implements Mover, its Move method is used (which may be atomic and is generally more efficient). Otherwise Move falls back to Copy followed by Delete; that fallback is NOT atomic: if Delete fails after a successful Copy, both objects exist.
Example ¶
ExampleMove uses the free function s2.Move, which transparently picks the fastest path: backends that implement the optional s2.Mover interface (e.g. osfs via filesystem rename) get an atomic rename; others fall back to Copy + Delete.
package main

import (
    "context"
    "errors"
    "fmt"
    "io"

    "github.com/mojatter/s2"
    _ "github.com/mojatter/s2/fs"
)

func main() {
    ctx := context.Background()
    strg, _ := s2.NewStorage(ctx, s2.Config{Type: s2.TypeMemFS})
    _ = strg.Put(ctx, s2.NewObjectBytes("src.txt", []byte("hi")))
    if err := s2.Move(ctx, strg, "src.txt", "dst.txt"); err != nil {
        panic(err)
    }
    _, err := strg.Get(ctx, "src.txt")
    fmt.Println("src missing:", errors.Is(err, s2.ErrNotExist))
    got, _ := strg.Get(ctx, "dst.txt")
    rc, _ := got.Open()
    defer rc.Close()
    body, _ := io.ReadAll(rc)
    fmt.Println("dst body:", string(body))
}
Output:

src missing: true
dst body: hi
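The optional-interface upgrade that Move performs can be sketched in isolation. The Storage and Mover types below are simplified local stand-ins, not the s2 definitions:

```go
package main

import "fmt"

// Local stand-ins for the documented shapes (the real methods take a context).
type Storage interface {
    Copy(src, dst string) error
    Delete(name string) error
}

type Mover interface {
    Move(src, dst string) error
}

// move mirrors the documented strategy: use the backend's native Move when
// the optional Mover interface is implemented; otherwise fall back to
// Copy + Delete, which is not atomic.
func move(s Storage, src, dst string) error {
    if m, ok := s.(Mover); ok {
        return m.Move(src, dst)
    }
    if err := s.Copy(src, dst); err != nil {
        return err
    }
    return s.Delete(src) // if this fails, both objects exist
}

// mem is a toy backend without a native Move, so move uses the fallback.
type mem map[string]string

func (m mem) Copy(src, dst string) error { m[dst] = m[src]; return nil }
func (m mem) Delete(name string) error   { delete(m, name); return nil }

func main() {
    s := mem{"src.txt": "hi"}
    if err := move(s, "src.txt", "dst.txt"); err != nil {
        panic(err)
    }
    _, srcExists := s["src.txt"]
    fmt.Println(srcExists, s["dst.txt"])
}
```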
func RegisterNewStorageFunc ¶
func RegisterNewStorageFunc(t Type, fn NewStorageFunc)
RegisterNewStorageFunc registers fn as the constructor for storage type t, making that type available to NewStorage.
func UnregisterNewStorageFunc ¶
func UnregisterNewStorageFunc(t Type)
UnregisterNewStorageFunc unregisters a storage function. Primarily useful in tests that swap backends.
Types ¶
type Config ¶
type Config struct {
    // Type is the type of storage.
    Type Type `json:"type"`
    // Root is the root path of the storage.
    // If Type is TypeOSFS, Root is the root path of the file system.
    // If Type is TypeMemFS, Root is not used.
    // If Type is TypeS3, Root is the S3 bucket name. The string following / in the bucket name is treated as a prefix.
    Root string `json:"root"`
    // SignedURL is the presign URL of the storage. This is used for TypeOSFS and TypeMemFS.
    SignedURL string `json:"signed_url,omitempty"`
    // S3 holds S3-specific settings. Only used when Type is TypeS3.
    S3 *S3Config `json:"s3,omitempty"`
}
Config is a configuration for a storage.
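For reference, a Config for the s3 backend might serialize as follows, given the json tags above. The bucket, prefix, and endpoint are illustrative, and the literal "s3" assumes that is TypeS3's string value:

```json
{
  "type": "s3",
  "root": "my-bucket/backups",
  "s3": {
    "endpoint_url": "http://localhost:9000",
    "region": "ap-northeast-1"
  }
}
```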
type ListOptions ¶ added in v0.2.0
type ListOptions struct {
    // Prefix restricts the listing to objects whose names begin with Prefix.
    Prefix string
    // After is an opaque continuation token returned by a previous call as
    // ListResult.NextAfter; pass it to fetch the next page. Empty for the
    // first page.
    After string
    // Limit caps the number of returned Objects. Zero means no limit.
    Limit int
    // Recursive, when true, walks subdirectories and returns no
    // CommonPrefixes; when false, the listing stops at the first "/" past
    // Prefix and "directory-like" entries are surfaced via CommonPrefixes.
    Recursive bool
}
ListOptions controls a Storage.List call.
All fields are optional. The zero value lists the entire flat namespace of the storage.
type ListResult ¶ added in v0.2.0
type ListResult struct {
    // Objects are the objects matching the request, in lexicographic order.
    // Their metadata may be unset depending on the backend; use Storage.Get
    // to fetch full metadata.
    Objects []Object
    // CommonPrefixes are the directory-like grouping prefixes (only populated
    // when ListOptions.Recursive is false).
    CommonPrefixes []string
    // NextAfter is an opaque continuation token. When empty, the listing is
    // exhausted.
    NextAfter string
}
ListResult is the response from Storage.List.
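Draining a listing is the usual continuation-token loop: pass each NextAfter back as After until it comes back empty. A self-contained sketch using simplified stand-in types (the real List takes a context and returns an error, and objects are Object values rather than strings):

```go
package main

import "fmt"

// Simplified stand-ins for the documented fields.
type ListOptions struct {
    After string
    Limit int
}

type ListResult struct {
    Objects   []string
    NextAfter string
}

// pagedList simulates a backend that returns at most opts.Limit names per
// page, setting NextAfter until the listing is exhausted.
func pagedList(all []string, opts ListOptions) ListResult {
    start := 0
    if opts.After != "" {
        for i, name := range all {
            if name == opts.After {
                start = i + 1
                break
            }
        }
    }
    end := start + opts.Limit
    if opts.Limit == 0 || end > len(all) {
        end = len(all)
    }
    res := ListResult{Objects: all[start:end]}
    if end < len(all) {
        res.NextAfter = all[end-1]
    }
    return res
}

func main() {
    all := []string{"a.txt", "b.txt", "c.txt", "d.txt", "e.txt"}
    var got []string
    opts := ListOptions{Limit: 2}
    // The documented loop: feed NextAfter back as After until it is empty.
    for {
        res := pagedList(all, opts)
        got = append(got, res.Objects...)
        if res.NextAfter == "" {
            break
        }
        opts.After = res.NextAfter
    }
    fmt.Println(got)
}
```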
type Metadata ¶
Metadata holds object metadata as case-sensitive key/value pairs.
Metadata mirrors the http.Header / url.Values pattern: it is a named map type with helper methods, so callers can use the standard map operations (range, len, indexing, literal initialization) directly when convenient and the methods when ergonomics matter.
The zero value is nil and is read-safe; writes to a nil Metadata panic. Use make(s2.Metadata) or a literal s2.Metadata{...} to obtain a writable instance.
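This behavior is ordinary Go named-map semantics, the same as http.Header. A sketch with a hypothetical local stand-in (the real s2.Metadata's underlying map type may differ):

```go
package main

import "fmt"

// Metadata here is a local stand-in following the same named-map pattern
// (http.Header / url.Values style) as the real type.
type Metadata map[string]string

func (m Metadata) Get(key string) string { return m[key] }

func main() {
    var nilMD Metadata // zero value: nil
    // Reading from a nil map is safe and yields the zero value.
    fmt.Println(nilMD.Get("etag") == "", len(nilMD))

    // Writing to a nil map panics; use make(Metadata) or a literal instead.
    defer func() {
        fmt.Println("write to nil panicked:", recover() != nil)
    }()
    nilMD["etag"] = "abc" // panics
}
```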
func (Metadata) Clone ¶ added in v0.2.0
Clone returns a deep copy of the metadata. The result is independent of the receiver: mutating one does not affect the other. Cloning a nil Metadata returns nil.
func (Metadata) Delete ¶ added in v0.2.0
Delete removes the entry for key. Calling Delete on a missing key is a no-op.
type Mover ¶ added in v0.2.0
Mover is an optional interface that a Storage implementation may satisfy to provide a move operation that is more efficient — and possibly atomic — than the default Copy + Delete fallback. The osfs backend implements Mover via a filesystem rename, for example.
Storage implementations are not required to satisfy Mover; the free function Move falls back to Copy + Delete when they do not.
type NewStorageFunc ¶
NewStorageFunc is a constructor function that creates a new Storage; backends register theirs via RegisterNewStorageFunc.
type Object ¶
type Object interface {
    // Name returns the name of the object.
    Name() string
    // Open opens the object for reading and returns the reader stream.
    // The caller is responsible for closing the returned io.ReadCloser.
    Open() (io.ReadCloser, error)
    // OpenRange opens the object for reading the specified range and returns the reader stream.
    // The caller is responsible for closing the returned io.ReadCloser.
    OpenRange(offset, length uint64) (io.ReadCloser, error)
    // Length returns the length of the object in bytes.
    Length() uint64
    // LastModified returns the last modified time of the object.
    LastModified() time.Time
    // Metadata returns the metadata of the object.
    //
    // Note: Depending on the storage implementation (e.g., S3), objects
    // returned by List operations may not contain metadata. Use Storage.Get
    // to fetch the complete metadata.
    Metadata() Metadata
}
Object is an interface that represents an object in a storage.
func NewObjectBytes ¶
func NewObjectBytes(name string, body []byte, opts ...ObjectOption) Object
NewObjectBytes creates a new Object from a byte slice.
func NewObjectFromFile ¶ added in v0.2.0
NewObjectFromFile creates a new Object backed by a file on the local filesystem. The file at name must exist and not be a directory; otherwise the returned error wraps ErrNotExist. The Object's Length and LastModified are populated from os.Stat; metadata can be supplied via WithMetadata.
Reads via Open are performed lazily by re-opening the underlying file. The supplied ctx is currently unused but reserved for future cancellation.
func NewObjectReader ¶
func NewObjectReader(name string, body io.ReadCloser, length uint64, opts ...ObjectOption) Object
NewObjectReader creates a new Object from an io.ReadCloser.
type ObjectOption ¶
type ObjectOption func(*object)
ObjectOption is a functional option for configuring objects created by NewObject, NewObjectReader, and NewObjectBytes.
func WithLastModified ¶
func WithLastModified(t time.Time) ObjectOption
WithLastModified sets the last modified time on the object.
func WithMetadata ¶
func WithMetadata(md Metadata) ObjectOption
WithMetadata sets the metadata on the object.
type S3Config ¶
type S3Config struct {
    // EndpointURL is a custom S3-compatible endpoint (e.g. "http://localhost:9000/s3api").
    EndpointURL string `json:"endpoint_url,omitempty"`
    // Region is the AWS region (e.g. "ap-northeast-1").
    Region string `json:"region,omitempty"`
    // AccessKeyID is the AWS access key ID.
    AccessKeyID string `json:"access_key_id,omitempty"`
    // SecretAccessKey is the AWS secret access key.
    SecretAccessKey string `json:"secret_access_key,omitempty"`
}
S3Config holds S3-specific configuration. When fields are empty, the AWS SDK defaults (environment variables, shared credentials, IAM role, etc.) are used.
type SignedURLMethod ¶ added in v0.2.0
type SignedURLMethod string
SignedURLMethod is the HTTP method that a presigned URL is authorized for.
const (
    // SignedURLGet authorizes a GET request (download).
    SignedURLGet SignedURLMethod = "GET"
    // SignedURLPut authorizes a PUT request (upload).
    SignedURLPut SignedURLMethod = "PUT"
)
type SignedURLOptions ¶ added in v0.2.0
type SignedURLOptions struct {
    // Name is the object name to sign.
    Name string
    // Method is the HTTP method to authorize. Defaults to GET when empty.
    Method SignedURLMethod
    // TTL is how long the URL remains valid.
    TTL time.Duration
}
SignedURLOptions controls a Storage.SignedURL call.
type Storage ¶
type Storage interface {
    // Type returns the type of the storage.
    Type() Type
    // Sub returns a new storage scoped to the given prefix. The returned
    // storage shares the parent's lifetime.
    Sub(ctx context.Context, prefix string) (Storage, error)
    // List returns the objects (and, when non-recursive, common prefixes)
    // matching opts.
    List(ctx context.Context, opts ListOptions) (ListResult, error)
    // Get returns the object identified by name, including its metadata.
    // If no object exists at name, the returned error wraps ErrNotExist.
    Get(ctx context.Context, name string) (Object, error)
    // Exists reports whether an object exists at name.
    Exists(ctx context.Context, name string) (bool, error)
    // Put writes obj to the storage atomically per object. Any metadata on
    // obj is persisted as part of the same call.
    Put(ctx context.Context, obj Object) error
    // PutMetadata replaces the metadata of an existing object without
    // rewriting its body. It is intended for hash- or ETag-style metadata
    // that can only be computed after the body is written. Note: PutMetadata
    // is NOT atomic with Put; a crash between the two leaves the object on
    // disk with whatever metadata Put itself wrote. Replaces (does not merge)
    // any existing metadata.
    PutMetadata(ctx context.Context, name string, metadata Metadata) error
    // Copy duplicates src to dst. The semantics are backend-defined: the s3
    // backend uses server-side copy, while file-backed backends stream the
    // body.
    Copy(ctx context.Context, src, dst string) error
    // Delete removes the object at name. Deleting a non-existent object is
    // a no-op and does not return an error.
    Delete(ctx context.Context, name string) error
    // DeleteRecursive removes every object whose name begins with prefix.
    // The operation is best-effort and not atomic across objects.
    DeleteRecursive(ctx context.Context, prefix string) error
    // SignedURL returns a presigned URL for the object identified by opts.
    // Backends that do not support presigning return an error.
    SignedURL(ctx context.Context, opts SignedURLOptions) (string, error)
}
Storage is a simple object storage abstraction. Implementations are expected to be safe for concurrent use by multiple goroutines.
Errors that report a missing object wrap ErrNotExist; detect them with errors.Is(err, s2.ErrNotExist).
func NewStorage ¶
NewStorage creates a new storage from the given configuration. If no plugin is registered for cfg.Type, the returned error wraps ErrUnknownType.
Example ¶
ExampleNewStorage shows how to construct an in-memory Storage. The blank import of github.com/mojatter/s2/fs registers both osfs and memfs.
package main

import (
    "context"
    "fmt"

    "github.com/mojatter/s2"
    _ "github.com/mojatter/s2/fs"
)

func main() {
    ctx := context.Background()
    strg, err := s2.NewStorage(ctx, s2.Config{Type: s2.TypeMemFS})
    if err != nil {
        panic(err)
    }
    fmt.Println(strg.Type())
}
Output:

memfs
type Type ¶
type Type string
func KnownTypes ¶ added in v0.2.0
func KnownTypes() []Type
KnownTypes returns the list of storage Types that are known to s2. The returned slice is a fresh copy; mutating it does not affect future calls. Note that this only enumerates compiled-in types; whether a given type is *registered* depends on whether the corresponding backend package has been imported (e.g. _ "github.com/mojatter/s2/fs").
Directories ¶

| Path | Synopsis |
|---|---|
| cmd | |
| cmd/s2-server (command) | |
| internal | |
| internal/numconv | Package numconv contains internal helpers for converting between Go's signed and unsigned integer types when dealing with stdlib APIs that disagree on signedness (notably os.FileInfo.Size and io.Seeker offsets). |