s2

package module
v0.3.0
Published: Apr 7, 2026 License: MIT Imports: 10 Imported by: 0

README

S2 — Simple Storage

S2 is a lightweight object storage library and S3-compatible server written in Go. It provides a unified interface for multiple storage backends and an embeddable S3-compatible server — all in a single package.

Why S2?

MinIO was the go-to S3-compatible server for local development, but it entered maintenance mode in December 2025 and was archived in February 2026. S2 fills this gap with a different philosophy:

  • Library-first — Use S2 as a Go library with a clean interface, or run it as a server. Most alternatives are server-only.
  • Truly lightweight — Single binary, no external dependencies, starts in milliseconds.
  • Test-friendly — Use memfs backend for fast, isolated tests without Docker or external processes.

Migrating from MinIO

For most local-development use cases, replacing MinIO with S2 is a one-line change in docker-compose.yml. S2 listens on the same :9000 port and serves the S3 API under /s3api.

docker-compose.yml

services:
  s2:
    image: mojatter/s2-server
    ports:
      - "9000:9000"
    environment:
      S2_SERVER_USER: myuser
      S2_SERVER_PASSWORD: mypassword
      S2_SERVER_BUCKETS: assets,uploads
    volumes:
      - s2-data:/var/lib/s2

volumes:
  s2-data:

Endpoint difference — MinIO serves the S3 API at the root (http://localhost:9000), while S2 serves it under /s3api (http://localhost:9000/s3api). Update your S3 client's endpoint URL accordingly; the root path is reserved for the Web Console.

Environment variable mapping

MinIO                 | S2
----------------------|--------------------------------------
MINIO_ROOT_USER       | S2_SERVER_USER
MINIO_ROOT_PASSWORD   | S2_SERVER_PASSWORD
MINIO_VOLUMES         | S2_SERVER_ROOT (default /var/lib/s2)
MINIO_DEFAULT_BUCKETS | S2_SERVER_BUCKETS

Migrating existing data — S2's osfs backend stores objects as plain files on disk (no proprietary format), so any S3 client can copy data over:

# Mirror an existing MinIO instance into a fresh S2 instance
aws --endpoint-url http://old-minio:9000 s3 sync s3://my-bucket /tmp/dump
aws --endpoint-url http://localhost:9000/s3api s3 sync /tmp/dump s3://my-bucket

Or use mc mirror directly between the two endpoints.

Features

  • Unified Storage Interface — One API for local filesystem, in-memory, and AWS S3 backends
  • S3-Compatible Server — Serve any backend over S3 APIs; a drop-in replacement for MinIO in local development
  • Lightweight — Minimal dependencies, single binary, go install ready
  • Pluggable Backends — Register storage implementations with a blank import
  • Web Console — Built-in browser interface for managing buckets and objects

S2 Web Console

Install

go get github.com/mojatter/s2

To install the S2 server CLI:

go install github.com/mojatter/s2/cmd/s2-server@latest

Or run with Docker:

docker run -p 9000:9000 mojatter/s2-server

Quick Start

As a Library

Define your storage backends in a JSON config file:

{
  "assets": {
    "type": "osfs",
    "root": "/var/data/assets"
  },
  "backups": {
    "type": "s3",
    "root": "my-backup-bucket"
  }
}

Load and use them with s2env:

package main

import (
	"context"
	"fmt"

	"github.com/mojatter/s2"
	"github.com/mojatter/s2/s2env"
)

func main() {
	ctx := context.Background()

	// Load all storages from config file
	storages, err := s2env.Load(ctx, "s2.json")
	if err != nil {
		panic(err)
	}

	// Use a named storage
	assets := storages["assets"]

	// Put an object
	obj := s2.NewObjectBytes("hello.txt", []byte("Hello, S2!"))
	if err := assets.Put(ctx, obj); err != nil {
		panic(err)
	}

	// List objects
	res, err := assets.List(ctx, s2.ListOptions{Limit: 100})
	if err != nil {
		panic(err)
	}
	for _, o := range res.Objects {
		fmt.Println(o.Name())
	}
}

s2env automatically registers all built-in backends (osfs, memfs, s3), so no blank imports are needed.

As a Local S3 Server

Start the server:

# via go install
s2-server

# via Docker
docker run -p 9000:9000 -v /your/data:/var/lib/s2 mojatter/s2-server

Then access it with any S3 client:

package main

import (
	"context"
	"fmt"

	"github.com/mojatter/s2"
	_ "github.com/mojatter/s2/s3" // Register S3 backend
)

func main() {
	ctx := context.Background()
	strg, err := s2.NewStorage(ctx, s2.Config{
		Type: s2.TypeS3,
		Root: "my-bucket",
		S3: &s2.S3Config{
			EndpointURL: "http://localhost:9000/s3api",
		},
	})
	if err != nil {
		panic(err)
	}
	res, err := strg.List(ctx, s2.ListOptions{Limit: 1000})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%v\n", res.Objects)
}

Or use the AWS CLI:

aws --endpoint-url http://localhost:9000/s3api s3 ls
aws --endpoint-url http://localhost:9000/s3api s3 cp ./file.txt s3://my-bucket/file.txt

In Tests

For tests, swap any backend for memfs to get an isolated, in-process storage with no Docker, no temp directories, and no cleanup. The same s2.Storage interface is used in production and tests.

package mypkg_test

import (
	"context"
	"testing"

	"github.com/mojatter/s2"
	_ "github.com/mojatter/s2/fs" // registers memfs
)

func TestUploadAvatar(t *testing.T) {
	ctx := context.Background()
	strg, err := s2.NewStorage(ctx, s2.Config{Type: s2.TypeMemFS})
	if err != nil {
		t.Fatal(err)
	}

	if err := UploadAvatar(ctx, strg, "user-1", []byte("...")); err != nil {
		t.Fatal(err)
	}
	// assert via strg.Get / strg.List ...
}

The s2test package provides reusable assertion helpers (e.g. s2test.TestStorageList) for validating Storage implementations and exercising your own code against any backend.

Storage Backends

Type  | Import                    | Description
------|---------------------------|------------------------------------------------------------
osfs  | github.com/mojatter/s2/fs | Local filesystem storage
memfs | github.com/mojatter/s2/fs | In-memory filesystem (great for testing; see notes below)
s3    | github.com/mojatter/s2/s3 | AWS S3 (and any S3-compatible service)

Backends are registered via blank imports. Import only what you need:

import (
	_ "github.com/mojatter/s2/fs" // osfs + memfs
	_ "github.com/mojatter/s2/s3" // AWS S3
)

S3 Backend Configuration

When using the s3 backend, you can provide S3-specific settings via S3Config. Any field left empty falls back to the AWS SDK defaults (environment variables, ~/.aws/config, IAM roles, etc.).

strg, err := s2.NewStorage(ctx, s2.Config{
    Type: s2.TypeS3,
    Root: "my-bucket/optional-prefix",
    S3: &s2.S3Config{
        EndpointURL:     "http://localhost:9000/s3api",
        Region:          "ap-northeast-1",
        AccessKeyID:     "s2user",
        SecretAccessKey: "s2password",
    },
})

With s2env, use the "s3" key in JSON:

{
  "local": {
    "type": "s3",
    "root": "dev-bucket",
    "s3": {
      "endpoint_url": "http://localhost:9000/s3api",
      "access_key_id": "myuser",
      "secret_access_key": "mypassword"
    }
  },
  "prod": {
    "type": "s3",
    "root": "prod-bucket",
    "s3": {
      "region": "ap-northeast-1"
    }
  }
}

Field             | Description
------------------|-------------------------------------
endpoint_url      | Custom S3-compatible endpoint URL
region            | AWS region (e.g. ap-northeast-1)
access_key_id     | AWS access key ID
secret_access_key | AWS secret access key

When S3Config is nil or all fields are empty, the standard AWS SDK credential chain is used.

Storage Interface

type Storage interface {
	Type() Type
	Sub(ctx context.Context, prefix string) (Storage, error)
	List(ctx context.Context, opts ListOptions) (ListResult, error)
	Get(ctx context.Context, name string) (Object, error)
	Exists(ctx context.Context, name string) (bool, error)
	Put(ctx context.Context, obj Object) error
	PutMetadata(ctx context.Context, name string, metadata Metadata) error
	Copy(ctx context.Context, src, dst string) error
	Delete(ctx context.Context, name string) error
	DeleteRecursive(ctx context.Context, prefix string) error
	SignedURL(ctx context.Context, opts SignedURLOptions) (string, error)
}

// One List method covers flat and recursive listings, with explicit
// pagination via continuation token.
type ListOptions struct {
	Prefix    string
	After     string // continuation token; empty = first page
	Limit     int    // 0 = backend default
	Recursive bool
}

type ListResult struct {
	Objects        []Object
	CommonPrefixes []string // empty when Recursive == true
	NextAfter      string   // empty when exhausted
}

// SignedURL is method-aware so backends can issue both download and upload URLs.
type SignedURLOptions struct {
	Name   string
	Method SignedURLMethod // SignedURLGet (default) or SignedURLPut
	TTL    time.Duration
}
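
The After/NextAfter pair makes pagination explicit: keep calling List, feeding each page's NextAfter back in as After, until NextAfter comes back empty. A self-contained sketch of that loop — the ListOptions/ListResult types below are local mirrors of the s2 types trimmed to what pagination needs, and fakeList is a hypothetical backend, not real s2 code:

```go
package main

import "fmt"

// ListOptions and ListResult are local stand-ins for the s2 types.
type ListOptions struct {
	Prefix string
	After  string // continuation token; empty = first page
	Limit  int    // must be > 0 in this sketch
}

type ListResult struct {
	Objects   []string
	NextAfter string // empty when exhausted
}

// fakeList pages through names, honoring the continuation-token
// contract: NextAfter names the last object of the page and is
// empty once the listing is exhausted.
func fakeList(names []string, opts ListOptions) ListResult {
	start := 0
	for i, n := range names {
		if n == opts.After {
			start = i + 1
		}
	}
	end := start + opts.Limit
	if end >= len(names) {
		return ListResult{Objects: names[start:]}
	}
	return ListResult{Objects: names[start:end], NextAfter: names[end-1]}
}

func main() {
	names := []string{"a.txt", "b.txt", "c.txt", "d.txt", "e.txt"}
	opts := ListOptions{Limit: 2}
	for {
		res := fakeList(names, opts)
		for _, o := range res.Objects {
			fmt.Println(o)
		}
		if res.NextAfter == "" {
			break // listing exhausted
		}
		opts.After = res.NextAfter // request the next page
	}
}
```

The same loop shape works against the real Storage.List; only the error handling is elided here.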

Move is a free function rather than a method so backends do not have to implement two near-identical operations. Backends that can do better than Copy + Delete (e.g. osfs via filesystem rename) satisfy the optional s2.Mover interface, which s2.Move discovers via type assertion:

err := s2.Move(ctx, strg, "src.txt", "dst.txt")
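
The discovery mechanism is plain Go: a type assertion against an optional interface. A minimal stdlib-only sketch of the pattern — Store, copyStore, and renameStore below are illustrative stand-ins, not s2 types:

```go
package main

import "fmt"

// Store is a stand-in for the required interface.
type Store interface {
	Copy(src, dst string) error
	Delete(name string) error
}

// Mover is the optional fast path, mirroring s2.Mover.
type Mover interface {
	Move(src, dst string) error
}

// Move prefers the optional interface and falls back to Copy+Delete.
func Move(s Store, src, dst string) error {
	if m, ok := s.(Mover); ok {
		return m.Move(src, dst) // e.g. a filesystem rename: atomic
	}
	if err := s.Copy(src, dst); err != nil {
		return err
	}
	return s.Delete(src) // NOT atomic with the Copy above
}

// copyStore supports only the required interface.
type copyStore struct{}

func (copyStore) Copy(src, dst string) error { fmt.Println("copy", src, dst); return nil }
func (copyStore) Delete(name string) error   { fmt.Println("delete", name); return nil }

// renameStore additionally implements Mover.
type renameStore struct{ copyStore }

func (renameStore) Move(src, dst string) error { fmt.Println("rename", src, dst); return nil }

func main() {
	_ = Move(copyStore{}, "a", "b")   // fallback path: copy, then delete
	_ = Move(renameStore{}, "a", "b") // fast path: rename
}
```

Keeping Move a free function means every backend implements only Copy and Delete, and the rename optimization stays opt-in.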

Errors that report a missing object wrap s2.ErrNotExist; detect them with errors.Is:

if _, err := strg.Get(ctx, "missing.txt"); errors.Is(err, s2.ErrNotExist) {
	// handle not found
}

Server Configuration

Environment Variables

Variable           | Default     | Description
-------------------|-------------|------------------------------------------------------
S2_SERVER_CONFIG   |             | Path to JSON config file
S2_SERVER_LISTEN   | :9000       | Listen address
S2_SERVER_TYPE     | osfs        | Storage backend type
S2_SERVER_ROOT     | /var/lib/s2 | Root directory for bucket data
S2_SERVER_USER     |             | Username for authentication (disables auth if empty)
S2_SERVER_PASSWORD |             | Password for authentication
S2_SERVER_BUCKETS  |             | Comma-separated list of buckets to create on startup

Environment variables take precedence over the config file. Other settings (such as S3 backend options) are not configurable via environment variables — use S2_SERVER_CONFIG to point to a JSON config file instead.

Authentication

When S2_SERVER_USER is set, the server requires credentials on all routes:

  • Web Console — HTTP Basic Auth
  • S3 API — AWS Signature Version 4 (S2_SERVER_USER as the Access Key ID, S2_SERVER_PASSWORD as the Secret Access Key)

S2_SERVER_USER=myuser S2_SERVER_PASSWORD=mypassword s2-server

Using the AWS CLI:

AWS_ACCESS_KEY_ID=myuser AWS_SECRET_ACCESS_KEY=mypassword \
  aws --endpoint-url http://localhost:9000/s3api s3 ls

Or via a named profile in ~/.aws/config:

[profile s2]
endpoint_url = http://localhost:9000/s3api
aws_access_key_id = myuser
aws_secret_access_key = mypassword

aws --profile s2 s3 ls

When S2_SERVER_USER is empty (the default), authentication is disabled.

Presigned URLs — S2 verifies AWS SigV4 signatures passed in the query string (X-Amz-Algorithm=AWS4-HMAC-SHA256, X-Amz-Signature, …), so URLs produced by s3.NewPresignClient (Go) or s3.getSignedUrl (JavaScript) work for GET and PUT. The body of a presigned PUT is treated as UNSIGNED-PAYLOAD.

Config File

{
  "listen": ":9000",
  "type": "osfs",
  "root": "/var/lib/s2",
  "user": "myuser",
  "password": "mypassword",
  "buckets": ["assets", "uploads"]
}

s2-server -f config.json

S3 API Endpoints

Method | Path                                          | Operation
-------|-----------------------------------------------|----------------------------
GET    | /s3api                                        | ListBuckets
PUT    | /s3api/{bucket}                               | CreateBucket
HEAD   | /s3api/{bucket}                               | HeadBucket
DELETE | /s3api/{bucket}                               | DeleteBucket
GET    | /s3api/{bucket}?location                      | GetBucketLocation
GET    | /s3api/{bucket}                               | ListObjectsV2
GET    | /s3api/{bucket}/{key...}                      | GetObject (Range supported)
HEAD   | /s3api/{bucket}/{key...}                      | HeadObject
PUT    | /s3api/{bucket}/{key...}                      | PutObject / CopyObject
DELETE | /s3api/{bucket}/{key...}                      | DeleteObject
POST   | /s3api/{bucket}?delete                        | DeleteObjects
POST   | /s3api/{bucket}/{key...}?uploads              | CreateMultipartUpload
PUT    | /s3api/{bucket}/{key...}?uploadId&partNumber  | UploadPart
POST   | /s3api/{bucket}/{key...}?uploadId             | CompleteMultipartUpload
DELETE | /s3api/{bucket}/{key...}?uploadId             | AbortMultipartUpload

Custom metadata is supported via x-amz-meta-* headers on PutObject/CopyObject and returned on GetObject/HeadObject.

S3 Compatibility

S2 Server is designed to be a drop-in MinIO replacement for:

  • ✅ Local development against aws-sdk-go, boto3, @aws-sdk/client-s3, and other S3 SDKs
  • ✅ CI/test environments using S3 via testcontainers or docker-compose
  • ✅ Small-scale production for static assets, uploads, and backups
  • ✅ Presigned URL workflows (browser uploads/downloads)
  • ✅ Multipart uploads for large objects

It is not a replacement for AWS S3 in scenarios requiring versioning, server-side encryption, IAM policies, lifecycle management, or multi-node replication. See Limitations for details.

Limitations

S2 aims to cover the parts of the S3 API that matter for local development and lightweight production use. Some features are intentionally not implemented:

  • Object versioning — VersionId, version listing, and s3:GetObjectVersion are not supported. Buckets behave as if versioning is permanently disabled.
  • ListObjectsV2 only — The legacy ListObjects (V1) API is not implemented. Most modern SDKs use V2 by default; older clients may need configuration changes.
  • Server-side encryption (SSE-S3 / SSE-KMS / SSE-C) — Not implemented. Use full-disk encryption at the OS level if needed.
  • Bucket policies, ACLs, IAM — Authentication is a single user/password pair; there is no per-bucket or per-object access control. For multi-tenant scenarios, use AWS S3 or another full-featured implementation.
  • Replication, lifecycle rules, object lock — Not implemented.

If your use case needs any of the above, S2 is probably not the right tool — consider AWS S3, Ceph RGW, or SeaweedFS.

memfs backend

The memfs backend holds every object body in process memory. It is designed for tests and local development, not production workloads:

  • All objects live in RAM for the lifetime of the process; nothing is persisted.
  • The default upload limit is 16 MiB (vs. 5 GiB for osfs/s3) to protect the host from accidental OOM. Set S2_SERVER_MAX_UPLOAD_SIZE (or Config.MaxUploadSize) to raise it if you genuinely need larger uploads against memfs.
  • There is no total-memory budget or backpressure across concurrent uploads.

If you need to handle large files, use the osfs or s3 backend instead.

License

MIT

Credits

The header image was generated with Google Gemini. It includes the Go Gopher mascot, originally designed by Renée French and licensed under CC BY 3.0.

Documentation

Overview

Package s2 is a lightweight object storage abstraction with multiple backends and an embeddable S3-compatible server.

Overview

s2 provides a single Storage interface that all backends implement, and a small set of value types (Object, Metadata, Config) for moving data through it. The same interface drives a local-filesystem backend (osfs), an in-memory backend (memfs), and an AWS S3 / S3-compatible backend (s3). Backends register themselves via blank import:

import (
    _ "github.com/mojatter/s2/fs" // osfs + memfs
    _ "github.com/mojatter/s2/s3" // AWS S3
)

strg, err := s2.NewStorage(ctx, s2.Config{Type: s2.TypeOSFS, Root: "/var/data"})

Errors

Operations that report a missing object wrap the sentinel ErrNotExist. Detect them with errors.Is:

if _, err := strg.Get(ctx, "foo.txt"); errors.Is(err, s2.ErrNotExist) {
    // not found
}

NewStorage wraps ErrUnknownType when the requested backend has not been registered.

Concurrency

All Storage implementations shipped with s2 are safe for concurrent use by multiple goroutines. Methods that mutate state (Put, Delete, PutMetadata, ...) are independently atomic per object: a single Put is either fully visible to subsequent reads or not visible at all. Multiple concurrent Puts to the same object resolve to one of the writes — there is no defined ordering — but no torn writes are exposed.

PutMetadata is NOT atomic with Put. Calling Put followed by PutMetadata leaves a window during which the object exists with whatever metadata Put itself wrote.
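
To make the "no torn writes" guarantee concrete, here is a stdlib-only sketch of how an in-memory backend can provide per-object atomicity: the whole body is swapped under a lock, so a concurrent reader sees either the old bytes or the new bytes, never a mix. memStore is illustrative only, not the actual memfs implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// memStore swaps each object's entire body under one lock,
// so a single Put is either fully visible or not visible at all.
type memStore struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func (s *memStore) Put(name string, body []byte) {
	b := append([]byte(nil), body...) // copy: caller can't mutate stored bytes
	s.mu.Lock()
	s.data[name] = b
	s.mu.Unlock()
}

func (s *memStore) Get(name string) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	b, ok := s.data[name]
	return b, ok
}

func main() {
	s := &memStore{data: map[string][]byte{}}
	var wg sync.WaitGroup
	for _, body := range []string{"old", "new"} {
		wg.Add(1)
		go func(b string) {
			defer wg.Done()
			s.Put("x", []byte(b))
		}(body)
	}
	wg.Wait()
	got, _ := s.Get("x")
	// One of the two writes wins; there is no defined ordering.
	fmt.Println(string(got) == "old" || string(got) == "new")
}
```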

Atomicity per backend

The following table summarizes the atomicity guarantees of each operation across the bundled backends.

Operation             | osfs                  | memfs                 | s3
----------------------|-----------------------|-----------------------|-----------------------
Put                   | atomic (temp+rename)  | atomic                | atomic (server-side)
Copy                  | atomic per dst        | atomic per dst        | atomic (server-side)
Move (via [Move])     | atomic (rename)       | non-atomic            | non-atomic
Delete                | atomic                | atomic                | atomic
DeleteRecursive       | best-effort, partial  | best-effort, partial  | best-effort, partial

"Best-effort, partial" means the operation deletes objects one at a time (or in pages, for s3) and may leave some objects behind on error.

Stability

s2 follows semantic versioning. Until v1.0.0 the public API may change between minor versions; breaking changes are documented in the release notes. After v1.0.0, the package will only break compatibility on a major-version bump.

Constants

This section is empty.

Variables

var ErrNotExist = errors.New("s2: object not exist")

ErrNotExist is returned when an operation targets an object that does not exist. Backends wrap this with the missing object's name via fmt.Errorf, so callers should detect it with errors.Is rather than direct equality:

if errors.Is(err, s2.ErrNotExist) {
    // handle missing object
}
var ErrUnknownType = errors.New("s2: unknown storage type")

ErrUnknownType is returned by NewStorage when no plugin is registered for the requested Type. Detect with errors.Is:

if errors.Is(err, s2.ErrUnknownType) {
    // unknown backend
}

Functions

func Move added in v0.2.0

func Move(ctx context.Context, s Storage, src, dst string) error

Move moves src to dst on s. If s implements Mover, its Move method is used (which may be atomic and is generally more efficient). Otherwise Move falls back to Copy followed by Delete; that fallback is NOT atomic: if Delete fails after a successful Copy, both objects exist.

Example

ExampleMove uses the free function s2.Move, which transparently picks the fastest path: backends that implement the optional s2.Mover interface (e.g. osfs via filesystem rename) get an atomic rename, others fall back to Copy + Delete.

package main

import (
	"context"
	"errors"
	"fmt"
	"io"

	"github.com/mojatter/s2"
	_ "github.com/mojatter/s2/fs"
)

func main() {
	ctx := context.Background()
	strg, _ := s2.NewStorage(ctx, s2.Config{Type: s2.TypeMemFS})
	_ = strg.Put(ctx, s2.NewObjectBytes("src.txt", []byte("hi")))

	if err := s2.Move(ctx, strg, "src.txt", "dst.txt"); err != nil {
		panic(err)
	}

	_, err := strg.Get(ctx, "src.txt")
	fmt.Println("src missing:", errors.Is(err, s2.ErrNotExist))

	got, _ := strg.Get(ctx, "dst.txt")
	rc, _ := got.Open()
	defer rc.Close()
	body, _ := io.ReadAll(rc)
	fmt.Println("dst body:", string(body))
}
Output:
src missing: true
dst body: hi

func RegisterNewStorageFunc

func RegisterNewStorageFunc(t Type, fn NewStorageFunc)

RegisterNewStorageFunc registers fn as the constructor for storage type t, making that type available to NewStorage.

func UnregisterNewStorageFunc

func UnregisterNewStorageFunc(t Type)

UnregisterNewStorageFunc unregisters a storage function. Primarily useful in tests that swap backends.

Types

type Config

type Config struct {
	// Type is the type of storage.
	Type Type `json:"type"`
	// Root is the root path of the storage.
	// If Type is TypeOSFS, Root is the root path of the file system.
	// If Type is TypeMemFS, Root is not used.
	// If Type is TypeS3, Root is the S3 bucket name. The string following / in the bucket name is treated as a prefix.
	Root string `json:"root"`
	// SignedURL is the presign URL of the storage. This is used for TypeOSFS and TypeMemFS.
	SignedURL string `json:"signed_url,omitempty"`
	// S3 holds S3-specific settings. Only used when Type is TypeS3.
	S3 *S3Config `json:"s3,omitempty"`
}

Config is a configuration for a storage.

type ListOptions added in v0.2.0

type ListOptions struct {
	// Prefix restricts the listing to objects whose names begin with Prefix.
	Prefix string
	// After is an opaque continuation token returned by a previous call as
	// ListResult.NextAfter; pass it to fetch the next page. Empty for the
	// first page.
	After string
	// Limit caps the number of returned Objects. Zero means no limit.
	Limit int
	// Recursive, when true, walks subdirectories and returns no
	// CommonPrefixes; when false, the listing stops at the first "/" past
	// Prefix and "directory-like" entries are surfaced via CommonPrefixes.
	Recursive bool
}

ListOptions controls a Storage.List call.

All fields are optional. The zero value lists the entire flat namespace of the storage.

type ListResult added in v0.2.0

type ListResult struct {
	// Objects are the objects matching the request, in lexicographic order.
	// Their metadata may be unset depending on the backend; use Storage.Get
	// to fetch full metadata.
	Objects []Object
	// CommonPrefixes are the directory-like grouping prefixes (only populated
	// when ListOptions.Recursive is false).
	CommonPrefixes []string
	// NextAfter is an opaque continuation token. When empty, the listing is
	// exhausted.
	NextAfter string
}

ListResult is the response from Storage.List.

type Metadata

type Metadata map[string]string

Metadata holds object metadata as case-sensitive key/value pairs.

Metadata mirrors the http.Header / url.Values pattern: it is a named map type with helper methods, so callers can use the standard map operations (range, len, indexing, literal initialization) directly when convenient and the methods when ergonomics matter.

The zero value is nil and is read-safe; writes to a nil Metadata panic. Use make(s2.Metadata) or a literal s2.Metadata{...} to obtain a writable instance.
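
Because Metadata is a plain named map type, its nil-value behavior is ordinary Go map behavior, as this short stdlib sketch shows (the local Metadata mirrors the underlying type of s2.Metadata):

```go
package main

import "fmt"

// Metadata mirrors the underlying type of s2.Metadata.
type Metadata map[string]string

func main() {
	var m Metadata // nil: reads are safe
	v, ok := m["etag"]
	fmt.Println(v == "", ok) // zero value and false for a missing key
	// m["etag"] = "x" // would panic: assignment to entry in nil map

	w := Metadata{"etag": "abc"} // literal: writable
	w["content-type"] = "text/plain"
	fmt.Println(len(w))
}
```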

func (Metadata) Clone added in v0.2.0

func (m Metadata) Clone() Metadata

Clone returns a deep copy of the metadata. The result is independent of the receiver: mutating one does not affect the other. Cloning a nil Metadata returns nil.

func (Metadata) Delete added in v0.2.0

func (m Metadata) Delete(key string)

Delete removes the entry for key. Calling Delete on a missing key is a no-op.

func (Metadata) Get

func (m Metadata) Get(key string) (string, bool)

Get returns the value associated with key. The boolean reports whether the key is present, distinguishing a missing entry from an empty value.

func (Metadata) Set added in v0.2.0

func (m Metadata) Set(key, value string)

Set assigns value to key, overwriting any existing entry.

type Mover added in v0.2.0

type Mover interface {
	Move(ctx context.Context, src, dst string) error
}

Mover is an optional interface that a Storage implementation may satisfy to provide a move operation that is more efficient — and possibly atomic — than the default Copy + Delete fallback. The osfs backend implements Mover via a filesystem rename, for example.

Storage implementations are not required to satisfy Mover; the free function Move falls back to Copy + Delete when they do not.

type NewStorageFunc

type NewStorageFunc func(ctx context.Context, cfg Config) (Storage, error)

NewStorageFunc is a function that creates a new storage.

type Object

type Object interface {
	// Name returns the name of the object.
	Name() string
	// Open opens the object for reading and returns the reader stream.
	// The caller is responsible for closing the returned io.ReadCloser.
	Open() (io.ReadCloser, error)
	// OpenRange opens the object for reading the specified range and returns the reader stream.
	// The caller is responsible for closing the returned io.ReadCloser.
	OpenRange(offset, length uint64) (io.ReadCloser, error)
	// Length returns the length of the object in bytes.
	Length() uint64
	// LastModified returns the last modified time of the object.
	LastModified() time.Time
	// Metadata returns the metadata of the object.
	//
	// Note: Depending on the storage implementation (e.g., S3), objects
	// returned by List operations may not contain metadata. Use Storage.Get
	// to fetch the complete metadata.
	Metadata() Metadata
}

Object is an interface that represents an object in a storage.

func NewObjectBytes

func NewObjectBytes(name string, body []byte, opts ...ObjectOption) Object

NewObjectBytes creates a new Object from a byte slice.

func NewObjectFromFile added in v0.2.0

func NewObjectFromFile(ctx context.Context, name string, opts ...ObjectOption) (Object, error)

NewObjectFromFile creates a new Object backed by a file on the local filesystem. The file at name must exist and not be a directory; otherwise the returned error wraps ErrNotExist. The Object's Length and LastModified are populated from os.Stat; metadata can be supplied via WithMetadata.

Reads via Open are performed lazily by re-opening the underlying file. The supplied ctx is currently unused but reserved for future cancellation.

func NewObjectReader

func NewObjectReader(name string, body io.ReadCloser, length uint64, opts ...ObjectOption) Object

NewObjectReader creates a new Object from an io.ReadCloser.

type ObjectOption

type ObjectOption func(*object)

ObjectOption is a functional option for configuring objects created by NewObject, NewObjectReader, and NewObjectBytes.

func WithLastModified

func WithLastModified(t time.Time) ObjectOption

WithLastModified sets the last modified time on the object.

func WithMetadata

func WithMetadata(md Metadata) ObjectOption

WithMetadata sets the metadata on the object.

type S3Config

type S3Config struct {
	// EndpointURL is a custom S3-compatible endpoint (e.g. "http://localhost:9000/s3api").
	EndpointURL string `json:"endpoint_url,omitempty"`
	// Region is the AWS region (e.g. "ap-northeast-1").
	Region string `json:"region,omitempty"`
	// AccessKeyID is the AWS access key ID.
	AccessKeyID string `json:"access_key_id,omitempty"`
	// SecretAccessKey is the AWS secret access key.
	SecretAccessKey string `json:"secret_access_key,omitempty"`
}

S3Config holds S3-specific configuration. When fields are empty, the AWS SDK defaults (environment variables, shared credentials, IAM role, etc.) are used.

type SignedURLMethod added in v0.2.0

type SignedURLMethod string

SignedURLMethod is the HTTP method that a presigned URL is authorized for.

const (
	// SignedURLGet authorizes a GET request (download).
	SignedURLGet SignedURLMethod = "GET"
	// SignedURLPut authorizes a PUT request (upload).
	SignedURLPut SignedURLMethod = "PUT"
)

type SignedURLOptions added in v0.2.0

type SignedURLOptions struct {
	// Name is the object name to sign.
	Name string
	// Method is the HTTP method to authorize. Defaults to GET when empty.
	Method SignedURLMethod
	// TTL is how long the URL remains valid.
	TTL time.Duration
}

SignedURLOptions controls a Storage.SignedURL call.

type Storage

type Storage interface {
	// Type returns the type of the storage.
	Type() Type
	// Sub returns a new storage scoped to the given prefix. The returned
	// storage shares the parent's lifetime.
	Sub(ctx context.Context, prefix string) (Storage, error)
	// List returns the objects (and, when non-recursive, common prefixes)
	// matching opts.
	List(ctx context.Context, opts ListOptions) (ListResult, error)
	// Get returns the object identified by name, including its metadata.
	// If no object exists at name, the returned error wraps ErrNotExist.
	Get(ctx context.Context, name string) (Object, error)
	// Exists reports whether an object exists at name.
	Exists(ctx context.Context, name string) (bool, error)
	// Put writes obj to the storage atomically per object. Any metadata on
	// obj is persisted as part of the same call.
	Put(ctx context.Context, obj Object) error
	// PutMetadata replaces the metadata of an existing object without
	// rewriting its body. It is intended for hash- or ETag-style metadata
	// that can only be computed after the body is written. Note: PutMetadata
	// is NOT atomic with Put; a crash between the two leaves the object on
	// disk with whatever metadata Put itself wrote. Replaces (does not merge)
	// any existing metadata.
	PutMetadata(ctx context.Context, name string, metadata Metadata) error
	// Copy duplicates src to dst. The semantics are backend-defined: the s3
	// backend uses server-side copy, while file-backed backends stream the
	// body.
	Copy(ctx context.Context, src, dst string) error
	// Delete removes the object at name. Deleting a non-existent object is
	// a no-op and does not return an error.
	Delete(ctx context.Context, name string) error
	// DeleteRecursive removes every object whose name begins with prefix.
	// The operation is best-effort and not atomic across objects.
	DeleteRecursive(ctx context.Context, prefix string) error
	// SignedURL returns a presigned URL for the object identified by opts.
	// Backends that do not support presigning return an error.
	SignedURL(ctx context.Context, opts SignedURLOptions) (string, error)
}

Storage is a simple object storage abstraction. Implementations are expected to be safe for concurrent use by multiple goroutines.

Errors that report a missing object wrap ErrNotExist; detect them with errors.Is(err, s2.ErrNotExist).

func NewStorage

func NewStorage(ctx context.Context, cfg Config) (Storage, error)

NewStorage creates a new storage from the given configuration. If no plugin is registered for cfg.Type, the returned error wraps ErrUnknownType.

Example

ExampleNewStorage shows how to construct an in-memory Storage. The blank import of github.com/mojatter/s2/fs registers both osfs and memfs.

package main

import (
	"context"
	"fmt"

	"github.com/mojatter/s2"
	_ "github.com/mojatter/s2/fs"
)

func main() {
	ctx := context.Background()

	strg, err := s2.NewStorage(ctx, s2.Config{Type: s2.TypeMemFS})
	if err != nil {
		panic(err)
	}
	fmt.Println(strg.Type())
}
Output:
memfs

type Type

type Type string
const (
	TypeOSFS  Type = "osfs"
	TypeMemFS Type = "memfs"
	TypeS3    Type = "s3"
)

func KnownTypes added in v0.2.0

func KnownTypes() []Type

KnownTypes returns the list of storage Types that are known to s2. The returned slice is a fresh copy; mutating it does not affect future calls. Note that this only enumerates compiled-in types; whether a given type is *registered* depends on whether the corresponding backend package has been imported (e.g. _ "github.com/mojatter/s2/fs").

Directories

Path Synopsis
cmd
s2-server command
internal
numconv
Package numconv contains internal helpers for converting between Go's signed and unsigned integer types when dealing with stdlib APIs that disagree on signedness (notably os.FileInfo.Size and io.Seeker offsets).
