storage

Published: Mar 11, 2026. License: MIT.


One storage API for local disks, object stores, and remote filesystems.


Why

Applications often need to store files in different places:

  • Local disks during development
  • Object storage like S3 or GCS in production
  • Remote filesystems like SFTP or FTP
  • Cloud providers or custom remotes

Each backend has its own API and client library.

storage provides a small, consistent interface so your application code doesn't have to change when the backend changes.

Driver Matrix

Each driver is thoroughly tested against the shared test suite using testcontainers or emulators where appropriate.

Driver   Kind                 Notes
local    Local filesystem     Good default for local development and tests.
memory   In-memory            Best zero-dependency backend for tests and ephemeral workflows.
redis    Distributed memory   Good for temporary distributed blob storage with explicit size and durability tradeoffs.
ftp      Remote filesystem    Embedded integration fixture in the shared matrix.
sftp     Remote filesystem    Container-backed integration coverage in the shared matrix.
s3       Object storage       MinIO-backed integration coverage in the shared matrix.
gcs      Object storage       Emulator-backed integration coverage via fake-gcs-server.
dropbox  Object storage       Returns temporary links; external integration strategy still open.
rclone   Breadth driver       Depends on the underlying rclone remote; see the rclone storage systems overview.

Install

Root module:

go get github.com/goforj/storage

Then add the driver modules you need, for example:

go get github.com/goforj/storage/driver/localstorage
go get github.com/goforj/storage/driver/memorystorage
go get github.com/goforj/storage/driver/redisstorage
go get github.com/goforj/storage/driver/ftpstorage
go get github.com/goforj/storage/driver/sftpstorage
go get github.com/goforj/storage/driver/s3storage
go get github.com/goforj/storage/driver/gcsstorage
go get github.com/goforj/storage/driver/dropboxstorage
go get github.com/goforj/storage/driver/rclonestorage

Usage

Choose the construction style that fits your application:

  • Use a driver constructor like localstorage.New(...) when you want a single backend directly.
  • Use storage.Build(...) when you want one backend through the shared storage API.
  • Use storage.New(...) when you want multiple named disks managed from config.

All storage operations also expose *Context equivalents for deadlines and cancellation. The default methods use context.Background().

Common operations

package main

import (
    "errors"
    "fmt"
    "log"

    "github.com/goforj/storage"
    "github.com/goforj/storage/driver/localstorage"
)

func main() {
    disk, err := storage.Build(localstorage.Config{
        Root: "/tmp/storage",
    })
    if err != nil {
        log.Fatal(err)
    }

    // Put a file.
    if err := disk.Put("docs/readme.txt", []byte("hello")); err != nil {
        log.Fatal(err)
    }

    // Check whether the file exists.
    ok, err := disk.Exists("docs/readme.txt")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(ok)
    // Output: true

    // Read the file back.
    data, err := disk.Get("docs/readme.txt")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(data))
    // Output: hello

    // List the parent directory.
    entries, err := disk.List("docs")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(entries[0].Path)
    // Output: docs/readme.txt

    // Delete the file.
    if err := disk.Delete("docs/readme.txt"); err != nil {
        log.Fatal(err)
    }

    // Ask the backend for an access URL when supported.
    url, err := disk.URL("docs/readme.txt")
    switch {
    case err == nil:
        fmt.Println(url)
    case errors.Is(err, storage.ErrUnsupported):
        fmt.Println("url generation unsupported")
        // Output: url generation unsupported
    default:
        log.Fatal(err)
    }
}

Single-backend construction

package main

import (
    "log"

    "github.com/goforj/storage"
    "github.com/goforj/storage/driver/localstorage"
)

func main() {
    // Build one disk through the shared storage API.
    built, err := storage.Build(localstorage.Config{
        Root:   "/tmp/storage",
        Prefix: "scratch",
    })
    if err != nil {
        log.Fatal(err)
    }

    // Or construct the driver directly.
    direct, err := localstorage.New(localstorage.Config{
        Root:   "/tmp/storage",
        Prefix: "scratch",
    })
    if err != nil {
        log.Fatal(err)
    }

    _, _ = built, direct
}

Manager and named disks

package main

import (
    "log"

    "github.com/goforj/storage"
    "github.com/goforj/storage/driver/localstorage"
    "github.com/goforj/storage/driver/s3storage"
)

func main() {
    // Build a manager with multiple named disks.
    mgr, err := storage.New(storage.Config{
        Default: "assets",
        Disks: map[storage.DiskName]storage.DriverConfig{
            "assets": localstorage.Config{
                Root:   "/tmp/storage",
                Prefix: "assets",
            },
            "uploads": s3storage.Config{
                Bucket:          "app-uploads",
                Region:          "us-east-1",
                Endpoint:        "http://localhost:9000",
                AccessKeyID:     "minioadmin",
                SecretAccessKey: "minioadmin",
                UsePathStyle:    true,
                Prefix:          "uploads",
            },
        },
    })
    if err != nil {
        log.Fatal(err)
    }

    // Resolve a disk by name.
    disk, err := mgr.Disk("assets")
    if err != nil {
        log.Fatal(err)
    }

    // Put a file into the disk.
    if err := disk.Put("hello.txt", []byte("hello")); err != nil {
        log.Fatal(err)
    }

    // Read the file back.
    data, err := disk.Get("hello.txt")
    if err != nil {
        log.Fatal(err)
    }

    _ = data // []byte("hello")
}

Rclone

Use rclonestorage when you want to access rclone-backed remotes through the storage interface.

package main

import (
    "log"

    "github.com/goforj/storage/driver/rclonestorage"
)

const rcloneConfig = `
[localdisk]
type = local
`

func main() {
    // Build an rclone-backed disk from inline rclone config.
    disk, err := rclonestorage.New(rclonestorage.Config{
        Remote:           "localdisk:/tmp/storage",
        Prefix:           "sandbox",
        RcloneConfigData: rcloneConfig,
    })
    if err != nil {
        log.Fatal(err)
    }

    // Put a file through rclone.
    if err := disk.Put("rclone.txt", []byte("hello")); err != nil {
        log.Fatal(err)
    }

    // List files from the disk root.
    entries, err := disk.List("")
    if err != nil {
        log.Fatal(err)
    }

    _ = entries // rclone.txt
}

See the examples directory for runnable programs.

Testing with fakes

package main

import (
    "testing"

    "github.com/goforj/storage"
    "github.com/goforj/storage/driver/memorystorage"
    "github.com/goforj/storage/storagetest"
)

func TestUpload(t *testing.T) {
    // Create one fake disk.
    disk := storagetest.Fake(t)
    _ = disk.Put("photo.jpg", []byte("ok"))

    // Or create a fake manager with named in-memory disks.
    mgr := storagetest.FakeManager(t, "photos", map[storage.DiskName]memorystorage.Config{
        "photos":  {Prefix: "photos"},
        "avatars": {Prefix: "avatars"},
    })

    photos, _ := mgr.Disk("photos")
    _ = photos.Put("one.jpg", []byte("ok"))
}

Benchmarks

Benchmarks are rendered from docs/bench and compare the shared storage contract across representative backends.

Run the renderer with:

cd docs/bench
go test -tags benchrender . -run TestRenderBenchmarks -count=1 -v

Each chart sample uses a fixed measurement window per driver, so the ops chart remains meaningful without unbounded benchmark calibration.

Notes:

  • gcs uses fake-gcs-server.
  • ftp is included by default and reuses a logged-in control connection per storage instance during the benchmark run.
  • redis, s3, and sftp use testcontainers; include them with BENCH_WITH_DOCKER=1 or by explicitly setting BENCH_DRIVER.
  • rclone_local measures rclone overhead on top of a local filesystem remote.

The rendered charts cover latency (ns/op), iterations (N), allocated bytes (B/op), and allocations (allocs/op).

Capability Matrix

The matrix tracks Stat, Copy, Move, Walk, URL, and Context support for each driver (local, memory, redis, ftp, sftp, s3, gcs, dropbox, rclone). A ~ marks backend- or environment-dependent behavior. For example, GCS URL generation is unavailable in emulator mode, and rclone URL support depends on the underlying remote.

API reference

The API section below is autogenerated; do not edit between the markers.

API Index

Group Functions
Config rclonestorage.LocalRemote rclonestorage.MustRenderLocal rclonestorage.MustRenderS3 rclonestorage.RenderLocal rclonestorage.RenderS3 rclonestorage.S3Remote
Construction Build DriverConfig DriverFactory ResolvedConfig
Context BuildContext ContextStorage ContextStorage.CopyContext ContextStorage.DeleteContext ContextStorage.ExistsContext ContextStorage.GetContext ContextStorage.ListContext ContextStorage.MoveContext ContextStorage.PutContext ContextStorage.StatContext ContextStorage.URLContext ContextStorage.WalkContext
Core DiskName Entry Storage Storage.Copy Storage.Delete Storage.Exists Storage.Get Storage.List Storage.Move Storage.Put Storage.Stat Storage.URL Storage.Walk
Driver Config dropboxstorage.Config ftpstorage.Config gcsstorage.Config localstorage.Config memorystorage.Config rclonestorage.Config s3storage.Config sftpstorage.Config
Driver Constructors dropboxstorage.New ftpstorage.New gcsstorage.New localstorage.New memorystorage.New rclonestorage.New s3storage.New sftpstorage.New
Manager Config Manager Manager.Default Manager.Disk New RegisterDriver
Paths JoinPrefix NormalizePath

Config

rclonestorage.LocalRemote

LocalRemote defines a local backend configuration.

Example: define a local remote

remote := rclonestorage.LocalRemote{Name: "local"}
fmt.Println(remote.Name)
// Output: local

Example: define a local remote with all fields

remote := rclonestorage.LocalRemote{
	Name: "local",
}
fmt.Println(remote.Name)
// Output: local

rclonestorage.MustRenderLocal

MustRenderLocal panics on error.

cfg := rclonestorage.MustRenderLocal(rclonestorage.LocalRemote{Name: "local"})
fmt.Println(cfg)
// Output:
// [local]
// type = local

rclonestorage.MustRenderS3

MustRenderS3 panics on error.

cfg := rclonestorage.MustRenderS3(rclonestorage.S3Remote{
	Name:            "assets",
	Region:          "us-east-1",
	AccessKeyID:     "key",
	SecretAccessKey: "secret",
})
fmt.Println(cfg)
// Output:
// [assets]
// type = s3
// provider = AWS
// access_key_id = key
// secret_access_key = secret
// region = us-east-1

rclonestorage.RenderLocal

RenderLocal returns ini-formatted rclone config for a local backend.

cfg, _ := rclonestorage.RenderLocal(rclonestorage.LocalRemote{Name: "local"})
fmt.Println(cfg)
// Output:
// [local]
// type = local

rclonestorage.RenderS3

RenderS3 returns ini-formatted rclone config content for a single S3 remote.

cfg, _ := rclonestorage.RenderS3(rclonestorage.S3Remote{
	Name:            "assets",
	Region:          "us-east-1",
	AccessKeyID:     "key",
	SecretAccessKey: "secret",
})
fmt.Println(cfg)
// Output:
// [assets]
// type = s3
// provider = AWS
// access_key_id = key
// secret_access_key = secret
// region = us-east-1

rclonestorage.S3Remote

S3Remote defines parameters for constructing an rclone S3 remote.

Example: define an s3 remote

remote := rclonestorage.S3Remote{
	Name:            "assets",
	Region:          "us-east-1",
	AccessKeyID:     "key",
	SecretAccessKey: "secret",
}
fmt.Println(remote.Name)
// Output: assets

Example: define an s3 remote with all fields

remote := rclonestorage.S3Remote{
	Name:               "assets",
	Endpoint:           "http://localhost:9000", // default: ""
	Region:             "us-east-1",
	AccessKeyID:        "key",
	SecretAccessKey:    "secret",
	Provider:           "AWS",    // default: "AWS"
	PathStyle:          false,    // default: false
	BucketACL:          "private", // default: ""
	UseUnsignedPayload: false,    // default: false
}
fmt.Println(remote.Name)
// Output: assets

Construction

Build

Build constructs a single storage backend from a typed driver config without a Manager.

fs, _ := storage.Build(localstorage.Config{
	Root:   "/tmp/storage-example",
	Prefix: "assets",
})

DriverConfig

DriverConfig is implemented by typed driver configs such as local.Config or s3storage.Config. It is the public config boundary for Manager and Build.

var cfg storage.DriverConfig = localstorage.Config{
	Root: "/tmp/storage-config",
}

DriverFactory

DriverFactory constructs a Storage for a given normalized disk configuration.

factory := storage.DriverFactory(func(ctx context.Context, cfg storage.ResolvedConfig) (storage.Storage, error) {
	return nil, nil
})

ResolvedConfig

ResolvedConfig is the normalized internal config passed to registered drivers. Users should prefer typed driver configs and treat this as registry adapter glue, not the primary construction API.

factory := storage.DriverFactory(func(ctx context.Context, cfg storage.ResolvedConfig) (storage.Storage, error) {
	fmt.Println(cfg.Driver)
	// Output: memory
	return nil, nil
})

_, _ = factory(context.Background(), storage.ResolvedConfig{Driver: "memory"})

Context

BuildContext

BuildContext constructs a single storage backend from a typed driver config using the caller-provided context.

ContextStorage

ContextStorage exposes context-aware storage operations for cancellation and deadlines. Use Storage for the common path and type-assert to ContextStorage when you need caller-provided context.

ContextStorage.CopyContext

CopyContext copies the object at src to dst using the caller-provided context.

ContextStorage.DeleteContext

DeleteContext removes the object at path using the caller-provided context.

ContextStorage.ExistsContext

ExistsContext reports whether an object exists at path using the caller-provided context.

ContextStorage.GetContext

GetContext reads the object at path using the caller-provided context.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-get-context",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))

ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()

cs := disk.(storage.ContextStorage)
data, _ := cs.GetContext(ctx, "docs/readme.txt")
fmt.Println(string(data))
// Output: hello

ContextStorage.ListContext

ListContext returns the immediate children under path using the caller-provided context.

ContextStorage.MoveContext

MoveContext moves the object at src to dst using the caller-provided context.

ContextStorage.PutContext

PutContext writes an object at path using the caller-provided context.

ContextStorage.StatContext

StatContext returns the entry at path using the caller-provided context.

ContextStorage.URLContext

URLContext returns a usable access URL using the caller-provided context.

ContextStorage.WalkContext

WalkContext visits entries recursively using the caller-provided context.

Core

DiskName

DiskName is a typed identifier for configured disks.

const uploads storage.DiskName = "uploads"
fmt.Println(uploads)
// Output: uploads

Entry

Entry represents an item returned by List.

Path is relative to the storage namespace, not an OS-native path. Directory-like entries are listing artifacts, not a promise of POSIX-style storage semantics.

entry := storage.Entry{
	Path:  "docs/readme.txt",
	Size:  5,
	IsDir: false,
}
fmt.Println(entry.Path, entry.IsDir)
// Output: docs/readme.txt false

Storage

Storage is the public interface for interacting with a storage backend.

Semantics:

  • Put overwrites an existing object at the same path.
  • List is one-level and non-recursive.
  • List with an empty path lists from the disk root or prefix root.
  • Walk is recursive.
  • URL returns a usable access URL when the driver supports it.
  • Copy overwrites the destination object when the backend supports copy semantics.
  • Move relocates an object and may be implemented as copy followed by delete.
  • Unsupported operations should return ErrUnsupported.

var disk storage.Storage
disk, _ = storage.Build(localstorage.Config{
	Root: "/tmp/storage-interface",
})

Storage.Copy

Copy copies the object at src to dst.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-copy",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))
_ = disk.Copy("docs/readme.txt", "docs/copy.txt")

data, _ := disk.Get("docs/copy.txt")
fmt.Println(string(data))
// Output: hello

Storage.Delete

Delete removes the object at path.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-delete",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))
_ = disk.Delete("docs/readme.txt")

ok, _ := disk.Exists("docs/readme.txt")
fmt.Println(ok)
// Output: false

Storage.Exists

Exists reports whether an object exists at path.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-exists",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))

ok, _ := disk.Exists("docs/readme.txt")
fmt.Println(ok)
// Output: true

Storage.Get

Get reads the object at path.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-get",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))

data, _ := disk.Get("docs/readme.txt")
fmt.Println(string(data))
// Output: hello

Storage.List

List returns the immediate children under path.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-list",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))

entries, _ := disk.List("docs")
fmt.Println(entries[0].Path)
// Output: docs/readme.txt

Storage.Move

Move moves the object at src to dst.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-move",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))
_ = disk.Move("docs/readme.txt", "docs/archive.txt")

ok, _ := disk.Exists("docs/readme.txt")
fmt.Println(ok)
// Output: false

Storage.Put

Put writes an object at path, overwriting any existing object.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-put",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))
fmt.Println("stored")
// Output: stored

Storage.Stat

Stat returns the entry at path.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-stat",
})
_ = disk.Put("docs/readme.txt", []byte("hello"))

entry, _ := disk.Stat("docs/readme.txt")
fmt.Println(entry.Path, entry.Size)
// Output: docs/readme.txt 5

Storage.URL

URL returns a usable access URL when the driver supports it.

Example: request an object url

disk, _ := storage.Build(s3storage.Config{
	Bucket: "uploads",
	Region: "us-east-1",
})

url, _ := disk.URL("docs/readme.txt")

Example: handle unsupported url generation

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-url",
})

_, err := disk.URL("docs/readme.txt")
fmt.Println(errors.Is(err, storage.ErrUnsupported))
// Output: true

Storage.Walk

Walk visits entries recursively when the backend supports it.

disk, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-walk",
})

err := disk.Walk("", func(entry storage.Entry) error {
	fmt.Println(entry.Path)
	return nil
})
fmt.Println(errors.Is(err, storage.ErrUnsupported))
// Output: true

Driver Config

dropboxstorage.Config

Config defines a Dropbox-backed storage disk.

Example: define dropbox storage config

cfg := dropboxstorage.Config{
	Token: "token",
}

Example: define dropbox storage config with all fields

cfg := dropboxstorage.Config{
	Token:  "token",
	Prefix: "uploads", // default: ""
}

ftpstorage.Config

Config defines an FTP-backed storage disk.

Example: define ftp storage config

cfg := ftpstorage.Config{
	Host:     "127.0.0.1",
	User:     "demo",
	Password: "secret",
}

Example: define ftp storage config with all fields

cfg := ftpstorage.Config{
	Host:               "127.0.0.1",
	Port:               21,        // default: 21
	User:               "demo",    // default: ""
	Password:           "secret",  // default: ""
	TLS:                false,     // default: false
	InsecureSkipVerify: false,     // default: false
	Prefix:             "uploads", // default: ""
}

gcsstorage.Config

Config defines a GCS-backed storage disk.

Example: define gcs storage config

cfg := gcsstorage.Config{
	Bucket: "uploads",
}

Example: define gcs storage config with all fields

cfg := gcsstorage.Config{
	Bucket:          "uploads",
	CredentialsJSON: "{...}",              // default: ""
	Endpoint:        "http://127.0.0.1:0", // default: ""
	Prefix:          "assets",             // default: ""
}

localstorage.Config

Config defines local storage rooted at a filesystem path.

Example: define local storage config

cfg := localstorage.Config{
	Root:   "/tmp/storage-local",
	Prefix: "sandbox",
}

Example: define local storage config with all fields

cfg := localstorage.Config{
	Root:   "/tmp/storage-local",
	Prefix: "sandbox", // default: ""
}

memorystorage.Config

Config defines an in-memory storage disk.

Example: define memory storage config

cfg := memorystorage.Config{}

Example: define memory storage config with all fields

cfg := memorystorage.Config{
	Prefix: "sandbox", // default: ""
}

rclonestorage.Config

Config defines an rclone-backed storage disk.

Example: define rclone storage config

cfg := rclonestorage.Config{
	Remote: "local:",
	Prefix: "sandbox",
}

Example: define rclone storage config with all fields

cfg := rclonestorage.Config{
	Remote:           "local:",
	Prefix:           "sandbox",                  // default: ""
	RcloneConfigPath: "/path/to/rclone.conf",     // default: ""
	RcloneConfigData: "[local]\ntype = local\n",  // default: ""
}

s3storage.Config

Config defines an S3-backed storage disk.

Example: define s3 storage config

cfg := s3storage.Config{
	Bucket: "uploads",
	Region: "us-east-1",
}

Example: define s3 storage config with all fields

cfg := s3storage.Config{
	Bucket:          "uploads",
	Endpoint:        "http://localhost:9000", // default: ""
	Region:          "us-east-1",
	AccessKeyID:     "minioadmin", // default: ""
	SecretAccessKey: "minioadmin", // default: ""
	UsePathStyle:    true,         // default: false
	UnsignedPayload: false,        // default: false
	Prefix:          "assets",     // default: ""
}

sftpstorage.Config

Config defines an SFTP-backed storage disk.

Example: define sftp storage config

cfg := sftpstorage.Config{
	Host:     "127.0.0.1",
	User:     "demo",
	Password: "secret",
}

Example: define sftp storage config with all fields

cfg := sftpstorage.Config{
	Host:                  "127.0.0.1",
	Port:                  22,            // default: 22
	User:                  "demo",        // default: "root"
	Password:              "secret",      // default: ""
	KeyPath:               "/path/id_ed25519",      // default: ""
	KnownHostsPath:        "/path/known_hosts",     // default: ""
	InsecureIgnoreHostKey: false,         // default: false
	Prefix:                "uploads",     // default: ""
}

Driver Constructors

dropboxstorage.New

New constructs Dropbox-backed storage using the official SDK.

fs, _ := dropboxstorage.New(dropboxstorage.Config{
	Token: "token",
})

ftpstorage.New

New constructs FTP-backed storage using jlaffaye/ftp.

fs, _ := ftpstorage.New(ftpstorage.Config{
	Host:     "127.0.0.1",
	User:     "demo",
	Password: "secret",
})

gcsstorage.New

New constructs GCS-backed storage using cloud.google.com/go/storage.

fs, _ := gcsstorage.New(gcsstorage.Config{
	Bucket: "uploads",
})

localstorage.New

New constructs local storage rooted at cfg.Root with an optional prefix.

fs, _ := localstorage.New(localstorage.Config{
	Root:   "/tmp/storage-local",
	Prefix: "sandbox",
})

memorystorage.New

New constructs in-memory storage.

fs, _ := memorystorage.New(memorystorage.Config{
	Prefix: "sandbox",
})

rclonestorage.New

New constructs rclone-backed storage. All disks share a single config path.

Example: rclone storage

fs, _ := rclonestorage.New(rclonestorage.Config{
	Remote: "local:",
	Prefix: "sandbox",
})

Example: rclone storage with inline config

fs, _ := rclonestorage.New(rclonestorage.Config{
	Remote: "localdisk:/tmp/storage",
	RcloneConfigData: `
[localdisk]
type = local
`,
})

s3storage.New

New constructs S3-backed storage using AWS SDK v2.

fs, _ := s3storage.New(s3storage.Config{
	Bucket: "uploads",
	Region: "us-east-1",
})

sftpstorage.New

New constructs SFTP-backed storage using ssh and pkg/sftp.

fs, _ := sftpstorage.New(sftpstorage.Config{
	Host:     "127.0.0.1",
	User:     "demo",
	Password: "secret",
})

Manager

Config

Config defines named disks using typed driver configs.

cfg := storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local": localstorage.Config{Root: "/tmp/storage-manager"},
	},
}

Manager

Manager holds named storage disks.

mgr, _ := storage.New(storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local": localstorage.Config{Root: "/tmp/storage-manager"},
	},
})

Manager.Default

Default returns the default disk or panics if misconfigured.

mgr, _ := storage.New(storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local": localstorage.Config{Root: "/tmp/storage-default"},
	},
})

fs := mgr.Default()
fmt.Println(fs != nil)
// Output: true

Manager.Disk

Disk returns a named disk or an error if it does not exist.

mgr, _ := storage.New(storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local":   localstorage.Config{Root: "/tmp/storage-default"},
		"uploads": localstorage.Config{Root: "/tmp/storage-uploads"},
	},
})

fs, _ := mgr.Disk("uploads")
fmt.Println(fs != nil)
// Output: true

New

New constructs a Manager and eagerly initializes all disks.

mgr, _ := storage.New(storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local":  localstorage.Config{Root: "/tmp/storage-local"},
		"assets": localstorage.Config{Root: "/tmp/storage-assets", Prefix: "public"},
	},
})

RegisterDriver

RegisterDriver makes a driver available to the Manager. It panics on duplicate registrations.

storage.RegisterDriver("memory", func(ctx context.Context, cfg storage.ResolvedConfig) (storage.Storage, error) {
	return nil, nil
})

Paths

JoinPrefix

JoinPrefix combines a disk prefix with a path using slash separators.

fmt.Println(storage.JoinPrefix("assets", "logo.svg"))
// Output: assets/logo.svg

NormalizePath

NormalizePath cleans a user path, normalizes separators, and rejects attempts to escape the disk root or prefix root.

The empty string and root-like inputs normalize to the logical root.

p, _ := storage.NormalizePath(" /avatars//user-1.png ")
fmt.Println(p)
// Output: avatars/user-1.png

Contributing

Shared contract tests live in storagetest.

Centralized integration coverage lives in integration and runs the same contract across supported backends. That centralized matrix is the authoritative integration path for the repository.

Current fixture types in the centralized matrix:

  • testcontainers: s3, sftp
  • emulator: gcs
  • embedded/local fixtures: local, ftp, rclone_local

Common contributor commands:

go test ./...
cd integration
go test -tags=integration ./all -count=1

Run a single integration backend:

cd integration
INTEGRATION_DRIVER=gcs go test -tags=integration ./all -count=1

Make targets:

make test
make examples-test
make coverage
make integration
make integration-driver gcs

Documentation

Overview

Package storage provides a small abstraction over file-like objects stored in local, cloud, and remote backends.

The package is Go-native in API shape: explicit drivers, named disks, and small interfaces with documented semantics.

Preferred construction paths:

  • direct use: call a driver module's New(Config)
  • named disks: pass typed driver configs to storage.New
  • single disk from generic orchestration: pass a typed driver config to Build

Core semantics:

  • List is one-level and non-recursive.
  • List with an empty path lists from the disk root or prefix root.
  • URL returns a usable access URL when the driver supports it.
  • Unsupported operations should return ErrUnsupported.
  • Missing objects should be detectable with errors.Is(err, ErrNotFound).
  • Path normalization rejects traversal attempts with ErrForbidden.

Driver registration is opt-in. Import the driver modules you need for manager-based construction, or call driver constructors directly.

Index

Constants

This section is empty.

Variables

View Source
var (
	ErrNotFound    = storagecore.ErrNotFound
	ErrForbidden   = storagecore.ErrForbidden
	ErrUnsupported = storagecore.ErrUnsupported
)

Functions

func JoinPrefix

func JoinPrefix(prefix, p string) string

JoinPrefix combines a disk prefix with a path using slash separators. @group Paths

Example: join a disk prefix and path

fmt.Println(storage.JoinPrefix("assets", "logo.svg"))
// Output: assets/logo.svg

func NormalizePath

func NormalizePath(p string) (string, error)

NormalizePath cleans a user path, normalizes separators, and rejects attempts to escape the disk root or prefix root.

The empty string and root-like inputs normalize to the logical root. @group Paths

Example: normalize a user path

p, _ := storage.NormalizePath(" /avatars//user-1.png ")
fmt.Println(p)
// Output: avatars/user-1.png

func RegisterDriver

func RegisterDriver(name string, factory DriverFactory)

RegisterDriver makes a driver available to the Manager. It panics on duplicate registrations. @group Manager

Example: register a custom driver

storage.RegisterDriver("memory", func(ctx context.Context, cfg storage.ResolvedConfig) (storage.Storage, error) {
	return nil, nil
})

Types

type Config

type Config struct {
	Default DiskName
	Disks   map[DiskName]DriverConfig
}

Config defines named disks using typed driver configs. @group Manager

Example: define manager config

cfg := storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local": localstorage.Config{Root: "/tmp/storage-manager"},
	},
}
_ = cfg

type ContextStorage

type ContextStorage interface {
	// GetContext reads the object at path using the caller-provided context.
	//
	// Example: read an object with a timeout
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-get-context",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//
	//	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	//	defer cancel()
	//
	//	cs := disk.(storage.ContextStorage)
	//	data, _ := cs.GetContext(ctx, "docs/readme.txt")
	//	fmt.Println(string(data))
	//	// Output: hello
	GetContext(ctx context.Context, p string) ([]byte, error)
	// PutContext writes an object at path using the caller-provided context.
	PutContext(ctx context.Context, p string, contents []byte) error
	// DeleteContext removes the object at path using the caller-provided context.
	DeleteContext(ctx context.Context, p string) error
	// StatContext returns the entry at path using the caller-provided context.
	StatContext(ctx context.Context, p string) (Entry, error)
	// ExistsContext reports whether an object exists at path using the caller-provided context.
	ExistsContext(ctx context.Context, p string) (bool, error)
	// ListContext returns the immediate children under path using the caller-provided context.
	ListContext(ctx context.Context, p string) ([]Entry, error)
	// WalkContext visits entries recursively using the caller-provided context.
	WalkContext(ctx context.Context, p string, fn func(Entry) error) error
	// CopyContext copies the object at src to dst using the caller-provided context.
	CopyContext(ctx context.Context, src, dst string) error
	// MoveContext moves the object at src to dst using the caller-provided context.
	MoveContext(ctx context.Context, src, dst string) error
	// URLContext returns a usable access URL using the caller-provided context.
	URLContext(ctx context.Context, p string) (string, error)
}

ContextStorage exposes context-aware storage operations for cancellation and deadlines. Use Storage for the common path and type-assert to ContextStorage when you need caller-provided context. @group Context

type DiskName

type DiskName = storagecore.DiskName

DiskName is a typed identifier for configured disks. @group Core

Example: declare a disk name

const uploads storage.DiskName = "uploads"
fmt.Println(uploads)
// Output: uploads

type DriverConfig

type DriverConfig interface {
	DriverName() string
	ResolvedConfig() ResolvedConfig
}

DriverConfig is implemented by typed driver configs such as localstorage.Config or s3storage.Config. It is the public config boundary for Manager and Build. @group Construction

Example: pass a typed driver config

var cfg storage.DriverConfig = localstorage.Config{
	Root: "/tmp/storage-config",
}
_ = cfg

type DriverFactory

type DriverFactory func(ctx context.Context, cfg ResolvedConfig) (Storage, error)

DriverFactory constructs a Storage for a given normalized disk configuration. @group Construction

Example: declare a driver factory

factory := storage.DriverFactory(func(ctx context.Context, cfg storage.ResolvedConfig) (storage.Storage, error) {
	return nil, nil
})
_ = factory

type Entry

type Entry = storagecore.Entry

Entry represents an item returned by operations such as List, Walk, and Stat.

Path is relative to the storage namespace, not an OS-native path. Directory-like entries are listing artifacts, not a promise of POSIX-style storage semantics. @group Core

Example: inspect a listed entry

entry := storage.Entry{
	Path:  "docs/readme.txt",
	Size:  5,
	IsDir: false,
}
fmt.Println(entry.Path, entry.IsDir)
// Output: docs/readme.txt false

type Manager

type Manager struct {
	// contains filtered or unexported fields
}

Manager holds named storage disks. @group Manager

Example: keep a manager for later disk lookups

mgr, _ := storage.New(storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local": localstorage.Config{Root: "/tmp/storage-manager"},
	},
})
_ = mgr

func New

func New(cfg Config) (*Manager, error)

New constructs a Manager and eagerly initializes all disks. @group Manager

Example: build a manager with named disks

mgr, _ := storage.New(storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local":  localstorage.Config{Root: "/tmp/storage-local"},
		"assets": localstorage.Config{Root: "/tmp/storage-assets", Prefix: "public"},
	},
})
_ = mgr

func (*Manager) Default

func (m *Manager) Default() Storage

Default returns the default disk or panics if misconfigured. @group Manager

Example: get the default disk

mgr, _ := storage.New(storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local": localstorage.Config{Root: "/tmp/storage-default"},
	},
})

fs := mgr.Default()
fmt.Println(fs != nil)
// Output: true

func (*Manager) Disk

func (m *Manager) Disk(name DiskName) (Storage, error)

Disk returns a named disk or an error if it does not exist. @group Manager

Example: get a named disk

mgr, _ := storage.New(storage.Config{
	Default: "local",
	Disks: map[storage.DiskName]storage.DriverConfig{
		"local":   localstorage.Config{Root: "/tmp/storage-default"},
		"uploads": localstorage.Config{Root: "/tmp/storage-uploads"},
	},
})

fs, _ := mgr.Disk("uploads")
fmt.Println(fs != nil)
// Output: true

type ResolvedConfig

type ResolvedConfig = storagecore.ResolvedConfig

ResolvedConfig is the normalized internal config passed to registered drivers. Users should prefer typed driver configs and treat this as registry adapter glue, not the primary construction API. @group Construction

Example: inspect a resolved config in a driver factory

factory := storage.DriverFactory(func(ctx context.Context, cfg storage.ResolvedConfig) (storage.Storage, error) {
	fmt.Println(cfg.Driver)
	return nil, nil
})

_, _ = factory(context.Background(), storage.ResolvedConfig{Driver: "memory"})
// Output: memory

type Storage

type Storage interface {
	// Get reads the object at path.
	//
	// Example: read an object
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-get",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//
	//	data, _ := disk.Get("docs/readme.txt")
	//	fmt.Println(string(data))
	//	// Output: hello
	Get(p string) ([]byte, error)

	// Put writes an object at path, overwriting any existing object.
	//
	// Example: write an object
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-put",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//	fmt.Println("stored")
	//	// Output: stored
	Put(p string, contents []byte) error

	// Delete removes the object at path.
	//
	// Example: delete an object
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-delete",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//	_ = disk.Delete("docs/readme.txt")
	//
	//	ok, _ := disk.Exists("docs/readme.txt")
	//	fmt.Println(ok)
	//	// Output: false
	Delete(p string) error

	// Stat returns the entry at path.
	//
	// Example: stat an object
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-stat",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//
	//	entry, _ := disk.Stat("docs/readme.txt")
	//	fmt.Println(entry.Path, entry.Size)
	//	// Output: docs/readme.txt 5
	Stat(p string) (Entry, error)

	// Exists reports whether an object exists at path.
	//
	// Example: check for an object
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-exists",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//
	//	ok, _ := disk.Exists("docs/readme.txt")
	//	fmt.Println(ok)
	//	// Output: true
	Exists(p string) (bool, error)

	// List returns the immediate children under path.
	//
	// Example: list a directory
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-list",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//
	//	entries, _ := disk.List("docs")
	//	fmt.Println(entries[0].Path)
	//	// Output: docs/readme.txt
	List(p string) ([]Entry, error)

	// Walk visits entries recursively when the backend supports it.
	//
	// Example: walk a backend when supported
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-walk",
	//	})
	//
	//	err := disk.Walk("", func(entry storage.Entry) error {
	//		fmt.Println(entry.Path)
	//		return nil
	//	})
	//	fmt.Println(errors.Is(err, storage.ErrUnsupported))
	//	// Output: true
	Walk(p string, fn func(Entry) error) error

	// Copy copies the object at src to dst.
	//
	// Example: copy an object
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-copy",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//	_ = disk.Copy("docs/readme.txt", "docs/copy.txt")
	//
	//	data, _ := disk.Get("docs/copy.txt")
	//	fmt.Println(string(data))
	//	// Output: hello
	Copy(src, dst string) error

	// Move moves the object at src to dst.
	//
	// Example: move an object
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-move",
	//	})
	//	_ = disk.Put("docs/readme.txt", []byte("hello"))
	//	_ = disk.Move("docs/readme.txt", "docs/archive.txt")
	//
	//	ok, _ := disk.Exists("docs/readme.txt")
	//	fmt.Println(ok)
	//	// Output: false
	Move(src, dst string) error

	// URL returns a usable access URL when the driver supports it.
	//
	// Example: request an object URL
	//
	//	disk, _ := storage.Build(s3storage.Config{
	//		Bucket: "uploads",
	//		Region: "us-east-1",
	//	})
	//
	//	url, _ := disk.URL("docs/readme.txt")
	//	_ = url
	//
	// Example: handle unsupported URL generation
	//
	//	disk, _ := storage.Build(localstorage.Config{
	//		Root: "/tmp/storage-url",
	//	})
	//
	//	_, err := disk.URL("docs/readme.txt")
	//	fmt.Println(errors.Is(err, storage.ErrUnsupported))
	//	// Output: true
	URL(p string) (string, error)
}

Storage is the public interface for interacting with a storage backend.

Semantics:

  • Put overwrites an existing object at the same path.
  • List is one-level and non-recursive.
  • List with an empty path lists from the disk root or prefix root.
  • Walk is recursive.
  • URL returns a usable access URL when the driver supports it.
  • Copy overwrites the destination object when the backend supports copy semantics.
  • Move relocates an object and may be implemented as copy followed by delete.
  • Unsupported operations should return ErrUnsupported.

@group Core

Example: use the storage interface

var disk storage.Storage
disk, _ = storage.Build(localstorage.Config{
	Root: "/tmp/storage-interface",
})
_ = disk

func Build

func Build(cfg DriverConfig) (Storage, error)

Build constructs a single storage backend from a typed driver config without a Manager. @group Construction

Example: build a single disk

fs, _ := storage.Build(localstorage.Config{
	Root: "/tmp/storage-example",
	Prefix: "assets",
})
_ = fs

func BuildContext

func BuildContext(ctx context.Context, cfg DriverConfig) (Storage, error)

BuildContext constructs a single storage backend from a typed driver config using the caller-provided context. @group Context

Directories

Path Synopsis
driver
ftpstorage module
gcsstorage module
localstorage module
memorystorage module
rclonestorage module
redisstorage module
s3storage module
sftpstorage module
storagecore module
storagetest module
