Package handler

Documentation

Overview

Package handler provides ways to accept tus 1.0 calls using HTTP.

tus is a protocol based on HTTP for resumable file uploads. Resumable means that an upload can be interrupted at any moment and resumed later without re-uploading the data that was already transferred. An interruption may happen deliberately, if the user wants to pause, or by accident in the case of a network issue or server outage (http://tus.io).

The basics of tusd

tusd was designed in a way that allows flexible and customizable usage. We wanted to avoid binding this package to a specific storage system, particularly a proprietary third-party product. Therefore tusd is an abstraction layer whose only job is to accept incoming HTTP requests, validate them according to the specification and finally pass them on to the data store.

The data store is another important component in tusd's architecture whose purpose is to do the actual file handling. It has to write the incoming upload to a persistent storage system and retrieve information about an upload's current state. Therefore it is the only part of the system which communicates directly with the underlying storage system, whether that is the local disk, a remote FTP server or a cloud provider such as AWS S3.

Using a store composer

The only hard requirements for a data store can be found in the DataStore interface. It contains methods for creating uploads (NewUpload), writing to them (WriteChunk) and retrieving their status (GetInfo). However, there are many more features which are not mandatory but may still be used. These are contained in their own interfaces which all share the *DataStore suffix. Examples are GetReaderDataStore, which enables downloading uploads, and TerminaterDataStore, which allows uploads to be terminated.

The store composer offers a way to combine the basic data store - the core - implementation and these additional extensions:

composer := tusd.NewStoreComposer()
composer.UseCore(dataStore) // Implements DataStore
composer.UseTerminater(terminater) // Implements TerminaterDataStore
composer.UseLocker(locker) // Implements LockerDataStore

The corresponding methods for adding an extension to the composer are named Use followed by the name of the corresponding interface. However, most data stores provide multiple extensions, and adding all of them manually can be tedious and error-prone. Therefore, all data stores distributed with tusd provide a UseIn() method which does this job automatically. For example, this is the S3 store in action (see S3Store.UseIn):

store := s3store.New(…)
locker := memorylocker.New()
composer := tusd.NewStoreComposer()
store.UseIn(composer)
locker.UseIn(composer)

Finally, once you are done composing your data store, you can pass it inside the Config struct in order to create a new tusd HTTP handler:

config := tusd.Config{
  StoreComposer: composer,
  BasePath: "/files/",
}
handler, err := tusd.NewHandler(config)

This handler can then be mounted to a specific path, e.g. /files:

http.Handle("/files/", http.StripPrefix("/files/", handler))
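Putting these pieces together, a minimal but complete server could look like the following sketch. It uses the filestore and memorylocker packages as stand-ins for whichever store and locker you prefer, imports the package as handler (as in the example further below), and picks an arbitrary upload directory and port:

package main

import (
	"log"
	"net/http"

	"github.com/Korpenter/tusd/v2/pkg/filestore"
	"github.com/Korpenter/tusd/v2/pkg/handler"
	"github.com/Korpenter/tusd/v2/pkg/memorylocker"
)

func main() {
	composer := handler.NewStoreComposer()

	// Store uploads as files in ./uploads and coordinate access with an
	// in-memory lock.
	store := filestore.New("./uploads")
	store.UseIn(composer)

	locker := memorylocker.New()
	locker.UseIn(composer)

	h, err := handler.NewHandler(handler.Config{
		StoreComposer: composer,
		BasePath:      "/files/",
	})
	if err != nil {
		log.Fatalf("unable to create handler: %s", err)
	}

	http.Handle("/files/", http.StripPrefix("/files/", h))
	log.Fatal(http.ListenAndServe(":8080", nil))
}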


Constants

const UploadLengthDeferred = "1"

Variables

var (
	ErrUnsupportedVersion               = NewError("ERR_UNSUPPORTED_VERSION", "missing, invalid or unsupported Tus-Resumable header", http.StatusPreconditionFailed)
	ErrMaxSizeExceeded                  = NewError("ERR_MAX_SIZE_EXCEEDED", "maximum size exceeded", http.StatusRequestEntityTooLarge)
	ErrInvalidContentType               = NewError("ERR_INVALID_CONTENT_TYPE", "missing or invalid Content-Type header", http.StatusBadRequest)
	ErrInvalidUploadLength              = NewError("ERR_INVALID_UPLOAD_LENGTH", "missing or invalid Upload-Length header", http.StatusBadRequest)
	ErrInvalidOffset                    = NewError("ERR_INVALID_OFFSET", "missing or invalid Upload-Offset header", http.StatusBadRequest)
	ErrNotFound                         = NewError("ERR_UPLOAD_NOT_FOUND", "upload not found", http.StatusNotFound)
	ErrFileLocked                       = NewError("ERR_UPLOAD_LOCKED", "file currently locked", http.StatusLocked)
	ErrLockTimeout                      = NewError("ERR_LOCK_TIMEOUT", "failed to acquire lock before timeout", http.StatusInternalServerError)
	ErrMismatchOffset                   = NewError("ERR_MISMATCHED_OFFSET", "mismatched offset", http.StatusConflict)
	ErrSizeExceeded                     = NewError("ERR_UPLOAD_SIZE_EXCEEDED", "upload's size exceeded", http.StatusRequestEntityTooLarge)
	ErrNotImplemented                   = NewError("ERR_NOT_IMPLEMENTED", "feature not implemented", http.StatusNotImplemented)
	ErrUploadNotFinished                = NewError("ERR_UPLOAD_NOT_FINISHED", "one of the partial uploads is not finished", http.StatusBadRequest)
	ErrInvalidConcat                    = NewError("ERR_INVALID_CONCAT", "invalid Upload-Concat header", http.StatusBadRequest)
	ErrModifyFinal                      = NewError("ERR_MODIFY_FINAL", "modifying a final upload is not allowed", http.StatusForbidden)
	ErrUploadLengthAndUploadDeferLength = NewError("ERR_AMBIGUOUS_UPLOAD_LENGTH", "provided both Upload-Length and Upload-Defer-Length", http.StatusBadRequest)
	ErrInvalidUploadDeferLength         = NewError("ERR_INVALID_UPLOAD_LENGTH_DEFER", "invalid Upload-Defer-Length header", http.StatusBadRequest)
	ErrUploadStoppedByServer            = NewError("ERR_UPLOAD_STOPPED", "upload has been stopped by server", http.StatusBadRequest)
	ErrUploadRejectedByServer           = NewError("ERR_UPLOAD_REJECTED", "upload creation has been rejected by server", http.StatusBadRequest)
	ErrUploadInterrupted                = NewError("ERR_UPLOAD_INTERRUPTED", "upload has been interrupted by another request for this upload resource", http.StatusBadRequest)
	ErrServerShutdown                   = NewError("ERR_SERVER_SHUTDOWN", "request has been interrupted because the server is shutting down", http.StatusServiceUnavailable)
	ErrOriginNotAllowed                 = NewError("ERR_ORIGIN_NOT_ALLOWED", "request origin is not allowed", http.StatusForbidden)

	// These two responses use status 500 for backwards compatibility. Clients might receive a timeout response
	// when the upload gets interrupted. Most clients will only retry 5XX responses, not 4XX, so we respond with 500 here.
	ErrReadTimeout     = NewError("ERR_READ_TIMEOUT", "timeout while reading request body", http.StatusInternalServerError)
	ErrConnectionReset = NewError("ERR_CONNECTION_RESET", "TCP connection reset by peer", http.StatusInternalServerError)
)
var DefaultCorsConfig = CorsConfig{
	Disable:          false,
	AllowOrigin:      regexp.MustCompile(".*"),
	AllowCredentials: false,
	AllowMethods:     "POST, HEAD, PATCH, OPTIONS, GET, DELETE",
	AllowHeaders:     "Authorization, Origin, X-Requested-With, X-Request-ID, X-HTTP-Method-Override, Content-Type, Upload-Length, Upload-Offset, Tus-Resumable, Upload-Metadata, Upload-Defer-Length, Upload-Concat, Upload-Incomplete, Upload-Draft-Interop-Version",
	MaxAge:           "86400",
	ExposeHeaders:    "Upload-Offset, Location, Upload-Length, Tus-Version, Tus-Resumable, Tus-Max-Size, Tus-Extension, Upload-Metadata, Upload-Defer-Length, Upload-Concat, Upload-Incomplete, Upload-Draft-Interop-Version",
}

DefaultCorsConfig is the configuration that will be used if none is provided.

Functions

func ParseMetadataHeader

func ParseMetadataHeader(header string) map[string]string

ParseMetadataHeader parses the Upload-Metadata header as defined in the File Creation extension. e.g. Upload-Metadata: name bHVucmpzLnBuZw==,type aW1hZ2UvcG5n

func SerializeMetadataHeader

func SerializeMetadataHeader(meta map[string]string) string

SerializeMetadataHeader serializes a map of strings into the Upload-Metadata header format used in the response for HEAD requests. e.g. Upload-Metadata: name bHVucmpzLnBuZw==,type aW1hZ2UvcG5n
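As a quick round-trip illustration (a sketch; the import path matches the package example further below and the metadata values are taken from the documentation above):

package main

import (
	"fmt"

	"github.com/Korpenter/tusd/v2/pkg/handler"
)

func main() {
	// The values are base64-encoded, e.g. "bHVucmpzLnBuZw==" decodes to
	// "lunrjs.png" and "aW1hZ2UvcG5n" to "image/png".
	meta := handler.ParseMetadataHeader("name bHVucmpzLnBuZw==,type aW1hZ2UvcG5n")
	fmt.Println(meta["name"], meta["type"])

	// Serialize the map back into the Upload-Metadata wire format.
	fmt.Println(handler.SerializeMetadataHeader(meta))
}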

Types

type ConcatableUpload

type ConcatableUpload interface {
	// ConcatUploads concatenates the content from the provided partial uploads
	// and writes the result in the destination upload.
	// The caller (usually the handler) must and will ensure that this
	// destination upload has been created before with enough space to hold all
	// partial uploads. The order, in which the partial uploads are supplied,
	// must be respected during concatenation.
	ConcatUploads(ctx context.Context, partialUploads []Upload) error
}

type ConcaterDataStore

type ConcaterDataStore interface {
	AsConcatableUpload(upload Upload) ConcatableUpload
}

ConcaterDataStore is the interface that must be implemented if the Concatenation extension should be enabled. Only then will the handler parse and respect the Upload-Concat header.

type Config

type Config struct {
	// StoreComposer points to the store composer from which the core data store
	// and optional dependencies should be taken. May only be nil if DataStore is
	// set.
	StoreComposer *StoreComposer
	// MaxSize defines how many bytes may be stored in one single upload. If its
	// value is 0 or smaller, no limit will be enforced.
	MaxSize int64
	// BasePath defines the URL path used for handling uploads, e.g. "/files/".
	// If no trailing slash is present, it will be added. You may specify an
	// absolute URL containing a scheme, e.g. "http://tus.io"
	BasePath string

	// EnableExperimentalProtocol controls whether the new resumable upload protocol draft
	// from the IETF's HTTP working group is accepted next to the current tus v1 protocol.
	// See https://datatracker.ietf.org/doc/draft-ietf-httpbis-resumable-upload/
	EnableExperimentalProtocol bool
	// DisableDownload indicates whether the server will refuse downloads of the
	// uploaded file, by not mounting the GET handler.
	DisableDownload bool
	// DisableTermination indicates whether the server will refuse termination
	// requests of the uploaded file, by not mounting the DELETE handler.
	DisableTermination bool
	// Cors can be used to customize the handling of Cross-Origin Resource Sharing (CORS).
	// See the CorsConfig struct for more details.
	// Defaults to DefaultCorsConfig.
	Cors *CorsConfig
	// NotifyCompleteUploads indicates whether sending notifications about
	// completed uploads using the CompleteUploads channel should be enabled.
	NotifyCompleteUploads bool
	// NotifyTerminatedUploads indicates whether sending notifications about
	// terminated uploads using the TerminatedUploads channel should be enabled.
	NotifyTerminatedUploads bool
	// NotifyUploadProgress indicates whether sending notifications about
	// the upload progress using the UploadProgress channel should be enabled.
	NotifyUploadProgress bool
	// NotifyCreatedUploads indicates whether sending notifications about
	// the upload having been created using the CreatedUploads channel should be enabled.
	NotifyCreatedUploads bool
	// UploadProgressInterval specifies the interval at which the upload progress
	// notifications are sent to the UploadProgress channel, if enabled.
	// Defaults to 1s.
	UploadProgressInterval time.Duration
	// Logger is the logger to use internally, mostly for printing requests.
	Logger *slog.Logger
	// Respect the X-Forwarded-Host, X-Forwarded-Proto and Forwarded headers
	// potentially set by proxies when generating an absolute URL in the
	// response to POST requests.
	RespectForwardedHeaders bool
	// PreUploadCreateCallback will be invoked before a new upload is created, if the
	// property is supplied. If the callback returns no error, the upload will be created
	// and optional values from HTTPResponse will be contained in the HTTP response.
	// If the error is non-nil, the upload will not be created. This can be used to implement
	// validation of upload metadata etc. Furthermore, HTTPResponse will be ignored and
	// the error value can contain values for the HTTP response.
	// If the error is nil, FileInfoChanges can be filled out to specify individual properties
	// that should be overwritten before the upload is created. See its type definition for
	// more details on its behavior. If you do not want to make any changes, return an empty struct.
	PreUploadCreateCallback func(hook HookEvent) (HTTPResponse, FileInfoChanges, error)
	// PreFinishResponseCallback will be invoked after an upload is completed but before
	// a response is returned to the client. This can be used to implement post-processing validation.
	// If the callback returns no error, optional values from HTTPResponse will be contained in the HTTP response.
	// If the error is non-nil, the error will be forwarded to the client. Furthermore,
	// HTTPResponse will be ignored and the error value can contain values for the HTTP response.
	PreFinishResponseCallback func(hook HookEvent) (HTTPResponse, error)
	// GracefulRequestCompletionTimeout is the timeout for operations to complete after an HTTP
	// request has ended (successfully or by error). For example, if an HTTP request is interrupted,
	// instead of stopping immediately, the handler and data store will be given some additional
	// time to wrap up their operations and save any uploaded data. GracefulRequestCompletionTimeout
	// controls this time.
	// See HookEvent.Context for more details.
	// Defaults to 10s.
	GracefulRequestCompletionTimeout time.Duration
	// AcquireLockTimeout is the duration that a request handler will wait to acquire a lock for
	// an upload. If the timeout is reached, it will stop waiting and send an error response to the
	// client.
	// Defaults to 20s.
	AcquireLockTimeout time.Duration
	// NetworkTimeout is the timeout for individual read operations on the request body. If the
	// read operation succeeds in this time window, the handler will continue consuming the body.
	// If a read operation times out, the handler will stop reading and close the request.
	// This ensures that an upload is consumed while data is being transmitted, while also closing
	// dead connections.
	// Under the hood, this is passed to ResponseController.SetReadDeadline.
	// Defaults to 60s.
	NetworkTimeout time.Duration
	// contains filtered or unexported fields
}

Config provides a way to configure the Handler depending on your needs.
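As a hedged sketch of how the notification channels tie into the Config, the snippet below enables NotifyCompleteUploads and consumes the CompleteUploads channel exposed by the handler (the composer is assumed to be set up as shown in the overview, and the package is imported as handler):

config := handler.Config{
	StoreComposer:         composer,
	BasePath:              "/files/",
	NotifyCompleteUploads: true,
}

h, err := handler.NewHandler(config)
if err != nil {
	log.Fatal(err)
}

// The channel is exposed via the embedded UnroutedHandler.
go func() {
	for event := range h.CompleteUploads {
		log.Printf("upload %s finished (%d bytes)", event.Upload.ID, event.Upload.Size)
	}
}()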

type CorsConfig

type CorsConfig struct {
	// Disable instructs the handler to ignore all CORS-related headers and never set a
	// CORS-related header in a response. This is useful if CORS is already handled by a proxy.
	Disable bool
	// AllowOrigin is a regular expression used to check if a request is allowed to participate in the
	// CORS protocol. If the request's Origin header matches the regular expression, CORS is allowed.
	// If not, a 403 Forbidden response is sent, rejecting the CORS request.
	AllowOrigin *regexp.Regexp
	// AllowCredentials defines whether the `Access-Control-Allow-Credentials: true` header should be
	// included in CORS responses. This allows clients to share credentials using the Cookie and
	// Authorization headers.
	AllowCredentials bool
	// AllowMethods defines the value for the `Access-Control-Allow-Methods` header in the response to
	// preflight requests. You can add custom methods here, but make sure that all tus-specific methods
	// from DefaultCorsConfig.AllowMethods are included as well.
	AllowMethods string
	// AllowHeaders defines the value for the `Access-Control-Allow-Headers` header in the response to
	// preflight requests. You can add custom headers here, but make sure that all tus-specific headers
	// from DefaultCorsConfig.AllowHeaders are included as well.
	AllowHeaders string
	// MaxAge defines the value for the `Access-Control-Max-Age` header in the response to preflight
	// requests.
	MaxAge string
	// ExposeHeaders defines the value for the `Access-Control-Expose-Headers` header in the response to
	// actual requests. You can add custom headers here, but make sure that all tus-specific headers
	// from DefaultCorsConfig.ExposeHeaders are included as well.
	ExposeHeaders string
}

CorsConfig provides a way to customize the handling of Cross-Origin Resource Sharing (CORS). More details about CORS are available at https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS.

type DataStore

type DataStore interface {
	// Create a new upload using the size as the file's length. The method must
	// return a unique id which is used to identify the upload. If no backend
	// (e.g. Riak) specifies the id, you may want to use the uid package to
	// generate one. The properties Size and MetaData will be filled.
	NewUpload(ctx context.Context, info FileInfo) (upload Upload, err error)

	// GetUpload fetches the upload with a given ID. If no such upload can be found,
	// ErrNotFound must be returned.
	GetUpload(ctx context.Context, id string) (upload Upload, err error)
}

DataStore is the base interface for storages to implement. It provides functions to create new uploads and fetch existing ones.

Note: the context values passed to all functions are not the request's context, but a similar context. See HookEvent.Context for more details.
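For orientation only, here is a hypothetical, heavily simplified in-memory store that satisfies DataStore together with the Upload interface documented further below. It is a sketch, not a production store: everything lives in RAM, deferred lengths and extensions are ignored, and the package name memorystore is made up.

// Package memorystore is a hypothetical, minimal in-memory data store used
// only to illustrate the DataStore and Upload interfaces.
package memorystore

import (
	"bytes"
	"context"
	"crypto/rand"
	"encoding/hex"
	"io"
	"sync"

	"github.com/Korpenter/tusd/v2/pkg/handler"
)

// MemoryStore keeps all uploads in memory, guarded by a single mutex.
type MemoryStore struct {
	mu      sync.Mutex
	uploads map[string]*memoryUpload
}

func New() *MemoryStore {
	return &MemoryStore{uploads: make(map[string]*memoryUpload)}
}

// NewUpload creates a new upload and assigns a random ID if none was provided.
func (s *MemoryStore) NewUpload(ctx context.Context, info handler.FileInfo) (handler.Upload, error) {
	s.mu.Lock()
	defer s.mu.Unlock()

	if info.ID == "" {
		b := make([]byte, 16)
		if _, err := rand.Read(b); err != nil {
			return nil, err
		}
		info.ID = hex.EncodeToString(b)
	}

	upload := &memoryUpload{info: info}
	s.uploads[info.ID] = upload
	return upload, nil
}

// GetUpload fetches an existing upload or reports ErrNotFound.
func (s *MemoryStore) GetUpload(ctx context.Context, id string) (handler.Upload, error) {
	s.mu.Lock()
	defer s.mu.Unlock()

	upload, ok := s.uploads[id]
	if !ok {
		return nil, handler.ErrNotFound
	}
	return upload, nil
}

type memoryUpload struct {
	info handler.FileInfo
	data bytes.Buffer
}

// WriteChunk appends the chunk to the buffer. The handler has already
// validated that offset matches the current end of the upload.
func (u *memoryUpload) WriteChunk(ctx context.Context, offset int64, src io.Reader) (int64, error) {
	n, err := io.Copy(&u.data, src)
	u.info.Offset += n
	return n, err
}

func (u *memoryUpload) GetInfo(ctx context.Context) (handler.FileInfo, error) {
	return u.info, nil
}

func (u *memoryUpload) GetReader(ctx context.Context) (io.ReadCloser, error) {
	return io.NopCloser(bytes.NewReader(u.data.Bytes())), nil
}

func (u *memoryUpload) FinishUpload(ctx context.Context) error {
	return nil
}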

type Error

type Error struct {
	ErrorCode    string
	Message      string
	HTTPResponse HTTPResponse
}

Error represents an error with the intent to be sent in the HTTP response to the client. Therefore, it also contains a HTTPResponse, next to an error code and error message.

func NewError

func NewError(errCode string, message string, statusCode int) Error

NewError constructs a new Error object with the given error code and message. The corresponding HTTP response will have the provided status code and a body consisting of the error details. See the net/http package for standardized status codes.

func (Error) Error

func (e Error) Error() string

func (Error) Is

func (e1 Error) Is(target error) bool

type ErrorsTotalMap

type ErrorsTotalMap struct {
	// contains filtered or unexported fields
}

ErrorsTotalMap stores the counters for the different HTTP errors.

func (*ErrorsTotalMap) Load

func (e *ErrorsTotalMap) Load() map[ErrorsTotalMapEntry]*uint64

Load retrieves the map of the counter pointers atomically

type ErrorsTotalMapEntry

type ErrorsTotalMapEntry struct {
	ErrorCode  string
	StatusCode int
}

type FileInfo

type FileInfo struct {
	// ID is the unique identifier of the upload resource.
	ID string
	// Total file size in bytes specified in the NewUpload call
	Size int64
	// Indicates whether the total file size is deferred until later
	SizeIsDeferred bool
	// Offset in bytes (zero-based)
	Offset   int64
	MetaData MetaData
	// Indicates that this is a partial upload which will later be used to form
	// a final upload by concatenation. Partial uploads should not be processed
	// when they are finished since they are only incomplete chunks of files.
	IsPartial bool
	// Indicates that this is a final upload
	IsFinal bool
	// If the upload is a final one (see IsFinal) this will be a non-empty
	// ordered slice containing the ids of the uploads of which the final upload
	// will consist after concatenation.
	PartialUploads []string
	// Storage contains information about where the data storage saves the upload,
	// for example a file path. The available values vary depending on what data
	// store is used. This map may also be nil.
	Storage map[string]string
	// contains filtered or unexported fields
}

FileInfo contains information about a single upload resource.

func (FileInfo) StopUpload

func (f FileInfo) StopUpload(response HTTPResponse)

StopUpload interrupts a running upload from the server side. This means that the current request body is closed, so that the data store does not receive any more data. Furthermore, a response is sent to notify the client of the interruption, and the upload is terminated (if supported by the data store), so the upload cannot be resumed anymore. The response to the client can optionally be modified by providing values in the HTTPResponse struct.
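A hedged sketch of one way StopUpload might be used: a goroutine watches the UploadProgress channel and stops uploads that exceed a made-up limit. It assumes the handler h was created with NotifyUploadProgress enabled in the Config and that net/http is imported for the status code.

// watchProgress stops any upload that grows beyond maxBytes.
func watchProgress(h *handler.Handler, maxBytes int64) {
	for event := range h.UploadProgress {
		if event.Upload.Offset > maxBytes {
			event.Upload.StopUpload(handler.HTTPResponse{
				StatusCode: http.StatusBadRequest,
				Body:       "upload exceeded the allowed size",
			})
		}
	}
}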

type FileInfoChanges

type FileInfoChanges struct {
	// If ID is not empty, it will be passed to the data store, allowing
	// hooks to influence the upload ID. Be aware that a data store is not required to
	// respect a pre-defined upload ID and might overwrite or modify it. However,
	// all data stores in the github.com/tus/tusd package do respect pre-defined IDs.
	ID string

	// If MetaData is not nil, it replaces the entire user-defined meta data from
	// the upload creation request. You can add custom meta data fields this way
	// or ensure that only certain fields from the user-defined meta data are saved.
	// If you want to retain only specific entries from the user-defined meta data, you must
	// manually copy them into this MetaData field.
	// If you do not want to store any meta data, set this field to an empty map (`MetaData{}`).
	// If you want to keep the entire user-defined meta data, set this field to nil.
	MetaData MetaData

	// If Storage is not nil, it is passed to the data store to allow for minor adjustments
	// to the upload storage (e.g. destination file name). The details are specific for each
	// data store and should be looked up in their respective documentation.
	// Please be aware that this behavior is currently not supported by any data store in
	// the github.com/tus/tusd package.
	Storage map[string]string
}

FileInfoChanges collects changes that should be made to a FileInfo struct. This can be done using the PreUploadCreateCallback to modify certain properties before an upload is created. Properties which should not be modified (e.g. Size or Offset) are intentionally left out here.
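A hedged sketch of how a pre-create callback could combine validation with FileInfoChanges; the metadata key "filename" and the error code are invented for illustration, and the composer is assumed to be set up as in the overview:

config := handler.Config{
	StoreComposer: composer,
	BasePath:      "/files/",
	PreUploadCreateCallback: func(hook handler.HookEvent) (handler.HTTPResponse, handler.FileInfoChanges, error) {
		// Reject uploads that do not announce a filename in their meta data.
		name, ok := hook.Upload.MetaData["filename"]
		if !ok {
			err := handler.NewError("ERR_MISSING_FILENAME", "filename metadata is required", http.StatusBadRequest)
			return handler.HTTPResponse{}, handler.FileInfoChanges{}, err
		}

		// Keep only the filename in the stored meta data. ID and Storage
		// could be set here as well; leaving them empty keeps the defaults.
		changes := handler.FileInfoChanges{
			MetaData: handler.MetaData{"filename": name},
		}
		return handler.HTTPResponse{}, changes, nil
	},
}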

type HTTPHeader

type HTTPHeader map[string]string

type HTTPRequest

type HTTPRequest struct {
	// Method is the HTTP method, e.g. POST or PATCH.
	Method string
	// URI is the full HTTP request URI, e.g. /files/fooo.
	URI string
	// RemoteAddr contains the network address that sent the request.
	RemoteAddr string
	// Header contains all HTTP headers as present in the HTTP request.
	Header http.Header
}

HTTPRequest contains basic details of an incoming HTTP request.

type HTTPResponse

type HTTPResponse struct {
	// StatusCode is the status code, e.g. 200 or 400.
	StatusCode int
	// Body is the response body.
	Body string
	// Header contains additional HTTP headers for the response.
	Header HTTPHeader
}

HTTPResponse contains basic details of an outgoing HTTP response.

func (HTTPResponse) MergeWith

func (resp1 HTTPResponse) MergeWith(resp2 HTTPResponse) HTTPResponse

MergeWith returns a copy of resp1, where non-default values from resp2 overwrite values from resp1.
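For example (a sketch), merging a response that only sets a body onto a base response keeps the base's status code:

base := handler.HTTPResponse{StatusCode: http.StatusCreated, Body: "upload created"}
override := handler.HTTPResponse{Body: "upload created and queued"}

merged := base.MergeWith(override)
// merged.StatusCode is still 201, while merged.Body comes from override.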

type Handler

type Handler struct {
	*UnroutedHandler
	http.Handler
}

Handler is a ready-to-use handler with routing (using pat).

func NewHandler

func NewHandler(config Config) (*Handler, error)

NewHandler creates a routed tus protocol handler. This is the simplest way to use tusd but may not be as configurable as you require. If you are integrating this into an existing app you may like to use tusd.NewUnroutedHandler instead. Using tusd.NewUnroutedHandler allows the tus handlers to be combined into your existing router (aka mux) directly. It also allows the GET and DELETE endpoints to be customized. These are not part of the protocol so can be changed depending on your needs.

type HookEvent

type HookEvent struct {
	// Context provides access to the context from the HTTP request. This context is
	// not the exact value as the request context from http.Request.Context() but
	// a similar context that retains the same values as the request context. In
	// addition, Context will be cancelled after a short delay when the request context
	// is done. This delay is controlled by Config.GracefulRequestCompletionTimeout.
	//
	// The reason is that we want stores to be able to continue processing a request after
	// its context has been cancelled. For example, assume a PATCH request is incoming. If
	// the end-user pauses the upload, the connection is closed causing the request context
	// to be cancelled immediately. However, we want the store to be able to save the last
	// few bytes that were transmitted before the request was aborted. To allow this, we
	// copy the request context but cancel it with a brief delay to give the data store
	// time to finish its operations.
	Context context.Context `json:"-"`
	// Upload contains information about the upload that caused this hook
	// to be fired.
	Upload FileInfo
	// HTTPRequest contains details about the HTTP request that reached
	// tusd.
	HTTPRequest HTTPRequest
}

HookEvent represents an event from tusd which can be handled by the application.

type LengthDeclarableUpload

type LengthDeclarableUpload interface {
	DeclareLength(ctx context.Context, length int64) error
}

type LengthDeferrerDataStore

type LengthDeferrerDataStore interface {
	AsLengthDeclarableUpload(upload Upload) LengthDeclarableUpload
}

LengthDeferrerDataStore is the interface that must be implemented if the creation-defer-length extension should be enabled. The extension enables a client to upload files when their total size is not yet known. Instead, the client must send the total size as soon as it becomes known.

type Lock

type Lock interface {
	// Lock attempts to obtain an exclusive lock for the upload specified
	// by its id.
	// If the lock can be acquired, it will return without error. The requestUnlock
	// callback is invoked when another caller attempts to create a lock. In this
	// case, the holder of the lock should attempt to release the lock as soon
	// as possible.
	// If the lock is already held, the holder's requestUnlock function will be
	// invoked to request the lock to be released. If the context is cancelled before
	// the lock can be acquired, ErrLockTimeout will be returned without acquiring
	// the lock.
	Lock(ctx context.Context, requestUnlock func()) error
	// Unlock releases an existing lock for the given upload.
	Unlock() error
}

Lock is the interface for a lock as returned from a Locker.

type Locker

type Locker interface {
	// NewLock creates a new unlocked lock object for the given upload ID.
	NewLock(id string) (Lock, error)
}

Locker is the interface required for custom lock persisting mechanisms. Common ways to store this information are in memory, on disk or using an external service, such as Redis. When multiple processes are attempting to access an upload, whether it be by reading or writing, a synchronization mechanism is required to prevent data corruption, especially to ensure correct offset values and the proper order of chunks inside a single upload.

type MetaData

type MetaData map[string]string

type Metrics

type Metrics struct {
	// RequestsTotal counts the number of incoming requests per method.
	RequestsTotal map[string]*uint64
	// ErrorsTotal counts the number of returned errors by their message
	ErrorsTotal       *ErrorsTotalMap
	BytesReceived     *uint64
	UploadsFinished   *uint64
	UploadsCreated    *uint64
	UploadsTerminated *uint64
}

Metrics provides numbers about the usage of the tusd handler. Since these may be accessed from multiple goroutines, it is necessary to read and modify them atomically using the functions exposed in the sync/atomic package, such as atomic.LoadUint64. In addition the maps must not be modified to prevent data races.
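A hedged sketch of reading the counters, assuming h is a Handler or UnroutedHandler and that sync/atomic and log are imported:

created := atomic.LoadUint64(h.Metrics.UploadsCreated)
finished := atomic.LoadUint64(h.Metrics.UploadsFinished)
received := atomic.LoadUint64(h.Metrics.BytesReceived)
log.Printf("uploads created=%d finished=%d bytes received=%d", created, finished, received)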

type StoreComposer

type StoreComposer struct {
	Core DataStore

	UsesTerminater     bool
	Terminater         TerminaterDataStore
	UsesLocker         bool
	Locker             Locker
	UsesConcater       bool
	Concater           ConcaterDataStore
	UsesLengthDeferrer bool
	LengthDeferrer     LengthDeferrerDataStore
}

StoreComposer represents a composable data store. It consists of the core data store and optional extensions. Please consult the package's overview for a more detailed introduction to how to use this structure.

func NewStoreComposer

func NewStoreComposer() *StoreComposer

NewStoreComposer creates a new and empty store composer.

Example
package main

import (
	"github.com/Korpenter/tusd/v2/pkg/filestore"
	"github.com/Korpenter/tusd/v2/pkg/handler"
	"github.com/Korpenter/tusd/v2/pkg/memorylocker"
)

func main() {
	composer := handler.NewStoreComposer()

	fs := filestore.New("./data")
	fs.UseIn(composer)

	ml := memorylocker.New()
	ml.UseIn(composer)

	config := handler.Config{
		StoreComposer: composer,
	}

	_, _ = handler.NewHandler(config)
}
Output:

func (*StoreComposer) Capabilities

func (store *StoreComposer) Capabilities() string

Capabilities returns a string representing the provided extensions in a human-readable format meant for debugging.

func (*StoreComposer) UseConcater

func (store *StoreComposer) UseConcater(ext ConcaterDataStore)

func (*StoreComposer) UseCore

func (store *StoreComposer) UseCore(core DataStore)

UseCore will set the used core data store. If the argument is nil, the property will be unset.

func (*StoreComposer) UseLengthDeferrer

func (store *StoreComposer) UseLengthDeferrer(ext LengthDeferrerDataStore)

func (*StoreComposer) UseLocker

func (store *StoreComposer) UseLocker(ext Locker)

func (*StoreComposer) UseTerminater

func (store *StoreComposer) UseTerminater(ext TerminaterDataStore)

type TerminatableUpload

type TerminatableUpload interface {
	// Terminate an upload so any further requests to the upload resource will
	// return the ErrNotFound error.
	Terminate(ctx context.Context) error
}

type TerminaterDataStore

type TerminaterDataStore interface {
	AsTerminatableUpload(upload Upload) TerminatableUpload
}

TerminaterDataStore is the interface which must be implemented by DataStores if they want to receive DELETE requests using the Handler. If this interface is not implemented, no request handler for this method is attached.

type UnroutedHandler

type UnroutedHandler struct {

	// CompleteUploads is used to send notifications whenever an upload is
	// completed by a user. The HookEvent will contain information about this
	// upload after it is completed. Sending to this channel will only
	// happen if the NotifyCompleteUploads field is set to true in the Config
	// structure. Notifications will also be sent for completions using the
	// Concatenation extension.
	CompleteUploads chan HookEvent
	// TerminatedUploads is used to send notifications whenever an upload is
	// terminated by a user. The HookEvent will contain information about this
	// upload gathered before the termination. Sending to this channel will only
	// happen if the NotifyTerminatedUploads field is set to true in the Config
	// structure.
	TerminatedUploads chan HookEvent
	// UploadProgress is used to send notifications about the progress of the
	// currently running uploads. For each open PATCH request, every second
	// a HookEvent instance will be sent over this channel with the Offset field
	// being set to the number of bytes which have been transferred to the server.
	// Please be aware that this number may be higher than the number of bytes
	// which have been stored by the data store! Sending to this channel will only
	// happen if the NotifyUploadProgress field is set to true in the Config
	// structure.
	UploadProgress chan HookEvent
	// CreatedUploads is used to send notifications about uploads having been
	// created. It is triggered after creation, so the full HookEvent, including
	// the upload ID, is already available. It facilitates the post-create hook. Sending to
	// this channel will only happen if the NotifyCreatedUploads field is set to
	// true in the Config structure.
	CreatedUploads chan HookEvent
	// Metrics provides numbers of the usage for this handler.
	Metrics Metrics
	// contains filtered or unexported fields
}

UnroutedHandler exposes methods to handle requests as part of the tus protocol, such as PostFile, HeadFile, PatchFile and DelFile. In addition the GetFile method is provided which is, however, not part of the specification.

func NewUnroutedHandler

func NewUnroutedHandler(config Config) (*UnroutedHandler, error)

NewUnroutedHandler creates a new handler without routing using the given configuration. It exposes the http handlers which need to be combined with a router (aka mux) of your choice. If you are looking for a preconfigured handler, see NewHandler.

func (*UnroutedHandler) DelFile

func (handler *UnroutedHandler) DelFile(w http.ResponseWriter, r *http.Request)

DelFile terminates an upload permanently.

func (*UnroutedHandler) GetFile

func (handler *UnroutedHandler) GetFile(w http.ResponseWriter, r *http.Request)

GetFile handles requests to download a file using a GET request. This is not part of the specification.

func (*UnroutedHandler) HeadFile

func (handler *UnroutedHandler) HeadFile(w http.ResponseWriter, r *http.Request)

HeadFile returns the length and offset for the HEAD request

func (*UnroutedHandler) Middleware

func (handler *UnroutedHandler) Middleware(h http.Handler) http.Handler

Middleware checks various aspects of the request and ensures that it conforms with the spec. It also handles method overriding for clients which cannot make PATCH and DELETE requests. If you are using the tusd handlers directly, you will need to wrap at least the POST and PATCH endpoints in this middleware.
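One possible wiring of the UnroutedHandler into a plain net/http mux, shown as a sketch (the composer is assumed to be set up as in the overview; dispatching by method is just one option, and any router can be used instead):

h, err := handler.NewUnroutedHandler(handler.Config{
	StoreComposer: composer,
	BasePath:      "/files/",
})
if err != nil {
	log.Fatal(err)
}

// Middleware wraps the endpoints so the spec checks and method overriding
// are applied before the individual handlers run.
tusEndpoints := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case http.MethodPost:
		h.PostFile(w, r)
	case http.MethodHead:
		h.HeadFile(w, r)
	case http.MethodPatch:
		h.PatchFile(w, r)
	case http.MethodGet:
		h.GetFile(w, r)
	case http.MethodDelete:
		h.DelFile(w, r)
	default:
		w.WriteHeader(http.StatusMethodNotAllowed)
	}
})

mux := http.NewServeMux()
mux.Handle("/files/", http.StripPrefix("/files/", h.Middleware(tusEndpoints)))

log.Fatal(http.ListenAndServe(":8080", mux))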

func (*UnroutedHandler) PatchFile

func (handler *UnroutedHandler) PatchFile(w http.ResponseWriter, r *http.Request)

PatchFile adds a chunk to an upload. This operation is only allowed if enough space in the upload is left.

func (*UnroutedHandler) PostFile

func (handler *UnroutedHandler) PostFile(w http.ResponseWriter, r *http.Request)

PostFile creates a new file upload using the datastore after validating the length and parsing the metadata.

func (*UnroutedHandler) PostFileV2

func (handler *UnroutedHandler) PostFileV2(w http.ResponseWriter, r *http.Request)

PostFileV2 creates a new file upload using the datastore after validating the length and parsing the metadata.

func (*UnroutedHandler) SupportedExtensions

func (handler *UnroutedHandler) SupportedExtensions() string

SupportedExtensions returns a comma-separated list of the supported tus extensions. The availability of an extension usually depends on whether the provided data store implements some additional interfaces.

type Upload

type Upload interface {
	// Write the chunk read from src into the file specified by the id at the
	// given offset. The handler will take care of validating the offset and
	// limiting the size of the src to not overflow the file's size.
	// The handler will also lock resources while they are written to ensure only one
	// write happens at a time.
	// The function call must return the number of bytes written.
	WriteChunk(ctx context.Context, offset int64, src io.Reader) (int64, error)
	// Read the file information used to validate the offset and respond to HEAD
	// requests.
	GetInfo(ctx context.Context) (FileInfo, error)
	// GetReader returns an io.ReadCloser which allows reading the content of an
	// upload. It should attempt to provide a reader even if the upload has not
	// been finished yet, but this is not required.
	GetReader(ctx context.Context) (io.ReadCloser, error)
	// FinishUpload is called once an entire upload has been completed and allows
	// the data store to perform additional operations. These tasks may include but
	// are not limited to freeing unused resources or notifying other services.
	// For example, S3Store uses this for removing a temporary object.
	FinishUpload(ctx context.Context) error
}
