Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrSkipped means the repository was skipped because there was nothing to backup
	ErrSkipped = errors.New("repository skipped")
	// ErrDoesntExist means that the data was not found.
	ErrDoesntExist = errors.New("doesn't exist")
)
Functions ¶
This section is empty.
Types ¶
type CreatePipeline ¶
type CreatePipeline interface {
	Create(context.Context, *CreateRequest)
	Done() error
}
CreatePipeline is a pipeline that only handles creating backups
type CreateRequest ¶
type CreateRequest struct {
	Server     storage.ServerInfo
	Repository *gitalypb.Repository
}
CreateRequest is the request to create a backup
type FilesystemSink ¶ added in v14.2.0
type FilesystemSink struct {
// contains filtered or unexported fields
}
FilesystemSink is a sink for creating and restoring backups from the local filesystem.
func NewFilesystemSink ¶ added in v14.2.0
func NewFilesystemSink(path string) *FilesystemSink
NewFilesystemSink returns a sink that uses a local filesystem to work with data.
func (*FilesystemSink) GetReader ¶ added in v14.2.0
func (fs *FilesystemSink) GetReader(ctx context.Context, relativePath string) (io.ReadCloser, error)
GetReader returns a reader for the requested file path. It is the caller's responsibility to close the returned reader once it is no longer needed. If relativePath doesn't exist, ErrDoesntExist is returned.
type Manager ¶ added in v14.2.0
type Manager struct {
// contains filtered or unexported fields
}
Manager manages the process of creating and restoring backups.
func NewManager ¶ added in v14.2.0
NewManager creates and returns an initialized *Manager instance.
type ParallelCreatePipeline ¶
type ParallelCreatePipeline struct {
// contains filtered or unexported fields
}
ParallelCreatePipeline is a pipeline that creates backups in parallel
func NewParallelCreatePipeline ¶
func NewParallelCreatePipeline(next CreatePipeline, n int) *ParallelCreatePipeline
NewParallelCreatePipeline creates a new ParallelCreatePipeline where `next` is the pipeline called to create the backups and `n` is the number of parallel backups that will run.
func (*ParallelCreatePipeline) Create ¶
func (p *ParallelCreatePipeline) Create(ctx context.Context, req *CreateRequest)
Create queues a call to `next.Create` which will be run in parallel
func (*ParallelCreatePipeline) Done ¶
func (p *ParallelCreatePipeline) Done() error
Done waits for any in-progress calls to `Create` to complete, then reports any accumulated errors
type ParallelStorageCreatePipeline ¶ added in v14.1.0
type ParallelStorageCreatePipeline struct {
// contains filtered or unexported fields
}
ParallelStorageCreatePipeline is a pipeline that creates backups in parallel limited per storage
func NewParallelStorageCreatePipeline ¶ added in v14.1.0
func NewParallelStorageCreatePipeline(next CreatePipeline, n int) *ParallelStorageCreatePipeline
NewParallelStorageCreatePipeline creates a new ParallelStorageCreatePipeline where `next` is the pipeline called to create the backups and `n` is the number of parallel backups that will run per storage. Since the number of storages is unknown at initialisation, workers are created lazily as new storage names are encountered.
func (*ParallelStorageCreatePipeline) Create ¶ added in v14.1.0
func (p *ParallelStorageCreatePipeline) Create(ctx context.Context, req *CreateRequest)
Create queues a request to create a backup. Requests are processed by n workers per storage.
func (*ParallelStorageCreatePipeline) Done ¶ added in v14.1.0
func (p *ParallelStorageCreatePipeline) Done() error
Done waits for any in-progress calls to `Create` to complete, then reports any accumulated errors
type Pipeline ¶
type Pipeline struct {
// contains filtered or unexported fields
}
Pipeline handles a series of requests to create/restore backups. Pipeline encapsulates error handling for the caller.
func NewPipeline ¶
func NewPipeline(log logrus.FieldLogger, strategy Strategy) *Pipeline
NewPipeline creates a new pipeline
func (*Pipeline) Create ¶
func (p *Pipeline) Create(ctx context.Context, req *CreateRequest)
Create requests that a repository backup be created
type RestoreRequest ¶
type RestoreRequest struct {
	Server       storage.ServerInfo
	Repository   *gitalypb.Repository
	AlwaysCreate bool
}
RestoreRequest is the request to restore from a backup
type Sink ¶ added in v14.2.0
type Sink interface {
	// Write saves all the data from r under relativePath.
	Write(ctx context.Context, relativePath string, r io.Reader) error
	// GetReader returns a reader that serves the data stored under relativePath.
	// If relativePath doesn't exist, ErrDoesntExist is returned.
	GetReader(ctx context.Context, relativePath string) (io.ReadCloser, error)
}
Sink is an abstraction over the real storage used for storing/restoring backups.
type StorageServiceSink ¶ added in v14.2.0
type StorageServiceSink struct {
// contains filtered or unexported fields
}
StorageServiceSink uses a storage engine that is selected based on the URL provided on creation.
func NewStorageServiceSink ¶ added in v14.2.0
func NewStorageServiceSink(ctx context.Context, url string) (*StorageServiceSink, error)
NewStorageServiceSink returns an initialized StorageServiceSink instance. The storage engine is chosen based on the provided url value and a set of pre-registered blank imports in that file. It is the caller's responsibility to provide all required environment variables so that the storage engine driver is properly initialized.
func (*StorageServiceSink) Close ¶ added in v14.2.0
func (s *StorageServiceSink) Close() error
Close releases resources associated with the bucket communication.
func (*StorageServiceSink) GetReader ¶ added in v14.2.0
func (s *StorageServiceSink) GetReader(ctx context.Context, relativePath string) (io.ReadCloser, error)
GetReader returns a reader to consume the data from the configured bucket. It is the caller's responsibility to close the reader after use.
type Strategy ¶
type Strategy interface {
	Create(context.Context, *CreateRequest) error
	Restore(context.Context, *RestoreRequest) error
}
Strategy used to create/restore backups