mahi

package module
v0.1.4
Published: Sep 2, 2020 License: MIT Imports: 10 Imported by: 0

README

Mahi

Mahi is an all-in-one HTTP service for file uploading, processing, serving, and storage. Mahi supports chunked, resumable, and concurrent uploads. Mahi uses libvips behind the scenes, making it extremely fast and memory efficient.

Mahi currently supports any S3-compatible storage, including AWS S3, DigitalOcean Spaces, Wasabi, and Backblaze B2. The storage engine is set per application when the application is created.

Mahi supports different databases for storing file metadata and analytics. Currently, the two supported databases are PostgreSQL and BoltDB. The database of choice is selected in the config file.

Features

  • Chunked, resumable, and concurrent uploads
  • Fast image processing powered by libvips
  • Any S3-compatible storage engine (AWS S3, DigitalOcean Spaces, Wasabi, Backblaze B2)
  • File metadata and usage stats stored in PostgreSQL or BoltDB
  • On-the-fly file transformations via URL query params

Install

Libvips must be installed on your machine.

Ubuntu
sudo apt install libvips libvips-dev libvips-tools
MacOS
brew install vips

For other systems, check out the installation instructions here.

Install the mahid server:

go get -u github.com/threeaccents/mahi/...

This will install the mahid command in your $GOPATH/bin folder.

Usage

mahid -config=/path/to/config.toml

If no config is passed, Mahi looks for a mahi.toml file in the current directory.

Applications

Mahi has the concept of applications. Each application houses its own files and the storage engine for those files. This makes Mahi flexible across projects: one project can use S3 as its storage engine while another uses DO Spaces, and Mahi handles both for you.

Applications can be created via our Web API.

Uploads

Files are uploaded to Mahi via multipart/form-data requests. Along with the file data, you must also provide the application_id. Mahi handles processing and storing the file blob in the application's storage engine, along with storing the file metadata in the database. To view an example upload response, check out the Web API.
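
For example, a client can build such a request with Go's mime/multipart package. The sketch below is illustrative only: the endpoint path, the "file" field name, and the auth header are assumptions, so check the Web API docs for the real values.

package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	// Build the multipart/form-data body.
	var body bytes.Buffer
	w := multipart.NewWriter(&body)

	// application_id tells Mahi which application (and storage engine) owns the file.
	_ = w.WriteField("application_id", "your-application-id")

	// Attach the file data. The "file" field name is an assumption.
	f, err := os.Open("myimage.png")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	part, err := w.CreateFormFile("file", "myimage.png")
	if err != nil {
		panic(err)
	}
	if _, err := io.Copy(part, f); err != nil {
		panic(err)
	}
	w.Close()

	// The endpoint path and auth header are assumptions; see the Web API docs.
	req, _ := http.NewRequest(http.MethodPost, "https://yourdomain.com/upload", &body)
	req.Header.Set("Content-Type", w.FormDataContentType())
	req.Header.Set("Authorization", "Bearer your-auth-token")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}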

Large File Uploads

When dealing with large files, it is best to split the file into small chunks and upload each chunk separately. Mahi easily handles chunked uploads, storing each chunk and then rebuilding the whole file. Once the whole file is rebuilt, Mahi uploads it to the application's storage engine. To view an example upload response, check out the Web API.

Other benefits of chunking files are the ability to resume uploads and to upload multiple chunks concurrently. Mahi handles both scenarios for you.
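
As a client-side sketch, the loop below splits a file into chunks no larger than the default max_chunk_size of 10MB. How each chunk is actually sent (upload ID, field names, endpoint) is defined by the Web API, so only the splitting is shown.

package main

import (
	"fmt"
	"io"
	"os"
)

const maxChunkSize = 10 << 20 // 10MB, Mahi's default max_chunk_size

func main() {
	f, err := os.Open("large-video.mp4")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	buf := make([]byte, maxChunkSize)
	for chunk := 0; ; chunk++ {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			// Send buf[:n] to Mahi's chunk upload endpoint (see the Web API docs),
			// tagging it with the chunk number and a shared upload ID so the
			// server can rebuild the file once every chunk has arrived.
			fmt.Printf("chunk %d: %d bytes\n", chunk, n)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			panic(err)
		}
	}
}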

File Transformations (More Coming Soon)

Mahi supports file transformations via URL query params. Currently, the supported operations are:

  • Resize (width, height) ?width=100&height=100
  • Smart Crop ?crop=true
  • Flip ?flip=true
  • Flop ?flop=true
  • Zoom ?zoom=2
  • Black and White ?bw=true
  • Quality (JPEG) ?quality=100, Compression (PNG) ?compression=10
  • Format conversion: the output format is based on the file extension. To transform a PNG to WebP, just request the file with the .webp extension.

All queries can be used together. For example, to resize the width, make the image black and white, and change the format to WebP, the params would look like this:

https://yourdomain.com/myimage.webp?width=100&bw=true
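
If you build these URLs in Go, net/url keeps the query encoding correct. A small sketch:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Request a 100px-wide, black-and-white WebP version of myimage.
	// The format conversion comes from the .webp extension, the rest from query params.
	u := &url.URL{
		Scheme: "https",
		Host:   "yourdomain.com",
		Path:   "/myimage.webp",
	}

	q := url.Values{}
	q.Set("width", "100")
	q.Set("bw", "true")
	u.RawQuery = q.Encode()

	fmt.Println(u.String()) // https://yourdomain.com/myimage.webp?bw=true&width=100
}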

Stats

Mahi currently tracks these stats, both for specific applications and for the service as a whole:

  • Transformations: Total transformations
  • Unique Transformations: Unique transformations per file.
  • Bandwidth: Bytes served.
  • Storage: Bytes stored.
  • File Count: Total files.

These stats can be retrieved via our Web API.

Config

Mahi is configured via a TOML file. Here are TOML config examples. Configuration options include the following; a minimal example config is sketched after the list:

  • db_engine:string(default: bolt) The main database for Mahi. Valid options are postgres and bolt. This is not to be confused with the storage engine, which is set per application via the Web API.
  • http
    • port:int(default: 4200) the port to run mahi on.
    • https:boolean(default: false) configures server to accept https requests.
    • ssl_cert_path:string path to ssl certificate. Only required if https is set to true.
    • ssl_key_path:string path to ssl key. Only required if https is set to true.
  • security
    • auth_token:string token for authenticating requests
    • aes_key:string key for use with AES-256 encryption. This is used to encrypt storage secrets.
  • upload
    • chunk_upload_dir:string(default: ./data/chunks) directory for storing chunks while an upload is happening. Once an upload is completed, the chunks are deleted.
    • full_file_dir:string(default: ./data/files) full_files are temp files used while building chunks or downloading files from the storage engine. These temp files are removed once the request is completed.
    • max_chunk_size:int64(default: 10MB) max size of a file chunk in bytes.
    • max_file_size_upload:int64(default: 50MB) max size of a file for a regular upload in bytes.
    • max_transform_file_size:int64(default: 50MB) max size of a file that can be transformed in bytes.
  • bolt (only used if db_engine is set to bolt)
    • dir:string(default: ./data/mahi/mahi.db) path of the Bolt database file.
  • postgresql (only used if db_engine is set to postgres)
    • database:string(default: mahi) name of database.
    • host:string(default: localhost) host of database.
    • port:int(default: 5432) port of database.
    • user:string(default: mahi) username of database.
    • password:string(default: ) password of database.
    • max_conns:int(default: 10 connections per CPU) maximum connections for database pool.
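
A minimal config sketch using the options above. The exact table layout is an assumption based on the option list (see the linked TOML examples for the authoritative format), and every value shown is illustrative; any option you omit falls back to its default:

db_engine = "postgres"

[http]
port = 4200
https = false

[security]
auth_token = "replace-with-a-long-random-token"
aes_key = "replace-with-a-32-byte-aes-256-key"

[upload]
chunk_upload_dir = "./data/chunks"
full_file_dir = "./data/files"
max_chunk_size = 10485760        # 10MB
max_file_size_upload = 52428800  # 50MB
max_transform_file_size = 52428800

[postgresql]
database = "mahi"
host = "localhost"
port = 5432
user = "mahi"
password = "secret"
max_conns = 10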

Postgres

To use Postgres, the necessary tables must be created first. The SQL files are located in the migrations folder. In the future, Mahi will ship with a migrate command that automatically creates the necessary tables for you. For now, you have two options: install tern, cd into the migrations folder, and run tern migrate (as shown below); or copy and paste the SQL directly into a GUI or command-line Postgres session.
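
A rough sketch of the first option, assuming your connection settings live in a tern.conf file inside the migrations folder (see the tern README for its configuration format):

go get -u github.com/jackc/tern
cd migrations
tern migrate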

Documentation

Index

Constants

const (
	StorageEngineWasabi       = "wasabi"
	StorageEngineDigitalOcean = "digital_ocean"
	StorageEngineS3           = "s3"
	StorageEngineB2           = "b2"
	StorageEngineAzureBlob    = "azure_blob"
)

const (
	ErrUnauthorized = Error("unauthorized")
	ErrInternal     = Error("internal error")
	ErrInvalidDate  = Error("invalid date")
	ErrNotFound     = Error("resource not found")
	ErrBadRequest   = Error("bad request")
	ErrInvalidJSON  = Error("invalid json")
)

General errors.

const (
	ErrApplicationNotFound  = Error("application not found")
	ErrApplicationNameTaken = Error("application name taken")
)

Application errors

const (
	ErrBucketNotFound    = Error("bucket does not exist")
	ErrInvalidBucketName = Error("invalid bucket name")
	ErrInvalidStorageKey = Error("invalid api keys or api keys do not have correct permissions")
)

Storage errors

const (
	ErrFileNotFound            = Error("file not found")
	ErrFileToLargeToTransform  = Error("file is to large to transform")
	ErrTransformationNotUnique = Error("transformation is not unique")
)

File errors

const DateLayout = "2006-01-02"

const DefaultFilePaginationLimit = 25

const (
	ErrAuthorizationHeaderMissing = Error("authorization header is missing")
)

const (
	ErrUsageNotFound = Error("usage not found")
)

Usage errors

Variables

Functions

func RandInt

func RandInt(min, max int) int

func RandStr

func RandStr(length int) string

Types

type Application

type Application struct {
	ID               string
	Name             string
	Description      string
	StorageAccessKey string
	StorageSecretKey string
	StorageBucket    string
	StorageEndpoint  string
	StorageRegion    string
	StorageEngine    string
	DeliveryURL      string
	CreatedAt        time.Time
	UpdatedAt        time.Time
}

type ApplicationService

type ApplicationService interface {
	Create(ctx context.Context, n *NewApplication) (*Application, error)
	Application(ctx context.Context, id string) (*Application, error)
	Applications(ctx context.Context, sinceID string, limit int) ([]*Application, error)
	Delete(ctx context.Context, id string) error
	Update(ctx context.Context, u *UpdateApplication) (*Application, error)
	FileBlobStorage(engine, accessKey, secretKey, region, endpoint string) (FileBlobStorage, error)
}

ApplicationService defines the business logic for dealing with all aspects of an application.
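
As a sketch of how an application might be created through this interface, the snippet below fills in a NewApplication backed by S3-compatible storage. It assumes you already have a value that implements ApplicationService (the concrete implementations live in Mahi's adapter and core packages), and all credential values are placeholders.

package example

import (
	"context"

	"github.com/threeaccents/mahi"
)

// CreateS3Application is a sketch of creating an application backed by
// S3-compatible storage. svc is any mahi.ApplicationService implementation.
func CreateS3Application(ctx context.Context, svc mahi.ApplicationService) (*mahi.Application, error) {
	return svc.Create(ctx, &mahi.NewApplication{
		Name:             "my-project",
		Description:      "images for my project",
		StorageEngine:    mahi.StorageEngineS3, // or StorageEngineB2, StorageEngineWasabi, ...
		StorageAccessKey: "ACCESS_KEY",
		StorageSecretKey: "SECRET_KEY", // encrypted at rest with the configured aes_key
		StorageBucket:    "my-bucket",
		StorageRegion:    "us-east-1",
		StorageEndpoint:  "https://s3.us-east-1.amazonaws.com",
		DeliveryURL:      "https://cdn.example.com",
	})
}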

type ApplicationStorage

type ApplicationStorage interface {
	Store(ctx context.Context, n *NewApplication) (*Application, error)
	Application(ctx context.Context, id string) (*Application, error)
	Applications(ctx context.Context, sinceID string, limit int) ([]*Application, error)
	Delete(ctx context.Context, id string) error
	Update(ctx context.Context, u *UpdateApplication) (*Application, error)
}

ApplicationStorage handles communication with the database for handling applications.

type EncryptionService

type EncryptionService interface {
	Encrypt(plaintext []byte) ([]byte, error)
	EncryptToString(plaintext []byte) (string, error)
	Decrypt(cipherText []byte) ([]byte, error)
	DecryptString(cipherText string) ([]byte, error)
}

EncryptionService manages the encrypting and decrypting of data

type Error

type Error string

Error represents a Mahi error.

func (Error) Error

func (e Error) Error() string

Error returns the error message.
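
Because Error is a plain string type, the exported error constants can be compared directly (or with errors.Is) to branch on specific failures. A small sketch, assuming a FileService value:

package example

import (
	"context"
	"errors"
	"fmt"

	"github.com/threeaccents/mahi"
)

// LookUpFile shows how callers can branch on Mahi's sentinel error constants.
func LookUpFile(ctx context.Context, fs mahi.FileService, id string) {
	f, err := fs.File(ctx, id)
	switch {
	case errors.Is(err, mahi.ErrFileNotFound):
		fmt.Println("no such file:", id)
	case err != nil:
		fmt.Println("unexpected error:", err)
	default:
		fmt.Println("found", f.Filename)
	}
}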

type File

type File struct {
	ID            string
	ApplicationID string
	Filename      string
	FileBlobID    string
	Size          int64
	MIMEType      string
	MIMEValue     string
	Extension     string
	URL           string
	Hash          string
	Width         int
	Height        int
	CreatedAt     time.Time
	UpdatedAt     time.Time
}

File holds all the metadata of a file, from its type and size to who uploaded it and what application it was uploaded to.

func (*File) IsImage

func (f *File) IsImage() bool

func (*File) IsTransformable

func (f *File) IsTransformable() bool

type FileBlob

type FileBlob struct {
	ID        string
	Data      io.ReadCloser
	MIMEValue string
	Size      int64

	// TempFileName this is used to determine if we need to delete the temp file after using the FileBlob
	TempFileName string
}

FileBlob holds the properties needed for the blob of a file.

func (*FileBlob) Bytes

func (b *FileBlob) Bytes(p []byte) (int, error)

Bytes transforms the data of the FileBlob into a byte array

func (*FileBlob) Close

func (b *FileBlob) Close() error

Close closes the io.ReadCloser and deletes the temp file if one was used

func (*FileBlob) IsImage

func (b *FileBlob) IsImage() bool

func (*FileBlob) IsTransformable

func (b *FileBlob) IsTransformable() bool

type FileBlobStorage

type FileBlobStorage interface {
	Upload(ctx context.Context, bucket string, b *FileBlob) error
	CreateBucket(ctx context.Context, bucket string) error
	FileBlob(ctx context.Context, bucket, id, tempDir string) (*FileBlob, error)
}

type FileServeService

type FileServeService interface {
	Serve(ctx context.Context, path *url.URL, opts TransformationOption) (*FileBlob, error)
}

FileServeService handles serving the file over http

type FileService

type FileService interface {
	Create(ctx context.Context, n *NewFile) (*File, error)
	File(ctx context.Context, id string) (*File, error)
	ApplicationFiles(ctx context.Context, applicationID, sinceID string, limit int) ([]*File, error)
	Delete(ctx context.Context, id string) error
}

FileService defines the business logic for dealing with all aspects of a file.

type FileStorage

type FileStorage interface {
	Store(ctx context.Context, n *NewFile) (*File, error)
	File(ctx context.Context, id string) (*File, error)
	FileByFileBlobID(ctx context.Context, fileBlobID string) (*File, error)
	ApplicationFiles(ctx context.Context, applicationID, sinceID string, limit int) ([]*File, error)
	Delete(ctx context.Context, id string) error
}

FileStorage handles communication with the database for handling files.

type NewApplication

type NewApplication struct {
	Name             string
	Description      string
	StorageAccessKey string
	StorageSecretKey string
	StorageBucket    string
	StorageEndpoint  string
	StorageRegion    string
	StorageEngine    string
	DeliveryURL      string
}

type NewFile

type NewFile struct {
	ApplicationID string
	Filename      string
	FileBlobID    string
	Size          int64
	MIMEType      string
	MIMEValue     string
	Extension     string
	URL           string
	Hash          string
	Width         int
	Height        int
}

NewFile is a helper struct for creating a new file

func (*NewFile) IsImage

func (f *NewFile) IsImage() bool

type NewTransformation

type NewTransformation struct {
	ApplicationID string
	FileID        string
	Actions       TransformationOption
}

type NewUsage

type NewUsage struct {
	ApplicationID         string
	Transformations       int64
	UniqueTransformations int64
	Bandwidth             int64
	Storage               int64
	FileCount             int64
	StartDate             time.Time
	EndDate               time.Time
}

type TotalUsage

type TotalUsage struct {
	Transformations       int64
	UniqueTransformations int64
	Bandwidth             int64
	Storage               int64
	FileCount             int64
	StartDate             time.Time
	EndDate               time.Time
}

type TransformService

type TransformService interface {
	Transform(ctx context.Context, f *File, blob *FileBlob, opts TransformationOption) (*FileBlob, error)
}

type TransformStorage

type TransformStorage interface {
	Store(ctx context.Context, n *NewTransformation) (*Transformation, error)
}

type Transformation

type Transformation struct {
	ID            string
	ApplicationID string
	FileID        string
	Actions       TransformationOption
	CreatedAt     time.Time
	UpdatedAt     time.Time
}

type TransformationOption

type TransformationOption struct {
	// Width
	Width int
	// Height
	Height int
	// Format
	Format string
	// Quality the quality of the JPEG image defaults to 80.
	Quality int
	//  Compression compression for a PNG image defaults to 6.
	Compression int
	// Crop uses lib vips smart crop to crop image
	Crop bool
	// Rotate image rotation angle. Must be a multiple of 90
	Rotate int
	// Flip flips an image
	Flip bool
	// Flop flops an image
	Flop bool
	// Zoom
	Zoom int
	// black and white
	BW bool
}
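
As an illustration of how the README's URL query params map onto this struct, here is a hypothetical decoder; the real mapping is done inside Mahi's serve and transform services, so treat this purely as a sketch:

package example

import (
	"net/url"
	"path/filepath"
	"strconv"
	"strings"

	"github.com/threeaccents/mahi"
)

// optionsFromURL is a hypothetical helper translating query params such as
// ?width=100&bw=true into a mahi.TransformationOption.
func optionsFromURL(u *url.URL) mahi.TransformationOption {
	q := u.Query()
	width, _ := strconv.Atoi(q.Get("width"))
	height, _ := strconv.Atoi(q.Get("height"))
	quality, _ := strconv.Atoi(q.Get("quality"))
	compression, _ := strconv.Atoi(q.Get("compression"))
	zoom, _ := strconv.Atoi(q.Get("zoom"))

	return mahi.TransformationOption{
		Width:       width,
		Height:      height,
		Quality:     quality,
		Compression: compression,
		Zoom:        zoom,
		Crop:        q.Get("crop") == "true",
		Flip:        q.Get("flip") == "true",
		Flop:        q.Get("flop") == "true",
		BW:          q.Get("bw") == "true",
		// The output format comes from the requested file extension, e.g. ".webp".
		Format: strings.TrimPrefix(filepath.Ext(u.Path), "."),
	}
}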

type UpdateApplication

type UpdateApplication struct {
	ID          string
	Name        string
	Description string
}

type UpdateUsage

type UpdateUsage struct {
	ApplicationID         string
	Transformations       int64
	UniqueTransformations int64
	Bandwidth             int64
	Storage               int64
	FileCount             int64
	StartDate             time.Time
	EndDate               time.Time
}

type UploadService

type UploadService interface {
	Upload(ctx context.Context, r *multipart.Reader) (*File, error)
	ChunkUpload(ctx context.Context, r *multipart.Reader) error
	CompleteChunkUpload(ctx context.Context, applicationID, uploadID, filename string) (*File, error)
}
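
A rough sketch of the chunked-upload flow against this interface: feed each chunk's multipart reader to ChunkUpload, then call CompleteChunkUpload once every chunk has been stored. The readers would normally come from incoming HTTP requests; here they are passed in directly.

package example

import (
	"context"
	"mime/multipart"

	"github.com/threeaccents/mahi"
)

// uploadInChunks is a hypothetical sketch: each element of chunks is the
// multipart.Reader of one chunk-upload request.
func uploadInChunks(ctx context.Context, svc mahi.UploadService, chunks []*multipart.Reader,
	applicationID, uploadID, filename string) (*mahi.File, error) {

	// Store every chunk; Mahi keeps them under chunk_upload_dir until the upload completes.
	for _, r := range chunks {
		if err := svc.ChunkUpload(ctx, r); err != nil {
			return nil, err
		}
	}

	// Rebuild the full file and push it to the application's storage engine.
	return svc.CompleteChunkUpload(ctx, applicationID, uploadID, filename)
}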

type Usage

type Usage struct {
	ID                    string
	ApplicationID         string
	Transformations       int64
	UniqueTransformations int64
	Bandwidth             int64
	Storage               int64
	FileCount             int64
	StartDate             time.Time
	EndDate               time.Time
	CreatedAt             time.Time
	UpdatedAt             time.Time
}

type UsageService

type UsageService interface {
	Update(ctx context.Context, u *UpdateUsage) error
	Usages(ctx context.Context, startDate, endDate time.Time) ([]*TotalUsage, error)
	ApplicationUsages(ctx context.Context, applicationID string, startTime, endTime time.Time) ([]*Usage, error)
}

type UsageStorage

type UsageStorage interface {
	Update(ctx context.Context, u *UpdateUsage) (*Usage, error)
	Usages(ctx context.Context, startDate, endDate time.Time) ([]*TotalUsage, error)
	ApplicationUsages(ctx context.Context, applicationID string, startTime, endTime time.Time) ([]*Usage, error)
}

Directories

Path Synopsis
adapter
aes
s3
cmd
core
libs
transport
