Documentation
Overview
Package moxio has common i/o functions.
Index
- Variables
- func Base64Writer(w io.Writer) io.WriteCloser
- func IsStorageSpace(err error) bool
- func LinkOrCopy(log mlog.Log, dst, src string, srcReaderOpt io.Reader, fileSync bool) (rerr error)
- func SyncDir(log mlog.Log, dir string) error
- func TLSInfo(cs tls.ConnectionState) (version, ciphersuite string)
- type AtReader
- type Bufpool
- type FlateWriter
- type LimitAtReader
- type LimitReader
- type PrefixConn
- type TraceReader
- type TraceWriter
- type Work
- type WorkQueue
Constants
This section is empty.
Variables
var ErrLimit = errors.New("input exceeds maximum size") // Returned by LimitReader.
var ErrLineTooLong = errors.New("line from remote too long") // Returned by Bufpool.Readline.
Functions
func Base64Writer added in v0.0.6
func Base64Writer(w io.Writer) io.WriteCloser
Base64Writer turns a writer for data into one that writes base64 content on \r\n separated lines of max 76+2 characters length.
func IsStorageSpace
func IsStorageSpace(err error) bool
IsStorageSpace returns whether the error indicates a storage space issue, such as a full disk, no inodes left, or a reached quota.
func LinkOrCopy added in v0.0.6
func LinkOrCopy(log mlog.Log, dst, src string, srcReaderOpt io.Reader, fileSync bool) (rerr error)
LinkOrCopy attempts to make a hardlink dst. If that fails, it falls back to a regular file copy. If srcReaderOpt is not nil, it is used for reading. If fileSync is true and the file is copied instead of hardlinked, fsync is called on the file after writing to ensure it is flushed to disk. Callers should also sync the directory of the destination file, but may want to do that after linking/copying multiple files. If dst was created and an error occurred, it is removed.
func TLSInfo added in v0.0.9
func TLSInfo(cs tls.ConnectionState) (version, ciphersuite string)
TLSInfo returns human-readable strings about the TLS connection, for use in logging.
Types
type Bufpool
type Bufpool struct {
// contains filtered or unexported fields
}
Bufpool caches byte slices for reuse during parsing of line-terminated commands.
func NewBufpool
NewBufpool makes a new pool, initially empty, but holding at most "max" buffers of "size" bytes each.
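A hypothetical sketch of such a bounded pool, assuming the channel-based idiom (the bufpool/newBufpool names are invented and the real Bufpool's API, which is oriented around reading lines, is richer):

```go
package main

import "fmt"

// bufpool holds at most "max" byte slices of "size" bytes each in a
// buffered channel. Get reuses a cached buffer or allocates a fresh one;
// Put returns a buffer, dropping it when the pool is already full.
type bufpool struct {
	c    chan []byte
	size int
}

func newBufpool(max, size int) *bufpool {
	return &bufpool{c: make(chan []byte, max), size: size}
}

func (b *bufpool) Get() []byte {
	select {
	case buf := <-b.c:
		return buf
	default:
		return make([]byte, b.size)
	}
}

func (b *bufpool) Put(buf []byte) {
	select {
	case b.c <- buf:
	default: // pool full, let the buffer be garbage collected
	}
}

func main() {
	p := newBufpool(2, 8192)
	buf := p.Get()
	fmt.Println(len(buf)) // prints 8192
	p.Put(buf)
}
```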
type FlateWriter added in v0.0.15
type FlateWriter struct {
// contains filtered or unexported fields
}
FlateWriter wraps a flate.Writer and ensures that no further Write/Flush/Close calls are made on the underlying flate writer once a panic has escaped it (e.g. raised by the destination writer the flate writer writes to). After a panic "through" a flate.Writer, its state is inconsistent and further calls could panic with out-of-bounds slice accesses.
func NewFlateWriter added in v0.0.15
func NewFlateWriter(w *flate.Writer) *FlateWriter
func (*FlateWriter) Close added in v0.0.15
func (w *FlateWriter) Close() error
func (*FlateWriter) Flush added in v0.0.15
func (w *FlateWriter) Flush() error
type LimitAtReader
LimitAtReader is an io.ReaderAt that returns ErrLimit when a read would extend beyond Limit.
type LimitReader
LimitReader reads up to Limit bytes, returning an error if more bytes are read. LimitReader can be used to enforce a maximum input length.
type PrefixConn
type PrefixConn struct {
	PrefixReader io.Reader // If not nil, reads are fulfilled from here. It is cleared when a read returns io.EOF.
	net.Conn
}
PrefixConn is a net.Conn prefixed with a reader that is drained first. Used for STARTTLS, where a buffered read has already consumed initial TLS data.
type TraceReader
type TraceReader struct {
// contains filtered or unexported fields
}
func NewTraceReader
NewTraceReader wraps reader "r" into a reader that logs all reads to "log" with log level trace, prefixed with "prefix".
func (*TraceReader) Read
func (r *TraceReader) Read(buf []byte) (int, error)
Read does a single Read on its underlying reader, logs data of successful reads, and returns the data read.
func (*TraceReader) SetTrace
func (r *TraceReader) SetTrace(level slog.Level)
type TraceWriter
type TraceWriter struct {
// contains filtered or unexported fields
}
func NewTraceWriter
NewTraceWriter wraps "w" into a writer that logs all writes to "log" with log level trace, prefixed with "prefix".
func (*TraceWriter) SetTrace
func (w *TraceWriter) SetTrace(level slog.Level)
type WorkQueue added in v0.0.7
type WorkQueue[T, R any] struct {
	// contains filtered or unexported fields
}
WorkQueue can be used to execute a workload where many items are processed with a slow step, and where a pool of worker goroutines executing the slow step helps. Reading messages from the database file is fast and cannot easily be done concurrently, but reading the message file from disk and parsing the headers is the bottleneck. The work queue can manage the goroutines that read the message file from disk and parse it.
func NewWorkQueue added in v0.0.7
func NewWorkQueue[T, R any](procs, size int, preparer func(in, out chan Work[T, R]), process func(T, R) error) *WorkQueue[T, R]
NewWorkQueue creates a new work queue with "procs" goroutines, and a total work queue size of "size" (e.g. 2*procs). The worker goroutines run "preparer", which should be a loop receiving work from "in" and sending the work result (with Err or Out set) on "out". The preparer function should return when the "in" channel is closed, the signal to stop. WorkQueue processes the results in the order they went in, so prepared work that was scheduled after earlier work that is not yet prepared will wait and be queued.
func (*WorkQueue[T, R]) Add added in v0.0.7
Add adds new work to be prepared to the queue. If the queue is full, it waits until space becomes available, i.e. when the head of the queue has work that becomes prepared. Add processes the prepared items to make space available.