cmn

package
v1.3.19
Published: Jun 6, 2023 License: MIT Imports: 39 Imported by: 0

Documentation

Overview

Package cmn provides common constants, types, and utilities for AIS clients and AIStore.

  • Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.

Code generated by the command `msgp -file cmn/objlist.go -tests=false -marshal=false -unexported`, where msgp is tinylib/msgp; see docs/msgp.md. DO NOT EDIT.

Index

Constants

View Source
const (
	PropBucketAccessAttrs  = "access"             // Bucket access attributes.
	PropBucketVerEnabled   = "versioning.enabled" // Enable/disable object versioning in a bucket.
	PropBucketCreated      = "created"            // Bucket creation time.
	PropBackendBck         = "backend_bck"
	PropBackendBckName     = PropBackendBck + ".name"
	PropBackendBckProvider = PropBackendBck + ".provider"
)
View Source
const (
	IgnoreReaction = "ignore"
	WarnReaction   = "warn"
	AbortReaction  = "abort"
)

dsort

View Source
const (
	MinSliceCount = 1  // minimum number of data or parity slices
	MaxSliceCount = 32 // maximum --/--
)
View Source
const (
	KeepaliveHeartbeatType = "heartbeat"
	KeepaliveAverageType   = "average"
)
View Source
const (
	FmtErrIntegrity      = "[%s%d, for troubleshooting see %s/blob/master/docs/troubleshooting.md]"
	FmtErrUnmarshal      = "%s: failed to unmarshal %s (%s), err: %w"
	FmtErrMorphUnmarshal = "%s: failed to unmarshal %s (%T), err: %w"
	FmtErrUnknown        = "%s: unknown %s %q"
	FmtErrBackwardCompat = "%v (backward compatibility is supported only one version back, e.g. 3.9 => 3.10)"

	EmptyProtoSchemeForURL = "empty protocol scheme for URL path"

	BadSmapPrefix = "[bad cluster map]"
)
View Source
const (
	RetryLogVerbose = iota
	RetryLogQuiet
	RetryLogOff
)
View Source
const (
	NetPublic       = "PUBLIC"
	NetIntraControl = "INTRA-CONTROL"
	NetIntraData    = "INTRA-DATA"
)
View Source
const (
	DefaultMaxIdleConns        = 64
	DefaultMaxIdleConnsPerHost = 16
	DefaultIdleConnTimeout     = 8 * time.Second
	DefaultWriteBufferSize     = 64 * cos.KiB
	DefaultReadBufferSize      = 64 * cos.KiB
	DefaultSendRecvBufferSize  = 128 * cos.KiB
)
View Source
const (
	// source of the cold-GET and download; the values include all
	// 3rd party backend providers (remote AIS not including)
	SourceObjMD = "source"

	// downloader' source is "web"
	WebObjMD = "web"

	VersionObjMD = "version" // "generation" for GCP, "version" for AWS but only if the bucket is versioned, etc.
	CRC32CObjMD  = cos.ChecksumCRC32C
	MD5ObjMD     = cos.ChecksumMD5
	ETag         = cos.HdrETag

	OrigURLObjMD = "orig_url"

	// additional backend
	LastModified = "LastModified"
)

LOM custom metadata stored under `lomCustomMD`.

View Source
const (
	VersionAIStore = "3.17"
	VersionCLI     = "1.2"
	VersionLoader  = "1.6"
	VersionAuthN   = "1.0"
)
View Source
const (
	MetaverSmap  = 1 // Smap (cluster map) formatting version (jsp)
	MetaverBMD   = 2 // BMD (bucket metadata) --/-- (jsp)
	MetaverRMD   = 1 // Rebalance MD (jsp)
	MetaverVMD   = 1 // Volume MD (jsp)
	MetaverEtlMD = 1 // ETL MD (jsp)

	MetaverLOM = 1 // LOM

	MetaverConfig      = 2 // Global Configuration (jsp)
	MetaverAuthNConfig = 1 // Authn config (jsp) // ditto
	MetaverAuthTokens  = 1 // Authn tokens (jsp) // ditto

	MetaverMetasync = 1 // metasync over network formatting version (jsp)

	MetaverJSP = jsp.Metaver // `jsp` own encoding version
)
View Source
const AwsMultipartDelim = "-"
View Source
const GitHubHome = "https://github.com/artashesbalabekyan/aistore"
View Source
const MsgpLsoBufSize = 32 * cos.KiB
View Source
const (
	// NsGlobalUname is hardcoded here to avoid allocating it via Uname()
	// (the most common use case)
	NsGlobalUname = "@#"
)

Variables

View Source
var (
	// NsGlobal represents *this* cluster's global namespace that is used by default when
	// no specific namespace was defined or provided by the user.
	NsGlobal = Ns{}
	// NsAnyRemote represents any remote cluster. As such, NsGlobalRemote applies
	// exclusively to AIS (provider) given that other Backend providers are remote by definition.
	NsAnyRemote = Ns{UUID: string(apc.NsUUIDPrefix)}
)
View Source
var (
	ErrSkip             = errors.New("skip")
	ErrStartupTimeout   = errors.New("startup timeout")
	ErrQuiesceTimeout   = errors.New("timed out waiting for quiescence")
	ErrNotEnoughTargets = errors.New("not enough target nodes")
	ErrNoMountpaths     = errors.New("no mountpaths")

	// aborts
	ErrXactRenewAbort   = errors.New("renewal abort")
	ErrXactUserAbort    = errors.New("user abort")              // via apc.ActXactStop
	ErrXactICNotifAbort = errors.New("IC(notifications) abort") // ditto
	ErrXactNoErrAbort   = errors.New("no-error abort")
)
View Source
var BackendHelpers = struct {
	Amazon backendFuncs
	Azure  backendFuncs
	Google backendFuncs
	HDFS   backendFuncs
	HTTP   backendFuncs
}{
	Amazon: backendFuncs{
		EncodeVersion: func(v any) (string, bool) {
			switch x := v.(type) {
			case *string:
				if awsIsVersionSet(x) {
					return *x, true
				}
				return "", false
			case string:
				if awsIsVersionSet(&x) {
					return x, true
				}
				return x, false
			default:
				debug.FailTypeCast(v)
				return "", false
			}
		},
		EncodeCksum: func(v any) (string, bool) {
			switch x := v.(type) {
			case *string:
				if strings.Contains(*x, AwsMultipartDelim) {
					return *x, true
				}
				cksum, _ := strconv.Unquote(*x)
				return cksum, true
			case string:
				return x, true
			default:
				debug.FailTypeCast(v)
				return "", false
			}
		},
	},
	Azure: backendFuncs{
		EncodeVersion: func(v any) (string, bool) {
			switch x := v.(type) {
			case string:
				x = strings.Trim(x, "\"")
				return x, x != ""
			default:
				debug.FailTypeCast(v)
				return "", false
			}
		},
		EncodeCksum: func(v any) (string, bool) {
			switch x := v.(type) {
			case string:
				decoded, err := base64.StdEncoding.DecodeString(x)
				if err != nil {
					return "", false
				}
				return hex.EncodeToString(decoded), true
			case []byte:
				return hex.EncodeToString(x), true
			default:
				debug.FailTypeCast(v)
				return "", false
			}
		},
	},
	Google: backendFuncs{
		EncodeVersion: func(v any) (string, bool) {
			switch x := v.(type) {
			case string:
				return x, x != ""
			case int64:
				return strconv.FormatInt(x, 10), true
			default:
				debug.FailTypeCast(v)
				return "", false
			}
		},
		EncodeCksum: func(v any) (string, bool) {
			switch x := v.(type) {
			case string:
				decoded, err := base64.StdEncoding.DecodeString(x)
				if err != nil {
					return "", false
				}
				return hex.EncodeToString(decoded), true
			case []byte:
				return hex.EncodeToString(x), true
			case uint32:

				b := []byte{byte(x >> 24), byte(x >> 16), byte(x >> 8), byte(x)}
				return base64.StdEncoding.EncodeToString(b), true
			default:
				debug.FailTypeCast(v)
				return "", false
			}
		},
	},
	HDFS: backendFuncs{
		EncodeCksum: func(v any) (cksumValue string, isSet bool) {
			switch x := v.(type) {
			case []byte:
				return hex.EncodeToString(x), true
			default:
				debug.FailTypeCast(v)
				return "", false
			}
		},
	},
	HTTP: backendFuncs{
		EncodeVersion: func(v any) (string, bool) {
			switch x := v.(type) {
			case string:

				x = strings.TrimPrefix(x, "W/")
				x = strings.Trim(x, "\"")
				return x, x != ""
			default:
				debug.FailTypeCast(v)
				return "", false
			}
		},
	},
}
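The checksum re-encoding performed by the Azure/Google helpers above - base64 to hex for MD5-style checksums, and uint32 CRC32C to big-endian bytes to base64 - can be sketched in isolation (`b64ToHex` and `crc32cToB64` are illustrative names, not package functions):

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// b64ToHex re-encodes a base64 checksum (as returned by Azure/GCS APIs)
// into hex form - mirroring the string/[]byte cases of EncodeCksum above.
func b64ToHex(s string) (string, bool) {
	decoded, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return "", false
	}
	return hex.EncodeToString(decoded), true
}

// crc32cToB64 packs a uint32 CRC32C into 4 big-endian bytes and
// base64-encodes them, as in the Google EncodeCksum uint32 case.
func crc32cToB64(x uint32) string {
	b := []byte{byte(x >> 24), byte(x >> 16), byte(x >> 8), byte(x)}
	return base64.StdEncoding.EncodeToString(b)
}

func main() {
	// base64-encoded MD5 of the empty string
	h, ok := b64ToHex("1B2M2Y8AsgTpgAmY7PhCfg==")
	fmt.Println(h, ok) // d41d8cd98f00b204e9800998ecf8427e true
	fmt.Println(crc32cToB64(0x01020304))
}
```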
View Source
var ConfigRestartRequired = []string{"auth", "memsys", "net"}

assorted named fields that require a (cluster | node) restart for changes to take effect

View Source
var Features feat.Flags

read-mostly feature flags (ditto)

View Source
var GCO *globalConfigOwner

GCO (Global Config Owner) is responsible for updating and notifying listeners about any changes in the config. Global Config is loaded at startup and then can be accessed/updated by other services.

View Source
var Timeout = &timeout{
	cplane:    time.Second + time.Millisecond,
	keepalive: 2*time.Second + time.Millisecond,
}

Functions

func AppGloghdr

func AppGloghdr(hdr string)

func CustomMD2S

func CustomMD2S(md cos.StrKVs) string

func DelBckFromQuery

func DelBckFromQuery(query url.Values) url.Values

func DirHasOrIsPrefix

func DirHasOrIsPrefix(dirPath, prefix string) bool

Directory has to either:

  • include (or match) the prefix, or
  • be contained in the prefix

Motivation: don't SkipDir a/b when looking for a/b/c. An alternative name for this function could be something like SameBranch().

func FreeBuffer

func FreeBuffer(buf *bytes.Buffer)

func FreeHra

func FreeHra(a *HreqArgs)

func FreeHterr

func FreeHterr(a *ErrHTTP)

func IsErrAborted

func IsErrAborted(err error) bool

func IsErrBckNotFound

func IsErrBckNotFound(err error) bool

func IsErrBucketAlreadyExists

func IsErrBucketAlreadyExists(err error) bool

func IsErrBucketLevel

func IsErrBucketLevel(err error) bool

func IsErrBucketNought

func IsErrBucketNought(err error) bool

nought: not a thing

func IsErrCapacityExceeded

func IsErrCapacityExceeded(err error) bool

func IsErrLmetaCorrupted

func IsErrLmetaCorrupted(err error) bool

func IsErrLmetaNotFound

func IsErrLmetaNotFound(err error) bool

func IsErrMountpathNotFound

func IsErrMountpathNotFound(err error) bool

func IsErrObjLevel

func IsErrObjLevel(err error) bool

func IsErrObjNought

func IsErrObjNought(err error) bool

func IsErrRemoteBckNotFound

func IsErrRemoteBckNotFound(err error) bool

func IsErrSoft

func IsErrSoft(err error) bool

func IsErrStreamTerminated

func IsErrStreamTerminated(err error) bool

func IsErrXactNotFound

func IsErrXactNotFound(err error) bool

func IsErrXactUsePrev

func IsErrXactUsePrev(err error) bool

func IsFileAlreadyClosed

func IsFileAlreadyClosed(err error) bool

func IsNestedMpath

func IsNestedMpath(a string, la int, b string) (err error)

func IsNotExist

func IsNotExist(err error) bool

usage: everywhere applicable (directories, xactions, nodes, ...) excluding _local_ LOM (where the above applies)

func IsObjNotExist

func IsObjNotExist(err error) bool

usage: lom.Load() (compare w/ IsNotExist)

func IsStatusBadGateway

func IsStatusBadGateway(err error) (yes bool)

func IsStatusGone

func IsStatusGone(err error) (yes bool)

func IsStatusNotFound

func IsStatusNotFound(err error) (yes bool)

func IsStatusServiceUnavailable

func IsStatusServiceUnavailable(err error) (yes bool)

func IterFields

func IterFields(v any, updf updateFunc, opts ...IterOpts) error

IterFields walks the struct and calls `updf` callback at every leaf field that it encounters. The (nested) names are created by joining the json tag with dot. Iteration supports reading another, custom tag `list` with values:

  • `tagOmitempty` - omit empty fields (only for read run)
  • `tagOmit` - omit field
  • `tagReadonly` - field cannot be updated (returns error on `SetValue`)

Examples of usages for tags can be found in `BucketProps` or `Config` structs.

Passing additional options via `IterOpts` can, for example, make the callback fire at non-leaf structures as well.
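A simplified, hypothetical sketch of the leaf-walking idea behind IterFields - joining `json` tags with dots and invoking a callback at each leaf (the real implementation also handles the `list` tag, IterOpts, and value updates):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// walkLeaves visits every leaf field of a struct, building the dotted
// name from `json` tags, and calls the callback at each leaf.
func walkLeaves(v any, cb func(name string, val any)) {
	var walk func(rv reflect.Value, prefix string)
	walk = func(rv reflect.Value, prefix string) {
		t := rv.Type()
		for i := 0; i < t.NumField(); i++ {
			tag := strings.Split(t.Field(i).Tag.Get("json"), ",")[0]
			name := tag
			if prefix != "" {
				name = prefix + "." + tag
			}
			f := rv.Field(i)
			if f.Kind() == reflect.Struct {
				walk(f, name) // recurse into nested struct
			} else {
				cb(name, f.Interface())
			}
		}
	}
	walk(reflect.ValueOf(v), "")
}

type ec struct {
	Enabled bool `json:"enabled"`
}
type conf struct {
	EC ec `json:"ec"`
}

func main() {
	walkLeaves(conf{EC: ec{Enabled: true}}, func(name string, val any) {
		fmt.Println(name, "=", val) // ec.enabled = true
	})
}
```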

func KeepaliveRetryDuration

func KeepaliveRetryDuration(cs ...*Config) time.Duration

func LoadConfig

func LoadConfig(globalConfPath, localConfPath, daeRole string, config *Config) error

is called at startup

func MakeRangeHdr

func MakeRangeHdr(start, length int64) (hdr http.Header)

(compare w/ htrange.contentRange)

func MatchItems

func MatchItems(unescapedPath string, itemsAfter int, splitAfter bool, items []string) ([]string, error)

MatchItems splits the URL path at "/" and matches the resulting items against the specified ones, if any.

  • splitAfter == true: strings.Split() the entire path;
  • splitAfter == false: strings.SplitN(len(items)+itemsAfter)

Returns all items that follow the specified `items`.
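A minimal sketch of the matching behavior described above (`matchItems` is an illustrative stand-in, not the package function, and ignores the splitAfter/itemsAfter knobs):

```go
package main

import (
	"fmt"
	"strings"
)

// matchItems splits the URL path at "/" and verifies that the leading
// components equal `items`, returning whatever follows them.
func matchItems(path string, items []string) ([]string, error) {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if len(parts) < len(items) {
		return nil, fmt.Errorf("path %q too short", path)
	}
	for i, it := range items {
		if parts[i] != it {
			return nil, fmt.Errorf("expected %q, got %q", it, parts[i])
		}
	}
	return parts[len(items):], nil
}

func main() {
	rest, err := matchItems("/v1/buckets/mybucket", []string{"v1", "buckets"})
	fmt.Println(rest, err) // [mybucket] <nil>
}
```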

func NetworkCallWithRetry

func NetworkCallWithRetry(args *RetryArgs) (err error)

func NetworkIsKnown

func NetworkIsKnown(net string) bool

func NewBuffer

func NewBuffer() (buf *bytes.Buffer)

func NewClient

func NewClient(args TransportArgs) *http.Client

func NewTransport

func NewTransport(args TransportArgs) *http.Transport

func NormalizeProvider

func NormalizeProvider(provider string) (p string, err error)

func ObjHasPrefix

func ObjHasPrefix(objName, prefix string) bool

func OrigURLBck2Name

func OrigURLBck2Name(origURLBck string) (bckName string)

func ParsePort

func ParsePort(p string) (int, error)

func ParseURLScheme

func ParseURLScheme(url string) (scheme, address string)

Splits the url into [(scheme)://](address). It's not possible to use url.Parse here since, per the url.Parse docs, 'Trying to parse a hostname and path without a scheme is invalid'.

func PrependProtocol

func PrependProtocol(url string, protocol ...string) string

PrependProtocol prepends a protocol to the URL in case it is missing. By default, it adds `http://` as the prefix.
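The two URL helpers above can be sketched as standalone functions (hypothetical re-implementations based on the descriptions, not the package's code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseURLScheme splits a URL into an optional "(scheme)://" prefix
// and the remaining address.
func parseURLScheme(url string) (scheme, address string) {
	if i := strings.Index(url, "://"); i >= 0 {
		return url[:i], url[i+3:]
	}
	return "", url
}

// prependProtocol adds "http://" (or the given protocol) when the URL
// has no scheme; otherwise it returns the URL unchanged.
func prependProtocol(url string, protocol ...string) string {
	if url == "" || strings.Contains(url, "://") {
		return url
	}
	proto := "http"
	if len(protocol) > 0 {
		proto = protocol[0]
	}
	return proto + "://" + url
}

func main() {
	s, a := parseURLScheme("https://ais-proxy:8080/v1")
	fmt.Println(s, a) // https ais-proxy:8080/v1
	fmt.Println(prependProtocol("ais-proxy:8080")) // http://ais-proxy:8080
}
```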

func PromotedObjDstName

func PromotedObjDstName(objfqn, dirfqn, givenObjName string) (objName string, err error)

promoted destination object's name

func PropToHeader

func PropToHeader(prop string) string

PropToHeader converts a property full name to an HTTP header tag name

func ReadBytes

func ReadBytes(r *http.Request) (b []byte, err error)

func ReadJSON

func ReadJSON(w http.ResponseWriter, r *http.Request, out any) (err error)

func SaveOverrideConfig

func SaveOverrideConfig(configDir string, toUpdate *ConfigToUpdate) error

func SetLogLevel

func SetLogLevel(loglevel string) (err error)

func SetNodeName

func SetNodeName(sname string)

func SortLso

func SortLso(bckEntries LsoEntries)

func ToHeader

func ToHeader(oah cos.OAH, hdr http.Header)

func TokenGreaterEQ

func TokenGreaterEQ(token, objName string) bool

Returns true if the continuation token is >= the object's name (in other words, the object has already been listed and must be skipped). Note that the string comparison `>=` is lexicographic.
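A small sketch of how such a continuation token would be applied while listing (illustrative only; `listFrom` is not a package function):

```go
package main

import "fmt"

// listFrom skips names lexicographically <= token - those were already
// returned in a previous page - and returns the remainder.
func listFrom(names []string, token string) (out []string) {
	for _, n := range names {
		if token >= n { // i.e., TokenGreaterEQ(token, n)
			continue // already listed
		}
		out = append(out, n)
	}
	return
}

func main() {
	names := []string{"a/1", "a/2", "b/1", "c/9"}
	fmt.Println(listFrom(names, "a/2")) // [b/1 c/9]
}
```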

func UpdateFieldValue

func UpdateFieldValue(s any, name string, value any) error

UpdateFieldValue updates the field in the struct with given value. Returns error if the field was not found or could not be updated.

func ValidateMpath

func ValidateMpath(mpath string) (string, error)

common mountpath validation (NOTE: calls filepath.Clean() every time)

func ValidatePort

func ValidatePort(port int) (int, error)

func ValidateRemAlias

func ValidateRemAlias(alias string) (err error)

func WaitForFunc

func WaitForFunc(f func() error, timeLong time.Duration) error

WaitForFunc executes a function in a goroutine and waits for it to finish. If the function runs longer than `timeLong`, WaitForFunc notifies the user that they should keep waiting for the result.

func WriteErr

func WriteErr(w http.ResponseWriter, r *http.Request, err error, opts ...int)

sends HTTP response header with the provided status (alloc/free via mem-pool)

func WriteErr405

func WriteErr405(w http.ResponseWriter, r *http.Request, methods ...string)

405 Method Not Allowed, see: * https://www.rfc-editor.org/rfc/rfc7231#section-6.5.5

func WriteErrJSON

func WriteErrJSON(w http.ResponseWriter, r *http.Request, out any, err error) error

func WriteErrMsg

func WriteErrMsg(w http.ResponseWriter, r *http.Request, msg string, opts ...int)

Create ErrHTTP (based on `msg` and `opts`) and write it into HTTP response.

Types

type AllBsummResults

type AllBsummResults []*BsummResult

func (AllBsummResults) Aggregate

func (s AllBsummResults) Aggregate(from *BsummResult) AllBsummResults

func (AllBsummResults) Finalize

func (s AllBsummResults) Finalize(dsize map[string]uint64, testingEnv bool)

func (AllBsummResults) Len

func (s AllBsummResults) Len() int

func (AllBsummResults) Less

func (s AllBsummResults) Less(i, j int) bool

func (AllBsummResults) Swap

func (s AllBsummResults) Swap(i, j int)

type ArchiveBckMsg

type ArchiveBckMsg struct {
	ToBck Bck `json:"tobck"`
	apc.ArchiveMsg
}

ArchiveBckMsg contains parameters to archive multiple objects from the specified (source) bucket. The destination bucket may be the same as the source or a different one.

NOTE on terminology: "archive" is any (.tar, .tgz/.tar.gz, .zip, .tar.lz4) formatted object, often also called "shard".

See also: apc.PutApndArchArgs

func (*ArchiveBckMsg) Cname

func (msg *ArchiveBckMsg) Cname() string

type AuthConf

type AuthConf struct {
	Secret  string `json:"secret"`
	Enabled bool   `json:"enabled"`
}

type AuthConfToUpdate

type AuthConfToUpdate struct {
	Secret  *string `json:"secret,omitempty"`
	Enabled *bool   `json:"enabled,omitempty"`
}

type BackendBckToUpdate

type BackendBckToUpdate struct {
	Name     *string `json:"name"`
	Provider *string `json:"provider"`
}

type BackendConf

type BackendConf struct {
	// provider implementation-dependent
	Conf map[string]any `json:"conf,omitempty"`
	// 3rd party Cloud(s) -- set during validation
	Providers map[string]Ns `json:"-"`
}

func (*BackendConf) EqualClouds

func (c *BackendConf) EqualClouds(o *BackendConf) bool

func (*BackendConf) EqualRemAIS

func (c *BackendConf) EqualRemAIS(o *BackendConf) bool

func (*BackendConf) Get

func (c *BackendConf) Get(provider string) (conf any)

func (*BackendConf) MarshalJSON

func (c *BackendConf) MarshalJSON() (data []byte, err error)

func (*BackendConf) Set

func (c *BackendConf) Set(provider string, newConf any)

func (*BackendConf) UnmarshalJSON

func (c *BackendConf) UnmarshalJSON(data []byte) error

func (*BackendConf) Validate

func (c *BackendConf) Validate() (err error)

type BackendConfAIS

type BackendConfAIS map[string][]string // cluster alias -> [urls...]

func (BackendConfAIS) String

func (c BackendConfAIS) String() (s string)

type BackendConfHDFS

type BackendConfHDFS struct {
	Addresses           []string `json:"addresses"`
	User                string   `json:"user"`
	UseDatanodeHostname bool     `json:"use_datanode_hostname"`
}

type Bck

type Bck struct {
	Props    *BucketProps `json:"-"`
	Name     string       `json:"name" yaml:"name"`
	Provider string       `json:"provider" yaml:"provider"` // NOTE: see api/apc/provider.go for supported enum
	Ns       Ns           `json:"namespace" yaml:"namespace" list:"omitempty"`
}

func ParseBckObjectURI

func ParseBckObjectURI(uri string, opts ParseURIOpts) (bck Bck, objName string, err error)

func ParseUname

func ParseUname(uname string) (b Bck, objName string)

unique name => Bck (use MakeUname above to perform the reverse translation)

func (*Bck) AddToQuery

func (b *Bck) AddToQuery(query url.Values) url.Values

func (*Bck) AddUnameToQuery

func (b *Bck) AddUnameToQuery(query url.Values, uparam string) url.Values

func (*Bck) Backend

func (b *Bck) Backend() *Bck

func (*Bck) Cname

func (b *Bck) Cname(objname string) (s string)

canonical name, with or without object

func (*Bck) Copy

func (b *Bck) Copy(src *Bck)

func (*Bck) DefaultProps

func (bck *Bck) DefaultProps(c *ClusterConfig) *BucketProps

By default, created buckets inherit their properties from the cluster (global) configuration. Global configuration, in turn, is protected, versioned, checksummed, and replicated across the entire cluster.

  • Bucket properties can be changed at any time via `api.SetBucketProps`.
  • In addition, `api.CreateBucket` allows specifying (non-default) properties at bucket creation time.
  • Inherited defaults include checksum, LRU, etc. configurations - see below.
  • By default, LRU is disabled for AIS (`ais://`) buckets.

See also:

  • github.com/artashesbalabekyan/aistore/blob/master/docs/bucket.md#default-bucket-properties
  • BucketPropsToUpdate (above)
  • ais.defaultBckProps()

func (*Bck) DisplayProvider

func (b *Bck) DisplayProvider() (p string)

translation from the s3: and gs: schemes back to aws, gcp, etc.

func (Bck) Equal

func (b Bck) Equal(other *Bck) bool

func (*Bck) HasProvider

func (b *Bck) HasProvider() bool

func (*Bck) IsAIS

func (b *Bck) IsAIS() bool

func (*Bck) IsCloud

func (b *Bck) IsCloud() bool

func (*Bck) IsEmpty

func (b *Bck) IsEmpty() bool

func (*Bck) IsHDFS

func (b *Bck) IsHDFS() bool

func (*Bck) IsHTTP

func (b *Bck) IsHTTP() bool

func (*Bck) IsQuery

func (b *Bck) IsQuery() bool

QueryBcks (see below) is a Bck that _can_ have an empty Name.

func (*Bck) IsRemote

func (b *Bck) IsRemote() bool

func (*Bck) IsRemoteAIS

func (b *Bck) IsRemoteAIS() bool

func (*Bck) Less

func (b *Bck) Less(other *Bck) bool

func (*Bck) MakeUname

func (b *Bck) MakeUname(objName string) string

Bck => unique name (use ParseUname below to translate back)

func (*Bck) RemoteBck

func (b *Bck) RemoteBck() *Bck

func (Bck) String

func (b Bck) String() (s string)

func (*Bck) Validate

func (b *Bck) Validate() (err error)

func (*Bck) ValidateName

func (b *Bck) ValidateName() (err error)

type Bcks

type Bcks []Bck

func (Bcks) Equal

func (bcks Bcks) Equal(other Bcks) bool

func (Bcks) Len

func (bcks Bcks) Len() int

func (Bcks) Less

func (bcks Bcks) Less(i, j int) bool

func (Bcks) Select

func (bcks Bcks) Select(query QueryBcks) (filtered Bcks)

func (Bcks) Swap

func (bcks Bcks) Swap(i, j int)

type BsummResult

type BsummResult struct {
	Bck
	apc.BsummResult
}

func NewBsummResult

func NewBsummResult(bck *Bck, totalDisksSize uint64) (bs *BsummResult)

type BucketProps

type BucketProps struct {
	BackendBck  Bck             `json:"backend_bck,omitempty"` // makes remote bucket out of a given ais bucket
	Extra       ExtraProps      `json:"extra,omitempty" list:"omitempty"`
	WritePolicy WritePolicyConf `json:"write_policy"`
	Provider    string          `json:"provider" list:"readonly"`       // backend provider
	Renamed     string          `list:"omit"`                           // non-empty if the bucket has been renamed
	Cksum       CksumConf       `json:"checksum"`                       // the bucket's checksum
	EC          ECConf          `json:"ec"`                             // erasure coding
	LRU         LRUConf         `json:"lru"`                            // LRU (watermarks and enabled/disabled)
	Mirror      MirrorConf      `json:"mirror"`                         // mirroring
	Access      apc.AccessAttrs `json:"access,string"`                  // access permissions
	BID         uint64          `json:"bid,string" list:"omit"`         // unique ID
	Created     int64           `json:"created,string" list:"readonly"` // creation timestamp
	Versioning  VersionConf     `json:"versioning"`                     // versioning (see "inherit" here and elsewhere)
}

func (*BucketProps) Apply

func (bp *BucketProps) Apply(propsToUpdate *BucketPropsToUpdate)

func (*BucketProps) Clone

func (bp *BucketProps) Clone() *BucketProps

func (*BucketProps) Equal

func (bp *BucketProps) Equal(other *BucketProps) (eq bool)

func (*BucketProps) SetProvider

func (bp *BucketProps) SetProvider(provider string)

func (*BucketProps) Validate

func (bp *BucketProps) Validate(targetCnt int) error

type BucketPropsToUpdate

type BucketPropsToUpdate struct {
	BackendBck  *BackendBckToUpdate      `json:"backend_bck,omitempty"`
	Versioning  *VersionConfToUpdate     `json:"versioning,omitempty"`
	Cksum       *CksumConfToUpdate       `json:"checksum,omitempty"`
	LRU         *LRUConfToUpdate         `json:"lru,omitempty"`
	Mirror      *MirrorConfToUpdate      `json:"mirror,omitempty"`
	EC          *ECConfToUpdate          `json:"ec,omitempty"`
	Access      *apc.AccessAttrs         `json:"access,string,omitempty"`
	WritePolicy *WritePolicyConfToUpdate `json:"write_policy,omitempty"`
	Extra       *ExtraToUpdate           `json:"extra,omitempty"`
	Force       bool                     `json:"force,omitempty" copy:"skip" list:"omit"`
}

Once validated, BucketPropsToUpdate are copied to BucketProps. The struct may have extra fields that do not exist in BucketProps. Add the tag 'copy:"skip"' to ignore those fields when copying values.

func NewBucketPropsToUpdate

func NewBucketPropsToUpdate(nvs cos.StrKVs) (props *BucketPropsToUpdate, err error)

type CksumConf

type CksumConf struct {
	// (note that `ChecksumNone` ("none") disables checksumming)
	Type string `json:"type"`

	// validate the checksum of the object that we cold-GET
	// or download from remote location (e.g., cloud bucket)
	ValidateColdGet bool `json:"validate_cold_get"`

	// validate the object's version (if it exists and is provided) and its checksum -
	// if either value fails to match, the object is removed from ais.
	//
	// NOTE: object versioning is backend-specific and may _not_ be supported by a given
	// backend - see docs for details.
	ValidateWarmGet bool `json:"validate_warm_get"`

	// determines whether to validate checksums of objects
	// migrated or replicated within the cluster
	ValidateObjMove bool `json:"validate_obj_move"`

	// EnableReadRange: return the read-range checksum; otherwise, return the entire object's checksum.
	EnableReadRange bool `json:"enable_read_range"`
}

func (*CksumConf) String

func (c *CksumConf) String() string

func (*CksumConf) Validate

func (c *CksumConf) Validate() (err error)

func (*CksumConf) ValidateAsProps

func (c *CksumConf) ValidateAsProps(...any) (err error)

type CksumConfToUpdate

type CksumConfToUpdate struct {
	Type            *string `json:"type,omitempty"`
	ValidateColdGet *bool   `json:"validate_cold_get,omitempty"`
	ValidateWarmGet *bool   `json:"validate_warm_get,omitempty"`
	ValidateObjMove *bool   `json:"validate_obj_move,omitempty"`
	EnableReadRange *bool   `json:"enable_read_range,omitempty"`
}

type ClientConf

type ClientConf struct {
	Timeout        cos.Duration `json:"client_timeout"`
	TimeoutLong    cos.Duration `json:"client_long_timeout"`
	ListObjTimeout cos.Duration `json:"list_timeout"`
}

func (*ClientConf) Validate

func (c *ClientConf) Validate() error

type ClientConfToUpdate

type ClientConfToUpdate struct {
	Timeout        *cos.Duration `json:"client_timeout,omitempty"` // readonly as far as intra-cluster
	TimeoutLong    *cos.Duration `json:"client_long_timeout,omitempty"`
	ListObjTimeout *cos.Duration `json:"list_timeout,omitempty"`
}

type ClusterConfig

type ClusterConfig struct {
	Ext        any            `json:"ext,omitempty"` // within meta-version extensions
	Backend    BackendConf    `json:"backend" allow:"cluster"`
	Mirror     MirrorConf     `json:"mirror" allow:"cluster"`
	EC         ECConf         `json:"ec" allow:"cluster"`
	Log        LogConf        `json:"log"`
	Periodic   PeriodConf     `json:"periodic"`
	Timeout    TimeoutConf    `json:"timeout"`
	Client     ClientConf     `json:"client"`
	Proxy      ProxyConf      `json:"proxy" allow:"cluster"`
	Space      SpaceConf      `json:"space"`
	LRU        LRUConf        `json:"lru"`
	Disk       DiskConf       `json:"disk"`
	Rebalance  RebalanceConf  `json:"rebalance" allow:"cluster"`
	Resilver   ResilverConf   `json:"resilver"`
	Cksum      CksumConf      `json:"checksum"`
	Versioning VersionConf    `json:"versioning" allow:"cluster"`
	Net        NetConf        `json:"net"`
	FSHC       FSHCConf       `json:"fshc"`
	Auth       AuthConf       `json:"auth"`
	Keepalive  KeepaliveConf  `json:"keepalivetracker"`
	Downloader DownloaderConf `json:"downloader"`
	DSort      DSortConf      `json:"distributed_sort"`
	Transport  TransportConf  `json:"transport"`
	Memsys     MemsysConf     `json:"memsys"`

	// Transform (offline) or Copy src Bucket => dst bucket
	TCB TCBConf `json:"tcb"`

	// metadata write policy: (immediate | delayed | never)
	WritePolicy WritePolicyConf `json:"write_policy"`

	// standalone enumerated features that can be configured
	// to flip assorted global defaults (see cmn/feat/feat.go)
	Features feat.Flags `json:"features,string" allow:"cluster"`

	// read-only
	LastUpdated string `json:"lastupdate_time"`       // timestamp
	UUID        string `json:"uuid"`                  // UUID
	Version     int64  `json:"config_version,string"` // version
}

global configuration

func (*ClusterConfig) Apply

func (c *ClusterConfig) Apply(updateConf *ConfigToUpdate, asType string) error

func (*ClusterConfig) JspOpts

func (*ClusterConfig) JspOpts() jsp.Options

func (*ClusterConfig) String

func (c *ClusterConfig) String() string

type Config

type Config struct {
	ClusterConfig `json:",inline"`
	LocalConfig   `json:",inline"`
	// contains filtered or unexported fields
}

Config contains all configuration values used by a given ais daemon. Naming convention for setting/getting specific values is defined as follows:

(parent json tag . child json tag)

E.g., to set/get `EC.Enabled` use `ec.enabled`. And so on. For details, see `IterFields`.

func (*Config) SetRole

func (c *Config) SetRole(role string)

func (*Config) TestingEnv

func (c *Config) TestingEnv() bool

TestingEnv returns true if config is set to a development environment where a single local filesystem is partitioned between all (locally running) targets and is used for both local and Cloud buckets

func (*Config) UpdateClusterConfig

func (c *Config) UpdateClusterConfig(updateConf *ConfigToUpdate, asType string) (err error)

func (*Config) Validate

func (c *Config) Validate() error

main config validator

type ConfigToUpdate

type ConfigToUpdate struct {
	// ClusterConfig
	Backend     *BackendConf             `json:"backend,omitempty"`
	Mirror      *MirrorConfToUpdate      `json:"mirror,omitempty"`
	EC          *ECConfToUpdate          `json:"ec,omitempty"`
	Log         *LogConfToUpdate         `json:"log,omitempty"`
	Periodic    *PeriodConfToUpdate      `json:"periodic,omitempty"`
	Timeout     *TimeoutConfToUpdate     `json:"timeout,omitempty"`
	Client      *ClientConfToUpdate      `json:"client,omitempty"`
	Space       *SpaceConfToUpdate       `json:"space,omitempty"`
	LRU         *LRUConfToUpdate         `json:"lru,omitempty"`
	Disk        *DiskConfToUpdate        `json:"disk,omitempty"`
	Rebalance   *RebalanceConfToUpdate   `json:"rebalance,omitempty"`
	Resilver    *ResilverConfToUpdate    `json:"resilver,omitempty"`
	Cksum       *CksumConfToUpdate       `json:"checksum,omitempty"`
	Versioning  *VersionConfToUpdate     `json:"versioning,omitempty"`
	Net         *NetConfToUpdate         `json:"net,omitempty"`
	FSHC        *FSHCConfToUpdate        `json:"fshc,omitempty"`
	Auth        *AuthConfToUpdate        `json:"auth,omitempty"`
	Keepalive   *KeepaliveConfToUpdate   `json:"keepalivetracker,omitempty"`
	Downloader  *DownloaderConfToUpdate  `json:"downloader,omitempty"`
	DSort       *DSortConfToUpdate       `json:"distributed_sort,omitempty"`
	Transport   *TransportConfToUpdate   `json:"transport,omitempty"`
	Memsys      *MemsysConfToUpdate      `json:"memsys,omitempty"`
	TCB         *TCBConfToUpdate         `json:"tcb,omitempty"`
	WritePolicy *WritePolicyConfToUpdate `json:"write_policy,omitempty"`
	Proxy       *ProxyConfToUpdate       `json:"proxy,omitempty"`
	Features    *feat.Flags              `json:"features,string,omitempty"`

	// LocalConfig
	FSP *FSPConf `json:"fspaths,omitempty"`
}

func (*ConfigToUpdate) FillFromKVS

func (ctu *ConfigToUpdate) FillFromKVS(kvs []string) (err error)

FillFromKVS populates `ConfigToUpdate` from key-value pairs of the form `key=value`.
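A simplified standalone sketch of the `key=value` parsing step (the real method writes the parsed values into the corresponding `ConfigToUpdate` fields; here we only collect a map):

```go
// Sketch of key=value parsing as consumed by a FillFromKVS-style API.
// Keys follow the dotted naming convention, e.g. "ec.enabled".
package main

import (
	"fmt"
	"strings"
)

func parseKVS(kvs []string) (map[string]string, error) {
	m := make(map[string]string, len(kvs))
	for _, kv := range kvs {
		k, v, ok := strings.Cut(kv, "=")
		if !ok {
			return nil, fmt.Errorf("invalid key=value pair %q", kv)
		}
		m[k] = v
	}
	return m, nil
}

func main() {
	m, err := parseKVS([]string{"ec.enabled=true", "log.level=3"})
	if err != nil {
		panic(err)
	}
	fmt.Println(m["ec.enabled"], m["log.level"])
}
```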

func (*ConfigToUpdate) FillFromQuery

func (ctu *ConfigToUpdate) FillFromQuery(query url.Values) error

FillFromQuery populates ConfigToUpdate from URL query values

func (*ConfigToUpdate) JspOpts

func (*ConfigToUpdate) JspOpts() jsp.Options

func (*ConfigToUpdate) Merge

func (ctu *ConfigToUpdate) Merge(update *ConfigToUpdate)

type DSortConf

type DSortConf struct {
	DuplicatedRecords   string       `json:"duplicated_records"`
	MissingShards       string       `json:"missing_shards"`
	EKMMalformedLine    string       `json:"ekm_malformed_line"`
	EKMMissingKey       string       `json:"ekm_missing_key"`
	DefaultMaxMemUsage  string       `json:"default_max_mem_usage"`
	CallTimeout         cos.Duration `json:"call_timeout"`
	DSorterMemThreshold string       `json:"dsorter_mem_threshold"`
	Compression         string       `json:"compression"`       // {CompressAlways,...} in api/apc/compression.go
	SbundleMult         int          `json:"bundle_multiplier"` // stream-bundle multiplier: num to destination
}

func (*DSortConf) Validate

func (c *DSortConf) Validate() (err error)

func (*DSortConf) ValidateWithOpts

func (c *DSortConf) ValidateWithOpts(allowEmpty bool) (err error)

type DSortConfToUpdate

type DSortConfToUpdate struct {
	DuplicatedRecords   *string       `json:"duplicated_records,omitempty"`
	MissingShards       *string       `json:"missing_shards,omitempty"`
	EKMMalformedLine    *string       `json:"ekm_malformed_line,omitempty"`
	EKMMissingKey       *string       `json:"ekm_missing_key,omitempty"`
	DefaultMaxMemUsage  *string       `json:"default_max_mem_usage,omitempty"`
	CallTimeout         *cos.Duration `json:"call_timeout,omitempty"`
	DSorterMemThreshold *string       `json:"dsorter_mem_threshold,omitempty"`
	Compression         *string       `json:"compression,omitempty"`
	SbundleMult         *int          `json:"bundle_multiplier,omitempty"`
}

type DiskConf

type DiskConf struct {
	DiskUtilLowWM   int64        `json:"disk_util_low_wm"`  // no throttling below
	DiskUtilHighWM  int64        `json:"disk_util_high_wm"` // throttle longer when above
	DiskUtilMaxWM   int64        `json:"disk_util_max_wm"`
	IostatTimeLong  cos.Duration `json:"iostat_time_long"`
	IostatTimeShort cos.Duration `json:"iostat_time_short"`
}

func (*DiskConf) Validate

func (c *DiskConf) Validate() (err error)

type DiskConfToUpdate

type DiskConfToUpdate struct {
	DiskUtilLowWM   *int64        `json:"disk_util_low_wm,omitempty"`
	DiskUtilHighWM  *int64        `json:"disk_util_high_wm,omitempty"`
	DiskUtilMaxWM   *int64        `json:"disk_util_max_wm,omitempty"`
	IostatTimeLong  *cos.Duration `json:"iostat_time_long,omitempty"`
	IostatTimeShort *cos.Duration `json:"iostat_time_short,omitempty"`
}

type DownloaderConf

type DownloaderConf struct {
	Timeout cos.Duration `json:"timeout"`
}

func (*DownloaderConf) Validate

func (c *DownloaderConf) Validate() error

type DownloaderConfToUpdate

type DownloaderConfToUpdate struct {
	Timeout *cos.Duration `json:"timeout,omitempty"`
}

type ECConf

type ECConf struct {
	ObjSizeLimit int64  `json:"objsize_limit"`     // objects below this size are replicated instead of EC'ed
	Compression  string `json:"compression"`       // enum { CompressAlways, ... } in api/apc/compression.go
	SbundleMult  int    `json:"bundle_multiplier"` // stream-bundle multiplier: num streams to destination
	DataSlices   int    `json:"data_slices"`       // number of data slices
	ParitySlices int    `json:"parity_slices"`     // number of parity slices/replicas
	Enabled      bool   `json:"enabled"`           // EC is enabled
	DiskOnly     bool   `json:"disk_only"`         // if true, EC does not use SGL - data goes directly to drives
}

func (*ECConf) RequiredEncodeTargets

func (c *ECConf) RequiredEncodeTargets() int

func (*ECConf) RequiredRestoreTargets

func (c *ECConf) RequiredRestoreTargets() int

func (*ECConf) String

func (c *ECConf) String() string

func (*ECConf) Validate

func (c *ECConf) Validate() error

func (*ECConf) ValidateAsProps

func (c *ECConf) ValidateAsProps(arg ...any) (err error)

type ECConfToUpdate

type ECConfToUpdate struct {
	ObjSizeLimit *int64  `json:"objsize_limit,omitempty"`
	Compression  *string `json:"compression,omitempty"`
	SbundleMult  *int    `json:"bundle_multiplier,omitempty"`
	DataSlices   *int    `json:"data_slices,omitempty"`
	ParitySlices *int    `json:"parity_slices,omitempty"`
	Enabled      *bool   `json:"enabled,omitempty"`
	DiskOnly     *bool   `json:"disk_only,omitempty"`
}

type ETLErrCtx

type ETLErrCtx struct {
	TID     string
	ETLName string
	PodName string
	SvcName string
}

type ErrAborted

type ErrAborted struct {
	// contains filtered or unexported fields
}

func AsErrAborted

func AsErrAborted(err error) (errAborted *ErrAborted)

func NewErrAborted

func NewErrAborted(what, ctx string, err error) *ErrAborted

func (*ErrAborted) Error

func (e *ErrAborted) Error() (s string)

func (*ErrAborted) Unwrap

func (e *ErrAborted) Unwrap() (err error)

type ErrBckNotFound

type ErrBckNotFound struct {
	// contains filtered or unexported fields
}

func NewErrBckNotFound

func NewErrBckNotFound(bck *Bck) *ErrBckNotFound

func (*ErrBckNotFound) Error

func (e *ErrBckNotFound) Error() string

type ErrBucketAccessDenied

type ErrBucketAccessDenied struct {
	// contains filtered or unexported fields
}

func NewBucketAccessDenied

func NewBucketAccessDenied(bucket, oper string, aattrs apc.AccessAttrs) *ErrBucketAccessDenied

func (*ErrBucketAccessDenied) Error

func (e *ErrBucketAccessDenied) Error() string

func (*ErrBucketAccessDenied) String

func (e *ErrBucketAccessDenied) String() string

type ErrBucketAlreadyExists

type ErrBucketAlreadyExists struct {
	// contains filtered or unexported fields
}

func NewErrBckAlreadyExists

func NewErrBckAlreadyExists(bck *Bck) *ErrBucketAlreadyExists

func (*ErrBucketAlreadyExists) Error

func (e *ErrBucketAlreadyExists) Error() string

type ErrBucketIsBusy

type ErrBucketIsBusy struct {
	// contains filtered or unexported fields
}

func NewErrBckIsBusy

func NewErrBckIsBusy(bck *Bck) *ErrBucketIsBusy

func (*ErrBucketIsBusy) Error

func (e *ErrBucketIsBusy) Error() string

type ErrCapacityExceeded

type ErrCapacityExceeded struct {
	// contains filtered or unexported fields
}

func NewErrCapacityExceeded

func NewErrCapacityExceeded(highWM int64, totalBytesUsed, totalBytes uint64, usedPct int32, oos bool) *ErrCapacityExceeded

func (*ErrCapacityExceeded) Error

func (e *ErrCapacityExceeded) Error() string

type ErrETL

type ErrETL struct {
	Reason string
	ETLErrCtx
}

func NewErrETL

func NewErrETL(ctx *ETLErrCtx, format string, a ...any) *ErrETL

func (*ErrETL) Error

func (e *ErrETL) Error() string

func (*ErrETL) WithContext

func (e *ErrETL) WithContext(ctx *ETLErrCtx) *ErrETL

func (*ErrETL) WithPodName

func (e *ErrETL) WithPodName(name string) *ErrETL

type ErrFailedTo

type ErrFailedTo struct {
	// contains filtered or unexported fields
}

func NewErrFailedTo

func NewErrFailedTo(actor any, action string, what any, err error, errCode ...int) *ErrFailedTo

func (*ErrFailedTo) Error

func (e *ErrFailedTo) Error() string

func (*ErrFailedTo) Unwrap

func (e *ErrFailedTo) Unwrap() (err error)

type ErrHTTP

type ErrHTTP struct {
	TypeCode   string `json:"tcode,omitempty"`
	Message    string `json:"message"`
	Method     string `json:"method"`
	URLPath    string `json:"url_path"`
	RemoteAddr string `json:"remote_addr"`
	Caller     string `json:"caller"`
	Node       string `json:"node"`

	Status int `json:"status"`
	// contains filtered or unexported fields
}

Error structure for HTTP errors

func Err2HTTPErr

func Err2HTTPErr(err error) *ErrHTTP

func InitErrHTTP

func InitErrHTTP(r *http.Request, err error, errCode int) (e *ErrHTTP)

Uses `allocHterr` to allocate; the caller must free via `FreeHterr`.
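The alloc/free pairing described above is the familiar `sync.Pool` idiom. A generic sketch with a stand-in type (not the package's `ErrHTTP` or its actual pool):

```go
// Generic sketch of the pool-backed alloc/free pattern.
package main

import (
	"fmt"
	"sync"
)

type hterr struct {
	Message string
	Status  int
}

var hterrPool = sync.Pool{New: func() any { return &hterr{} }}

func allocHterr() *hterr { return hterrPool.Get().(*hterr) }

func freeHterr(e *hterr) {
	*e = hterr{} // reset before returning to the pool
	hterrPool.Put(e)
}

func main() {
	e := allocHterr()
	e.Message, e.Status = "not found", 404
	fmt.Println(e.Message, e.Status)
	freeHterr(e) // caller frees, mirroring the FreeHterr contract
}
```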

func NewErrHTTP

func NewErrHTTP(r *http.Request, err error, errCode int) (e *ErrHTTP)

func S2HTTPErr

func S2HTTPErr(r *http.Request, msg string, status int) *ErrHTTP

func (*ErrHTTP) Error

func (e *ErrHTTP) Error() (s string)

type ErrInitBackend

type ErrInitBackend struct {
	Provider string
}

func (*ErrInitBackend) Error

func (e *ErrInitBackend) Error() string

type ErrInvalidBackendProvider

type ErrInvalidBackendProvider struct {
	// contains filtered or unexported fields
}

func (*ErrInvalidBackendProvider) Error

func (e *ErrInvalidBackendProvider) Error() string

func (*ErrInvalidBackendProvider) Is

func (*ErrInvalidBackendProvider) Is(target error) bool

type ErrInvalidCksum

type ErrInvalidCksum struct {
	// contains filtered or unexported fields
}

func NewErrInvalidCksum

func NewErrInvalidCksum(eHash, aHash string) *ErrInvalidCksum

func (*ErrInvalidCksum) Error

func (e *ErrInvalidCksum) Error() string

func (*ErrInvalidCksum) Expected

func (e *ErrInvalidCksum) Expected() string

type ErrInvalidFSPathsConf

type ErrInvalidFSPathsConf struct {
	// contains filtered or unexported fields
}

func NewErrInvalidFSPathsConf

func NewErrInvalidFSPathsConf(err error) *ErrInvalidFSPathsConf

func (*ErrInvalidFSPathsConf) Error

func (e *ErrInvalidFSPathsConf) Error() string

func (*ErrInvalidFSPathsConf) Unwrap

func (e *ErrInvalidFSPathsConf) Unwrap() (err error)

type ErrInvalidMountpath

type ErrInvalidMountpath struct {
	// contains filtered or unexported fields
}

func NewErrInvalidaMountpath

func NewErrInvalidaMountpath(mpath, cause string) *ErrInvalidMountpath

func (*ErrInvalidMountpath) Error

func (e *ErrInvalidMountpath) Error() string

type ErrLimitedCoexistence

type ErrLimitedCoexistence struct {
	// contains filtered or unexported fields
}

func NewErrLimitedCoexistence

func NewErrLimitedCoexistence(node, xaction, action, detail string) *ErrLimitedCoexistence

func (*ErrLimitedCoexistence) Error

func (e *ErrLimitedCoexistence) Error() string

type ErrLmetaCorrupted

type ErrLmetaCorrupted struct {
	// contains filtered or unexported fields
}

func NewErrLmetaCorrupted

func NewErrLmetaCorrupted(err error) *ErrLmetaCorrupted

func (*ErrLmetaCorrupted) Error

func (e *ErrLmetaCorrupted) Error() string

func (*ErrLmetaCorrupted) Unwrap

func (e *ErrLmetaCorrupted) Unwrap() (err error)

type ErrLmetaNotFound

type ErrLmetaNotFound struct {
	// contains filtered or unexported fields
}

func NewErrLmetaNotFound

func NewErrLmetaNotFound(err error) *ErrLmetaNotFound

func (*ErrLmetaNotFound) Error

func (e *ErrLmetaNotFound) Error() string

func (*ErrLmetaNotFound) Unwrap

func (e *ErrLmetaNotFound) Unwrap() (err error)

type ErrMissingBackend

type ErrMissingBackend struct {
	Provider string
	Msg      string
}

func (*ErrMissingBackend) Error

func (e *ErrMissingBackend) Error() string

type ErrMountpathNotFound

type ErrMountpathNotFound struct {
	// contains filtered or unexported fields
}

func NewErrMountpathNotFound

func NewErrMountpathNotFound(mpath, fqn string, disabled bool) *ErrMountpathNotFound

func (*ErrMountpathNotFound) Error

func (e *ErrMountpathNotFound) Error() string

type ErrNoNodes

type ErrNoNodes struct {
	// contains filtered or unexported fields
}

func NewErrNoNodes

func NewErrNoNodes(role string, mmcount int) *ErrNoNodes

func (*ErrNoNodes) Error

func (e *ErrNoNodes) Error() (s string)

type ErrNotImpl

type ErrNotImpl struct {
	// contains filtered or unexported fields
}

func NewErrNotImpl

func NewErrNotImpl(action, what string) *ErrNotImpl

func (*ErrNotImpl) Error

func (e *ErrNotImpl) Error() string

type ErrObjDefunct

type ErrObjDefunct struct {
	// contains filtered or unexported fields
}

func NewErrObjDefunct

func NewErrObjDefunct(name string, d1, d2 uint64) *ErrObjDefunct

func (*ErrObjDefunct) Error

func (e *ErrObjDefunct) Error() string

type ErrObjectAccessDenied

type ErrObjectAccessDenied struct {
	// contains filtered or unexported fields
}

func NewObjectAccessDenied

func NewObjectAccessDenied(object, oper string, aattrs apc.AccessAttrs) *ErrObjectAccessDenied

func (*ErrObjectAccessDenied) Error

func (e *ErrObjectAccessDenied) Error() string

func (*ErrObjectAccessDenied) String

func (e *ErrObjectAccessDenied) String() string

type ErrRemoteBckNotFound

type ErrRemoteBckNotFound struct {
	// contains filtered or unexported fields
}

func NewErrRemoteBckNotFound

func NewErrRemoteBckNotFound(bck *Bck) *ErrRemoteBckNotFound

func (*ErrRemoteBckNotFound) Error

func (e *ErrRemoteBckNotFound) Error() string

type ErrRemoteBucketOffline

type ErrRemoteBucketOffline struct {
	// contains filtered or unexported fields
}

func NewErrRemoteBckOffline

func NewErrRemoteBckOffline(bck *Bck) *ErrRemoteBucketOffline

func (*ErrRemoteBucketOffline) Error

func (e *ErrRemoteBucketOffline) Error() string

type ErrSoft

type ErrSoft struct {
	// contains filtered or unexported fields
}

func NewErrSoft

func NewErrSoft(what string) *ErrSoft

func (*ErrSoft) Error

func (e *ErrSoft) Error() string

type ErrStreamTerminated

type ErrStreamTerminated struct {
	// contains filtered or unexported fields
}

func NewErrStreamTerminated

func NewErrStreamTerminated(stream string, err error, reason, detail string) *ErrStreamTerminated

func (*ErrStreamTerminated) Error

func (e *ErrStreamTerminated) Error() string

func (*ErrStreamTerminated) Unwrap

func (e *ErrStreamTerminated) Unwrap() (err error)

type ErrUnsupp

type ErrUnsupp struct {
	// contains filtered or unexported fields
}

func NewErrUnsupp

func NewErrUnsupp(action, what string) *ErrUnsupp

func (*ErrUnsupp) Error

func (e *ErrUnsupp) Error() string

type ErrXactNotFound

type ErrXactNotFound struct {
	// contains filtered or unexported fields
}

func NewErrXactNotFoundError

func NewErrXactNotFoundError(cause string) *ErrXactNotFound

func (*ErrXactNotFound) Error

func (e *ErrXactNotFound) Error() string

type ErrXactTgtInMaint

type ErrXactTgtInMaint struct {
	// contains filtered or unexported fields
}

func NewErrXactTgtInMaint

func NewErrXactTgtInMaint(xaction, tname string) *ErrXactTgtInMaint

func (*ErrXactTgtInMaint) Error

func (e *ErrXactTgtInMaint) Error() string

type ErrXactUsePrev

type ErrXactUsePrev struct {
	// contains filtered or unexported fields
}

func NewErrXactUsePrev

func NewErrXactUsePrev(xaction string) *ErrXactUsePrev

func (*ErrXactUsePrev) Error

func (e *ErrXactUsePrev) Error() string

type ExtraProps

type ExtraProps struct {
	AWS  ExtraPropsAWS  `json:"aws,omitempty" list:"omitempty"`
	HTTP ExtraPropsHTTP `json:"http,omitempty" list:"omitempty"`
	HDFS ExtraPropsHDFS `json:"hdfs,omitempty" list:"omitempty"`
}

func (*ExtraProps) ValidateAsProps

func (c *ExtraProps) ValidateAsProps(arg ...any) error

type ExtraPropsAWS

type ExtraPropsAWS struct {
	CloudRegion string `json:"cloud_region,omitempty"`

	// from https://github.com/aws/aws-sdk-go/blob/main/aws/config.go:
	//   "An optional endpoint URL (hostname only or fully qualified URI)
	//    that overrides the default generated endpoint."
	Endpoint string `json:"endpoint,omitempty"`

	AccessKeyID     string `json:"access_key_id,omitempty"`
	SecretAccessKey string `json:"secret_access_key,omitempty"`
}

type ExtraPropsAWSToUpdate

type ExtraPropsAWSToUpdate struct {
	CloudRegion *string `json:"cloud_region"`
	Endpoint    *string `json:"endpoint"`

	AccessKeyID     *string `json:"access_key_id"`
	SecretAccessKey *string `json:"secret_access_key"`
}

type ExtraPropsHDFS

type ExtraPropsHDFS struct {
	// Reference directory.
	RefDirectory string `json:"ref_directory,omitempty"`
}

type ExtraPropsHDFSToUpdate

type ExtraPropsHDFSToUpdate struct {
	RefDirectory *string `json:"ref_directory"`
}

type ExtraPropsHTTP

type ExtraPropsHTTP struct {
	// Original URL prior to hashing.
	OrigURLBck string `json:"original_url,omitempty" list:"readonly"`
}

type ExtraPropsHTTPToUpdate

type ExtraPropsHTTPToUpdate struct {
	OrigURLBck *string `json:"original_url"`
}

type ExtraToUpdate

type ExtraToUpdate struct {
	AWS  *ExtraPropsAWSToUpdate  `json:"aws"`
	HTTP *ExtraPropsHTTPToUpdate `json:"http"`
	HDFS *ExtraPropsHDFSToUpdate `json:"hdfs"`
}

type FSHCConf

type FSHCConf struct {
	TestFileCount int  `json:"test_files"`  // number of files to read/write
	ErrorLimit    int  `json:"error_limit"` // exceeding err limit causes disabling mountpath
	Enabled       bool `json:"enabled"`
}

type FSHCConfToUpdate

type FSHCConfToUpdate struct {
	TestFileCount *int  `json:"test_files,omitempty"`
	ErrorLimit    *int  `json:"error_limit,omitempty"`
	Enabled       *bool `json:"enabled,omitempty"`
}

type FSPConf

type FSPConf struct {
	Paths cos.StrSet `json:"paths,omitempty" list:"readonly"`
}

func (*FSPConf) MarshalJSON

func (c *FSPConf) MarshalJSON() (data []byte, err error)

func (*FSPConf) UnmarshalJSON

func (c *FSPConf) UnmarshalJSON(data []byte) (err error)

func (*FSPConf) Validate

func (c *FSPConf) Validate(contextConfig *Config) error

type HTTPBckObj

type HTTPBckObj struct {
	Bck        Bck
	ObjName    string
	OrigURLBck string // HTTP URL of the bucket (object name excluded)
}

Represents the AIS bucket, object, and URL associated with an HTTP resource.

func NewHTTPObj

func NewHTTPObj(u *url.URL) *HTTPBckObj

func NewHTTPObjPath

func NewHTTPObjPath(rawURL string) (*HTTPBckObj, error)

type HTTPConf

type HTTPConf struct {
	Proto           string `json:"-"`                 // http or https (set depending on `UseHTTPS`)
	Certificate     string `json:"server_crt"`        // HTTPS: openssl certificate
	Key             string `json:"server_key"`        // HTTPS: openssl key
	WriteBufferSize int    `json:"write_buffer_size"` // http.Transport.WriteBufferSize; zero defaults to 4KB
	ReadBufferSize  int    `json:"read_buffer_size"`  // http.Transport.ReadBufferSize; ditto
	UseHTTPS        bool   `json:"use_https"`         // use HTTPS instead of HTTP
	SkipVerify      bool   `json:"skip_verify"`       // skip HTTPS cert verification (used with self-signed certs)
	Chunked         bool   `json:"chunked_transfer"`  // NOTE: not used Feb 2023
}

type HTTPConfToUpdate

type HTTPConfToUpdate struct {
	Certificate     *string `json:"server_crt,omitempty"`
	Key             *string `json:"server_key,omitempty"`
	WriteBufferSize *int    `json:"write_buffer_size,omitempty" list:"readonly"`
	ReadBufferSize  *int    `json:"read_buffer_size,omitempty" list:"readonly"`
	UseHTTPS        *bool   `json:"use_https,omitempty"`
	SkipVerify      *bool   `json:"skip_verify,omitempty"`
	Chunked         *bool   `json:"chunked_transfer,omitempty"` // https://tools.ietf.org/html/rfc7230#page-36
}

type HreqArgs

type HreqArgs struct {
	BodyR    io.Reader
	Header   http.Header // request headers
	Query    url.Values  // query, e.g. ?a=x&b=y&c=z
	RawQuery string      // raw query
	Method   string
	Base     string // base URL, e.g. http://xyz.abc
	Path     string // path URL, e.g. /x/y/z
	Body     []byte
}

Usage: 1) initialize and fill out an HTTP request; 2) intra-cluster control plane (except streams); 3) PUT and APPEND API. BodyR optimizes out allocations: if non-nil and implementing `io.Closer`, it will always be closed by `client.Do`.

func AllocHra

func AllocHra() (a *HreqArgs)

func (*HreqArgs) Req

func (u *HreqArgs) Req() (*http.Request, error)

func (*HreqArgs) ReqWithCancel

func (u *HreqArgs) ReqWithCancel() (*http.Request, context.Context, context.CancelFunc, error)

ReqWithCancel creates request with ability to cancel it.

func (*HreqArgs) ReqWithTimeout

func (u *HreqArgs) ReqWithTimeout(timeout time.Duration) (*http.Request, context.Context, context.CancelFunc, error)

func (*HreqArgs) URL

func (u *HreqArgs) URL() string

type IterField

type IterField interface {
	Value() any                          // returns the value
	String() string                      // string representation of the value
	SetValue(v any, force ...bool) error // `force` ignores `tagReadonly` (to be used with caution!)
}

Represents a single named field

type IterOpts

type IterOpts struct {
	// Skip fields based on allowed tag
	Allowed string
	// Visits all the fields, not only the leaves.
	VisitAll bool
	// Read-only walk is true by default (compare with `UpdateFieldValue`)
	// Note that `tagOmitempty` is limited to read-only - has no effect when `OnlyRead == false`.
	OnlyRead bool
}

type KeepaliveConf

type KeepaliveConf struct {
	Proxy       KeepaliveTrackerConf `json:"proxy"`  // how proxy tracks target keepalives
	Target      KeepaliveTrackerConf `json:"target"` // how target tracks primary proxies keepalives
	RetryFactor uint8                `json:"retry_factor"`
}

func (*KeepaliveConf) Validate

func (c *KeepaliveConf) Validate() (err error)

type KeepaliveConfToUpdate

type KeepaliveConfToUpdate struct {
	Proxy       *KeepaliveTrackerConfToUpdate `json:"proxy,omitempty"`
	Target      *KeepaliveTrackerConfToUpdate `json:"target,omitempty"`
	RetryFactor *uint8                        `json:"retry_factor,omitempty"`
}

type KeepaliveTrackerConf

type KeepaliveTrackerConf struct {
	Name     string       `json:"name"`     // "heartbeat", "average"
	Interval cos.Duration `json:"interval"` // keepalive interval
	Factor   uint8        `json:"factor"`   // only average
}

Config for a single keepalive tracker. All tracker types share the same struct; not every field is used by every tracker.

type KeepaliveTrackerConfToUpdate

type KeepaliveTrackerConfToUpdate struct {
	Interval *cos.Duration `json:"interval,omitempty"`
	Name     *string       `json:"name,omitempty" list:"readonly"`
	Factor   *uint8        `json:"factor,omitempty"`
}

type L4Conf

type L4Conf struct {
	Proto         string `json:"proto"`           // tcp, udp
	SndRcvBufSize int    `json:"sndrcv_buf_size"` // SO_RCVBUF and SO_SNDBUF
}

type LRUConf

type LRUConf struct {
	// DontEvictTimeStr denotes the period of time during which eviction of an object
	// is forbidden [atime, atime + DontEvictTime]
	DontEvictTime cos.Duration `json:"dont_evict_time"`

	// CapacityUpdTimeStr denotes the frequency at which AIStore updates local capacity utilization
	CapacityUpdTime cos.Duration `json:"capacity_upd_time"`

	// Enabled: LRU will only run when set to true
	Enabled bool `json:"enabled"`
}

func (*LRUConf) String

func (c *LRUConf) String() string

func (*LRUConf) Validate

func (c *LRUConf) Validate() (err error)

type LRUConfToUpdate

type LRUConfToUpdate struct {
	DontEvictTime   *cos.Duration `json:"dont_evict_time,omitempty"`
	CapacityUpdTime *cos.Duration `json:"capacity_upd_time,omitempty"`
	Enabled         *bool         `json:"enabled,omitempty"`
}

type LocalConfig

type LocalConfig struct {
	ConfigDir string         `json:"confdir"`
	LogDir    string         `json:"log_dir"`
	HostNet   LocalNetConfig `json:"host_net"`
	FSP       FSPConf        `json:"fspaths"`
	TestFSP   TestFSPConf    `json:"test_fspaths"`
}

func (*LocalConfig) AddPath

func (c *LocalConfig) AddPath(mpath string)

func (*LocalConfig) DelPath

func (c *LocalConfig) DelPath(mpath string)

func (*LocalConfig) JspOpts

func (*LocalConfig) JspOpts() jsp.Options

func (*LocalConfig) TestingEnv

func (c *LocalConfig) TestingEnv() bool

type LocalNetConfig

type LocalNetConfig struct {
	Hostname             string `json:"hostname"`
	HostnameIntraControl string `json:"hostname_intra_control"`
	HostnameIntraData    string `json:"hostname_intra_data"`
	Port                 int    `json:"port,string"`               // listening port
	PortIntraControl     int    `json:"port_intra_control,string"` // listening port for intra control network
	PortIntraData        int    `json:"port_intra_data,string"`    // listening port for intra data network
	// omit
	UseIntraControl bool `json:"-"`
	UseIntraData    bool `json:"-"`
}

Network config specific to a given node.

func (*LocalNetConfig) Validate

func (c *LocalNetConfig) Validate(contextConfig *Config) (err error)

type LogConf

type LogConf struct {
	Level     string       `json:"level"`      // log level (aka verbosity)
	MaxSize   cos.SizeIEC  `json:"max_size"`   // exceeding this size triggers log rotation
	MaxTotal  cos.SizeIEC  `json:"max_total"`  // (sum individual log sizes); exceeding this number triggers cleanup
	FlushTime cos.Duration `json:"flush_time"` // log flush interval
	StatsTime cos.Duration `json:"stats_time"` // log stats interval (must be a multiple of `PeriodConf.StatsTime`)
}

func (*LogConf) Validate

func (c *LogConf) Validate() error

type LogConfToUpdate

type LogConfToUpdate struct {
	Level     *string       `json:"level,omitempty"`
	MaxSize   *cos.SizeIEC  `json:"max_size,omitempty"`
	MaxTotal  *cos.SizeIEC  `json:"max_total,omitempty"`
	FlushTime *cos.Duration `json:"flush_time,omitempty"`
	StatsTime *cos.Duration `json:"stats_time,omitempty"`
}

type LsoEntries

type LsoEntries []*LsoEntry // separately from (code-generated) objlist* - no need to msgpack

type LsoEntry

type LsoEntry struct {
	Name     string `json:"name" msg:"n"`                            // object name
	Checksum string `json:"checksum,omitempty" msg:"cs,omitempty"`   // checksum
	Atime    string `json:"atime,omitempty" msg:"a,omitempty"`       // last access time; formatted as ListObjsMsg.TimeFormat
	Version  string `json:"version,omitempty" msg:"v,omitempty"`     // e.g., GCP int64 generation, AWS version (string), etc.
	Location string `json:"location,omitempty" msg:"t,omitempty"`    // [tnode:mountpath]
	Custom   string `json:"custom-md,omitempty" msg:"m,omitempty"`   // custom metadata: ETag, MD5, CRC, user-defined ...
	Size     int64  `json:"size,string,omitempty" msg:"s,omitempty"` // size in bytes
	Copies   int16  `json:"copies,omitempty" msg:"c,omitempty"`      // ## copies (NOTE: for non-replicated object copies == 1)
	Flags    uint16 `json:"flags,omitempty" msg:"f,omitempty"`
}

LsoEntry corresponds to a single entry in the LsoResult and contains file and directory metadata as per the ListObjsMsg. `Flags` is a bit field in which bits 0-2 are reserved for object status (all statuses are mutually exclusive).
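Reading a status out of such a bit field is a simple masking operation. An illustrative sketch, assuming (per the note above) the low three bits hold a mutually exclusive status; the mask and the specific status/flag values here are made up, not the package's constants:

```go
// Sketch: extracting a 3-bit status from a Flags-style bit field.
package main

import "fmt"

const (
	statusMask          = 0b0111   // bits 0-2 (illustrative)
	statusOK     uint16 = 0        // illustrative status values
	statusMoved  uint16 = 1
	flagIsCached uint16 = 1 << 6   // an unrelated higher bit
)

func status(flags uint16) uint16 { return flags & statusMask }

func main() {
	flags := statusMoved | flagIsCached
	// Higher bits do not affect the extracted status.
	fmt.Println(status(flags) == statusMoved)
}
```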

func DedupLso

func DedupLso(entries LsoEntries, maxSize uint) ([]*LsoEntry, string)

func (*LsoEntry) CheckExists

func (be *LsoEntry) CheckExists() bool

NOTE: the terms "cached" and "present" are interchangeable ("object is cached" == "is present" and vice versa)

func (*LsoEntry) CopyWithProps

func (be *LsoEntry) CopyWithProps(propsSet cos.StrSet) (ne *LsoEntry)

func (*LsoEntry) DecodeMsg

func (z *LsoEntry) DecodeMsg(dc *msgp.Reader) (err error)

DecodeMsg implements msgp.Decodable

func (*LsoEntry) EncodeMsg

func (z *LsoEntry) EncodeMsg(en *msgp.Writer) (err error)

EncodeMsg implements msgp.Encodable

func (*LsoEntry) IsInsideArch

func (be *LsoEntry) IsInsideArch() bool

func (*LsoEntry) IsStatusOK

func (be *LsoEntry) IsStatusOK() bool

func (*LsoEntry) Msgsize

func (z *LsoEntry) Msgsize() (s int)

Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message

func (*LsoEntry) SetPresent

func (be *LsoEntry) SetPresent()

func (*LsoEntry) Status

func (be *LsoEntry) Status() uint16

func (*LsoEntry) String

func (be *LsoEntry) String() string

type LsoResult

type LsoResult struct {
	UUID              string      `json:"uuid"`
	ContinuationToken string      `json:"continuation_token"`
	Entries           []*LsoEntry `json:"entries"`
	Flags             uint32      `json:"flags"`
}

LsoResult carries the results of `api.ListObjects`, `BackendProvider.ListObjects`, and friends

func ConcatLso

func ConcatLso(lists []*LsoResult, maxSize uint) (objs *LsoResult)

ConcatLso takes a slice of object lists and concatenates them: all lists are appended to the first one. If maxSize is greater than 0, the resulting list is sorted and truncated. Zero or negative maxSize means returning all objects.

func MergeLso

func MergeLso(lists []*LsoResult, maxSize uint) *LsoResult

MergeLso takes a few object lists and merges their content: properties of objects with the same name are merged. The function is used, for example, to merge per-target responses when listing a cloud bucket: each target reads a cloud list page and fills it with the info it has available. The proxy then receives these lists, which contain the same objects, and merges them into a single list with merged information for each object. If maxSize is greater than 0, the resulting list is sorted and truncated. Zero or negative maxSize means returning all objects.
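The merge-by-name idea can be sketched standalone; `Entry` below is a stand-in for `LsoEntry` (one property only), and the helper is illustrative, not the package's implementation:

```go
// Sketch: merging object lists by name — later lists fill in
// properties missing from entries seen earlier.
package main

import "fmt"

type Entry struct {
	Name     string
	Checksum string
}

func mergeByName(lists ...[]Entry) []Entry {
	idx := make(map[string]int) // name -> position in out
	var out []Entry
	for _, list := range lists {
		for _, e := range list {
			if i, ok := idx[e.Name]; ok {
				if out[i].Checksum == "" { // merge a missing property
					out[i].Checksum = e.Checksum
				}
				continue
			}
			idx[e.Name] = len(out)
			out = append(out, e)
		}
	}
	return out
}

func main() {
	a := []Entry{{Name: "obj1"}}                                  // target 1: no checksum
	b := []Entry{{Name: "obj1", Checksum: "xxh"}, {Name: "obj2"}} // target 2
	fmt.Println(mergeByName(a, b))
}
```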

func (*LsoResult) DecodeMsg

func (z *LsoResult) DecodeMsg(dc *msgp.Reader) (err error)

DecodeMsg implements msgp.Decodable

func (*LsoResult) EncodeMsg

func (z *LsoResult) EncodeMsg(en *msgp.Writer) (err error)

EncodeMsg implements msgp.Encodable

func (*LsoResult) Msgsize

func (z *LsoResult) Msgsize() (s int)

Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message

type MemsysConf

type MemsysConf struct {
	MinFree        cos.SizeIEC  `json:"min_free"`
	DefaultBufSize cos.SizeIEC  `json:"default_buf"`
	SizeToGC       cos.SizeIEC  `json:"to_gc"`
	HousekeepTime  cos.Duration `json:"hk_time"`
	MinPctTotal    int          `json:"min_pct_total"`
	MinPctFree     int          `json:"min_pct_free"`
}

func (*MemsysConf) Validate

func (c *MemsysConf) Validate() (err error)

type MemsysConfToUpdate

type MemsysConfToUpdate struct {
	MinFree        *cos.SizeIEC  `json:"min_free,omitempty"`
	DefaultBufSize *cos.SizeIEC  `json:"default_buf,omitempty"`
	SizeToGC       *cos.SizeIEC  `json:"to_gc,omitempty"`
	HousekeepTime  *cos.Duration `json:"hk_time,omitempty"`
	MinPctTotal    *int          `json:"min_pct_total,omitempty"`
	MinPctFree     *int          `json:"min_pct_free,omitempty"`
}

type MirrorConf

type MirrorConf struct {
	Copies  int64 `json:"copies"`       // num copies
	Burst   int   `json:"burst_buffer"` // xaction channel (buffer) size
	Enabled bool  `json:"enabled"`      // enabled (to generate copies)
}

func (*MirrorConf) String

func (c *MirrorConf) String() string

func (*MirrorConf) Validate

func (c *MirrorConf) Validate() error

func (*MirrorConf) ValidateAsProps

func (c *MirrorConf) ValidateAsProps(...any) error

type MirrorConfToUpdate

type MirrorConfToUpdate struct {
	Copies  *int64 `json:"copies,omitempty"`
	Burst   *int   `json:"burst_buffer,omitempty"`
	Enabled *bool  `json:"enabled,omitempty"`
}

type NetConf

type NetConf struct {
	L4   L4Conf   `json:"l4"`
	HTTP HTTPConf `json:"http"`
}

func (*NetConf) Validate

func (c *NetConf) Validate() (err error)

type NetConfToUpdate

type NetConfToUpdate struct {
	HTTP *HTTPConfToUpdate `json:"http,omitempty"`
}

type Ns

type Ns struct {
	// UUID of other remote AIS cluster (for now only used for AIS). Note
	// that we can have different namespaces which refer to same UUID (cluster).
	// This means that in a sense UUID is a parent of the actual namespace.
	UUID string `json:"uuid" yaml:"uuid"`
	// Name uniquely identifies a namespace under the same UUID (which may
	// be empty) and is used in building FQN for the objects.
	Name string `json:"name" yaml:"name"`
}

Ns (or Namespace) adds an additional layer for scoping data under the same provider. It makes it possible to use the same dataset and bucket names under different namespaces, allowing data to be manipulated easily without affecting data in other namespaces.

func ParseNsUname

func ParseNsUname(s string) (n Ns)

Parses [@uuid][#namespace]. It does slightly more than just parse a string produced by `Uname`, so that the logic can be reused in different places.
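For illustration, a minimal standalone re-implementation of the [@uuid][#namespace] parsing (a sketch only; the real `ParseNsUname` may handle edge cases differently):

```go
package main

import (
	"fmt"
	"strings"
)

// ns mirrors cmn.Ns for this sketch.
type ns struct {
	UUID string
	Name string
}

// parseNsUname parses "[@uuid][#namespace]" into its parts:
// both components are optional; UUID is prefixed with '@' and
// the namespace name with '#'.
func parseNsUname(s string) (n ns) {
	if strings.HasPrefix(s, "@") {
		s = s[1:]
		if i := strings.IndexByte(s, '#'); i >= 0 {
			n.UUID, n.Name = s[:i], s[i+1:]
		} else {
			n.UUID = s
		}
		return
	}
	n.Name = strings.TrimPrefix(s, "#")
	return
}

func main() {
	n := parseNsUname("@cluster1#ml-data")
	fmt.Printf("uuid=%s name=%s\n", n.UUID, n.Name) // uuid=cluster1 name=ml-data
}
```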

func (Ns) IsAnyRemote

func (n Ns) IsAnyRemote() bool

func (Ns) IsGlobal

func (n Ns) IsGlobal() bool

func (Ns) IsRemote

func (n Ns) IsRemote() bool

func (Ns) String

func (n Ns) String() (res string)

func (Ns) Uname

func (n Ns) Uname() string

type OWT

type OWT int

Object Write Transaction (OWT) is used to control some of the aspects of creating new objects in the cluster. In particular, OwtGet* group below simultaneously specifies cold-GET variations (that all involve reading from a remote backend) and the associated locking (that will always reflect a tradeoff between consistency and parallelism)

const (
	OwtPut             OWT = iota // PUT
	OwtMigrate                    // migrate or replicate objects within cluster (e.g. global rebalance)
	OwtPromote                    // promote target-accessible files and directories
	OwtFinalize                   // finalize object archives
	OwtGetTryLock                 // if !try-lock(exclusive) { return error }; read from remote; ...
	OwtGetLock                    // lock(exclusive); read from remote; ...
	OwtGet                        // GET (with upgrading read-lock in the local-write path)
	OwtGetPrefetchLock            // (used for maximum parallelism when prefetching)
)
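The `FromS`/`String` (and `ToS`) pair below follows the usual iota-enum round-trip pattern, sketched here with invented string values (the actual OWT names are an implementation detail and may differ):

```go
package main

import "fmt"

type owt int

const (
	owtPut owt = iota
	owtMigrate
	owtGet
)

// owtNames: illustrative only; the real cmn.OWT string values may differ.
var owtNames = [...]string{"put", "migrate", "get"}

// String maps the enum value to its name.
func (o owt) String() string { return owtNames[o] }

// fromS performs the reverse lookup, name => enum value.
func (o *owt) fromS(s string) {
	for i, n := range owtNames {
		if n == s {
			*o = owt(i)
			return
		}
	}
}

func main() {
	var o owt
	o.fromS("migrate")
	fmt.Println(o) // migrate
}
```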

func (*OWT) FromS

func (owt *OWT) FromS(s string)

func (OWT) String

func (owt OWT) String() (s string)

func (OWT) ToS

func (owt OWT) ToS() (s string)

type ObjAttrs

type ObjAttrs struct {
	Cksum    *cos.Cksum `json:"checksum,omitempty"`  // object checksum (cloned)
	CustomMD cos.StrKVs `json:"custom-md,omitempty"` // custom metadata: ETag, MD5, CRC, user-defined ...
	Ver      string     `json:"version,omitempty"`   // object version
	Atime    int64      `json:"atime,omitempty"`     // access time (nanoseconds since UNIX epoch)
	Size     int64      `json:"size,omitempty"`      // object size (bytes)
}

see also apc.HdrObjAtime et al. @ api/apc/const.go (and note that naming must be consistent)

func (*ObjAttrs) AtimeUnix

func (oa *ObjAttrs) AtimeUnix() int64

func (*ObjAttrs) Checksum

func (oa *ObjAttrs) Checksum() *cos.Cksum

func (*ObjAttrs) CopyFrom

func (oa *ObjAttrs) CopyFrom(oah cos.OAH, skipCksum ...bool)

clone OAH => ObjAttrs (see also lom.CopyAttrs)

func (*ObjAttrs) DelCustomKeys

func (oa *ObjAttrs) DelCustomKeys(keys ...string)

func (*ObjAttrs) Equal

func (oa *ObjAttrs) Equal(rem cos.OAH) (equal bool)

local <=> remote equality in the context of cold-GET and download. This function decides whether we need to go ahead and re-read the object from its remote location.

Other than the "binary" size and version checks, the logic goes as follows: objects are considered equal if they have a) the same version and at least one matching checksum, b) the same remote "source" and at least one matching checksum, or c) two matching checksums. (See also the note below.)

Note that a mismatch in any given checksum type immediately renders the objects unequal and returns from the function.
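The equality rules can be sketched in isolation (reduced types; the hypothetical `equal` below is not the actual `ObjAttrs.Equal`, which additionally performs the "binary" size/version checks and more):

```go
package main

import "fmt"

// attrs is a reduced stand-in for object attributes: a version,
// a remote "source" tag, and named checksums.
type attrs struct {
	Version string
	Source  string
	Cksums  map[string]string // checksum-type => value
}

// equal follows the rules described above: any mismatch in a shared
// checksum type means "not equal"; otherwise objects are equal if they
// share (a) version + at least one matching checksum, (b) source + at
// least one matching checksum, or (c) two matching checksums.
func equal(loc, rem attrs) bool {
	matches := 0
	for ty, v := range loc.Cksums {
		if rv, ok := rem.Cksums[ty]; ok {
			if rv != v {
				return false // any checksum mismatch => not equal
			}
			matches++
		}
	}
	switch {
	case matches >= 2:
		return true
	case matches >= 1 && loc.Version != "" && loc.Version == rem.Version:
		return true
	case matches >= 1 && loc.Source != "" && loc.Source == rem.Source:
		return true
	}
	return false
}

func main() {
	loc := attrs{Version: "v1", Cksums: map[string]string{"md5": "x"}}
	rem := attrs{Version: "v1", Cksums: map[string]string{"md5": "x"}}
	fmt.Println(equal(loc, rem)) // true
}
```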

func (*ObjAttrs) FromHeader

func (oa *ObjAttrs) FromHeader(hdr http.Header) (cksum *cos.Cksum)

NOTE: returning checksum separately for subsequent validation

func (*ObjAttrs) GetCustomKey

func (oa *ObjAttrs) GetCustomKey(key string) (val string, exists bool)

func (*ObjAttrs) GetCustomMD

func (oa *ObjAttrs) GetCustomMD() cos.StrKVs

func (*ObjAttrs) SetCksum

func (oa *ObjAttrs) SetCksum(ty, val string)

func (*ObjAttrs) SetCustomKey

func (oa *ObjAttrs) SetCustomKey(k, v string)

func (*ObjAttrs) SetCustomMD

func (oa *ObjAttrs) SetCustomMD(md cos.StrKVs)

func (*ObjAttrs) SetSize

func (oa *ObjAttrs) SetSize(size int64)

func (*ObjAttrs) SizeBytes

func (oa *ObjAttrs) SizeBytes(_ ...bool) int64

func (*ObjAttrs) String

func (oa *ObjAttrs) String() string

func (*ObjAttrs) Version

func (oa *ObjAttrs) Version(_ ...bool) string

type ObjectProps

type ObjectProps struct {
	Bck Bck `json:"bucket"`
	ObjAttrs
	Name     string `json:"name"`
	Location string `json:"location"` // see also `GetPropsLocation`
	Mirror   struct {
		Paths  []string `json:"paths,omitempty"`
		Copies int      `json:"copies,omitempty"`
	} `json:"mirror"`
	EC struct {
		Generation   int64 `json:"generation"`
		DataSlices   int   `json:"data"`
		ParitySlices int   `json:"parity"`
		IsECCopy     bool  `json:"replicated"`
	} `json:"ec"`
	Present bool `json:"present"`
}

Object properties. NOTE: embeds the system `ObjAttrs`, which in turn includes custom user-defined metadata. NOTE: compare with `apc.LsoMsg`.

type ParseURIOpts

type ParseURIOpts struct {
	DefaultProvider string // If set, used as the default provider.
	IsQuery         bool   // Determines whether the URI should be parsed as a query.
}

type PeriodConf

type PeriodConf struct {
	StatsTime     cos.Duration `json:"stats_time"`      // collect and publish stats; other house-keeping
	RetrySyncTime cos.Duration `json:"retry_sync_time"` // metasync retry
	NotifTime     cos.Duration `json:"notif_time"`      // (IC notifications)
}

NOTE: StatsTime is an important timer

func (*PeriodConf) Validate

func (c *PeriodConf) Validate() error

type PeriodConfToUpdate

type PeriodConfToUpdate struct {
	StatsTime     *cos.Duration `json:"stats_time,omitempty"`
	RetrySyncTime *cos.Duration `json:"retry_sync_time,omitempty"`
	NotifTime     *cos.Duration `json:"notif_time,omitempty"`
}

type PropsValidator

type PropsValidator interface {
	ValidateAsProps(arg ...any) error
}

type ProxyConf

type ProxyConf struct {
	PrimaryURL   string `json:"primary_url"`
	OriginalURL  string `json:"original_url"`
	DiscoveryURL string `json:"discovery_url"`
	NonElectable bool   `json:"non_electable"`
}

type ProxyConfToUpdate

type ProxyConfToUpdate struct {
	PrimaryURL   *string `json:"primary_url,omitempty"`
	OriginalURL  *string `json:"original_url,omitempty"`
	DiscoveryURL *string `json:"discovery_url,omitempty"`
	NonElectable *bool   `json:"non_electable,omitempty"`
}

type QueryBcks

type QueryBcks Bck

func (*QueryBcks) AddToQuery

func (qbck *QueryBcks) AddToQuery(query url.Values) url.Values

func (QueryBcks) Contains

func (qbck QueryBcks) Contains(other *Bck) bool

func (*QueryBcks) DisplayProvider

func (qbck *QueryBcks) DisplayProvider() string

func (QueryBcks) Equal

func (qbck QueryBcks) Equal(bck *Bck) bool

func (*QueryBcks) IsAIS

func (qbck *QueryBcks) IsAIS() bool

func (*QueryBcks) IsBucket

func (qbck *QueryBcks) IsBucket() bool

QueryBcks is a Bck that _can_ have an empty Name. (TODO: extend to support prefix and regex.)

func (*QueryBcks) IsCloud

func (qbck *QueryBcks) IsCloud() bool

func (*QueryBcks) IsEmpty

func (qbck *QueryBcks) IsEmpty() bool

func (*QueryBcks) IsHDFS

func (qbck *QueryBcks) IsHDFS() bool

func (*QueryBcks) IsHTTP

func (qbck *QueryBcks) IsHTTP() bool

func (*QueryBcks) IsRemoteAIS

func (qbck *QueryBcks) IsRemoteAIS() bool

func (QueryBcks) String

func (qbck QueryBcks) String() string

func (*QueryBcks) Validate

func (qbck *QueryBcks) Validate() (err error)

type RebalanceConf

type RebalanceConf struct {
	Compression   string       `json:"compression"`       // enum { CompressAlways, ... } in api/apc/compression.go
	DestRetryTime cos.Duration `json:"dest_retry_time"`   // max wait for ACKs & neighbors to complete
	SbundleMult   int          `json:"bundle_multiplier"` // stream-bundle multiplier: num streams to destination
	Enabled       bool         `json:"enabled"`           // true=auto-rebalance | manual rebalancing
}

func (*RebalanceConf) String

func (c *RebalanceConf) String() string

func (*RebalanceConf) Validate

func (c *RebalanceConf) Validate() error

type RebalanceConfToUpdate

type RebalanceConfToUpdate struct {
	DestRetryTime *cos.Duration `json:"dest_retry_time,omitempty"`
	Compression   *string       `json:"compression,omitempty"`
	SbundleMult   *int          `json:"bundle_multiplier"`
	Enabled       *bool         `json:"enabled,omitempty"`
}

type ResilverConf

type ResilverConf struct {
	Enabled bool `json:"enabled"` // true=auto-resilver | manual resilvering
}

func (*ResilverConf) String

func (c *ResilverConf) String() string

func (*ResilverConf) Validate

func (*ResilverConf) Validate() error

type ResilverConfToUpdate

type ResilverConfToUpdate struct {
	Enabled *bool `json:"enabled,omitempty"`
}

type RetryArgs

type RetryArgs struct {
	Call    func() (int, error)
	IsFatal func(error) bool

	Action string
	Caller string

	SoftErr uint // How many retries on a ConnectionRefused or ConnectionReset error.
	HardErr uint // How many retries on any other error.
	Sleep   time.Duration

	Verbosity int  // Determines the verbosity level.
	BackOff   bool // Whether requests should be retried less and less often.
	IsClient  bool // true: client (e.g. dev tools, etc.)
}

type SpaceConf

type SpaceConf struct {
	// Storage Cleanup watermark: used capacity (%) that triggers cleanup
	// (deleted objects and buckets, extra copies, etc.)
	CleanupWM int64 `json:"cleanupwm"`

	// LowWM: used capacity low-watermark (% of total local storage capacity)
	LowWM int64 `json:"lowwm"`

	// HighWM: used capacity high-watermark (% of total local storage capacity)
	// - LRU starts evicting objects when the currently used capacity (used-cap) gets above HighWM
	// - and keeps evicting objects until the used-cap gets below LowWM
	// - while self-throttling itself in accordance with target utilization
	HighWM int64 `json:"highwm"`

	// Out-of-Space: if exceeded, the target starts failing new PUTs and keeps
	// failing them until its local used-cap gets back below HighWM (see above)
	OOS int64 `json:"out_of_space"`
}
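The watermark semantics above can be summarized in a small decision sketch (the `spaceAction` helper and the CleanupWM <= LowWM < HighWM < OOS ordering in the example are assumptions for illustration; the real behavior lives in the cluster):

```go
package main

import "fmt"

// spaceAction decides what a target does at a given used-capacity
// percentage, per the watermark semantics described above. LowWM is
// the eviction stop-point: LRU keeps evicting until used capacity
// drops below it.
func spaceAction(usedPct, cleanupWM, lowWM, highWM, oos int64) string {
	switch {
	case usedPct >= oos:
		return "out-of-space: fail new PUTs"
	case usedPct >= highWM:
		return "evict (LRU) until used capacity drops below LowWM"
	case usedPct >= cleanupWM:
		return "cleanup deleted objects, extra copies, etc."
	default:
		return "ok"
	}
}

func main() {
	// Assumed ordering: CleanupWM(65) <= LowWM(75) < HighWM(90) < OOS(95).
	fmt.Println(spaceAction(96, 65, 75, 90, 95))
	fmt.Println(spaceAction(50, 65, 75, 90, 95))
}
```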

func (*SpaceConf) String

func (c *SpaceConf) String() string

func (*SpaceConf) Validate

func (c *SpaceConf) Validate() (err error)

func (*SpaceConf) ValidateAsProps

func (c *SpaceConf) ValidateAsProps(...any) error

type SpaceConfToUpdate

type SpaceConfToUpdate struct {
	CleanupWM *int64 `json:"cleanupwm,omitempty"`
	LowWM     *int64 `json:"lowwm,omitempty"`
	HighWM    *int64 `json:"highwm,omitempty"`
	OOS       *int64 `json:"out_of_space,omitempty"`
}

type TCBConf

type TCBConf struct {
	Compression string `json:"compression"`       // enum { CompressAlways, ... } in api/apc/compression.go
	SbundleMult int    `json:"bundle_multiplier"` // stream-bundle multiplier: num streams to destination
}

func (*TCBConf) Validate

func (c *TCBConf) Validate() error

type TCBConfToUpdate

type TCBConfToUpdate struct {
	Compression *string `json:"compression,omitempty"`
	SbundleMult *int    `json:"bundle_multiplier,omitempty"`
}

type TCObjsMsg

type TCObjsMsg struct {
	ToBck Bck `json:"tobck"`
	apc.TCObjsMsg
}

Multi-object copy & transform (see also: TCBMsg)

type TestFSPConf

type TestFSPConf struct {
	Root     string `json:"root"`
	Count    int    `json:"count"`
	Instance int    `json:"instance"`
}

func (*TestFSPConf) Validate

func (c *TestFSPConf) Validate(contextConfig *Config) (err error)

Validates the root and (NOTE: testing only) generates and fills in the counted FSP.Paths.

func (*TestFSPConf) ValidateMpath

func (c *TestFSPConf) ValidateMpath(p string) (err error)

type TimeoutConf

type TimeoutConf struct {
	CplaneOperation cos.Duration `json:"cplane_operation"` // readonly - change requires restart
	MaxKeepalive    cos.Duration `json:"max_keepalive"`    // ditto (see `timeout struct` below)
	MaxHostBusy     cos.Duration `json:"max_host_busy"`
	Startup         cos.Duration `json:"startup_time"`
	JoinAtStartup   cos.Duration `json:"join_startup_time"` // (join cluster at startup) timeout
	SendFile        cos.Duration `json:"send_file_time"`
}

maximum intra-cluster latencies (in the increasing order)

func (*TimeoutConf) Validate

func (c *TimeoutConf) Validate() error

type TimeoutConfToUpdate

type TimeoutConfToUpdate struct {
	CplaneOperation *cos.Duration `json:"cplane_operation,omitempty"`
	MaxKeepalive    *cos.Duration `json:"max_keepalive,omitempty"`
	MaxHostBusy     *cos.Duration `json:"max_host_busy,omitempty"`
	Startup         *cos.Duration `json:"startup_time,omitempty"`
	JoinAtStartup   *cos.Duration `json:"join_startup_time,omitempty"`
	SendFile        *cos.Duration `json:"send_file_time,omitempty"`
}

type TransportArgs

type TransportArgs struct {
	DialTimeout      time.Duration
	Timeout          time.Duration
	IdleConnTimeout  time.Duration
	IdleConnsPerHost int
	MaxIdleConns     int
	SndRcvBufSize    int
	WriteBufferSize  int
	ReadBufferSize   int
	UseHTTPS         bool
	UseHTTPProxyEnv  bool
	// For HTTPS mode only: if true, the client does not verify server's
	// certificate. It is useful for clusters with self-signed certificates.
	SkipVerify bool
}

Options to create a transport for HTTP client

func (*TransportArgs) ConnControl

func (args *TransportArgs) ConnControl(_ syscall.RawConn) (cntl func(fd uintptr))

type TransportConf

type TransportConf struct {
	MaxHeaderSize int `json:"max_header"`   // max transport header buffer (default=4K)
	Burst         int `json:"burst_buffer"` // num sends with no back pressure; see also AIS_STREAM_BURST_NUM
	// two no-new-transmissions durations:
	// * IdleTeardown: sender terminates the connection (to reestablish it upon the very first/next PDU)
	// * QuiesceTime:  safe to terminate or transition to the next (in re: rebalance) stage
	IdleTeardown cos.Duration `json:"idle_teardown"`
	QuiesceTime  cos.Duration `json:"quiescent"`
	// lz4
	// max uncompressed block size, one of [64K, 256K(*), 1M, 4M]
	// fastcompression.blogspot.com/2013/04/lz4-streaming-format-final.html
	LZ4BlockMaxSize  cos.SizeIEC `json:"lz4_block"`
	LZ4FrameChecksum bool        `json:"lz4_frame_checksum"`
}

func (*TransportConf) Validate

func (c *TransportConf) Validate() (err error)

NOTE: uncompressed block sizes - the enum currently supported by github.com/pierrec/lz4

type TransportConfToUpdate

type TransportConfToUpdate struct {
	MaxHeaderSize    *int          `json:"max_header,omitempty" list:"readonly"`
	Burst            *int          `json:"burst_buffer,omitempty" list:"readonly"`
	IdleTeardown     *cos.Duration `json:"idle_teardown,omitempty"`
	QuiesceTime      *cos.Duration `json:"quiescent,omitempty"`
	LZ4BlockMaxSize  *cos.SizeIEC  `json:"lz4_block,omitempty"`
	LZ4FrameChecksum *bool         `json:"lz4_frame_checksum,omitempty"`
}

type Validator

type Validator interface {
	Validate() error
}

type VersionConf

type VersionConf struct {
	// Determines if the versioning is enabled.
	Enabled bool `json:"enabled"`

	// Validate object version upon warm GET.
	ValidateWarmGet bool `json:"validate_warm_get"`
}

func (*VersionConf) String

func (c *VersionConf) String() string

func (*VersionConf) Validate

func (c *VersionConf) Validate() error

type VersionConfToUpdate

type VersionConfToUpdate struct {
	Enabled         *bool `json:"enabled,omitempty"`
	ValidateWarmGet *bool `json:"validate_warm_get,omitempty"`
}

type WritePolicyConf

type WritePolicyConf struct {
	Data apc.WritePolicy `json:"data"`
	MD   apc.WritePolicy `json:"md"`
}

func (*WritePolicyConf) Validate

func (c *WritePolicyConf) Validate() (err error)

func (*WritePolicyConf) ValidateAsProps

func (c *WritePolicyConf) ValidateAsProps(...any) error

type WritePolicyConfToUpdate

type WritePolicyConfToUpdate struct {
	Data *apc.WritePolicy `json:"data,omitempty" list:"readonly"` // NOTE: NIY
	MD   *apc.WritePolicy `json:"md,omitempty"`
}

Directories

Path Synopsis
Package archive: write, read, copy, append, list primitives across all supported formats
Package atomic provides simple wrappers around numerics to enforce atomic access.
Package cos provides common low-level types and utilities for all aistore projects.
Package debug provides debug utilities.
Package feat: global runtime-configurable feature flags
Package fname contains filename constants and common system directories
Package jsp (JSON persistence) provides utilities to store and load arbitrary JSON-encoded structures with optional checksumming and compression.
Package k8s provides utilities for communicating with Kubernetes cluster.
Package kvdb provides a local key/value database server for AIS.
Package mono provides low-level monotonic time
Package prob implements a fully featured dynamic probabilistic filter.
Package xoshiro256 implements the xoshiro256** RNG (no copyright)
