nn

package
v0.2.1
Published: Aug 14, 2021 License: Apache-2.0 Imports: 9 Imported by: 0

Documentation

Index

Constants

const SEP = "."

SEP is a separator to separate path elements in the tensor names.

Variables

This section is empty.

Functions

func BCELoss

func BCELoss(logits, target *ts.Tensor, opts ...LossFnOption) *ts.Tensor

BCELoss calculates a binary cross entropy loss.

- logits: tensor of shape [B, C, H, W] corresponding to the raw output of the model.
- target: ground truth tensor of shape [B, 1, H, W].
- posWeight: scalar representing the weight attributed to the positive class; especially useful for an imbalanced dataset.
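
A minimal usage sketch; `logits` and `target` are assumed to already exist with the shapes described above, and the weight value is arbitrary:

// assuming logits [B, C, H, W] and target [B, 1, H, W] from the model and dataset
loss := nn.BCELoss(logits, target, nn.WithLossFnPosWeight(3))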

func BatchAccuracyForLogits

func BatchAccuracyForLogits(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

BatchAccuracyForLogits calculates average accuracy of test batches.

NOTE: PyTorch uses `NoGradGuard`, a thread-local scope that sets a global flag checked by the backend whenever an op is run on a variable. The guard saves the current status and sets it to false in its constructor, then restores the saved status in its destructor; this makes it similar to a `with torch.no_grad():` block in Python. This mechanism does not seem to work in Go. There are two ways to get around it: one is to freeze the VarStore, the other is to manually set AutoGrad on the `loss` tensor, i.e. `loss = loss.MustSetRequiresGrad(true)`.
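
A hedged sketch of the two workarounds; names such as `model`, `testImages` and `testLabels` are illustrative, not part of this package:

// Option 1: freeze the var store so gradients are not tracked during evaluation.
vs.Freeze()
acc := nn.BatchAccuracyForLogits(vs, model, testImages, testLabels, vs.Device(), 64)
vs.Unfreeze()

// Option 2: during training, explicitly re-enable gradient tracking on the loss tensor.
loss = loss.MustSetRequiresGrad(true)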

func BatchAccuracyForLogitsIdx

func BatchAccuracyForLogitsIdx(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

BatchAccuracyForLogitsIdx is an alternative to BatchAccuracyForLogits that calculates accuracy for a specified batch using the module weights. It uses tensor indexing instead of Iter2.

func BatchAccuracyForLogitsOld

func BatchAccuracyForLogitsOld(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

func CrossEntropyLoss

func CrossEntropyLoss(logits, target *ts.Tensor, opts ...LossFnOption) *ts.Tensor

CrossEntropyLoss calculates cross entropy loss. Ref. https://github.com/pytorch/pytorch/blob/15be189f0de4addf4f68d18022500f67617ab05d/torch/nn/functional.py#L2012
- logits: tensor of shape [B, C, H, W] corresponding to the raw output of the model.
- target: ground truth tensor of shape [B, 1, H, W].
- posWeight: scalar representing the weight attributed to the positive class; especially useful for an imbalanced dataset.

func NewConstInit

func NewConstInit(v float64) constInit

func NewGlorotNInit

func NewGlorotNInit() glorotNInit

func NewKaimingUniformInit

func NewKaimingUniformInit() kaimingUniformInit

func NewRandnInit

func NewRandnInit(mean, stdev float64) randnInit

func NewUniformInit

func NewUniformInit(lo, up float64) uniformInit

func WithUint8

func WithUint8(n uint8) func() uint8

WithUint8 returns a uint8 value option.

Types

type AdamConfig

type AdamConfig struct {
	Beta1 float64
	Beta2 float64
	Wd    float64
}

func DefaultAdamConfig

func DefaultAdamConfig() *AdamConfig

DefaultAdamConfig creates AdamConfig with default values

func NewAdamConfig

func NewAdamConfig(beta1, beta2, wd float64) *AdamConfig

NewAdamConfig creates AdamConfig with specified values

func (*AdamConfig) Build

func (c *AdamConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)
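
A minimal sketch of building an optimizer from a config; the var store `vs` and the learning rate are assumptions for illustration:

opt, err := nn.DefaultAdamConfig().Build(vs, 1e-3)
if err != nil {
	panic(err) // handle the error appropriately
}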

type AdamWConfig

type AdamWConfig struct {
	Beta1 float64
	Beta2 float64
	Wd    float64
}

func DefaultAdamWConfig

func DefaultAdamWConfig() *AdamWConfig

DefaultAdamWConfig creates AdamWConfig with default values

func NewAdamWConfig

func NewAdamWConfig(beta1, beta2, wd float64) *AdamWConfig

NewAdamWConfig creates AdamWConfig with specified values

func (*AdamWConfig) Build

func (c *AdamWConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)

Build builds AdamW optimizer

type BatchNorm

type BatchNorm struct {
	RunningMean *ts.Tensor
	RunningVar  *ts.Tensor
	Ws          *ts.Tensor
	Bs          *ts.Tensor
	Nd          uint
	// contains filtered or unexported fields
}

A batch-normalization layer.

func BatchNorm1D

func BatchNorm1D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a three-dimensional input.

The input shape is assumed to be (N, C, L). Normalization is performed over the first batch dimension N.

func BatchNorm2D

func BatchNorm2D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a four-dimensional input.

The input shape is assumed to be (N, C, H, W). Normalization is performed over the first batch dimension N.

func BatchNorm3D

func BatchNorm3D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a five-dimensional input.

The input shape is assumed to be (N, C, D, H, W). Normalization is performed over the first batch dimension N.

func NewBatchNorm

func NewBatchNorm(vs *Path, nd uint, outDim int64, config *BatchNormConfig) *BatchNorm

NewBatchNorm creates a new BatchNorm layer

func (*BatchNorm) ForwardT

func (bn *BatchNorm) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

type BatchNormConfig

type BatchNormConfig struct {
	CudnnEnable bool
	Eps         float64
	Momentum    float64
	WsInit      Init
	BsInit      Init
}

Batch-normalization config.

func DefaultBatchNormConfig

func DefaultBatchNormConfig() *BatchNormConfig

type Conv

type Conv interface{}

func NewConv

func NewConv(vs *Path, inDim, outDim int64, ksizes []int64, config interface{}) Conv

NewConv is a generic builder to build Conv1D, Conv2D, Conv3D. It returns an interface Conv which might need a type assertion for further use.
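
For example, a hedged sketch assuming a 3x3 kernel together with a *Conv2DConfig selects the 2-D case, so the returned value can be asserted to *Conv2D; the input tensor `xs` is illustrative:

conv := nn.NewConv(vs.Root(), 3, 16, []int64{3, 3}, nn.DefaultConv2DConfig())
c2d := conv.(*nn.Conv2D) // type assertion needed before calling Forward
out := c2d.Forward(xs)   // xs is assumed to have shape [B, 3, H, W]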

type Conv1D

type Conv1D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv1DConfig
}

func NewConv1D

func NewConv1D(vs *Path, inDim, outDim, k int64, cfg *Conv1DConfig) *Conv1D

func (*Conv1D) Forward

func (c *Conv1D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv1D) ForwardT

func (c *Conv1D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv1DConfig

type Conv1DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

func DefaultConv1DConfig

func DefaultConv1DConfig() *Conv1DConfig

DefaultConv1DConfig creates a default Conv1DConfig.

type Conv2D

type Conv2D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv2DConfig
}

func NewConv2D

func NewConv2D(vs *Path, inDim, outDim int64, k int64, cfg *Conv2DConfig) *Conv2D

func (*Conv2D) Forward

func (c *Conv2D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv2D) ForwardT

func (c *Conv2D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv2DConfig

type Conv2DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

func DefaultConv2DConfig

func DefaultConv2DConfig() *Conv2DConfig

DefaultConv2DConfig creates a default Conv2DConfig.

type Conv3D

type Conv3D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv3DConfig
}

func NewConv3D

func NewConv3D(vs *Path, inDim, outDim, k int64, cfg *Conv3DConfig) *Conv3D

func (*Conv3D) Forward

func (c *Conv3D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv3D) ForwardT

func (c *Conv3D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv3DConfig

type Conv3DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

type ConvTranspose1D

type ConvTranspose1D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose1DConfig
}

func NewConvTranspose1D

func NewConvTranspose1D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose1DConfig) *ConvTranspose1D

func (*ConvTranspose1D) Forward

func (c *ConvTranspose1D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose1DConfig

type ConvTranspose1DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

func DefaultConvTranspose1DConfig

func DefaultConvTranspose1DConfig() *ConvTranspose1DConfig

DefaultConvTranspose1DConfig creates a default ConvTranspose1DConfig.

type ConvTranspose2D

type ConvTranspose2D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose2DConfig
}

func NewConvTranspose2D

func NewConvTranspose2D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose2DConfig) *ConvTranspose2D

func (*ConvTranspose2D) Forward

func (c *ConvTranspose2D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose2DConfig

type ConvTranspose2DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

type ConvTranspose3D

type ConvTranspose3D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose3DConfig
}

func NewConvTranspose3D

func NewConvTranspose3D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose3DConfig) *ConvTranspose3D

func (*ConvTranspose3D) Forward

func (c *ConvTranspose3D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose3DConfig

type ConvTranspose3DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

type CosineAnnealingLR

type CosineAnnealingLR struct {
	// contains filtered or unexported fields
}

CosineAnnealingLR sets the learning rates of each optimizer parameter group using a cosine annealing schedule, where eta_max is set to the initial learning rate and T_cur is the number of epochs since the last restart in SGDR (Stochastic Gradient Descent with Warm Restarts).

NOTE: this implements only the cosine annealing part of SGDR, not the restarts.

Ref.:
- https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.CosineAnnealingLR
- https://arxiv.org/abs/1608.03983

func NewCosineAnnealingLR

func NewCosineAnnealingLR(opt *Optimizer, tmax int, etaMin float64) *CosineAnnealingLR

NewCosineAnnealingLR creates a new CosineAnnealingLR.

func (*CosineAnnealingLR) Build

func (ca *CosineAnnealingLR) Build() *LRScheduler

Build implements scheduler interface.
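
A sketch of the typical flow, under the assumption that `opt` is an existing *Optimizer and Step is called once per epoch:

scheduler := nn.NewCosineAnnealingLR(opt, 100, 0.0).Build()
for epoch := 0; epoch < 100; epoch++ {
	// ... run one training epoch ...
	scheduler.Step()
}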

func (*CosineAnnealingLR) SetLRs

func (ca *CosineAnnealingLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type CosineAnnealingWarmRestarts

type CosineAnnealingWarmRestarts struct {
	// contains filtered or unexported fields
}

CosineAnnealingWarmRestarts sets the learning rate of each parameter group using a cosine annealing schedule.

Source: Stochastic Gradient Descent with Warm Restarts: https://arxiv.org/abs/1608.03983

func (*CosineAnnealingWarmRestarts) Build

func (s *CosineAnnealingWarmRestarts) Build() *LRScheduler

Build implements scheduler interface.

func (*CosineAnnealingWarmRestarts) SetLRs

func (s *CosineAnnealingWarmRestarts) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

NOTE. scheduler.Step(epoch) could be called after every batch update

type CosineAnnealingWarmRestartsOption

type CosineAnnealingWarmRestartsOption func(*CosineAnnealingWarmRestartsOptions)

func WithCosineAnnealingLastEpoch

func WithCosineAnnealingLastEpoch(v int) CosineAnnealingWarmRestartsOption

type CosineAnnealingWarmRestartsOptions

type CosineAnnealingWarmRestartsOptions struct {
	TMult     int
	EtaMin    float64
	LastEpoch int
}

type CyclicLR

type CyclicLR struct {
	// contains filtered or unexported fields
}

CyclicLR sets the learning rate of each parameter group according to the cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper "Cyclical Learning Rates for Training Neural Networks". The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.

Cyclical learning rate policy changes the learning rate after every batch; `Step()` should be called after a batch has been used for training. This class has three built-in policies, as put forth in the paper:
- "triangular": a basic triangular cycle without amplitude scaling.
- "triangular2": a basic triangular cycle that scales the initial amplitude by half each cycle.
- "exp_range": a cycle that scales the initial amplitude by gamma^(cycle iterations) at each cycle iteration.

Source:
- Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186
- bckenstler/CLR: https://github.com/bckenstler/CLR

func NewCyclicLR

func NewCyclicLR(opt *Optimizer, baseLRs, maxLRs []float64, opts ...CyclicOption) *CyclicLR
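
A hedged sketch with a single parameter group; the boundary values and options are illustrative only:

cyc := nn.NewCyclicLR(opt, []float64{1e-4}, []float64{1e-2},
	nn.WithCyclicMode("triangular2"), nn.WithCyclicStepSizeUp(2000))
scheduler := cyc.Build()
// Step() is intended to be called after every training batch.
scheduler.Step()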

func (*CyclicLR) Build

func (cyc *CyclicLR) Build() *LRScheduler

Build implements scheduler interface.

func (*CyclicLR) SetLRs

func (cyc *CyclicLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

It calculates the learning rate at a given batch index; this function treats `lastEpoch` as the last batch index. NOTE: if `cycleMomentum` is `true`, this function has a side effect of updating the optimizer's momentum.

type CyclicOption

type CyclicOption func(*CyclicOptions)

func WithCyclicBaseMomentum

func WithCyclicBaseMomentum(v float64) CyclicOption

func WithCyclicCycleMomentum

func WithCyclicCycleMomentum(v bool) CyclicOption

func WithCyclicGamma

func WithCyclicGamma(v float64) CyclicOption

func WithCyclicLastEpoch

func WithCyclicLastEpoch(v int) CyclicOption

func WithCyclicMaxMomentum

func WithCyclicMaxMomentum(v float64) CyclicOption

func WithCyclicMode

func WithCyclicMode(v string) CyclicOption

func WithCyclicScaleFn

func WithCyclicScaleFn(v func(x float64) float64) CyclicOption

func WithCyclicScaleMode

func WithCyclicScaleMode(v string) CyclicOption

func WithCyclicStepSizeDown

func WithCyclicStepSizeDown(v int) CyclicOption

func WithCyclicStepSizeUp

func WithCyclicStepSizeUp(v int) CyclicOption

type CyclicOptions

type CyclicOptions struct {
	StepSizeUp    int                     // 2000
	StepSizeDown  int                     // -1
	Mode          string                  // "triangular"
	Gamma         float64                 // 1.0
	ScaleFn       func(x float64) float64 // nil
	ScaleMode     string                  // "cycle"
	CycleMomentum bool                    // true
	BaseMomentum  float64                 // 0.8
	MaxMomentum   float64                 // 0.9
	LastEpoch     int                     // -1
}

type Embedding

type Embedding struct {
	Ws *ts.Tensor
	// contains filtered or unexported fields
}

An embedding layer.

An embedding layer acts as a simple lookup table that stores embeddings. This is commonly used to store word embeddings.
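
A minimal sketch; the vocabulary size, embedding dimension and the `ids` index tensor are assumptions:

emb := nn.NewEmbedding(vs.Root().Sub("embed"), 10000, 128, nn.DefaultEmbeddingConfig())
// ids is assumed to be an int64 tensor of token indices; the result holds one
// 128-dimensional vector per index.
vectors := emb.Forward(ids)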

func NewEmbedding

func NewEmbedding(vs *Path, numEmbeddings int64, embeddingDim int64, config *EmbeddingConfig) *Embedding

NewEmbedding creates a new Embedding

func (*Embedding) Forward

func (e *Embedding) Forward(xs *ts.Tensor) *ts.Tensor

Forward implements Module interface for Embedding

func (*Embedding) ForwardT

func (e *Embedding) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

ForwardT implements ModuleT interface for Embedding

type EmbeddingConfig

type EmbeddingConfig struct {
	Sparse          bool
	ScaleGradByFreq bool
	WsInit          Init
	PaddingIdx      int64
}

Configuration option for an embedding layer.

func DefaultEmbeddingConfig

func DefaultEmbeddingConfig() *EmbeddingConfig

type Entry

type Entry struct {
	// contains filtered or unexported fields
}

Entry holds an entry corresponding to a given name in Path.

func (*Entry) OrKaimingUniform

func (e *Entry) OrKaimingUniform(dims []int64) *ts.Tensor

OrKaimingUniform returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrOnes

func (e *Entry) OrOnes(dims []int64) *ts.Tensor

OrOnes returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrOnesNoTrain

func (e *Entry) OrOnesNoTrain(dims []int64) *ts.Tensor

OrOnesNoTrain returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrRandn

func (e *Entry) OrRandn(dims []int64, mean, stdev float64) *ts.Tensor

OrRandn returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrRandnStandard

func (e *Entry) OrRandnStandard(dims []int64) *ts.Tensor

OrRandnStandard returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrUniform

func (e *Entry) OrUniform(dims []int64, lo, up float64) (retVal *ts.Tensor)

OrUniform returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrVar

func (e *Entry) OrVar(dims []int64, init Init) *ts.Tensor

OrVar returns the existing entry if it exists, otherwise creates a new variable.

If this entry name matches the name of a variable stored in the var store, the corresponding tensor is returned. Otherwise a new variable is added to the var store with the entry name and is initialized according to the init parameter.
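
For example, a sketch of the get-or-create pattern; the entry name, shape and init value are illustrative:

e := vs.Root().Entry("bias")
// returns the stored "bias" tensor if it already exists in the var store,
// otherwise adds a new variable with that name, initialized to zero.
b := e.OrVar([]int64{64}, nn.NewConstInit(0.0))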

func (*Entry) OrVarCopy

func (e *Entry) OrVarCopy(tensor *ts.Tensor) *ts.Tensor

OrVarCopy returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrZeros

func (e *Entry) OrZeros(dims []int64) *ts.Tensor

OrZeros returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrZerosNoTrain

func (e *Entry) OrZerosNoTrain(dims []int64) *ts.Tensor

OrZerosNoTrain returns the existing entry if it exists, otherwise creates a new variable.

type ExponentialLR

type ExponentialLR struct {
	// contains filtered or unexported fields
}

ExponentialLR decays the learning rates of each optimizer parameter group by gamma every epoch.

func NewExponentialLR

func NewExponentialLR(opt *Optimizer, gamma float64) *ExponentialLR

NewExponentialLR creates a new ExponentialLR.

func (*ExponentialLR) Build

func (e *ExponentialLR) Build() *LRScheduler

Build implements scheduler interface.

func (*ExponentialLR) SetLRs

func (e *ExponentialLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type ForwardTWith

type ForwardTWith func(*ts.Tensor, bool) *ts.Tensor

func (ForwardTWith) ForwardT

func (fw ForwardTWith) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type ForwardWith

type ForwardWith func(*ts.Tensor) *ts.Tensor

ForwardWith is a handler function to implement Module interface for any (anonymous) function it wraps.

Ref. https://stackoverflow.com/a/42182987 NOTE: Specifically, `ForwardWith` is used to wrap anonymous function as input parameter of `AddFn` Sequential method.

func (ForwardWith) Forward

func (fw ForwardWith) Forward(xs *ts.Tensor) *ts.Tensor

type Func

type Func struct {
	// contains filtered or unexported fields
}

func NewFunc

func NewFunc(fn func(*ts.Tensor) *ts.Tensor) (retVal Func)

func (Func) Forward

func (fn Func) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward implements the Module interface for Func.

func (Func) ForwardT

func (fn Func) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements ModuleT for Func object as well.

NOTE: train param will not be used.

type FuncT

type FuncT struct {
	// contains filtered or unexported fields
}

func NewFuncT

func NewFuncT(fn func(*ts.Tensor, bool) *ts.Tensor) (retVal FuncT)

func (FuncT) ForwardT

func (fn FuncT) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements the ModuleT interface for FuncT.

type GRU

type GRU struct {
	// contains filtered or unexported fields
}

A Gated Recurrent Unit (GRU) layer.

https://en.wikipedia.org/wiki/Gated_recurrent_unit

func NewGRU

func NewGRU(vs *Path, inDim, hiddenDim int64, cfg *RNNConfig) (retVal *GRU)

NewGRU creates a new GRU layer.

func (*GRU) Seq

func (g *GRU) Seq(input *ts.Tensor) (*ts.Tensor, State)

func (*GRU) SeqInit

func (g *GRU) SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)

func (*GRU) Step

func (g *GRU) Step(input *ts.Tensor, inState State) State

func (*GRU) ZeroState

func (g *GRU) ZeroState(batchDim int64) State

type GRUState

type GRUState struct {
	Tensor *ts.Tensor
}

GRUState is a GRU state. It contains a single tensor.

func (*GRUState) Value

func (gs *GRUState) Value() *ts.Tensor

type Init

type Init interface {
	// creates a new tensor with specified initiation
	InitTensor(dims []int64, device gotch.Device) (retVal *ts.Tensor)

	// re-initializes (in-place) an existing tensor with the specified initiation
	Set(tensor *ts.Tensor)
}

type LRScheduler

type LRScheduler struct {
	// contains filtered or unexported fields
}

LRScheduler is a scheduler to update optimizer learning rates.

func (*LRScheduler) Step

func (s *LRScheduler) Step(opts ...SchedulerOption)

Step updates optimizer learning rate.

type LSTM

type LSTM struct {
	// contains filtered or unexported fields
}

A Long Short-Term Memory (LSTM) layer.

https://en.wikipedia.org/wiki/Long_short-term_memory

func NewLSTM

func NewLSTM(vs *Path, inDim, hiddenDim int64, cfg *RNNConfig) *LSTM

NewLSTM creates an LSTM layer.

func (*LSTM) Seq

func (l *LSTM) Seq(input *ts.Tensor) (*ts.Tensor, State)

func (*LSTM) SeqInit

func (l *LSTM) SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)

func (*LSTM) Step

func (l *LSTM) Step(input *ts.Tensor, inState State) State

func (*LSTM) ZeroState

func (l *LSTM) ZeroState(batchDim int64) State

type LSTMState

type LSTMState struct {
	Tensor1 *ts.Tensor
	Tensor2 *ts.Tensor
}

The state for an LSTM network; it contains two tensors.

func (*LSTMState) C

func (ls *LSTMState) C() *ts.Tensor

The cell state vector.

func (*LSTMState) H

func (ls *LSTMState) H() *ts.Tensor

The hidden state vector, which is also the output of the LSTM.

type LambdaFn

type LambdaFn func(in interface{}) float64

type LambdaLR

type LambdaLR struct {
	// contains filtered or unexported fields
}

LambdaLR calculates a new learning rate for each parameter group by applying the Lambda function to the corresponding INITIAL learning rate.

func NewLambdaLR

func NewLambdaLR(opt *Optimizer, ldFns []LambdaFn) *LambdaLR

NewLambdaLR creates a new LambdaLR.

func (*LambdaLR) Build

func (l *LambdaLR) Build() *LRScheduler

Build implements scheduler interface.

func (*LambdaLR) SetLRs

func (l *LambdaLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type LayerNorm

type LayerNorm struct {
	Config          *LayerNormConfig
	Ws              *ts.Tensor // optional
	Bs              *ts.Tensor // optional
	NormalizedShape []int64
}

A layer-normalization layer.

func NewLayerNorm

func NewLayerNorm(vs *Path, normalizedShape []int64, config *LayerNormConfig) *LayerNorm

func (*LayerNorm) Forward

func (ln *LayerNorm) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

type LayerNormConfig

type LayerNormConfig struct {
	CudnnEnable       bool
	Eps               float64
	ElementwiseAffine bool
	WsInit            Init
	BsInit            Init
}

Layer-normalization config.

func DefaultLayerNormConfig

func DefaultLayerNormConfig() *LayerNormConfig

type Linear

type Linear struct {
	Ws *ts.Tensor
	Bs *ts.Tensor
}

Linear is a linear fully-connected layer

func NewLinear

func NewLinear(vs *Path, inDim, outDim int64, c *LinearConfig) *Linear

NewLinear creates a new linear layer y = x*wT + b
- inDim: input dimension (x) [number of input features/columns]
- outDim: output dimension (y) [number of output features/columns]

NOTE: w will have shape{outDim, inDim}; b will have shape{outDim}

func (*Linear) Forward

func (l *Linear) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward applies the linear layer to the input node.

NOTE:
- It assumes the input node has 2 dimensions (a matrix). For the matrix multiplication to work, the input node must have the same number of columns as the `Ws` weight matrix (both equal `inDim`), since the weight matrix is transposed before being multiplied with the input.
- The input node should have shape `{batchSize, inDim}`; the number of input features is `inDim` and the number of output features is `outDim` in the `LinearConfig` struct.

Example:

inDim := 3
outDim := 2
batchSize := 4
weights: 2x3
[ 1 1 1
	1 1 1 ]

input node: 4x3
[ 1 1 1
  1 1 1
  1 1 1
	1 1 1 ]
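
A code sketch matching the shapes above; the var store path and the input tensor `xs` are assumptions:

fc := nn.NewLinear(vs.Root().Sub("fc"), 3, 2, nn.DefaultLinearConfig())
// xs is assumed to have shape [4, 3] (batchSize x inDim);
// ys then has shape [4, 2] (batchSize x outDim).
ys := fc.Forward(xs)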

func (*Linear) ForwardT

func (l *Linear) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements ModuleT interface for Linear layer.

NOTE: train param will not be used.

type LinearConfig

type LinearConfig struct {
	WsInit Init // iniital weights
	BsInit Init // optional initial bias
	Bias   bool
}

LinearConfig is a configuration for a linear layer

func DefaultLinearConfig

func DefaultLinearConfig() *LinearConfig

DefaultLinearConfig creates a default LinearConfig with weights initialized using KaimingUniform and Bias set to true.

type LossFnOption

type LossFnOption func(*lossFnOptions)

func WithLossFnIgnoreIndex

func WithLossFnIgnoreIndex(val int64) LossFnOption

func WithLossFnPosWeight

func WithLossFnPosWeight(val int64) LossFnOption

func WithLossFnReduction

func WithLossFnReduction(val int64) LossFnOption

func WithLossFnWeights

func WithLossFnWeights(vals []float64) LossFnOption

type MultiStepLR

type MultiStepLR struct {
	// contains filtered or unexported fields
}

MultiStepLR decays the learning rates of each optimizer parameter group by gamma once the number of epochs reaches one of the milestones.

NOTE. Such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

func NewMultiStepLR

func NewMultiStepLR(opt *Optimizer, milestones []int, gamma float64) *MultiStepLR

NewMultiStepLR creates a new MultiStepLR.

func (*MultiStepLR) Build

func (ms *MultiStepLR) Build() *LRScheduler

Build implements scheduler interface.

func (*MultiStepLR) SetLRs

func (ms *MultiStepLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type MultiplicativeLR

type MultiplicativeLR struct {
	// contains filtered or unexported fields
}

MultiplicativeLR calculates new learning rates for each optimizer parameter group by applying the corresponding Lambda function to the CURRENT learning rate.

func NewMultiplicativeLR

func NewMultiplicativeLR(opt *Optimizer, ldFns []LambdaFn) *MultiplicativeLR

NewMultiplicativeLR creates a new MultiplicativeLR.

func (*MultiplicativeLR) Build

func (m *MultiplicativeLR) Build() *LRScheduler

Build implements scheduler interface.

func (*MultiplicativeLR) SetLRs

func (m *MultiplicativeLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type OneCycleLR

type OneCycleLR struct {
	// contains filtered or unexported fields
}

OneCycleLR sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate.

This policy was initially described in the paper "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates". The 1cycle learning rate policy changes the learning rate after every batch; `Step` should be called after a batch has been used for training. This scheduler is not chainable.

Note also that the total number of steps in the cycle can be determined in one of two ways (listed in order of precedence):
- a value for total_steps is explicitly provided;
- a number of epochs (epochs) and a number of steps per epoch (steps_per_epoch) are provided, in which case the total number of steps is inferred as total_steps = epochs * steps_per_epoch.
You must either provide a value for total_steps or provide values for both epochs and steps_per_epoch.

Source: Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates https://arxiv.org/abs/1708.07120

func NewOneCycleLR

func NewOneCycleLR(opt *Optimizer, maxLR float64, opts ...OneCycleOption) *OneCycleLR
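
A sketch of the two equivalent ways to define the cycle length mentioned above; the values are illustrative only:

// either give the total number of steps explicitly ...
oc := nn.NewOneCycleLR(opt, 0.01, nn.WithOneCycleTotalSteps(1000))
// ... or give epochs together with steps per epoch (total = 10 * 100).
oc = nn.NewOneCycleLR(opt, 0.01, nn.WithOneCycleEpochs(10), nn.WithOneCycleStepsPerEpoch(100))
scheduler := oc.Build()
// Step() is called after every batch.
scheduler.Step()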

func (*OneCycleLR) Build

func (oc *OneCycleLR) Build() *LRScheduler

func (*OneCycleLR) SetLRs

func (oc *OneCycleLR) SetLRs(opts ...SchedulerOption)

type OneCycleOption

type OneCycleOption func(*OneCycleOptions)

func WithOneCycleAnnealStrategy

func WithOneCycleAnnealStrategy(v string) OneCycleOption

func WithOneCycleBaseMomentum

func WithOneCycleBaseMomentum(v float64) OneCycleOption

func WithOneCycleCycleMomentum

func WithOneCycleCycleMomentum(v bool) OneCycleOption

func WithOneCycleDivFactor

func WithOneCycleDivFactor(v float64) OneCycleOption

func WithOneCycleEpochs

func WithOneCycleEpochs(v int) OneCycleOption

func WithOneCycleFinalDivFactor

func WithOneCycleFinalDivFactor(v float64) OneCycleOption

func WithOneCycleLastEpoch

func WithOneCycleLastEpoch(v int) OneCycleOption

func WithOneCycleMaxMomentum

func WithOneCycleMaxMomentum(v float64) OneCycleOption

func WithOneCyclePctStart

func WithOneCyclePctStart(v float64) OneCycleOption

func WithOneCycleStepsPerEpoch

func WithOneCycleStepsPerEpoch(v int) OneCycleOption

func WithOneCycleTotalSteps

func WithOneCycleTotalSteps(v int) OneCycleOption

type OneCycleOptions

type OneCycleOptions struct {
	TotalSteps     int
	Epochs         int
	StepsPerEpoch  int
	PctStart       float64
	AnnealStrategy string
	CycleMomentum  bool
	BaseMomentum   float64
	MaxMomentum    float64
	DivFactor      float64
	FinalDivFactor float64
	LastEpoch      int
}

type Optimizer

type Optimizer struct {
	// contains filtered or unexported fields
}

Optimizer is a struct object to run gradient descent.

func (*Optimizer) AddParamGroup

func (opt *Optimizer) AddParamGroup(tensors []ts.Tensor)

func (*Optimizer) BackwardStep

func (opt *Optimizer) BackwardStep(loss *ts.Tensor)

BackwardStep applies a backward pass, updates the gradients, and performs an optimization step.
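
A minimal training-step sketch; the model, input and label tensors are assumptions, and `model` is assumed to implement ts.ModuleT:

logits := model.ForwardT(imgs, true)
loss := nn.CrossEntropyLoss(logits, labels)
// runs the backward pass and applies one optimizer step (see description above)
opt.BackwardStep(loss)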

func (*Optimizer) BackwardStepClip

func (opt *Optimizer) BackwardStepClip(loss *ts.Tensor, max float64)

BackwardStepClip applies a backward pass, updates the gradients, and performs an optimization step.

The gradients are clipped based on `max` before being applied.

func (*Optimizer) BackwardStepClipNorm

func (opt *Optimizer) BackwardStepClipNorm(loss *ts.Tensor, max float64)

TODO. Applies a backward pass, updates the gradients, and performs an optimization step.

The gradients' L2 norm is clipped based on `max`.

func (*Optimizer) ClipGradNorm

func (opt *Optimizer) ClipGradNorm(max float64)

TODO. Clips gradient L2 norm over all trainable parameters.

The norm is computed over all gradients together, as if they were concatenated into a single vector.

func (*Optimizer) ClipGradValue

func (opt *Optimizer) ClipGradValue(max float64)

Clips gradient value at some specified maximum value.

func (*Optimizer) GetLRs

func (opt *Optimizer) GetLRs() []float64

func (*Optimizer) ParamGroupNum

func (opt *Optimizer) ParamGroupNum() int

func (*Optimizer) ResetStepCount

func (opt *Optimizer) ResetStepCount()

ResetStepCount sets the step count to zero.

func (*Optimizer) SetLR

func (opt *Optimizer) SetLR(lr float64)

SetLR sets the optimizer learning rate.

NOTE. it sets a SINGLE value of learning rate for all parameter groups. Most of the time, there's one parameter group.

func (*Optimizer) SetLRs

func (opt *Optimizer) SetLRs(lrs []float64)

SetLRs sets learning rates for ALL parameter groups respectively.

func (*Optimizer) SetMomentum

func (opt *Optimizer) SetMomentum(m float64)

SetMomentum sets the optimizer momentum.

func (*Optimizer) Step

func (opt *Optimizer) Step()

Step performs an optimization step, updating the tracked tensors based on their gradients.

func (*Optimizer) StepCount

func (opt *Optimizer) StepCount() int

StepCount gets the current step count.

func (*Optimizer) ZeroGrad

func (opt *Optimizer) ZeroGrad()

ZeroGrad zeroes the gradient for the tensors tracked by this optimizer.

type OptimizerConfig

type OptimizerConfig interface {

	// Build builds an optimizer with the specified learning rate handling variables stored in `vs`.
	//
	// NOTE: Build is a 'default' method. It can be implemented by wrapping the
	// 'defaultBuild' function.
	// E.g. the AdamOptimizerConfig struct fulfills the `Build` method of
	// OptimizerConfig by wrapping `defaultBuild` like
	// (config AdamOptimizerConfig) Build(vs VarStore, lr float64) (retVal Optimizer, err error){
	//		return defaultBuild(config, vs, lr)
	// }
	Build(vs *VarStore, lr float64) (*Optimizer, error)
	// contains filtered or unexported methods
}

OptimizerConfig defines optimizer configurations. These configs can be used to build an optimizer.

type Path

type Path struct {
	// contains filtered or unexported fields
}

Path is a variable store with an associated path used for variable naming.

func (*Path) Add

func (p *Path) Add(name string, x *ts.Tensor, trainable bool) *ts.Tensor

Add adds a tensor to a given path.

func (*Path) Device

func (p *Path) Device() gotch.Device

Device gets the device where the var-store variables are stored.

func (*Path) Entry

func (p *Path) Entry(name string) *Entry

Entry gets the entry corresponding to a given name for in-place manipulation.

func (*Path) Get

func (p *Path) Get(name string) (*ts.Tensor, error)

Get gets the tensor corresponding to a given name if present.

func (*Path) KaimingUniform

func (p *Path) KaimingUniform(name string, dims []int64) *ts.Tensor

KaimingUniform creates a new variable initialized randomly with kaiming uniform.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a uniform distribution which bounds follow Kaiming initialization.

func (*Path) NewVar

func (p *Path) NewVar(name string, dims []int64, ini Init) *ts.Tensor

NewVar creates a new variable.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized as per the related argument.
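
A sketch of path-based naming: SEP = "." joins the path elements, so the variables below would be stored as "encoder.weight" and "encoder.bias"; the sizes are illustrative:

root := vs.Root()
w := root.Sub("encoder").NewVar("weight", []int64{256, 128}, nn.NewKaimingUniformInit())
b := root.Sub("encoder").Zeros("bias", []int64{256})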

func (*Path) Ones

func (p *Path) Ones(name string, dims []int64) *ts.Tensor

Ones creates a new variable initialized with ones.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized with ones.

func (*Path) OnesNoTrain

func (p *Path) OnesNoTrain(name string, dims []int64) *ts.Tensor

OnesNoTrain creates a new variable initialized with ones.

The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with ones.

func (*Path) Randn

func (p *Path) Randn(name string, dims []int64, mean float64, stdev float64) *ts.Tensor

Randn creates a new variable initialized randomly with normal distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a normal distribution with the specified mean and standard deviation.

func (*Path) RandnStandard

func (p *Path) RandnStandard(name string, dims []int64) *ts.Tensor

RandnStandard creates a new variable initialized randomly with normal distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a standard normal distribution.

func (*Path) SetGroup

func (p *Path) SetGroup(g uint)

func (*Path) Sub

func (p *Path) Sub(str string) *Path

Sub gets a sub-path of the given path.

func (*Path) Uniform

func (p *Path) Uniform(name string, dims []int64, lo, up float64) *ts.Tensor

Uniform creates a new variable initialized randomly with uniform distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a uniform distribution between the specified bounds.

func (*Path) VarCopy

func (p *Path) VarCopy(name string, t *ts.Tensor) *ts.Tensor

VarCopy creates a new variable initialized by copying an existing tensor.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized by copying some given tensor.

func (*Path) Zeros

func (p *Path) Zeros(name string, dims []int64) *ts.Tensor

Zeros creates a new variable initialized with zeros.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized with zeros.

func (*Path) ZerosNoTrain

func (p *Path) ZerosNoTrain(name string, dims []int64) *ts.Tensor

ZerosNoTrain creates a new variable initialized with zeros.

The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with zeros.

type RMSPropConfig

type RMSPropConfig struct {
	Alpha    float64
	Eps      float64
	Wd       float64
	Momentum float64
	Centered bool
}

func DefaultRMSPropConfig

func DefaultRMSPropConfig() *RMSPropConfig

DefaultRMSPropConfig creates RMSPropConfig with default values.

func NewRMSPropConfig

func NewRMSPropConfig(alpha, eps, wd, momentum float64, centered bool) *RMSPropConfig

NewRMSPropConfig creates RMSPropConfig with specified values

func (*RMSPropConfig) Build

func (c *RMSPropConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)

type RNN

type RNN interface {

	// A zero state from which the recurrent network is usually initialized.
	ZeroState(batchDim int64) State

	// Applies a single step of the recurrent network.
	//
	// The input should have dimensions [batch_size, features].
	Step(input *ts.Tensor, inState State) State

	// Applies multiple steps of the recurrent network.
	//
	// The input should have dimensions [batch_size, seq_len, features].
	// The initial state is the result of applying zero_state.
	Seq(input *ts.Tensor) (*ts.Tensor, State)

	// Applies multiple steps of the recurrent network.
	//
	// The input should have dimensions [batch_size, seq_len, features].
	SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)
}

type RNNConfig

type RNNConfig struct {
	HasBiases     bool
	NumLayers     int64
	Dropout       float64
	Train         bool
	Bidirectional bool
	BatchFirst    bool
}

Configuration for the GRU and LSTM layers; both layer types share the same config.

func DefaultRNNConfig

func DefaultRNNConfig() *RNNConfig

DefaultRNNConfig creates a default RNN configuration.

type ReduceLROnPlateau

type ReduceLROnPlateau struct {
	// contains filtered or unexported fields
}

ReduceLROnPlateau reduces the learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity and, if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.
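
A hedged sketch, assuming `valLoss` is a validation loss computed once per epoch; the option values are illustrative:

plateau := nn.NewReduceLROnPlateau(opt,
	nn.WithReduceOnPlateauFactor(0.5), nn.WithReduceOnPlateauPatience(5))
scheduler := plateau.Build()
// at the end of each epoch, pass the monitored metric via the WithLoss option
scheduler.Step(nn.WithLoss(valLoss))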

func NewReduceLROnPlateau

func NewReduceLROnPlateau(opt *Optimizer, opts ...ReduceLROnPlateauOption) *ReduceLROnPlateau

func (*ReduceLROnPlateau) Build

func (s *ReduceLROnPlateau) Build() *LRScheduler

Build implements scheduler interface.

func (*ReduceLROnPlateau) SetLRs

func (s *ReduceLROnPlateau) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type ReduceLROnPlateauOption

type ReduceLROnPlateauOption func(*ReduceLROnPlateauOptions)

func WithReduceOnPlateauCooldown

func WithReduceOnPlateauCooldown(cooldown int) ReduceLROnPlateauOption

func WithReduceOnPlateauEps

func WithReduceOnPlateauEps(eps float64) ReduceLROnPlateauOption

func WithReduceOnPlateauFactor

func WithReduceOnPlateauFactor(factor float64) ReduceLROnPlateauOption

func WithReduceOnPlateauMinLRs

func WithReduceOnPlateauMinLRs(minLRs []float64) ReduceLROnPlateauOption

func WithReduceOnPlateauMode

func WithReduceOnPlateauMode(mode string) ReduceLROnPlateauOption

func WithReduceOnPlateauPatience

func WithReduceOnPlateauPatience(patience int) ReduceLROnPlateauOption

func WithReduceOnPlateauThreshold

func WithReduceOnPlateauThreshold(threshold float64) ReduceLROnPlateauOption

func WithReduceOnPlateauThresholdMode

func WithReduceOnPlateauThresholdMode(thresholdMode string) ReduceLROnPlateauOption

func WithReduceOnPlateauVerbose

func WithReduceOnPlateauVerbose(verbose bool) ReduceLROnPlateauOption

type ReduceLROnPlateauOptions

type ReduceLROnPlateauOptions struct {
	Mode          string
	Factor        float64
	Patience      int
	Verbose       bool
	Threshold     float64
	ThresholdMode string
	MinLRs        []float64
	Cooldown      int
	Eps           float64
}

type SGDConfig

type SGDConfig struct {
	Momentum  float64
	Dampening float64
	Wd        float64
	Nesterov  bool
}

SGDConfig holds parameters for building the SGD (Stochastic Gradient Descent) optimizer.

func DefaultSGDConfig

func DefaultSGDConfig() *SGDConfig

DefaultSGDConfig creates SGDConfig with default values.

func NewSGDConfig

func NewSGDConfig(momentum, dampening, wd float64, nesterov bool) *SGDConfig

NewSGDConfig creates the configuration for an SGD optimizer with the specified values.

func (*SGDConfig) Build

func (c *SGDConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)

type SchedulerOption

type SchedulerOption func(*SchedulerOptions)

func WithLastEpoch

func WithLastEpoch(epoch int) SchedulerOption

func WithLoss

func WithLoss(loss float64) SchedulerOption

type SchedulerOptions

type SchedulerOptions struct {
	// Metrics   map[string]interface{}
	Loss      float64 // Usually metrics is loss value
	LastEpoch int
}

type Sequential

type Sequential struct {
	// contains filtered or unexported fields
}

Sequential is a layer (container) that combines multiple other layers.

func Seq

func Seq() *Sequential

Seq creates a new empty sequential layer

func (*Sequential) Add

func (s *Sequential) Add(l ts.Module)

Add appends a layer after all the current layers.

func (*Sequential) AddFn

func (s *Sequential) AddFn(fn ts.Module)

AddFn appends a closure after all the current layers.

NOTE: fn should have the signature `func(t ts.Tensor) ts.Tensor` and implement the Module interface.
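
For illustration, a small MLP sketch; the layer sizes and the `input` tensor are arbitrary, and `MustRelu` is assumed to be the ReLU op exposed on ts.Tensor:

net := nn.Seq()
net.Add(nn.NewLinear(vs.Root().Sub("fc1"), 784, 128, nn.DefaultLinearConfig()))
net.AddFn(nn.NewFunc(func(xs *ts.Tensor) *ts.Tensor { return xs.MustRelu(false) }))
net.Add(nn.NewLinear(vs.Root().Sub("fc2"), 128, 10, nn.DefaultLinearConfig()))
out := net.Forward(input)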

func (*Sequential) Forward

func (s *Sequential) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward implements Module interface for Sequential

func (*Sequential) ForwardAll

func (s *Sequential) ForwardAll(xs *ts.Tensor, opts ...uint8) (retVal []ts.Tensor)

ForwardAll applies the forward pass and returns the output for each layer.

func (*Sequential) IsEmpty

func (s *Sequential) IsEmpty() (retVal bool)

IsEmpty returns true if this layer does not have any sub-layers.

func (*Sequential) Len

func (s *Sequential) Len() (retVal int64)

Len returns number of sub-layers embedded in this layer

type SequentialT

type SequentialT struct {
	// contains filtered or unexported fields
}

SequentialT is a sequential layer combining new layers with support for a training mode.

func SeqT

func SeqT() *SequentialT

SeqT creates a new empty sequential layer.

func (*SequentialT) Add

func (s *SequentialT) Add(l ts.ModuleT)

Add appends a layer after all the current layers.

func (*SequentialT) AddFn

func (s *SequentialT) AddFn(fn ts.ModuleT)

AddFn appends a closure after all the current layers.

NOTE: fn should have signature `func(t ts.Tensor) ts.Tensor` and it implements Module interface

func (*SequentialT) AddFnT

func (s *SequentialT) AddFnT(fn ts.ModuleT)

AddFnT appends a closure after all the current layers.

NOTE: fn should have the signature `func(t ts.Tensor, train bool) ts.Tensor` and implement the ModuleT interface.

func (*SequentialT) ForwardAllT

func (s *SequentialT) ForwardAllT(xs *ts.Tensor, train bool, opts ...uint8) (retVal []ts.Tensor)

ForwardAllT applies the forward pass and returns the output for each layer.

func (*SequentialT) ForwardT

func (s *SequentialT) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

func (*SequentialT) IsEmpty

func (s *SequentialT) IsEmpty() (retVal bool)

IsEmpty returns true if this layer does not have any sub-layers.

func (*SequentialT) Len

func (s *SequentialT) Len() (retVal int64)

Len returns number of sub-layers embedded in this layer

type State

type State interface{}

type StepLR

type StepLR struct {
	// contains filtered or unexported fields
}

StepLR decays the learning rates of each optimizer parameter group by gamma every step size epochs.

NOTE. Such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

func NewStepLR

func NewStepLR(opt *Optimizer, stepSize int, gamma float64) *StepLR

NewStepLR creates a new StepLR.

func (*StepLR) Build

func (s *StepLR) Build() *LRScheduler

Build implements scheduler interface.

func (*StepLR) SetLRs

func (s *StepLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type TrainableCModule

type TrainableCModule struct {
	Inner *ts.CModule
}

TrainableCModule is a trainable version of a JIT PyTorch module.

These modules can be created via TorchScript python API. See: https://pytorch.org/docs/stable/jit.html

func TrainableCModuleLoad

func TrainableCModuleLoad(p *Path, file string) (*TrainableCModule, error)

TrainableCModuleLoad loads a PyTorch-saved JIT module from a file and adds its tensors (weights) to the var store so that the module can be trained.

func TrainableCModuleLoadData

func TrainableCModuleLoadData(p *Path, stream io.Reader) (*TrainableCModule, error)

func (*TrainableCModule) ForwardT

func (m *TrainableCModule) ForwardT(x *ts.Tensor, train bool) *ts.Tensor

ForwardT implements ModuleT for TrainableCModule. NOTE: train parameter will not be used.

func (*TrainableCModule) Save

func (m *TrainableCModule) Save(file string) error

Save saves TrainableCModule to specified file.

func (*TrainableCModule) SetEval

func (m *TrainableCModule) SetEval()

SetEval sets TrainableCModule to inference mode.

func (*TrainableCModule) SetTrain

func (m *TrainableCModule) SetTrain()

SetTrain sets TrainableCModule to train mode.

type Var

type Var struct {
	Tensor *ts.Tensor
	Group  uint // optimizer parameter group
}

type VarStore

type VarStore struct {
	Vars Variables
	// contains filtered or unexported fields
}

VarStore is used to store variables used by one or multiple layers. It specifies a SINGLE device where all variables are stored.

func NewVarStore

func NewVarStore(device gotch.Device) *VarStore

NewVarStore creates a new variable store located on the specified device

func (*VarStore) Copy

func (vs *VarStore) Copy(src VarStore) error

Copy copies variable values from a source var store to this var store.

All the variables in this var store have to exist with the same name in the source var store, otherwise an error is returned.

func (*VarStore) Device

func (vs *VarStore) Device() gotch.Device

Device returns device for this var-store

func (*VarStore) Freeze

func (vs *VarStore) Freeze()

Freeze freezes a var store.

Gradients for the variables in this store are not tracked anymore.

func (*VarStore) IsEmpty

func (vs *VarStore) IsEmpty() bool

IsEmpty returns true if no tensors are currently stored on this var-store

func (*VarStore) Len

func (vs *VarStore) Len() int

Len returns the number of tensors currently stored on this var-store

func (*VarStore) Load

func (vs *VarStore) Load(filepath string) error

Load loads the var-store variable values from a file.

NOTE: weight values for all the tensors currently stored in the var-store are loaded from the given file. The set of variables stored in the var-store is not changed; only the values of these tensors are modified. An error is returned if the name of a loaded tensor cannot be found among the named tensors of the current var-store.

func (*VarStore) LoadPartial

func (vs *VarStore) LoadPartial(filepath string) ([]string, error)

LoadPartial loads the var-store variable values from a file if it exists.

Weight values for the tensors currently stored in the var-store are loaded from the given file where available. If a variable in the var-store is not present in the given file, it is skipped and its values are not updated. This method should be used when pre-trained weights are available for only parts of the model. Note that the set of variables stored in the var-store is not changed; only the values of these tensors are modified.

Returns a slice of strings containing the names of the missing variables.
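
A minimal sketch; the file name is illustrative:

missingVars, err := vs.LoadPartial("pretrained.gt")
if err != nil {
	panic(err) // handle the error appropriately
}
// missingVars lists the var-store variables that were not found in the file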

func (*VarStore) Root

func (vs *VarStore) Root() *Path

Root gets the root path for this var-store

NOTE: Variables are named and organized using paths. This function returns the top level path for the var store and can be combined with '/' to create sub-paths.

func (*VarStore) Save

func (vs *VarStore) Save(filepath string) error

Save saves the var-store variable values to a file

NOTE: weight values for all the tensors currently stored in the var-store are saved to the given file.

func (*VarStore) TrainableVariables

func (vs *VarStore) TrainableVariables() []ts.Tensor

TrainableVariables returns all trainable variables for this var-store.

func (*VarStore) Unfreeze

func (vs *VarStore) Unfreeze()

Unfreeze unfreezes a var store.

Gradients for the variables in this store are tracked again.

func (*VarStore) Variables

func (vs *VarStore) Variables() map[string]*ts.Tensor

Variables returns all variables and their names in a map[variable_name]Tensor

type Variables

type Variables struct {
	NamedVariables     map[string]*ts.Tensor
	TrainableVariables []Var
	// contains filtered or unexported fields
}

Variables represents a collection of tensors.

NOTE: when the variable store is frozen, trainable is still set to true; however, the tensor is not set to require gradients.
