timeseries

package
v1.40.0
Warning

This package is not in the latest version of its module.

Published: Apr 2, 2026 License: Apache-2.0 Imports: 13 Imported by: 0

Documentation

Overview

Package timeseries provides time-series specific neural network layers.

Stability: alpha

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type DualSpaceEncoder added in v1.21.0

type DualSpaceEncoder[T tensor.Float] struct {
	// contains filtered or unexported fields
}

DualSpaceEncoder processes time series in both time and frequency domains, following the dual-space masked reconstruction approach from IBM Granite TSPulse.

The encoder splits input into patches, then processes them through two parallel paths — one in the time domain and one in the frequency domain (via DFT). The results are fused via concatenation and linear projection to produce embeddings that capture both temporal patterns and spectral characteristics.

func NewDualSpaceEncoder added in v1.21.0

func NewDualSpaceEncoder[T tensor.Float](
	engine compute.Engine[T],
	ops numeric.Arithmetic[T],
	dModel, patchLen, numLayers int,
) (*DualSpaceEncoder[T], error)

NewDualSpaceEncoder creates a new dual-space encoder.

Parameters:

  • engine: compute engine for tensor operations
  • ops: arithmetic operations for the numeric type
  • dModel: model embedding dimension
  • patchLen: length of each time-domain patch
  • numLayers: number of transformer encoder layers in each path

func (*DualSpaceEncoder[T]) Forward added in v1.21.0

func (e *DualSpaceEncoder[T]) Forward(ctx context.Context, input *tensor.TensorNumeric[T]) (*DualSpaceOutput[T], error)

Forward processes the input through both time and frequency domain paths, fuses the results, and returns fine-grained and semantic embeddings.

Input shape: [batch, seqLen]; if seqLen is not divisible by patchLen, the input is padded. Output: a DualSpaceOutput with FineGrained [batch, numPatches, dModel] and Semantic [batch, dModel].

func (*DualSpaceEncoder[T]) Parameters added in v1.21.0

func (e *DualSpaceEncoder[T]) Parameters() []*graph.Parameter[T]

Parameters returns all trainable parameters of the dual-space encoder.

type DualSpaceOutput added in v1.21.0

type DualSpaceOutput[T tensor.Float] struct {
	// FineGrained is the full patch-level representation [batch, numPatches, dModel].
	// Used for anomaly detection and imputation where per-timestep detail is needed.
	FineGrained *tensor.TensorNumeric[T]

	// Semantic is the mean-pooled series-level representation [batch, dModel].
	// Used for classification and similarity search.
	Semantic *tensor.TensorNumeric[T]
}

DualSpaceOutput contains both fine-grained and semantic embeddings produced by the DualSpaceEncoder.

type GRN

type GRN[T tensor.Numeric] struct {
	// contains filtered or unexported fields
}

GRN implements a Gated Residual Network:

GRN(x) = LayerNorm(x + ELU(W1*x + b1) * sigmoid(W2*x + b2))

where LayerNorm is approximated as mean-subtraction and variance-normalization.

func NewGRN

func NewGRN[T tensor.Numeric](
	name string,
	engine compute.Engine[T],
	ops numeric.Arithmetic[T],
	inputDim, hiddenDim, outputDim int,
) (*GRN[T], error)

NewGRN creates a new Gated Residual Network layer.

func (*GRN[T]) Forward

func (g *GRN[T]) Forward(ctx context.Context, x *tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)

Forward computes GRN(x) = LayerNorm(residual + ELU(W1*x + b1) * sigmoid(W2*x + b2)) projected through wOut to outputDim. Input x: [batch, inputDim]. Output: [batch, outputDim].

func (*GRN[T]) Parameters

func (g *GRN[T]) Parameters() []*graph.Parameter[T]

Parameters returns the trainable parameters.

type MLSTM added in v1.37.0

type MLSTM[T tensor.Float] struct {

	// Weight matrices: projections [inputDim, hiddenDim]
	Wk *graph.Parameter[T]
	Wv *graph.Parameter[T]
	Wq *graph.Parameter[T]

	// Gate weights: [inputDim] (scalar gate per hidden unit)
	Wi *graph.Parameter[T]
	Wf *graph.Parameter[T]
	Wo *graph.Parameter[T]

	// Gate biases: [hiddenDim]
	Bi *graph.Parameter[T]
	Bf *graph.Parameter[T]
	Bo *graph.Parameter[T]
	// contains filtered or unexported fields
}

MLSTM implements the mLSTM (Matrix LSTM) cell from the xLSTM paper (Beck et al., 2024, arXiv:2405.04517).

The mLSTM replaces the scalar cell state of a classical LSTM with a matrix cell state (covariance memory) updated via outer products of key and value projections:

k_t = W_k * x_t                              — key projection
v_t = W_v * x_t                              — value projection
q_t = W_q * x_t                              — query projection
i_t = exp(clamp(w_i * x_t + b_i))            — exponential input gate (scalar per head)
f_t = exp(clamp(w_f * x_t + b_f))            — exponential forget gate (scalar per head)
o_t = σ(w_o * x_t + b_o)                     — output gate (sigmoid)
C_t = f_t * C_{t-1} + i_t * (v_t * k_t^T)    — matrix cell state (outer product update)
n_t = f_t * n_{t-1} + i_t * k_t              — normalizer vector
h_t = o_t * (C_t * q_t) / max(|n_t^T * q_t|, 1) — hidden state

Gate pre-activations for i and f are clamped to [-maxGatePreAct, maxGatePreAct] before applying exp() to prevent overflow.

func NewMLSTM added in v1.37.0

func NewMLSTM[T tensor.Float](engine compute.Engine[T], inputDim, hiddenDim int) (*MLSTM[T], error)

NewMLSTM creates a new mLSTM cell.

Parameters:

  • engine: the compute engine for tensor operations
  • inputDim: dimensionality of the input vector at each time step
  • hiddenDim: dimensionality of the hidden/key/value/query space

func (*MLSTM[T]) Attributes added in v1.37.0

func (m *MLSTM[T]) Attributes() map[string]interface{}

Attributes returns the attributes of the layer.

func (*MLSTM[T]) Forward added in v1.37.0

func (m *MLSTM[T]) Forward(
	ctx context.Context,
	x, hPrev *tensor.TensorNumeric[T],
	cPrev *tensor.TensorNumeric[T],
	nPrev *tensor.TensorNumeric[T],
) (h *tensor.TensorNumeric[T], cOut *tensor.TensorNumeric[T], nOut *tensor.TensorNumeric[T], err error)

Forward performs a single mLSTM time step.

Inputs:

  • x: input vector [batch, inputDim]
  • hPrev: previous hidden state [batch, hiddenDim]
  • cPrev: previous cell state [batch, hiddenDim, hiddenDim] (matrix memory)
  • nPrev: previous normalizer [batch, hiddenDim]

Returns (h, C, n) — the new hidden state, matrix cell state, and normalizer.

func (*MLSTM[T]) OpType added in v1.37.0

func (m *MLSTM[T]) OpType() string

OpType returns the operation type of the layer.

func (*MLSTM[T]) OutputShape added in v1.37.0

func (m *MLSTM[T]) OutputShape() []int

OutputShape returns the output shape of the layer.

func (*MLSTM[T]) Parameters added in v1.37.0

func (m *MLSTM[T]) Parameters() []*graph.Parameter[T]

Parameters returns all learnable parameters of the mLSTM cell.

type PatchEmbed

type PatchEmbed[T tensor.Numeric] struct {
	// contains filtered or unexported fields
}

PatchEmbed splits a 1D time series into non-overlapping patches and projects each patch to embed_dim using a learned linear projection.
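The patching step alone (without the learned projection) can be sketched in plain Go, following the zero-padding rule described here:

```go
package main

import "fmt"

// splitPatches splits a 1-D series into non-overlapping patches of length
// patchSize, zero-padding the tail so the length divides evenly — the same
// shape transformation PatchEmbed applies before its linear projection.
func splitPatches(series []float64, patchSize int) [][]float64 {
	n := len(series)
	numPatches := (n + patchSize - 1) / patchSize
	padded := make([]float64, numPatches*patchSize)
	copy(padded, series)
	patches := make([][]float64, numPatches)
	for i := range patches {
		patches[i] = padded[i*patchSize : (i+1)*patchSize]
	}
	return patches
}

func main() {
	p := splitPatches([]float64{1, 2, 3, 4, 5}, 2)
	fmt.Println(len(p), p[2]) // 3 patches; the last is zero-padded: [5 0]
}
```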

func NewPatchEmbed

func NewPatchEmbed[T tensor.Numeric](
	name string,
	engine compute.Engine[T],
	ops numeric.Arithmetic[T],
	patchSize, embedDim int,
) (*PatchEmbed[T], error)

NewPatchEmbed creates a new PatchEmbed layer.

func (*PatchEmbed[T]) Attributes

func (pe *PatchEmbed[T]) Attributes() map[string]interface{}

Attributes returns the attributes of the layer.

func (*PatchEmbed[T]) Backward

func (pe *PatchEmbed[T]) Backward(ctx context.Context, mode types.BackwardMode, outputGradient *tensor.TensorNumeric[T], inputs ...*tensor.TensorNumeric[T]) ([]*tensor.TensorNumeric[T], error)

Backward computes the gradients for the patch embedding layer.

func (*PatchEmbed[T]) Forward

func (pe *PatchEmbed[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)

Forward takes [batch, seq_len] input and returns [batch, num_patches, embed_dim]. seq_len is padded with zeros if not divisible by PatchSize.

func (*PatchEmbed[T]) Name

func (pe *PatchEmbed[T]) Name() string

Name returns the name of the layer.

func (*PatchEmbed[T]) OpType

func (pe *PatchEmbed[T]) OpType() string

OpType returns the operation type of the layer.

func (*PatchEmbed[T]) OutputShape

func (pe *PatchEmbed[T]) OutputShape() []int

OutputShape returns the output shape of the layer.

func (*PatchEmbed[T]) Parameters

func (pe *PatchEmbed[T]) Parameters() []*graph.Parameter[T]

Parameters returns the trainable parameters.

func (*PatchEmbed[T]) SetName

func (pe *PatchEmbed[T]) SetName(name string)

SetName sets the name of the layer.

type SLSTM added in v1.37.0

type SLSTM[T tensor.Float] struct {

	// Weight matrices: input projections [inputDim, hiddenDim]
	Wi *graph.Parameter[T]
	Wf *graph.Parameter[T]
	Wz *graph.Parameter[T]
	Wo *graph.Parameter[T]

	// Weight matrices: recurrent projections [hiddenDim, hiddenDim]
	Ri *graph.Parameter[T]
	Rf *graph.Parameter[T]
	Rz *graph.Parameter[T]
	Ro *graph.Parameter[T]

	// Biases [hiddenDim]
	Bi *graph.Parameter[T]
	Bf *graph.Parameter[T]
	Bz *graph.Parameter[T]
	Bo *graph.Parameter[T]
	// contains filtered or unexported fields
}

SLSTM implements the sLSTM cell from the xLSTM paper (Beck et al., 2024).

The sLSTM extends the classical LSTM with exponential gating and a scalar normalizer state that stabilises the cell when input and forget gates use exp() instead of sigmoid():

i_t = exp(Wi*x_t + Ri*h_{t-1} + bi)   — exponential input gate
f_t = exp(Wf*x_t + Rf*h_{t-1} + bf)   — exponential forget gate
z_t = tanh(Wz*x_t + Rz*h_{t-1} + bz)  — cell input
o_t = σ(Wo*x_t + Ro*h_{t-1} + bo)      — output gate (sigmoid)
n_t = f_t * n_{t-1} + i_t               — normalizer state
c_t = f_t * c_{t-1} + i_t * z_t         — cell state
h_t = o_t * (c_t / n_t)                 — hidden state

Gate pre-activations for i and f are clamped to [-maxGatePreAct, maxGatePreAct] before applying exp() to prevent overflow.

func NewSLSTM added in v1.37.0

func NewSLSTM[T tensor.Float](engine compute.Engine[T], inputDim, hiddenDim int) (*SLSTM[T], error)

NewSLSTM creates a new sLSTM cell.

Parameters:

  • engine: the compute engine for tensor operations
  • inputDim: dimensionality of the input vector at each time step
  • hiddenDim: dimensionality of the hidden state

func (*SLSTM[T]) Attributes added in v1.37.0

func (s *SLSTM[T]) Attributes() map[string]interface{}

Attributes returns the attributes of the layer.

func (*SLSTM[T]) Forward added in v1.37.0

func (s *SLSTM[T]) Forward(
	ctx context.Context,
	x, hPrev, cPrev, nPrev *tensor.TensorNumeric[T],
) (h, c, n *tensor.TensorNumeric[T], err error)

Forward performs a single sLSTM time step.

Inputs:

  • x: input vector [batch, inputDim]
  • hPrev: previous hidden state [batch, hiddenDim]
  • cPrev: previous cell state [batch, hiddenDim]
  • nPrev: previous normalizer [batch, hiddenDim]

Returns (h, c, n) — the new hidden state, cell state, and normalizer.

func (*SLSTM[T]) OpType added in v1.37.0

func (s *SLSTM[T]) OpType() string

OpType returns the operation type of the layer.

func (*SLSTM[T]) OutputShape added in v1.37.0

func (s *SLSTM[T]) OutputShape() []int

OutputShape returns the output shape of the layer.

func (*SLSTM[T]) Parameters added in v1.37.0

func (s *SLSTM[T]) Parameters() []*graph.Parameter[T]

Parameters returns all learnable parameters of the sLSTM cell.

type SSMLayer added in v1.21.0

type SSMLayer[T tensor.Float] struct {

	// Learnable parameters.
	// A is stored in log-space so that the actual diagonal is -exp(A),
	// guaranteeing stability (negative real eigenvalues).
	A  *graph.Parameter[T] // [d_state]
	B  *graph.Parameter[T] // [d_state, d_input]
	C  *graph.Parameter[T] // [d_output, d_state]
	D  *graph.Parameter[T] // [d_output, d_input] feedthrough (skip connection)
	Dt *graph.Parameter[T] // [1] discretisation step size (log-space)
	// contains filtered or unexported fields
}

SSMLayer implements a State Space Model layer with diagonal state matrix.

The continuous-time system is:

x'(t) = A*x(t) + B*u(t)   (state equation)
y(t)  = C*x(t) + D*u(t)   (output equation)

For efficient computation, the layer uses a discretized recurrence with Zero-Order Hold (ZOH). Because A is diagonal (stored as a 1-D vector), all matrix operations reduce to element-wise products:

A_bar = exp(A * dt)
B_bar = (A_bar - 1) / A * B
x[k+1] = A_bar * x[k] + B_bar * u[k]
y[k]   = C * x[k] + D * u[k]

This is the S4D/S5-style parameterisation used by IBM Granite FlowState for time-series forecasting.

func NewSSMLayer added in v1.21.0

func NewSSMLayer[T tensor.Float](engine compute.Engine[T], dState, dInput, dOutput int) (*SSMLayer[T], error)

NewSSMLayer creates a new State Space Model layer with diagonal state matrix.

Parameters:

  • engine: the compute engine for tensor operations
  • dState: dimensionality of the hidden state
  • dInput: dimensionality of the input at each time step
  • dOutput: dimensionality of the output at each time step

func (*SSMLayer[T]) Attributes added in v1.21.0

func (s *SSMLayer[T]) Attributes() map[string]interface{}

Attributes returns the attributes of the layer.

func (*SSMLayer[T]) Forward added in v1.21.0

func (s *SSMLayer[T]) Forward(ctx context.Context, input *tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)

Forward processes an input sequence through the SSM.

Input shape: [batch, seq_len, d_input]. Output shape: [batch, seq_len, d_output].

The method performs a sequential scan (one time step at a time). A parallel scan variant can be added later for GPU optimisation.

func (*SSMLayer[T]) OpType added in v1.21.0

func (s *SSMLayer[T]) OpType() string

OpType returns the operation type of the layer.

func (*SSMLayer[T]) OutputShape added in v1.21.0

func (s *SSMLayer[T]) OutputShape() []int

OutputShape returns the output shape of the layer.

func (*SSMLayer[T]) Parameters added in v1.21.0

func (s *SSMLayer[T]) Parameters() []*graph.Parameter[T]

Parameters returns all learnable parameters of the SSM layer.

type TSMixerBlock added in v1.21.0

type TSMixerBlock[T tensor.Float] struct {
	// contains filtered or unexported fields
}

TSMixerBlock implements a single TSMixer block with time-mixing and feature-mixing MLPs. This is the backbone layer used by IBM Granite TinyTimeMixer (TTM) for time series forecasting.

Each block contains:

  • Time-mixing MLP: mixes information across the patch/time dimension
  • Feature-mixing MLP: mixes information across the feature/channel dimension
  • LayerNorm after each mixing step
  • Residual connections around each mixing step

Time-mixing transposes the input so the MLP operates along the time axis, while feature-mixing applies the MLP along the last (feature) axis directly.

func NewTSMixerBlock added in v1.21.0

func NewTSMixerBlock[T tensor.Float](
	engine compute.Engine[T],
	ops numeric.Arithmetic[T],
	numPatches, dModel, expansion int,
	channelMixing bool,
) (*TSMixerBlock[T], error)

NewTSMixerBlock creates a new TSMixer block.

Parameters:

  • engine: the compute engine for tensor operations
  • ops: arithmetic operations for the numeric type
  • numPatches: the number of time patches (time dimension size)
  • dModel: the model/feature dimension size
  • expansion: expansion factor for the feature-mixing MLP hidden dim
  • channelMixing: if true, include the feature-mixing MLP; if false, run in channel-independent mode (time-mixing only)

func (*TSMixerBlock[T]) Attributes added in v1.21.0

func (b *TSMixerBlock[T]) Attributes() map[string]interface{}

Attributes returns the attributes of the layer.

func (*TSMixerBlock[T]) Forward added in v1.21.0

func (b *TSMixerBlock[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)

Forward computes the forward pass of the TSMixer block.

Input shape: [batch, numPatches, dModel]. Output shape: [batch, numPatches, dModel].

Steps:

  1. Time-mixing: LayerNorm -> transpose -> MLP(GELU) -> transpose -> residual add
  2. Feature-mixing (if enabled): LayerNorm -> MLP(GELU) -> residual add
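The transpose trick behind step 1 can be sketched with plain slices (LayerNorm, GELU, and the residual add are omitted; the weights below are hypothetical):

```go
package main

import "fmt"

// transpose swaps a [numPatches][dModel] layout to [dModel][numPatches], so
// a row-wise MLP mixes across time instead of features — the trick the
// time-mixing step relies on.
func transpose(m [][]float64) [][]float64 {
	out := make([][]float64, len(m[0]))
	for j := range out {
		out[j] = make([]float64, len(m))
		for i := range m {
			out[j][i] = m[i][j]
		}
	}
	return out
}

// mixRows applies the same weight matrix to every row — a stand-in for the
// block's MLP.
func mixRows(m, w [][]float64) [][]float64 {
	out := make([][]float64, len(m))
	for i, row := range m {
		out[i] = make([]float64, len(w))
		for o := range w {
			for j, v := range row {
				out[i][o] += w[o][j] * v
			}
		}
	}
	return out
}

func main() {
	x := [][]float64{{1, 2}, {3, 4}, {5, 6}}                    // [numPatches=3, dModel=2]
	avg := [][]float64{{0.5, 0, 0.5}, {0, 1, 0}, {0.5, 0, 0.5}} // 3x3 time-mixing weights
	timeMixed := transpose(mixRows(transpose(x), avg))
	fmt.Println(timeMixed) // each feature mixed across patches, shape preserved
}
```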

func (*TSMixerBlock[T]) OpType added in v1.21.0

func (b *TSMixerBlock[T]) OpType() string

OpType returns the operation type of the layer.

func (*TSMixerBlock[T]) OutputShape added in v1.21.0

func (b *TSMixerBlock[T]) OutputShape() []int

OutputShape returns the output shape of the layer.

func (*TSMixerBlock[T]) Parameters added in v1.21.0

func (b *TSMixerBlock[T]) Parameters() []*graph.Parameter[T]

Parameters returns all trainable parameters of the block.

type VSN

type VSN[T tensor.Numeric] struct {
	// contains filtered or unexported fields
}

VSN implements a Variable Selection Network for the Temporal Fusion Transformer.

Each of N input variables is projected to d_model via a learned linear projection. The flat concatenation of all variable embeddings is passed through a GRN and softmax to produce N importance weights. The output is the weighted sum of the variable embeddings.

func NewVSN

func NewVSN[T tensor.Numeric](
	name string,
	engine compute.Engine[T],
	ops numeric.Arithmetic[T],
	numVars, varInputDim, dModel int,
) (*VSN[T], error)

NewVSN creates a new Variable Selection Network. numVars is the number of input variables. varInputDim is the input dimension of each variable. dModel is the projection/output dimension.

func (*VSN[T]) Attributes

func (v *VSN[T]) Attributes() map[string]interface{}

Attributes returns the attributes of the layer.

func (*VSN[T]) Backward

func (v *VSN[T]) Backward(ctx context.Context, mode types.BackwardMode, outputGradient *tensor.TensorNumeric[T], inputs ...*tensor.TensorNumeric[T]) ([]*tensor.TensorNumeric[T], error)

Backward computes gradients for the VSN layer.

func (*VSN[T]) Forward

func (v *VSN[T]) Forward(ctx context.Context, inputs []*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], []float32, error)

Forward computes the variable selection network. inputs is a slice of N tensors, each [batch, varInputDim]. Returns (weighted_embedding [batch, dModel], importance_weights [numVars], error).

func (*VSN[T]) Name

func (v *VSN[T]) Name() string

Name returns the name of the layer.

func (*VSN[T]) OpType

func (v *VSN[T]) OpType() string

OpType returns the operation type of the layer.

func (*VSN[T]) OutputShape

func (v *VSN[T]) OutputShape() []int

OutputShape returns the output shape of the layer.

func (*VSN[T]) Parameters

func (v *VSN[T]) Parameters() []*graph.Parameter[T]

Parameters returns the trainable parameters.

func (*VSN[T]) SetName

func (v *VSN[T]) SetName(name string)

SetName sets the name of the layer.

type ValueTokenizer added in v1.37.0

type ValueTokenizer struct {
	// contains filtered or unexported fields
}

ValueTokenizer maps continuous float values to discrete bin indices and back.

Chronos (Ansari et al., 2024) tokenizes time-series values into discrete bins whose edges are learned during pre-training and stored in the model config. This tokenizer performs:

  • Tokenize: map a float value to the index of the bin that contains it. Values below the first edge map to bin 0; values at or above the last edge map to the last bin (numBins-1).

  • Detokenize: map a bin index back to the bin center, computed as the midpoint of the bin's lower and upper edges. The first and last bins use the nearest interior edge width for extrapolation.
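The binning rules can be sketched with a standard-library binary search; the special extrapolation rule for the first and last bin centers is omitted from this sketch:

```go
package main

import (
	"fmt"
	"sort"
)

// tokenize maps v to its bin index with the same clamping rule as
// ValueTokenizer: values below the first edge go to bin 0, values at or
// above the last edge go to the last bin.
func tokenize(edges []float64, v float64) int {
	numBins := len(edges) - 1
	// First index whose edge is strictly greater than v; the bin is one left.
	i := sort.Search(len(edges), func(j int) bool { return edges[j] > v }) - 1
	if i < 0 {
		return 0
	}
	if i > numBins-1 {
		return numBins - 1
	}
	return i
}

// centers computes bin midpoints (interior bins only; the package's
// edge-width extrapolation for the boundary bins is not reproduced here).
func centers(edges []float64) []float64 {
	c := make([]float64, len(edges)-1)
	for i := range c {
		c[i] = (edges[i] + edges[i+1]) / 2
	}
	return c
}

func main() {
	edges := []float64{-1, 0, 1, 2}
	fmt.Println(tokenize(edges, 0.5), tokenize(edges, -9), tokenize(edges, 9)) // 1 0 2
	fmt.Println(centers(edges))                                               // [-0.5 0.5 1.5]
}
```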

func NewValueTokenizer added in v1.37.0

func NewValueTokenizer(edges []float64) (*ValueTokenizer, error)

NewValueTokenizer creates a tokenizer from the given bin edges.

edges must contain at least 2 elements (defining at least 1 bin) and must be sorted in strictly ascending order.

func (*ValueTokenizer) Centers added in v1.37.0

func (vt *ValueTokenizer) Centers() []float64

Centers returns a copy of the precomputed bin centers.

func (*ValueTokenizer) Detokenize added in v1.37.0

func (vt *ValueTokenizer) Detokenize(bin int) float64

Detokenize maps a bin index back to the bin center value.

If the index is out of range it is clamped to [0, numBins-1].

func (*ValueTokenizer) DetokenizeBatch added in v1.37.0

func (vt *ValueTokenizer) DetokenizeBatch(bins []int) []float64

DetokenizeBatch maps a slice of bin indices back to bin center values.

func (*ValueTokenizer) Edges added in v1.37.0

func (vt *ValueTokenizer) Edges() []float64

Edges returns a copy of the bin edges.

func (*ValueTokenizer) NumBins added in v1.37.0

func (vt *ValueTokenizer) NumBins() int

NumBins returns the number of discrete bins.

func (*ValueTokenizer) Tokenize added in v1.37.0

func (vt *ValueTokenizer) Tokenize(v float64) int

Tokenize maps a continuous value to its bin index.

The bin index is determined by binary search over the edges. A value v falls into bin i if edges[i] <= v < edges[i+1]. Values below edges[0] map to bin 0; values >= edges[numBins] map to bin numBins-1.

func (*ValueTokenizer) TokenizeBatch added in v1.37.0

func (vt *ValueTokenizer) TokenizeBatch(values []float64) []int

TokenizeBatch maps a slice of values to bin indices.

type VariateProjection added in v1.37.0

type VariateProjection[T tensor.Numeric] struct {
	// contains filtered or unexported fields
}

VariateProjection projects each variate of a multivariate time series independently, then concatenates with a learned frequency embedding. This follows the Moirai-2 any-variate input projection design, supporting arbitrary numbers of variates with potentially different lengths via padding and attention masks.

Each variate is projected from its time dimension to embedDim using a shared linear projection. A learnable frequency embedding is added per variate to encode variate identity.
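The shared projection plus per-variate identity embedding can be sketched standalone (the weights below are hypothetical; the real layer learns them). The key point of the any-variate design is that the projection weights do not depend on how many variates arrive:

```go
package main

import "fmt"

// projectVariates applies one shared linear projection to every variate's
// time series, then adds a per-variate embedding vector to encode variate
// identity.
func projectVariates(variates, w, variateEmb [][]float64) [][]float64 {
	out := make([][]float64, len(variates))
	for v, series := range variates {
		out[v] = make([]float64, len(w))
		for o := range w {
			for t, x := range series {
				out[v][o] += w[o][t] * x // shared [embedDim][inputDim] projection
			}
			out[v][o] += variateEmb[v][o] // per-variate identity embedding
		}
	}
	return out
}

func main() {
	variates := [][]float64{{1, 2}, {3, 4}}     // [numVariates=2, inputDim=2]
	w := [][]float64{{1, 0}, {0, 1}, {1, 1}}    // shared projection to embedDim=3
	emb := [][]float64{{0, 0, 0}, {10, 10, 10}} // hypothetical identity embeddings
	fmt.Println(projectVariates(variates, w, emb))
}
```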

func NewVariateProjection added in v1.37.0

func NewVariateProjection[T tensor.Numeric](
	name string,
	engine compute.Engine[T],
	ops numeric.Arithmetic[T],
	inputDim, embedDim, maxVariates int,
) (*VariateProjection[T], error)

NewVariateProjection creates a new any-variate input projection layer.

Parameters:

  • name: layer name
  • engine: compute engine for tensor operations
  • ops: arithmetic operations for the numeric type
  • inputDim: length of each variate's time series
  • embedDim: output embedding dimension per variate
  • maxVariates: maximum number of variates supported (for frequency embedding table)

func (*VariateProjection[T]) Attributes added in v1.37.0

func (vp *VariateProjection[T]) Attributes() map[string]interface{}

Attributes returns the attributes of the layer.

func (*VariateProjection[T]) Forward added in v1.37.0

func (vp *VariateProjection[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)

Forward projects each variate independently and adds frequency embeddings.

Input shape: [batch, numVariates, inputDim]

  • Each variate is a time series of length inputDim.
  • Variates shorter than inputDim should be zero-padded by the caller.
  • numVariates must be <= maxVariates.

Output shape: [batch, numVariates, embedDim]

Optional second input: attention mask [batch, numVariates] with 1.0 for valid variates and 0.0 for padded variates. When provided, padded variate outputs are zeroed out.

func (*VariateProjection[T]) Name added in v1.37.0

func (vp *VariateProjection[T]) Name() string

Name returns the name of the layer.

func (*VariateProjection[T]) OpType added in v1.37.0

func (vp *VariateProjection[T]) OpType() string

OpType returns the operation type of the layer.

func (*VariateProjection[T]) OutputShape added in v1.37.0

func (vp *VariateProjection[T]) OutputShape() []int

OutputShape returns the output shape of the layer.

func (*VariateProjection[T]) Parameters added in v1.37.0

func (vp *VariateProjection[T]) Parameters() []*graph.Parameter[T]

Parameters returns the trainable parameters.

func (*VariateProjection[T]) SetName added in v1.37.0

func (vp *VariateProjection[T]) SetName(name string)

SetName sets the name of the layer.
