torch

package module
v0.0.0-...-890d392 Latest
Published: Sep 8, 2025 License: AGPL-3.0 Imports: 12 Imported by: 0

README

BioTorch - Neuromorphic Compute (Bio-Neuron Inspired) Deep Learning Engine


Inference Compute Runtime Support

  • Built-in inference computing engine (CPU compute version available under the AGPL license / GPU version closed source)
  • ONNX Runtime (version 7 - export ONNX models for offline production computation, available under the AGPL license / ONNX Runtime built-in realtime compute integration closed source)
Llama 3 example with GPU acceleration built on top of this framework:
CNN-from-scratch example and a redesigned built-in graphic renderer compatible with the Tensor data structure:
More Examples: README.md
Notice

Some parts of this project’s code may be generated, optimized, or annotated by AI.

Terms of Use
Permission to Use

Citizens, companies, and other organizations from countries classified as Full democracies or Flawed democracies by The Economist Democracy Index are permitted to use this repository under the GNU Affero General Public License (AGPL).

Prohibited Uses

The following uses are strictly prohibited:

  • Any activity serving the ideology or social system of communism, socialism, or any other ideology in a manner that leads to harm, oppression, or violation of human rights.
  • Human rights violations, including surveillance, oppression, or discrimination.
  • Terrorism or supporting terrorist organizations.
  • Any other malicious or harmful purposes that could cause harm to individuals or society.
  • Military applications, including but not limited to warfare, weapons development, or any defense-related activities.
  • Illegal activities, such as hacking, fraud, or any action that violates applicable laws.

Users must comply with all local, national, and international laws when using this repository. To request permission for a use case not covered above, please contact me.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func LinearCompute

func LinearCompute(Bias, Weights, OutputData []float32, inputLength int, inputFloat32 float32, startPos int)
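
There is no prose documentation for this low-level kernel, so the sketch below is only a guess at the calling convention: it assumes LinearCompute folds the contribution of one input element (inputFloat32, at position startPos of an input of length inputLength) into OutputData using the flat Weights and Bias slices. The sizes and loop structure are assumptions, and the snippet is written as if compiled inside the torch package.

	// Hypothetical sizes: 3 inputs, 2 outputs; weights stored as a flat slice.
	bias := []float32{0.1, -0.2}
	weights := []float32{0.5, 0.3, -0.1, 0.7, 0.2, 0.4}
	output := make([]float32, 2)

	input := []float32{1.0, 0.5, -1.5}
	for pos, v := range input {
		// Assumption: each call folds one input element's weighted
		// contribution (plus bias handling) into OutputData.
		LinearCompute(bias, weights, output, len(input), v, pos)
	}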

func LoadFlatDataFromCSV

func LoadFlatDataFromCSV(filePath string) ([]float32, error)

func LoadImageFromCSV

func LoadImageFromCSV(filename string) *tensor.Tensor

func LoadMatrixFromCSV

func LoadMatrixFromCSV(filename string) (*tensor.Tensor, error)
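
A loading sketch for the three CSV helpers; the file names are placeholders, each file is assumed to contain numeric values in the layout the loader expects, and log is the standard-library logger.

	// Flat vector of float32 values, e.g. a serialized weight blob.
	flat, err := LoadFlatDataFromCSV("weights.csv")
	if err != nil {
		log.Fatal(err)
	}

	// 2D matrix as a *tensor.Tensor.
	matrix, err := LoadMatrixFromCSV("inputs.csv")
	if err != nil {
		log.Fatal(err)
	}

	// Image data as a *tensor.Tensor (no error return in the signature).
	image := LoadImageFromCSV("image.csv")

	_, _, _ = flat, matrix, image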

func MSE

func MSE(predictions, targets *tensor.Tensor) float32
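
Assuming MSE is the usual mean of squared element-wise differences, it can be called directly or passed to NewBasicTrainer as the loss function; the CSV paths below are placeholders.

	predictions, _ := LoadMatrixFromCSV("predictions.csv")
	targets, _ := LoadMatrixFromCSV("targets.csv")

	loss := MSE(predictions, targets)
	fmt.Printf("mse: %f\n", loss)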

func Sigmoid

func Sigmoid(x float32) float32

func SigmoidDerivative

func SigmoidDerivative(x float32) float32
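
Both helpers operate on a single float32. The sketch assumes SigmoidDerivative takes the same pre-activation x as Sigmoid; if it expects the already-activated value, pass y instead.

	x := float32(0.5)
	y := Sigmoid(x)            // 1 / (1 + exp(-x)), roughly 0.62
	dy := SigmoidDerivative(x) // assumed to equal Sigmoid(x) * (1 - Sigmoid(x))
	fmt.Println(y, dy)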

Types

type BasicTrainer

type BasicTrainer struct {
	LossFunc func(predictions, targets *tensor.Tensor) float32
	Verbose  bool
}

func NewBasicTrainer

func NewBasicTrainer(lossFunc func(predictions, targets *tensor.Tensor) float32) *BasicTrainer

func (*BasicTrainer) Train

func (t *BasicTrainer) Train(model ModelInterface, inputs, targets *tensor.Tensor, epochs int, learningRate float32)

func (*BasicTrainer) Validate

func (t *BasicTrainer) Validate(model ModelInterface, inputs, targets *tensor.Tensor) float32
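
A training sketch, assuming a Model built from linear, ReLU, and sigmoid layers satisfies ModelInterface and that the placeholder CSV files hold one sample per row with the dimensions noted in the comments.

	inputs, _ := LoadMatrixFromCSV("inputs.csv")   // assumed shape [samples, 4]
	targets, _ := LoadMatrixFromCSV("targets.csv") // assumed shape [samples, 1]

	model := NewModel()
	model.AddLayer(NewLinearLayer(4, 8))
	model.AddLayer(NewReLULayer())
	model.AddLayer(NewLinearLayer(8, 1))
	model.AddLayer(NewSigmoidLayer())

	trainer := NewBasicTrainer(MSE)
	trainer.Verbose = true
	trainer.Train(model, inputs, targets, 100, 0.01)

	valLoss := trainer.Validate(model, inputs, targets)
	fmt.Printf("validation loss: %f\n", valLoss)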

type BatchNormLayer

type BatchNormLayer struct {
	RunningMean *tensor.Tensor
	// contains filtered or unexported fields
}

func NewBatchNormLayer

func NewBatchNormLayer(numFeatures int, eps, momentum float32) *BatchNormLayer

func (*BatchNormLayer) Forward

func (bn *BatchNormLayer) Forward(x *tensor.Tensor) *tensor.Tensor

func (*BatchNormLayer) GetBias

func (l *BatchNormLayer) GetBias() *tensor.Tensor

func (*BatchNormLayer) GetWeights

func (l *BatchNormLayer) GetWeights() *tensor.Tensor

func (*BatchNormLayer) SetBias

func (l *BatchNormLayer) SetBias(data []float32)

func (*BatchNormLayer) SetBiasAndShape

func (l *BatchNormLayer) SetBiasAndShape(data []float32, shape []int)

func (*BatchNormLayer) SetWeights

func (l *BatchNormLayer) SetWeights(data []float32)

func (*BatchNormLayer) SetWeightsAndShape

func (l *BatchNormLayer) SetWeightsAndShape(data []float32, shape []int)
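
A minimal sketch, assuming numFeatures matches the channel (feature) dimension of the input and using the conventional eps = 1e-5 and momentum = 0.1; the CSV path and channel count are placeholders.

	x := LoadImageFromCSV("image.csv") // assumed to have 3 channels
	bn := NewBatchNormLayer(3, 1e-5, 0.1)
	y := bn.Forward(x)
	_ = y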

type ConvLayer

type ConvLayer struct {
	InChannels  int
	OutChannels int
	KernelSize  int
	Stride      int
	Padding     int
	Weights     *tensor.Tensor
	Bias        *tensor.Tensor
	InputCache  *tensor.Tensor
	GradWeights *tensor.Tensor
	GradBias    *tensor.Tensor
}

func NewConvLayer

func NewConvLayer(inCh, outCh, kSize, stride, pad int) *ConvLayer

func (*ConvLayer) Backward

func (c *ConvLayer) Backward(gradOutput *tensor.Tensor) *tensor.Tensor

func (*ConvLayer) BackwardWithLR

func (c *ConvLayer) BackwardWithLR(gradOutput *tensor.Tensor, learningRate float32) *tensor.Tensor

func (*ConvLayer) Forward

func (c *ConvLayer) Forward(x *tensor.Tensor) *tensor.Tensor

func (*ConvLayer) GetBias

func (l *ConvLayer) GetBias() *tensor.Tensor

func (*ConvLayer) GetWeights

func (l *ConvLayer) GetWeights() *tensor.Tensor

func (*ConvLayer) SetBias

func (l *ConvLayer) SetBias(data []float32)

func (*ConvLayer) SetBiasAndShape

func (l *ConvLayer) SetBiasAndShape(data []float32, shape []int)

func (*ConvLayer) SetWeights

func (l *ConvLayer) SetWeights(data []float32)

func (*ConvLayer) SetWeightsAndShape

func (l *ConvLayer) SetWeightsAndShape(data []float32, shape []int)

func (*ConvLayer) UpdateParameters

func (c *ConvLayer) UpdateParameters(learningRate float32)

func (*ConvLayer) ZeroGrad

func (c *ConvLayer) ZeroGrad()
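
A forward/backward sketch for a single convolution layer, assuming a channels-first input; the CSV path, layer sizes, and the reuse of the forward output as a placeholder gradient are all assumptions.

	x := LoadImageFromCSV("digit.csv") // assumed single-channel input

	conv := NewConvLayer(1, 8, 3, 1, 1) // in=1, out=8, 3x3 kernel, stride 1, pad 1
	out := conv.Forward(x)

	gradOut := out // placeholder gradient with the output's shape
	conv.ZeroGrad()
	_ = conv.Backward(gradOut) // accumulates GradWeights / GradBias
	conv.UpdateParameters(0.01)

	// BackwardWithLR is assumed to fold the parameter update into one call:
	// _ = conv.BackwardWithLR(gradOut, 0.01)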

type Embedding

type Embedding struct {
	Weights     *tensor.Tensor
	GradWeights *tensor.Tensor
	VocabSize   int
	EmbDim      int
	LastIndices []int
}

func NewEmbedding

func NewEmbedding(vocabSize, embDim int) *Embedding

func (*Embedding) Backward

func (e *Embedding) Backward(gradOutput *tensor.Tensor, learningRate float32) *tensor.Tensor

func (*Embedding) Forward

func (e *Embedding) Forward(indices *tensor.Tensor) *tensor.Tensor

func (*Embedding) GetBias

func (e *Embedding) GetBias() *tensor.Tensor

func (*Embedding) GetWeights

func (e *Embedding) GetWeights() *tensor.Tensor

func (*Embedding) Parameters

func (e *Embedding) Parameters() []*tensor.Tensor

func (*Embedding) SetBiasAndShape

func (e *Embedding) SetBiasAndShape(data []float32, shape []int)

func (*Embedding) SetWeightsAndShape

func (e *Embedding) SetWeightsAndShape(data []float32, shape []int)

func (*Embedding) ZeroGrad

func (e *Embedding) ZeroGrad()
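
A lookup sketch, assuming the indices tensor stores token ids as float32 values (for example a 1 x sequence-length matrix loaded from CSV); the vocabulary size, embedding width, and placeholder gradient are assumptions.

	indices, _ := LoadMatrixFromCSV("token_ids.csv") // assumed 1 x seqLen matrix of ids

	emb := NewEmbedding(32000, 128) // vocabulary of 32000, 128-dim vectors
	vectors := emb.Forward(indices)

	emb.ZeroGrad()
	_ = emb.Backward(vectors, 0.01) // placeholder gradient, assumed output-shaped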

type FlattenLayer

type FlattenLayer struct {
	// contains filtered or unexported fields
}

func NewFlattenLayer

func NewFlattenLayer() *FlattenLayer

func (*FlattenLayer) Backward

func (f *FlattenLayer) Backward(dout *tensor.Tensor) *tensor.Tensor

func (*FlattenLayer) Forward

func (f *FlattenLayer) Forward(x *tensor.Tensor) *tensor.Tensor

func (*FlattenLayer) GetBias

func (r *FlattenLayer) GetBias() *tensor.Tensor

func (*FlattenLayer) GetWeights

func (r *FlattenLayer) GetWeights() *tensor.Tensor

type Layer

type Layer interface {
	Forward(input *tensor.Tensor) *tensor.Tensor

	//DEL
	Backward(gradOutput *tensor.Tensor, learningRate float32) *tensor.Tensor
	ZeroGrad()
	Parameters() []*tensor.Tensor
}

type LayerForTesting

type LayerForTesting interface {
	GetWeights() *tensor.Tensor
	GetBias() *tensor.Tensor
}

type LayerLoader

type LayerLoader interface {
	SetWeightsAndShape(data []float32, shape []int)
	SetBiasAndShape(data []float32, shape []int)
}
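
LayerLoader is the interface a weight-loading routine can program against. A sketch, assuming the CSV files hold row-major values and that the shape slice uses an [outputDim, inputDim] layout for LinearLayer weights (an assumption, not documented here).

	var layer LayerLoader = NewLinearLayer(4, 2)

	w, err := LoadFlatDataFromCSV("fc1_weights.csv") // expects 2*4 = 8 values
	if err != nil {
		log.Fatal(err)
	}
	b, err := LoadFlatDataFromCSV("fc1_bias.csv") // expects 2 values
	if err != nil {
		log.Fatal(err)
	}

	layer.SetWeightsAndShape(w, []int{2, 4}) // assumed [outputDim, inputDim] layout
	layer.SetBiasAndShape(b, []int{2})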

type LinearLayer

type LinearLayer struct {
	InputDim          int
	OutputDim         int
	Weights           *tensor.Tensor
	Bias              *tensor.Tensor
	WeightsTransposed bool
}

func NewLinearLayer

func NewLinearLayer(inputDim, outputDim int) *LinearLayer

func (*LinearLayer) Backward

func (l *LinearLayer) Backward(x *tensor.Tensor, lr float32) *tensor.Tensor

func (*LinearLayer) Forward

func (l *LinearLayer) Forward(x *tensor.Tensor) *tensor.Tensor

func (*LinearLayer) ForwardMultiThread

func (l *LinearLayer) ForwardMultiThread(x *tensor.Tensor) *tensor.Tensor

func (*LinearLayer) ForwardSignalThread

func (l *LinearLayer) ForwardSignalThread(x *tensor.Tensor) *tensor.Tensor

func (*LinearLayer) ForwardSignalThreadCompute

func (l *LinearLayer) ForwardSignalThreadCompute(x *tensor.Tensor) *tensor.Tensor

func (*LinearLayer) GetBias

func (l *LinearLayer) GetBias() *tensor.Tensor

func (*LinearLayer) GetWeights

func (l *LinearLayer) GetWeights() *tensor.Tensor

func (*LinearLayer) Parameters

func (l *LinearLayer) Parameters() []*tensor.Tensor

func (*LinearLayer) SetBias

func (l *LinearLayer) SetBias(data []float32)

func (*LinearLayer) SetBiasAndShape

func (l *LinearLayer) SetBiasAndShape(data []float32, shape []int)

func (*LinearLayer) SetWeights

func (l *LinearLayer) SetWeights(data []float32)

func (*LinearLayer) SetWeightsAndShape

func (l *LinearLayer) SetWeightsAndShape(data []float32, shape []int)

func (*LinearLayer) SetupBackward

func (l *LinearLayer) SetupBackward(x, out *tensor.Tensor)

func (*LinearLayer) ZeroGrad

func (l *LinearLayer) ZeroGrad()
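
Forward, ForwardMultiThread, and ForwardSignalThread are assumed to compute the same affine map and to differ only in scheduling; the batch file and dimensions below are placeholders.

	fc := NewLinearLayer(784, 256)

	x, _ := LoadMatrixFromCSV("batch.csv") // assumed shape [batch, 784]

	out := fc.Forward(x)              // default path
	outMT := fc.ForwardMultiThread(x) // assumed to return the same result, computed in parallel
	_, _ = out, outMT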

type MaxPool2DLayer

type MaxPool2DLayer struct {
	PoolSize  int
	Stride    int
	Padding   int
	Input     *tensor.Tensor
	ArgMax    [][4]int
	OutputDim []int
}

func NewMaxPool2DLayer

func NewMaxPool2DLayer(poolSize, stride, padding int) *MaxPool2DLayer

func (*MaxPool2DLayer) Backward

func (m *MaxPool2DLayer) Backward(gradOutput *tensor.Tensor) *tensor.Tensor

func (*MaxPool2DLayer) Forward

func (m *MaxPool2DLayer) Forward(x *tensor.Tensor) *tensor.Tensor

func (*MaxPool2DLayer) OutputShape

func (m *MaxPool2DLayer) OutputShape() []int
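
A pooling sketch, assuming a channels-first input and that OutputShape reports the dimensions computed during the most recent Forward call (an assumption based on the field names).

	x := LoadImageFromCSV("image.csv")

	pool := NewMaxPool2DLayer(2, 2, 0) // 2x2 window, stride 2, no padding
	y := pool.Forward(x)

	fmt.Println(pool.OutputShape()) // e.g. [channels, height/2, width/2]
	_ = y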

type Model

type Model struct {
	Layers          []Layer
	LayerIndex2Name map[int]string
	LayerName2Index map[string]int
	// contains filtered or unexported fields
}

func NewModel

func NewModel() *Model

func (*Model) AddLayer

func (m *Model) AddLayer(layer Layer)

func (*Model) Backward

func (m *Model) Backward(target *tensor.Tensor, learningRate float32)

func (*Model) Forward

func (m *Model) Forward(input *tensor.Tensor) *tensor.Tensor

func (*Model) PrintModel

func (m *Model) PrintModel()

func (*Model) ZeroGrad

func (m *Model) ZeroGrad()
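
A manual training loop on a Model without BasicTrainer, assuming Model.Backward both backpropagates from the target and applies the learning-rate update in place; dimensions and CSV paths are placeholders.

	model := NewModel()
	model.AddLayer(NewLinearLayer(4, 8))
	model.AddLayer(NewLeakyReLULayer(0.01))
	model.AddLayer(NewLinearLayer(8, 1))
	model.PrintModel()

	inputs, _ := LoadMatrixFromCSV("inputs.csv")
	targets, _ := LoadMatrixFromCSV("targets.csv")

	for epoch := 0; epoch < 10; epoch++ {
		model.ZeroGrad()
		preds := model.Forward(inputs)
		loss := MSE(preds, targets)
		model.Backward(targets, 0.01) // assumed: backprop from targets and update in place
		fmt.Printf("epoch %d  loss %f\n", epoch, loss)
	}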

type ModelInterface

type ModelInterface interface {
	Forward(input *tensor.Tensor) *tensor.Tensor
	Backward(target *tensor.Tensor, learningRate float32)
	ZeroGrad()
}

type ReLULayer

type ReLULayer struct {
	// contains filtered or unexported fields
}

func NewLeakyReLULayer

func NewLeakyReLULayer(negativeSlope float32) *ReLULayer

func NewReLULayer

func NewReLULayer() *ReLULayer

func (*ReLULayer) ActivationType

func (r *ReLULayer) ActivationType() string

func (*ReLULayer) Backward

func (r *ReLULayer) Backward(gradOutput *tensor.Tensor, learningRate float32) *tensor.Tensor

func (*ReLULayer) Forward

func (r *ReLULayer) Forward(x *tensor.Tensor) *tensor.Tensor

func (*ReLULayer) GetBias

func (r *ReLULayer) GetBias() *tensor.Tensor

func (*ReLULayer) GetWeights

func (r *ReLULayer) GetWeights() *tensor.Tensor

func (*ReLULayer) NegativeSlope

func (r *ReLULayer) NegativeSlope() float32

func (*ReLULayer) Parameters

func (r *ReLULayer) Parameters() []*tensor.Tensor

func (*ReLULayer) SetInplace

func (r *ReLULayer) SetInplace(inplace bool)

func (*ReLULayer) ZeroGrad

func (r *ReLULayer) ZeroGrad()
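
NewReLULayer and NewLeakyReLULayer return the same type; the slope and activation name can be inspected afterwards. The printed strings and the SetInplace behavior noted in the comments are assumptions.

	relu := NewReLULayer()
	leaky := NewLeakyReLULayer(0.01)

	fmt.Println(relu.ActivationType(), relu.NegativeSlope())   // e.g. "relu" 0
	fmt.Println(leaky.ActivationType(), leaky.NegativeSlope()) // e.g. "leakyrelu" 0.01

	leaky.SetInplace(true) // assumed: reuse the input buffer during Forward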

type SigmoidLayer

type SigmoidLayer struct {
}

func NewSigmoidLayer

func NewSigmoidLayer() *SigmoidLayer

func (*SigmoidLayer) Backward

func (s *SigmoidLayer) Backward(gradOutput *tensor.Tensor, learningRate float32) *tensor.Tensor

func (*SigmoidLayer) Forward

func (s *SigmoidLayer) Forward(input *tensor.Tensor) *tensor.Tensor

func (*SigmoidLayer) Parameters

func (s *SigmoidLayer) Parameters() []*tensor.Tensor

func (*SigmoidLayer) ZeroGrad

func (s *SigmoidLayer) ZeroGrad()

type SoftmaxLayer

type SoftmaxLayer struct {
	// contains filtered or unexported fields
}

func NewSoftmaxLayer

func NewSoftmaxLayer(axis int) *SoftmaxLayer

func (*SoftmaxLayer) Backward

func (s *SoftmaxLayer) Backward(dout *tensor.Tensor) *tensor.Tensor

func (*SoftmaxLayer) Forward

func (s *SoftmaxLayer) Forward(x *tensor.Tensor) *tensor.Tensor

func (*SoftmaxLayer) GetAxis

func (s *SoftmaxLayer) GetAxis() int

func (*SoftmaxLayer) SetAxis

func (s *SoftmaxLayer) SetAxis(axis int)

type TrainerInterface

type TrainerInterface interface {
	Train(model ModelInterface, inputs, targets *tensor.Tensor, epochs int, learningRate float32)
	Validate(model ModelInterface, inputs, targets *tensor.Tensor) float32
}

Directories

Path Synopsis
cmd
03_autoencoder command
data_store
datasets_loader
pkg
fmt
log
risc-v
thirdparty
fp16_convert
Package float16 defines support for half-precision floating-point numbers.
onnx-go/backend/simple
Package simple holds a very simple graph structure suitable to receive an onnx model
onnx-go/backend/testbackend
Package testbackend provides a set of testing helper functions that test backend interface implementations.
onnx-go/backend/testbackend/onnx
Package onnxtest contains an export of the onnx test files
onnx-go/backend/x/gorgonnx
Package gorgonnx creates a temporary graph that is compatible with backend.ComputationBackend.
onnx-go/doc/introduction command
Present displays slide presentations and articles.
vision
