lshattention

package
v0.7.0
Published: May 24, 2021 License: BSD-2-Clause Imports: 6 Imported by: 0

Documentation

Overview

Package lshattention provides an implementation of the LSH-Attention model, as described in `Reformer: The Efficient Transformer` by N. Kitaev, Ł. Kaiser, A. Levskaya (https://arxiv.org/pdf/2001.04451.pdf). TODO: check compatibility with the LSH Attention implemented by Hugging Face (https://huggingface.co/transformers/model_doc/reformer.html).

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Config

type Config struct {
	InputSize   int
	QuerySize   int
	ValueSize   int
	BucketSize  int // num of buckets / 2
	ScaleFactor mat.Float
}

Config provides configuration settings for an LSH-Attention Model.
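
As an illustration, a Config value might be populated like this (a minimal sketch: the field values are arbitrary, and the import path is assumed to be spago's attention sub-package, e.g. github.com/nlpodyssey/spago/pkg/ml/nn/attention/lshattention):

// Illustrative values only; sizes depend on the surrounding model.
cfg := lshattention.Config{
	InputSize:   64,
	QuerySize:   64,
	ValueSize:   64,
	BucketSize:  4,     // i.e. 8 LSH buckets in total (num of buckets / 2)
	ScaleFactor: 0.125, // e.g. 1/sqrt(QuerySize)
}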

type ContextProb

type ContextProb struct {
	// Context encodings.
	Context []ag.Node
	// Prob attention scores.
	Prob []mat.Matrix
}

ContextProb is a pair of Context encodings and Prob attention scores.
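
A brief sketch of reading it: the Attention field of Model (described below) holds a *ContextProb, presumably populated during the forward step.

cp := model.Attention // *lshattention.ContextProb
_ = cp.Context        // context encodings (ag.Node values)
_ = cp.Prob           // attention-score matrices (mat.Matrix values)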

type Model

type Model struct {
	nn.BaseModel
	Config
	Query     *linear.Model
	R         nn.Param `spago:"type:weights"`
	Value     *linear.Model
	Attention *ContextProb `spago:"scope:processor"`
}

Model contains the serializable parameters.

func New

func New(config Config) *Model

New returns a new model with parameters initialized to zeros.
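
For example, reusing the cfg sketched under Config above:

model := lshattention.New(cfg)
// Query, R and Value start at zero; in practice they would be randomly
// initialized or loaded from a serialized model before use.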

func (*Model) Forward

func (m *Model) Forward(xs ...ag.Node) []ag.Node

Forward performs the forward step for each input node and returns the result.
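
A minimal end-to-end sketch follows. It assumes spago's computation-graph API (ag.NewGraph, Graph.NewVariable) and a reification step that binds the model to the graph before Forward is called; the exact import paths and the reify call (nn.ReifyForInference below) are assumptions based on spago's layout around v0.7 and may need adjustment.

package main

import (
	"fmt"

	"github.com/nlpodyssey/spago/pkg/mat"
	"github.com/nlpodyssey/spago/pkg/ml/ag"
	"github.com/nlpodyssey/spago/pkg/ml/nn"
	"github.com/nlpodyssey/spago/pkg/ml/nn/attention/lshattention"
)

func main() {
	// Small sizes keep the example readable; see Config above.
	model := lshattention.New(lshattention.Config{
		InputSize:   4,
		QuerySize:   4,
		ValueSize:   4,
		BucketSize:  2,
		ScaleFactor: 0.5,
	})

	// Assumption: inputs are wrapped as graph variables and the model is
	// reified against the graph; the exact reification call varies across
	// spago versions.
	g := ag.NewGraph()
	xs := []ag.Node{
		g.NewVariable(mat.NewVecDense([]mat.Float{0.1, 0.2, 0.3, 0.4}), false),
		g.NewVariable(mat.NewVecDense([]mat.Float{0.4, 0.3, 0.2, 0.1}), false),
	}
	proc := nn.ReifyForInference(model, g).(*lshattention.Model)

	ys := proc.Forward(xs...)          // one context encoding per input node
	fmt.Println(len(ys))               // 2
	fmt.Println(proc.Attention != nil) // attention scores exposed via Attention
}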
