pkg/

directory
v0.1.0
Published: Dec 9, 2020 License: BSD-2-Clause

Directories

Path Synopsis
mat
internal/asm/f64
Package f64 provides float64 vector primitives.
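
As a rough illustration of what such primitives look like (the internal package's actual exported set and its assembly kernels are not shown here), pure-Go reference versions of two common kernels might read:

```go
// axpyUnitary computes y[i] += alpha*x[i] over unit-stride slices; an
// optimized implementation replaces this loop with hand-written assembly.
func axpyUnitary(alpha float64, x, y []float64) {
	for i, v := range x {
		y[i] += alpha * v
	}
}

// dotUnitary returns the dot product of two unit-stride slices.
func dotUnitary(x, y []float64) (sum float64) {
	for i, v := range x {
		sum += v * y[i]
	}
	return sum
}
```
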
ml
ag
ag/fn
SparseMax implementation based on https://github.com/gokceneraslan/SparseMax.torch
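
Sparsemax (Martins & Astudillo, 2016) maps a score vector to a sparse probability distribution by Euclidean projection onto the simplex. A minimal self-contained sketch of the forward pass, not the package's actual API:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// sparsemax projects z onto the probability simplex: it finds a threshold
// tau such that the clipped values max(z[i]-tau, 0) sum to one, zeroing
// out low-scoring entries entirely (unlike softmax).
func sparsemax(z []float64) []float64 {
	sorted := append([]float64(nil), z...)
	sort.Sort(sort.Reverse(sort.Float64Slice(sorted)))

	// k is the size of the support; tauSum is the cumulative sum over it.
	var cumSum, tauSum float64
	k := 0
	for i, v := range sorted {
		cumSum += v
		if 1+float64(i+1)*v > cumSum {
			k = i + 1
			tauSum = cumSum
		}
	}
	tau := (tauSum - 1) / float64(k)

	out := make([]float64, len(z))
	for i, v := range z {
		out[i] = math.Max(v-tau, 0)
	}
	return out
}

func main() {
	// Prints [1 0 0]: the gap between the top two scores is >= 1, so
	// sparsemax puts all mass on the clear winner.
	fmt.Println(sparsemax([]float64{2.0, 1.0, 0.1}))
}
```
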
nn
nn/birnncrf
Bidirectional Recurrent Neural Network (BiRNN) with a Conditional Random Field (CRF) on top.
nn/bls
Implementation of the Broad Learning System (BLS) described in "Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture" by C. L. Philip Chen and Zhulin Liu, 2017.
nn/gnn/slstm
slstm Reference: "Sentence-State LSTM for Text Representation" by Zhang et al., 2018.
nn/gnn/startransformer
StarTransformer is a variant of the model introduced by Qipeng Guo, Xipeng Qiu et al.
nn/lshattention
LSH-Attention as in `Reformer: The Efficient Transformer` by N. Kitaev, Ł. Kaiser, A. Levskaya.
nn/normalization/adanorm
Reference: "Understanding and Improving Layer Normalization" by Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, Junyang Lin (2019).
Reference: "Understanding and Improving Layer Normalization" by Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, Junyang Lin (2019).
nn/normalization/fixnorm
Reference: "Improving Lexical Choice in Neural Machine Translation" by Toan Q. Nguyen and David Chiang (2018) (https://arxiv.org/pdf/1710.01329.pdf)
Reference: "Improving Lexical Choice in Neural Machine Translation" by Toan Q. Nguyen and David Chiang (2018) (https://arxiv.org/pdf/1710.01329.pdf)
nn/normalization/layernorm
Reference: "Layer normalization" by Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton (2016).
Reference: "Layer normalization" by Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton (2016).
nn/normalization/layernormsimple
Reference: "Understanding and Improving Layer Normalization" by Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, Junyang Lin (2019).
Reference: "Understanding and Improving Layer Normalization" by Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, Junyang Lin (2019).
nn/normalization/rmsnorm
Reference: "Root Mean Square Layer Normalization" by Biao Zhang and Rico Sennrich (2019).
Reference: "Root Mean Square Layer Normalization" by Biao Zhang and Rico Sennrich (2019).
nn/rae
Implementation of the recursive auto-encoder strategy described in "Towards Lossless Encoding of Sentences" by Prato et al., 2019.
nn/rc
This package contains built-in Residual Connections (RC).
nn/rec/horn
Higher Order Recurrent Neural Networks (HORN)
nn/rec/lstmsc
LSTM enriched with a PolicyGradient to enable Dynamic Skip Connections.
nn/rec/mist
Implementation of the MIST (MIxed hiSTory) recurrent network as described in "Analyzing and Exploiting NARX Recurrent Neural Networks for Long-Term Dependencies" by Di Pietro et al., 2018 (https://arxiv.org/pdf/1702.07805.pdf).
nn/rec/nru
Implementation of the NRU (Non-Saturating Recurrent Units) recurrent network as described in "Towards Non-Saturating Recurrent Units for Modelling Long-Term Dependencies" by Chandar et al., 2019.
nn/rec/rla
RLA (Recurrent Linear Attention) as described in "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention" by Katharopoulos et al., 2020.
nn/rec/srnn
srnn implements the SRNN (Shuffling Recurrent Neural Networks) by Rotman and Wolf, 2020.
nn/syntheticattention
This is an implementation of the Synthetic Attention described in "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" by Tay et al., 2020.
nlp
charlm
CharLM implements a character-level language model that uses a recurrent neural network as its backbone.
contextualstringembeddings
Implementation of the "Contextual String Embeddings" of words (Akbik et al., 2018).
Implementation of the "Contextual String Embeddings" of words (Akbik et al., 2018).
evolvingembeddings
A word embedding model that evolves itself by dynamically aggregating contextual embeddings over time during inference.
sequencelabeler
Implementation of a sequence labeling architecture composed of Embeddings -> BiRNN -> Scorer -> CRF.
stackedembeddings
StackedEmbeddings is a convenient module that stacks multiple word embedding representations by concatenating them.
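
The stacking itself is plain vector concatenation; schematically (hypothetical name, not the module's API):

```go
// concatEmbeddings glues the per-model embeddings of one word into a
// single vector; the stacked dimension is the sum of the input dimensions.
func concatEmbeddings(vecs ...[]float64) []float64 {
	out := make([]float64, 0)
	for _, v := range vecs {
		out = append(out, v...)
	}
	return out
}
```
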
tokenizers
This package is an interim solution while developing `gotokenizers` (https://github.com/nlpodyssey/gotokenizers).
tokenizers/basetokenizer
BaseTokenizer is a very simple tokenizer that splits on whitespace (and the like) and punctuation symbols.
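
A minimal sketch of that splitting rule (hypothetical function, assuming `strings` and `unicode` are imported; the package's actual API is not shown here):

```go
// tokenize splits on Unicode whitespace and emits each punctuation rune
// as a token of its own, mirroring the behavior the synopsis describes.
func tokenize(text string) []string {
	var tokens []string
	var cur strings.Builder
	flush := func() {
		if cur.Len() > 0 {
			tokens = append(tokens, cur.String())
			cur.Reset()
		}
	}
	for _, r := range text {
		switch {
		case unicode.IsSpace(r):
			flush()
		case unicode.IsPunct(r):
			flush()
			tokens = append(tokens, string(r))
		default:
			cur.WriteRune(r)
		}
	}
	flush()
	return tokens
}
```

For example, tokenize("Hello, world!") would yield ["Hello", ",", "world", "!"].
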
transformers/bert
Reference: "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin (2017) (http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf).
Reference: "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin (2017) (http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf).
ner
