fastembed

package module
v1.0.0
Published: Jan 31, 2024 License: MIT Imports: 16 Imported by: 0

README

FastEmbed-go

Go implementation of @Qdrant/fastembed


🍕 Features

  • Supports batch embeddings with parallelism using goroutines.
  • Uses @sugarme/tokenizer for fast tokenization.
  • Optimized embedding models.

The default embedding supports "query" and "passage" prefixes for the input text. The default model is Flag Embedding, which ranks at the top of the MTEB leaderboard.

🔍 Not looking for Go?

FastEmbed is also available for Python (qdrant/fastembed), Rust (fastembed-rs), and JavaScript (fastembed-js).

🤖 Models

The supported models are listed under the EmbeddingModel constants in the documentation below; call ListSupportedModels() to inspect them at runtime.

🚀 Installation

Run the following Go CLI command in your project directory:

go get -u github.com/anush008/fastembed-go

📖 Usage

import "github.com/anush008/fastembed-go"

// With default options
model, err := fastembed.NewFlagEmbedding(nil)
if err != nil {
 panic(err)
}
defer model.Destroy()

// With custom options
options := fastembed.InitOptions{
 Model:     fastembed.BGEBaseEN,
 CacheDir:  "model_cache",
 MaxLength: 200,
}

model, err = fastembed.NewFlagEmbedding(&options)
if err != nil {
 panic(err)
}
defer model.Destroy()

documents := []string{
 "passage: Hello, World!",
 "query: Hello, World!",
 "passage: This is an example passage.",
 // You can leave out the prefix, but including it is recommended
 "fastembed-go is licensed under MIT",
}

// Generate embeddings with a batch size of 25; the default is 256
embeddings, err := model.Embed(documents, 25)  //  -> Embeddings length: 4
if err != nil {
 panic(err)
}

The library provides dedicated passage and query embedding methods for more accurate retrieval results:

// Generate embeddings for the passages
// The texts are prefixed with "passage" for better results
// The batch size is set to 1 for demonstration purposes
passages := []string{
 "This is the first passage. It contains provides more context for retrieval.",
 "Here's the second passage, which is longer than the first one. It includes additional information.",
 "And this is the third passage, the longest of all. It contains several sentences and is meant for more extensive testing.",
}

embeddings, err := model.PassageEmbed(passages, 1)  //  -> Embeddings length: 3
if err != nil {
 panic(err)
}

// Generate embeddings for the query
// The text is prefixed with "query" for better retrieval
query := "What is the answer to this generic question?"

embeddings, err := model.QueryEmbed(query)
if err != nil {
 panic(err)
}
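
Putting the pieces together, a minimal self-contained program might look like the sketch below. It assumes the default model can be downloaded on first run and that the ONNX runtime is available on your system.

package main

import (
	"fmt"

	"github.com/anush008/fastembed-go"
)

func main() {
	// Initialize with the default model (Flag Embedding).
	model, err := fastembed.NewFlagEmbedding(nil)
	if err != nil {
		panic(err)
	}
	defer model.Destroy()

	documents := []string{
		"passage: FastEmbed-go generates embeddings locally.",
		"query: How are text embeddings generated?",
	}

	// Embed with a batch size of 25; the default is 256.
	embeddings, err := model.Embed(documents, 25)
	if err != nil {
		panic(err)
	}

	fmt.Printf("Generated %d embeddings of dimension %d\n", len(embeddings), len(embeddings[0]))
}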

ℹ︎ Notice:

The ONNX Runtime shared library is loaded automatically in most environments. However, if you encounter the following error:

panic: Platform-specific initialization failed: Error loading ONNX shared library

Set the ONNX_PATH environment variable to point to your ONNX Runtime shared library. For example, on macOS:

export ONNX_PATH="/path/to/onnx/lib/libonnxruntime.dylib"

On Linux:

export ONNX_PATH="/path/to/onnx/lib/libonnxruntime.so"

You can find the ONNX Runtime releases in the microsoft/onnxruntime GitHub repository.
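
If you prefer to configure this from Go rather than the shell, setting the same variable with os.Setenv before initializing the model should also work, assuming the library reads it at initialization time; the path below is a placeholder for your own installation.

package main

import (
	"os"

	"github.com/anush008/fastembed-go"
)

func main() {
	// Point the library at the ONNX Runtime shared library before initialization.
	os.Setenv("ONNX_PATH", "/path/to/onnx/lib/libonnxruntime.so")

	model, err := fastembed.NewFlagEmbedding(nil)
	if err != nil {
		panic(err)
	}
	defer model.Destroy()
}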

🚒 Under the hood

Why fast?

It's important we justify the "fast" in FastEmbed. FastEmbed is fast because:

  1. Quantized model weights
  2. ONNX Runtime, which allows for inference on CPU, GPU, and other dedicated runtimes

Why light?

  1. No hidden dependencies on Hugging Face Transformers

Why accurate?

  1. Better than OpenAI Ada-002
  2. Top of the Embedding leaderboards e.g. MTEB

📄 LICENSE

MIT © 2023

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type EmbeddingModel

type EmbeddingModel string

Enum-type representing the available embedding models.

const (
	AllMiniLML6V2 EmbeddingModel = "fast-all-MiniLM-L6-v2"
	BGEBaseEN     EmbeddingModel = "fast-bge-base-en"
	BGEBaseENV15  EmbeddingModel = "fast-bge-base-en-v1.5"
	BGESmallEN    EmbeddingModel = "fast-bge-small-en"
	BGESmallENV15 EmbeddingModel = "fast-bge-small-en-v1.5"
	BGESmallZH    EmbeddingModel = "fast-bge-small-zh-v1.5"
)

type FlagEmbedding

type FlagEmbedding struct {
	// contains filtered or unexported fields
}

Struct to interface with a FastEmbed model.

func NewFlagEmbedding

func NewFlagEmbedding(options *InitOptions) (*FlagEmbedding, error)

Function to initialize a FastEmbed model.

func (*FlagEmbedding) Destroy

func (f *FlagEmbedding) Destroy() error

Function to clean up the internal onnxruntime environment when it is no longer needed.

func (*FlagEmbedding) Embed

func (f *FlagEmbedding) Embed(input []string, batchSize int) ([]([]float32), error)

Function to embed a batch of input strings. The batchSize parameter controls the number of inputs to embed in a single batch. The batches are processed in parallel. Returns the first error encountered, if any. The default batch size is 256.

func (*FlagEmbedding) PassageEmbed

func (f *FlagEmbedding) PassageEmbed(input []string, batchSize int) ([]([]float32), error)

Function to embed a batch of input strings, each prefixed with "passage: ".

func (*FlagEmbedding) QueryEmbed

func (f *FlagEmbedding) QueryEmbed(input string) ([]float32, error)

Function to embed a single input string prefixed with "query: ". Recommended for generating query embeddings for semantic search.
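
Because QueryEmbed and PassageEmbed return plain []float32 vectors, ranking passages against a query is left to the caller. Below is a sketch using cosine similarity; the cosineSimilarity helper is written for this example and is not part of the package.

package main

import (
	"fmt"
	"math"

	"github.com/anush008/fastembed-go"
)

// cosineSimilarity is an example helper, not part of fastembed-go.
func cosineSimilarity(a, b []float32) float64 {
	var dot, normA, normB float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		normA += float64(a[i]) * float64(a[i])
		normB += float64(b[i]) * float64(b[i])
	}
	return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}

func main() {
	model, err := fastembed.NewFlagEmbedding(nil)
	if err != nil {
		panic(err)
	}
	defer model.Destroy()

	passages := []string{
		"FastEmbed-go generates text embeddings locally using ONNX Runtime.",
		"The weather today is sunny with a light breeze.",
	}

	queryEmbedding, err := model.QueryEmbed("How do I generate embeddings in Go?")
	if err != nil {
		panic(err)
	}

	passageEmbeddings, err := model.PassageEmbed(passages, 25)
	if err != nil {
		panic(err)
	}

	// Score each passage against the query; higher is more similar.
	for i, p := range passageEmbeddings {
		fmt.Printf("passage %d: similarity %.4f\n", i, cosineSimilarity(queryEmbedding, p))
	}
}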

type InitOptions

type InitOptions struct {
	Model                EmbeddingModel
	ExecutionProviders   []string
	MaxLength            int
	CacheDir             string
	ShowDownloadProgress *bool
}

Options to initialize a FastEmbed model.

	Model:                The model to use for embedding.
	ExecutionProviders:   The execution providers to use for onnxruntime.
	MaxLength:            The maximum length of the input sequence.
	CacheDir:             The directory to cache the model files.
	ShowDownloadProgress: Whether to show the download progress bar.

NOTE: We use a pointer for ShowDownloadProgress so that we can distinguish between the user not setting the flag and explicitly setting it to false. The default should be true, but Go assigns the zero value false to bools, so a plain bool cannot tell "set to false" apart from "not set". A *bool is nil when not set explicitly.
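
Because ShowDownloadProgress is a *bool, explicitly disabling the progress bar means passing the address of a false value; leaving the field nil keeps the default of true. A brief sketch:

// Explicitly disable the download progress bar.
// A nil ShowDownloadProgress keeps the default (true).
showProgress := false

options := fastembed.InitOptions{
	Model:                fastembed.BGESmallENV15,
	CacheDir:             "model_cache",
	ShowDownloadProgress: &showProgress,
}

model, err := fastembed.NewFlagEmbedding(&options)
if err != nil {
	panic(err)
}
defer model.Destroy()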

type ModelInfo

type ModelInfo struct {
	Model       EmbeddingModel
	Dim         int
	Description string
}

Struct to represent FastEmbed model information.

func ListSupportedModels added in v0.1.4

func ListSupportedModels() []ModelInfo

Function to list the supported FastEmbed models.
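
For example, the following sketch prints each supported model with its embedding dimension and description:

package main

import (
	"fmt"

	"github.com/anush008/fastembed-go"
)

func main() {
	// Print each supported model with its dimension and description.
	for _, info := range fastembed.ListSupportedModels() {
		fmt.Printf("%-28s dim=%-4d %s\n", info.Model, info.Dim, info.Description)
	}
}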
