gptparallel

v0.3.0 · Published: Jun 10, 2024 · License: MIT · Imports: 12 · Imported by: 2

README


GPT Parallel

GPT Parallel is an early-beta library for Go that manages multiple concurrent requests to the OpenAI API and Azure OpenAI. It is built on top of the go-openai package and is designed to make it easier to issue many requests at once, track progress, and retry failures with a customizable exponential backoff strategy.

Disclaimer: This library is in early beta and error handling has not been thoroughly tested. Use it at your own risk.

Features

  • Supports concurrent requests to GPT models
  • Progress tracking using mpb (multi progress bars)
  • Customizable exponential backoff strategy for retries
  • Custom logger support

Contributing

We welcome contributions to improve GPT Parallel. If you have ideas for improvements, bug fixes, or additional features, please feel free to open an issue or submit a pull request.

When submitting a pull request, you agree to assign copyright for your changes to the project maintainers. This is a standard practice in open-source projects and helps ensure the long-term viability of the project.

Mocks and Test Cases

We are actively looking for mocks and more comprehensive test cases to help improve the reliability and stability of GPT Parallel. If you have experience in this area, your contributions will be greatly appreciated.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Documentation

Overview

Package gptparallel provides an efficient way to handle multiple requests to OpenAI GPT models using the go-openai library. It simplifies the execution of parallel requests and incorporates retry logic.

GPTParallel wraps the go-openai library and manages concurrent requests to the OpenAI API or the Azure OpenAI API. It allows users to set a configurable number of concurrent requests and a custom exponential backoff strategy to handle retries in case of failures.

This package is designed to efficiently manage concurrent requests to GPT models while providing progress updates and handling retries with an exponential backoff strategy.

Example
package main

import (
	"context"
	"fmt"
	"os"

	gptparallel "github.com/tbiehn/gptparallel"

	backoff "github.com/cenkalti/backoff/v4"
	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))
	ctx := context.Background()

	backoffSettings := backoff.NewExponentialBackOff()

	gptParallel := gptparallel.NewGPTParallel(ctx, client, nil, backoffSettings, nil)

	// Prepare requests and their respective callbacks
	requests := []gptparallel.RequestWithCallback{
		{
			Request: openai.ChatCompletionRequest{
				Messages: []openai.ChatCompletionMessage{
					{
						Role:    openai.ChatMessageRoleSystem,
						Content: "You are a helpful assistant.",
					},
					{
						Role:    openai.ChatMessageRoleUser,
						Content: "Who won the world series in 2020?",
					},
				},
				Model:     openai.GPT3Dot5Turbo,
				MaxTokens: 10,
			},
			Callback: func(result gptparallel.RequestResult) {
				if result.Err != nil {
					fmt.Printf("Request failed with error: %v\n", result.Err)
				} else {
					fmt.Print("Received response!\n")
				}
			},
		},
		// More requests can be added here
	}

	concurrency := 2 // Number of concurrent requests
	gptParallel.RunRequests(requests, concurrency)
}
Output:

Received response!

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type GPTParallel

type GPTParallel struct {
	Client          *openai.Client
	Progress        *mpb.Progress
	BackoffSettings *backoff.ExponentialBackOff
	Logger          Logger
	// contains filtered or unexported fields
}

GPTParallel: The main struct responsible for managing concurrent requests, progress bars, and backoff settings.

func NewGPTParallel

func NewGPTParallel(context context.Context, client *openai.Client, progress *mpb.Progress, backoffSettings *backoff.ExponentialBackOff, optLogger Logger) *GPTParallel

NewGPTParallel: A function that creates a new GPTParallel instance with the given context, client, progress, backoff settings, and optional logger.

func (*GPTParallel) RunEmbeddingsChan added in v0.2.4

func (g *GPTParallel) RunEmbeddingsChan(requestsChan <-chan VectorRequestWithCallback, concurrency int) <-chan VectorRequestResult

RunEmbeddingsChan: A method that executes requests received from a channel in parallel with the given concurrency level, manages progress bars and retries, and sends results to a channel.
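A sketch of driving RunEmbeddingsChan from a producer goroutine (the model and input are illustrative; running it requires a valid OPENAI_API_KEY):

```go
package main

import (
	"context"
	"fmt"
	"os"

	backoff "github.com/cenkalti/backoff/v4"
	openai "github.com/sashabaranov/go-openai"
	gptparallel "github.com/tbiehn/gptparallel"
)

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))
	gptParallel := gptparallel.NewGPTParallel(context.Background(), client, nil, backoff.NewExponentialBackOff(), nil)

	requestsChan := make(chan gptparallel.VectorRequestWithCallback)
	go func() {
		defer close(requestsChan) // closing the channel signals no more work
		requestsChan <- gptparallel.VectorRequestWithCallback{
			Request: openai.EmbeddingRequest{
				Input: []string{"Hello, world"},
				Model: openai.AdaEmbeddingV2,
			},
			Identifier: "greeting",
		}
	}()

	// Results arrive on the returned channel as requests complete.
	for result := range gptParallel.RunEmbeddingsChan(requestsChan, 4) {
		if result.Err != nil {
			fmt.Printf("%q failed: %v\n", result.Identifier, result.Err)
			continue
		}
		fmt.Printf("%q -> %d dimensions\n", result.Identifier, len(result.Vector))
	}
}
```

RunRequestsChan follows the same channel-in, channel-out pattern with RequestWithCallback and RequestResult.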

func (*GPTParallel) RunRequests

func (g *GPTParallel) RunRequests(requests []RequestWithCallback, concurrency int)

RunRequests: A method that executes all requests in parallel with the given concurrency level and manages progress bars and retries.

func (*GPTParallel) RunRequestsChan added in v0.2.0

func (g *GPTParallel) RunRequestsChan(requestsChan <-chan RequestWithCallback, concurrency int) <-chan RequestResult

RunRequestsChan: A method that executes requests received from a channel in parallel with the given concurrency level, manages progress bars and retries, and sends results to a channel.

type Logger

type Logger interface {
	Debug(args ...interface{})
	Debugf(format string, args ...interface{})
	Info(args ...interface{})
	Infof(format string, args ...interface{})
	Warn(args ...interface{})
	Warnf(format string, args ...interface{})
	Error(args ...interface{})
	Errorf(format string, args ...interface{})
}

Logger: An interface to support different logging implementations, with a default no-op Logger provided.

type RequestResult

type RequestResult struct {
	Request        openai.ChatCompletionRequest `json:"request"`
	Response       string                       `json:"response"`
	FunctionName   string                       `json:"function_name,omitempty"`
	FunctionParams string                       `json:"function_params,omitempty"`
	Identifier     string                       `json:"identifier"`
	FinishReason   string                       `json:"finish_reason"`
	Err            error                        `json:"error,omitempty"`
}

RequestResult: A struct containing the original request, the response, the finish reason, and any errors that occurred during the request.

type RequestWithCallback

type RequestWithCallback struct {
	Request    openai.ChatCompletionRequest
	Callback   func(result RequestResult)
	Identifier string
}

type ResponseWithFunction added in v0.2.6

type ResponseWithFunction struct {
	Response       string
	FunctionName   string
	FunctionParams string
}

type VectorRequestResult added in v0.2.4

type VectorRequestResult struct {
	Request    openai.EmbeddingRequest `json:"request"`
	Vector     []float32               `json:"vector"`
	Identifier string                  `json:"identifier"`
	Err        error                   `json:"error,omitempty"`
}

VectorRequestResult: A struct containing the original embedding request, the resulting vector, the identifier, and any error that occurred during the request.

type VectorRequestWithCallback added in v0.2.4

type VectorRequestWithCallback struct {
	Request    openai.EmbeddingRequest
	Callback   func(result VectorRequestResult)
	Identifier string
}

