chatgptgo

package module

v1.0.0

Published: Mar 3, 2023 License: MIT Imports: 4 Imported by: 0

README

ChatGPT Go

Communicate with OpenAI's GPT-3.5 (ChatGPT) API.

Usage

    package main

    import (
        "fmt"

        chatgptgo "github.com/AidenHadisi/chat-gpt-go"
    )

    func main() {
        api := chatgptgo.NewApi("YOUR_API_KEY")

        request := &chatgptgo.Request{
            Model: chatgptgo.Turbo,
            Messages: []*chatgptgo.Message{
                {
                    Role:    "user",
                    Content: "Hello, world!",
                },
            },
        }

        response, err := api.Chat(request)
        if err != nil {
            panic(err)
        }

        fmt.Println(response.Choices[0].Message.Content)
    }

Additional Configuration

The following configuration options are available in the Request struct:

  • Model: ID of the model to use. Currently, only gpt-3.5-turbo and gpt-3.5-turbo-0301 are supported.
  • Messages: messages to generate chat completions for, in the chat format.
  • Temperature: what sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
  • TopP: an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
  • N: how many chat completion choices to generate for each input message.
  • Stop: up to 4 sequences where the API will stop generating further tokens.
  • MaxTokens: maximum number of tokens to generate for each chat completion choice.
  • PresencePenalty: a number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
  • FrequencyPenalty: a number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
  • User: a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

Providing Organization ID

You may provide your OpenAI organization ID when creating the API instance.

    api := chatgptgo.NewApi("YOUR_API_KEY").WithOrganizationId("YOUR_ORGANIZATION_ID")

Using your own HTTP client

You may provide your own HTTP client when creating the API instance; http.DefaultClient is used if none is provided.

    api := chatgptgo.NewApi("YOUR_API_KEY").WithClient(&http.Client{Timeout: 10 * time.Second})

Handling OpenAI API errors

If you need details about an error returned by the OpenAI API, convert the error to the ApiError type with errors.As.

    response, err := api.Chat(request)
    if err != nil {
        var aerr *chatgptgo.ApiError
        if errors.As(err, &aerr) {
            // handle OpenAI API errors
            fmt.Println(aerr.StatusCode)
            fmt.Println(aerr.ErrorDetails.Type)
            fmt.Println(aerr.ErrorDetails.Message)
        } else {
            // handle other errors
            fmt.Println(err)
        }
    }

Documentation

Overview

Package chatgptgo provides a client for communicating with OpenAI's GPT-3.5 (ChatGPT) API.

Index

Constants

const (
	// BaseUrl is the base URL for the OpenAI API.
	BaseUrl = "https://api.openai.com/v1"

	// ChatUrl is the URL for the chat endpoint.
	ChatUrl = BaseUrl + "/chat/completions"
)

Variables

This section is empty.

Functions

This section is empty.

Types

type Api

type Api struct {
	// contains filtered or unexported fields
}

Api is the client for communicating with the OpenAI API.

func NewApi

func NewApi(key string) *Api

NewApi creates a new Api instance.

func (*Api) Chat

func (a *Api) Chat(r *Request) (*Response, error)

Chat sends a chat request to the OpenAI API and returns the response.

func (*Api) WithClient

func (a *Api) WithClient(client *http.Client) *Api

WithClient sets your custom HTTP client to use for requests.

func (*Api) WithOrganizationId

func (a *Api) WithOrganizationId(organizationId string) *Api

WithOrganizationId sets your organization ID to use for requests.

type ApiError

type ApiError struct {
	StatusCode   int           `json:"-"`
	ErrorDetails *ErrorDetails `json:"error"`
}

ApiError is the error returned from OpenAI's API.

func (*ApiError) Error

func (e *ApiError) Error() string

type Choice

type Choice struct {
	Index        int      `json:"index"`
	Message      *Message `json:"message"`
	FinishReason string   `json:"finish_reason"`
}

Choice is a single choice (message) returned from the chat endpoint.

type ErrorDetails

type ErrorDetails struct {
	Message string `json:"message"`
	Type    string `json:"type"`
	Code    string `json:"code"`
}

type Message

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

Message is a single message sent to and returned from the chat endpoint.

type Model

type Model string
const (
	Turbo     Model = "gpt-3.5-turbo"
	Turbo0301 Model = "gpt-3.5-turbo-0301"
)

type Request

type Request struct {
	// Model is ID of the model to use. Currently, only `gpt-3.5-turbo` and `gpt-3.5-turbo-0301` are supported.
	Model Model `json:"model"`

	// Messages is the messages to generate chat completions for, in the chat format.
	Messages []*Message `json:"messages"`

	// Temperature is what sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
	Temperature float64 `json:"temperature,omitempty"`

	// TopP is an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
	TopP float64 `json:"top_p,omitempty"`

	// N is how many chat completion choices to generate for each input message.
	N int `json:"n,omitempty"`

	// Stop is up to 4 sequences where the API will stop generating further tokens.
	Stop []string `json:"stop,omitempty"`

	// MaxTokens is the maximum number of tokens to generate for each chat completion choice.
	MaxTokens int `json:"max_tokens,omitempty"`

	// PresencePenalty is a number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
	PresencePenalty float64 `json:"presence_penalty,omitempty"`

	// FrequencyPenalty is a number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
	FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`

	// User is a unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
	User string `json:"user,omitempty"`
}

Request is the request body for the chat endpoint.

type Response

type Response struct {
	ID       string    `json:"id"`
	Object   string    `json:"object"`
	Created  int64     `json:"created"`
	Choices  []*Choice `json:"choices"`
	Usage    *Usage    `json:"usage"`
	ThreadId string    `json:"-"`
}

Response is the response body returned from the chat endpoint.

type Usage

type Usage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}

Usage is the usage information returned from the chat endpoint.
