grok_go

package module
v1.0.0
Published: May 3, 2025 License: MIT Imports: 11 Imported by: 0

README

Grok-Go: Go Client Library for X.AI API

Read this in Chinese (中文)

A lightweight, type-safe Go client library for the X.AI API. This library enables seamless integration with X.AI's Grok models in Go applications.

Features

  • Complete API Coverage: Access all X.AI API endpoints
  • Strong Typing: First-class Go types for all requests and responses
  • Streaming Support: Handle streaming responses efficiently
  • Function Calling: Utilize function calling and tool capabilities
  • Image Generation: Generate images with the Grok model
  • Error Handling: Comprehensive error handling with detailed error messages

Installation

go get github.com/SimonMorphy/grok-go

Requires Go 1.18 or later.

Quick Start

Authentication

Set your API key as an environment variable (recommended):

export GROK_API_KEY="your-api-key-here"

Or provide it directly in code:

client, err := grok.NewClient("your-api-key-here") // Not recommended for production

Basic Chat Example

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	
	grok "github.com/SimonMorphy/grok-go"
)

func main() {
	// Get API key from environment variable
	apiKey := os.Getenv("GROK_API_KEY")
	if apiKey == "" {
		log.Fatal("GROK_API_KEY environment variable not set")
	}
	
	// Initialize client
	client, err := grok.NewClient(apiKey)
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}
	
	// Create chat completion request
	request := &grok.ChatCompletionRequest{
		Model: "grok-3",
		Messages: []grok.ChatCompletionMessage{
			{
				Role:    "user",
				Content: "What is artificial intelligence?",
			},
		},
		Temperature: 0.7,
		MaxTokens:   500,
	}
	
	// Send request
	ctx := context.Background()
	response, err := grok.CreateChatCompletion(ctx, client, request)
	if err != nil {
		log.Fatalf("Request failed: %v", err)
	}
	
	// Print response
	if len(response.Choices) > 0 {
		fmt.Println(response.Choices[0].Message.Content)
	}
}

Advanced Usage

Function Calling

// Define a calculator tool
calculatorTool := grok.Tool{
    Type: "function",
    Function: grok.Function{
        Name:        "calculate",
        Description: "Perform mathematical calculations",
        Parameters: &grok.FunctionParameters{
            Type: "object",
            Properties: map[string]interface{}{
                "operation": map[string]interface{}{
                    "type":        "string",
                    "enum":        []string{"add", "subtract", "multiply", "divide"},
                    "description": "The mathematical operation to perform",
                },
                "x": map[string]interface{}{
                    "type":        "number",
                    "description": "The first operand",
                },
                "y": map[string]interface{}{
                    "type":        "number",
                    "description": "The second operand",
                },
            },
            Required: []string{"operation", "x", "y"},
        },
    },
}

// Create a request using the tool
request := &grok.ChatCompletionRequest{
    Model: "grok-3",
    Messages: []grok.ChatCompletionMessage{
        {
            Role:    "user",
            Content: "Calculate 25 times 16",
        },
    },
    Tools:      []grok.Tool{calculatorTool},
    ToolChoice: "auto",
}
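
After the request is sent (with client and ctx as in the Quick Start example), any tool calls chosen by the model are available on the response message's ToolCalls field. The sketch below shows one way to parse the JSON-encoded arguments (it requires encoding/json) and feed a result back as a "tool" message; the evaluate helper is hypothetical, and the follow-up flow is an illustration rather than a prescribed library API.

// Send the request (client and ctx as in the Quick Start example)
response, err := grok.CreateChatCompletion(ctx, client, request)
if err != nil {
    log.Fatalf("Request failed: %v", err)
}
if len(response.Choices) == 0 {
    log.Fatal("no choices returned")
}

// Inspect any tool calls the model decided to make
for _, call := range response.Choices[0].Message.ToolCalls {
    if call.Function.Name != "calculate" {
        continue
    }

    // Arguments arrive as a JSON string, e.g. {"operation":"multiply","x":25,"y":16}
    var args struct {
        Operation string  `json:"operation"`
        X         float64 `json:"x"`
        Y         float64 `json:"y"`
    }
    if err := json.Unmarshal([]byte(call.Function.Arguments), &args); err != nil {
        log.Printf("Could not parse arguments: %v", err)
        continue
    }

    result := evaluate(args.Operation, args.X, args.Y) // evaluate is a hypothetical local helper

    // Return the result as a "tool" message tied to the call ID, then send a
    // follow-up chat completion request with the extended message history.
    request.Messages = append(request.Messages, grok.ChatCompletionMessage{
        Role:       "tool",
        Content:    fmt.Sprintf("%v", result),
        ToolCallID: call.ID,
    })
}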

Streaming Responses

request.Stream = true
stream, err := grok.CreateChatCompletionStream(ctx, client, request)
if err != nil {
    log.Fatalf("Failed to create stream: %v", err)
}
defer stream.Close()

// Process the stream
for {
    response, err := stream.Recv()
    if err == io.EOF {
        break
    }
    if err != nil {
        log.Printf("Stream error: %v", err)
        break
    }
    
    // Handle chunk
    if len(response.Choices) > 0 {
        chunk := response.Choices[0].Delta.Content
        if chunk != "" {
            fmt.Print(chunk)
        }
    }
}

Image Generation

imageClient, err := grok.CreateImageGenerationClient(apiKey)
if err != nil {
    log.Fatalf("Failed to create client: %v", err)
}

request := &grok.ImageGenerationRequest{
    Model:  "grok-3-image",
    Prompt: "A futuristic city with flying cars and tall skyscrapers",
    Size:   "1024x1024",
    N:      1,
}

response, err := grok.CreateImage(ctx, imageClient, request)
if err != nil {
    log.Fatalf("Image generation failed: %v", err)
}

// Process the image URL or Base64 data
for i, imageData := range response.Data {
    if imageData.URL != "" {
        fmt.Printf("Image %d URL: %s\n", i+1, imageData.URL)
        // Download the image...
    } else if imageData.B64JSON != "" {
        fmt.Printf("Image %d received as Base64 data\n", i+1)
        // Save the Base64 image...
    }
}
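
If the API returns Base64 data rather than a URL, the payload can be decoded with the standard library and written to disk. A minimal sketch follows; the PNG extension and filename pattern are assumptions, since the actual image format depends on the API response.

import (
    "encoding/base64"
    "fmt"
    "os"
)

// saveBase64Image decodes a b64_json payload and writes it to disk.
func saveBase64Image(b64 string, index int) error {
    raw, err := base64.StdEncoding.DecodeString(b64)
    if err != nil {
        return fmt.Errorf("decode image %d: %w", index, err)
    }
    name := fmt.Sprintf("image_%d.png", index) // extension assumed
    if err := os.WriteFile(name, raw, 0o644); err != nil {
        return fmt.Errorf("write %s: %w", name, err)
    }
    return nil
}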

Examples

For complete, runnable examples, see the example directory in the repository.

Documentation

Full API documentation for every exported type and function is included in the reference section below.

Testing

Run all tests (requires API key for integration tests):

export GROK_API_KEY="your-api-key-here"
go test ./...

Run unit tests only (no API key required):

go test ./... -short

License

This project is licensed under the MIT License.

Contributing

Contributions are welcome! Feel free to:

  • Report bugs
  • Request features
  • Submit pull requests

Please ensure your code passes all tests and follows Go best practices.

Documentation

Constants

const (
	DefaultBaseURL  = "https://api.x.ai/v1/"
	DefaultTimeout  = 30 * time.Second
	DefaultEndpoint = "chat/completions"
)

Default configuration values

Variables

This section is empty.

Functions

func ValidateChatCompletionRequest

func ValidateChatCompletionRequest(request *ChatCompletionRequest) error

ValidateChatCompletionRequest validates a chat completion request

func ValidateClient

func ValidateClient(client *Client) error

ValidateClient validates client configuration

func ValidateImageGenerationRequest

func ValidateImageGenerationRequest(request *ImageGenerationRequest) error

ValidateImageGenerationRequest validates an image generation request

func ValidateLanguageCode

func ValidateLanguageCode(languageCode string) error

ValidateLanguageCode validates a language code
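
These validators can also be called directly, for example to fail fast before issuing a request. A brief sketch using the functions above (client and request as in the Quick Start example):

// Validate the client and request before calling the API.
if err := grok.ValidateClient(client); err != nil {
    log.Fatalf("Invalid client configuration: %v", err)
}
if err := grok.ValidateChatCompletionRequest(request); err != nil {
    log.Fatalf("Invalid request: %v", err)
}
// Both checks passed; safe to call CreateChatCompletion.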

Types

type APIErrorResponse

type APIErrorResponse struct {
	Error struct {
		Message string `json:"message"`
		Type    string `json:"type"`
		Param   string `json:"param,omitempty"`
		Code    string `json:"code,omitempty"`
	} `json:"error"`
}

APIErrorResponse defines the structure for API error responses

type APIToolCall

type APIToolCall struct {
	ID       string       `json:"id"`       // Unique ID of the tool call
	Type     string       `json:"type"`     // Type of the tool, usually "function"
	Function FunctionCall `json:"function"` // Function call details
}

APIToolCall represents a call to a function made by the model

type AssistantMessage

type AssistantMessage struct {
	BaseMessage
	ToolCalls []ToolCall `json:"tool_calls,omitempty"` // Tool calls the assistant wants to make
}

AssistantMessage represents a message from the assistant. It may contain tool calls that the assistant wants to make.

func NewAssistantMessage

func NewAssistantMessage(content any) AssistantMessage

NewAssistantMessage creates a new assistant message. Assistant messages represent responses from the AI assistant.

type BaseMessage

type BaseMessage struct {
	Content any    `json:"content"`        // Content of the message (can be string or structured content)
	Name    string `json:"name,omitempty"` // Optional name of the message author
	Role    string `json:"role"`           // Role of the message sender
}

BaseMessage implements the basic structure for the Message interface. It provides the common fields and methods required by the Message interface.

func NewMultiContentMessage

func NewMultiContentMessage(role string, items []any) BaseMessage

NewMultiContentMessage creates a new message with multiple content items. Used when a message contains a mix of content types (e.g., text and images).

func NewSystemMessage

func NewSystemMessage(content any) BaseMessage

NewSystemMessage creates a new system message. System messages provide instructions or context to the model.

func NewTextMessage

func NewTextMessage(role string, text string) BaseMessage

NewTextMessage creates a new text message with the specified role. Generic function to create messages with simple text content.

func NewUserMessage

func NewUserMessage(content any) BaseMessage

NewUserMessage creates a new user message. User messages represent input from the end user.

func WithName

func WithName(msg BaseMessage, name string) BaseMessage

WithName adds a name to the message. Returns a new message with the name field set.

func (BaseMessage) GetContent

func (m BaseMessage) GetContent() any

GetContent returns the content of the message

func (BaseMessage) GetName

func (m BaseMessage) GetName() string

GetName returns the name of the message

func (BaseMessage) GetRole

func (m BaseMessage) GetRole() string

GetRole returns the role of the message

type ChatCompletionMessage

type ChatCompletionMessage struct {
	Role       string        `json:"role"`                 // Role of the message sender (system, user, assistant, tool)
	Content    string        `json:"content"`              // The content of the message
	Name       string        `json:"name,omitempty"`       // The name of the author of this message
	ToolCalls  []APIToolCall `json:"tool_calls,omitempty"` // Tool calls made in this message
	ToolCallID string        `json:"tool_call_id,omitempty"`
	Prefix     bool          `json:"prefix,omitempty"` // Whether this is a prefix message for completion
}

ChatCompletionMessage represents a message in the chat completion request

type ChatCompletionRequest

type ChatCompletionRequest struct {
	// Required parameters
	Model    string                  `json:"model"`    // Model identifier, e.g. "grok-3-beta"
	Messages []ChatCompletionMessage `json:"messages"` // Array of conversation messages

	// Optional streaming parameters
	Stream        bool `json:"stream,omitempty"` // Whether to stream the response
	StreamOptions struct {
		IncludeUsage bool `json:"include_usage,omitempty"` // Whether to include token usage stats in the stream
	} `json:"stream_options,omitempty"`
	Deferred bool `json:"deferred,omitempty"` // Whether to return a deferred response

	// Optional control parameters
	MaxTokens           int                `json:"max_tokens,omitempty"`           // Maximum number of tokens to generate
	Temperature         float64            `json:"temperature,omitempty"`          // Sampling temperature (0-2)
	TopP                float64            `json:"top_p,omitempty"`                // Nucleus sampling parameter
	TopK                int                `json:"top_k,omitempty"`                // Top-k sampling parameter
	Stop                []string           `json:"stop,omitempty"`                 // Sequences where the API will stop generating
	PresencePenalty     float64            `json:"presence_penalty,omitempty"`     // Penalty for new tokens based on presence in text
	FrequencyPenalty    float64            `json:"frequency_penalty,omitempty"`    // Penalty for new tokens based on frequency in text
	LogitBias           map[string]float64 `json:"logit_bias,omitempty"`           // Modify likelihood of specified tokens
	Logprobs            bool               `json:"logprobs,omitempty"`             // Whether to return log probabilities
	TopLogprobs         int                `json:"top_logprobs,omitempty"`         // How many log probabilities to return
	Seed                int64              `json:"seed,omitempty"`                 // Random seed for deterministic results
	ResponseFormat      *ResponseFormat    `json:"response_format,omitempty"`      // Format of the response content
	Tools               []Tool             `json:"tools,omitempty"`                // Tools the model may call
	ToolChoice          interface{}        `json:"tool_choice,omitempty"`          // Controls which tool is called by the model
	User                string             `json:"user,omitempty"`                 // A unique identifier for the end-user
	JSONMode            bool               `json:"json_mode,omitempty"`            // Always output valid JSON
	PrefixTokens        []int              `json:"prefix_tokens,omitempty"`        // Tokens to prepend to generated text
	SkipParameterCheck  bool               `json:"-"`                              // Do not validate parameters
	BackendType         string             `json:"backend_type,omitempty"`         // Backend processing preference
	AdaptivePrompt      bool               `json:"adaptive_prompt,omitempty"`      // Whether to adapt the system prompt based on conversation
	ManagedCredentials  map[string]string  `json:"managed_credentials,omitempty"`  // Credentials for API integrations
	HTTPBasedPlugins    interface{}        `json:"http_based_plugins,omitempty"`   // Custom plugins based on HTTP endpoints
	ConversationContext string             `json:"conversation_context,omitempty"` // Additional context for conversation understanding
}

ChatCompletionRequest is the request structure for chat completions, compliant with the X.AI API specification.

type Client

type Client struct {
	ApiKey     string        `json:"api_key"`
	BaseUrl    string        `json:"base_url"`
	Timeout    time.Duration `json:"timeout"`
	Endpoint   string        `json:"endpoint"`
	HttpClient *http.Client  `json:"-"`
}

Client represents an HTTP client for the X.AI API.

func CreateChatCompletionClient

func CreateChatCompletionClient(apiKey string, options ...ClientOption) (*Client, error)

CreateChatCompletionClient creates a client specifically configured for chat completions. It sets the endpoint to "chat/completions" and applies any additional options.

func CreateImageGenerationClient

func CreateImageGenerationClient(apiKey string, options ...ClientOption) (*Client, error)

CreateImageGenerationClient creates a client specifically configured for image generation. It sets the endpoint to "images/generations" and applies any additional options.

func CreateModelsClient

func CreateModelsClient(apiKey string, options ...ClientOption) (*Client, error)

CreateModelsClient creates a client specifically configured for listing models. It sets the endpoint to "models" and applies any additional options.

func NewClient

func NewClient(apiKey string) (*Client, error)

NewClient creates a new client with default configuration. It takes an API key and returns a configured client or an error.

func NewClientWithOptions

func NewClientWithOptions(apiKey string, options ...ClientOption) (*Client, error)

NewClientWithOptions creates a new client with custom options. It takes an API key and optional configuration options. Returns a configured client or an error if validation fails.

func (*Client) Url

func (client *Client) Url() string

Url returns the full API URL by combining the base URL and endpoint. This is used for constructing API request URLs.

type ClientOption

type ClientOption func(*Client)

ClientOption defines a function type that can modify a Client

func WithBaseURL

func WithBaseURL(baseURL string) ClientOption

WithBaseURL sets a custom base URL for the client

func WithEndpoint

func WithEndpoint(endpoint string) ClientOption

WithEndpoint sets a custom API endpoint

func WithHTTPClient

func WithHTTPClient(httpClient *http.Client) ClientOption

WithHTTPClient sets a custom HTTP client

func WithTimeout

func WithTimeout(timeout time.Duration) ClientOption

WithTimeout sets a custom timeout for HTTP requests
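
Options can be combined when constructing a client with NewClientWithOptions; a short sketch using the documented options (the values shown are illustrative):

// Custom HTTP client with its own timeout (values are illustrative).
httpClient := &http.Client{Timeout: 60 * time.Second}

client, err := grok.NewClientWithOptions(
    apiKey,
    grok.WithBaseURL("https://api.x.ai/v1/"), // the documented default; override when proxying
    grok.WithTimeout(60*time.Second),
    grok.WithHTTPClient(httpClient),
)
if err != nil {
    log.Fatalf("Failed to create client: %v", err)
}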

type Function

type Function struct {
	Name        string              `json:"name"`                  // The name of the function
	Description string              `json:"description,omitempty"` // A description of what the function does
	Parameters  *FunctionParameters `json:"parameters,omitempty"`  // The parameters the function accepts
}

Function represents a function specification

type FunctionCall

type FunctionCall struct {
	Name      string `json:"name"`      // Name of the function being called
	Arguments string `json:"arguments"` // Arguments to pass to the function, as a JSON string
}

FunctionCall represents the details of a function call

type FunctionParameters

type FunctionParameters struct {
	Type       string                 `json:"type"` // The type of the parameters, usually "object"
	Properties map[string]interface{} `json:"properties"`
	Required   []string               `json:"required,omitempty"` // Which parameters are required
}

FunctionParameters defines the parameters that a function accepts

type ImageGenerationData

type ImageGenerationData struct {
	URL           string `json:"url,omitempty"`
	B64JSON       string `json:"b64_json,omitempty"`
	RevisedPrompt string `json:"revised_prompt,omitempty"`
}

ImageGenerationData represents a single generated image

type ImageGenerationRequest

type ImageGenerationRequest struct {
	Model          string `json:"model"`                     // ID of the model to use
	Prompt         string `json:"prompt"`                    // Text description of the desired image
	N              int    `json:"n,omitempty"`               // Number of images to generate
	Size           string `json:"size,omitempty"`            // Size of the generated images
	Quality        string `json:"quality,omitempty"`         // Quality of the generated images
	Style          string `json:"style,omitempty"`           // Style of the generated images
	ResponseFormat string `json:"response_format,omitempty"` // Format of the response
	User           string `json:"user,omitempty"`            // User identifier
}

ImageGenerationRequest defines the request structure for image generation

type ImageGenerationResponse

type ImageGenerationResponse struct {
	Created int64                 `json:"created"`
	Data    []ImageGenerationData `json:"data"`
}

ImageGenerationResponse defines the response structure from image generation

func CreateImage

func CreateImage(ctx context.Context, client *Client, request *ImageGenerationRequest) (*ImageGenerationResponse, error)

CreateImage sends an image generation request to the X.AI API. It returns an ImageGenerationResponse object and an error if one occurred. Context is used for request cancellation and logging.

type LanguageInfo

type LanguageInfo struct {
	Code       string `json:"code"`        // ISO language code
	Name       string `json:"name"`        // Language name
	NativeName string `json:"native_name"` // Language name in the native script
	Supported  bool   `json:"supported"`   // Whether translation is supported
}

LanguageInfo represents information about a supported language

func GetLanguageInfo

func GetLanguageInfo(ctx context.Context, client *Client, languageCode string) (*LanguageInfo, error)

GetLanguageInfo fetches information about a specific language. It returns a LanguageInfo object for the requested language code. Returns an error if the language is not found or if the API request fails.

type LanguageListResponse

type LanguageListResponse struct {
	Object string         `json:"object"`
	Data   []LanguageInfo `json:"data"`
}

LanguageListResponse represents a list of available languages for translation

func ListLanguages

func ListLanguages(ctx context.Context, client *Client) (*LanguageListResponse, error)

ListLanguages fetches a list of supported languages from the X.AI API. It returns a LanguageListResponse object and an error if one occurred. Context is used for request cancellation and logging.
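
A sketch of listing languages and looking one up (ctx and apiKey as in the Quick Start example). How the client should be configured for this endpoint is not documented here, so the WithEndpoint value below is an assumption:

// NOTE: the "languages" endpoint path is an assumption; consult the package
// source for the correct endpoint configuration.
client, err := grok.NewClientWithOptions(apiKey, grok.WithEndpoint("languages"))
if err != nil {
    log.Fatalf("Failed to create client: %v", err)
}

langs, err := grok.ListLanguages(ctx, client)
if err != nil {
    log.Fatalf("Failed to list languages: %v", err)
}
for _, l := range langs.Data {
    fmt.Printf("%s (%s) supported=%t\n", l.Name, l.Code, l.Supported)
}

info, err := grok.GetLanguageInfo(ctx, client, "fr")
if err != nil {
    log.Fatalf("Failed to get language info: %v", err)
}
fmt.Println(info.NativeName)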

type LogProbs

type LogProbs struct {
	Content []TokenLogProb `json:"content"` // Log probability information for content tokens
}

LogProbs contains log probability information for token generation

type Message

type Message interface {
	// GetContent returns the content of the message
	GetContent() any

	// GetName returns the name of the message author
	GetName() string

	// GetRole returns the role of the message (e.g., "system", "user", "assistant", "tool")
	GetRole() string
}

Message defines the basic interface for all message types. All message implementations must provide methods to access content, name, and role.

type Messages

type Messages []Message

Messages represents an array of messages. Used to build a conversation history.

func NewMessageArray

func NewMessageArray() Messages

NewMessageArray creates a new empty message array. Alias for NewMessages for backward compatibility.

func NewMessages

func NewMessages() Messages

NewMessages creates a new empty message array. Initializes a conversation with zero messages.

func (*Messages) AddMessage

func (m *Messages) AddMessage(msg Message)

AddMessage adds a message to the array. Appends the message to the end of the conversation.
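
A short sketch of building a conversation history with these helpers. Note that ChatCompletionRequest (above) takes []ChatCompletionMessage, so mapping a Messages value onto a request, if needed, is left to the caller:

// Build a conversation using the message constructors.
msgs := grok.NewMessages()
msgs.AddMessage(grok.NewSystemMessage("You are a concise assistant."))
msgs.AddMessage(grok.WithName(grok.NewUserMessage("Summarize the Go memory model."), "alice"))
msgs.AddMessage(grok.NewAssistantMessage("The Go memory model specifies..."))

// Walk the history through the Message interface.
for _, m := range msgs {
    fmt.Printf("[%s] %v\n", m.GetRole(), m.GetContent())
}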

type ModelInfo

type ModelInfo struct {
	ID      string `json:"id"`
	Object  string `json:"object"`
	Created int64  `json:"created"`
	OwnedBy string `json:"owned_by"`
}

ModelInfo represents information about a single model

func GetModelInfo

func GetModelInfo(ctx context.Context, client *Client, modelID string) (*ModelInfo, error)

GetModelInfo fetches information about a specific model

type ModelListResponse

type ModelListResponse struct {
	Object string      `json:"object"`
	Data   []ModelInfo `json:"data"`
}

ModelListResponse represents a list of available models

func ListModels

func ListModels(ctx context.Context, client *Client) (*ModelListResponse, error)

ListModels fetches available models from the X.AI API
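
A brief sketch of enumerating models with the documented models client (ctx and apiKey as in the Quick Start example; "grok-3" is used as an example model ID):

client, err := grok.CreateModelsClient(apiKey)
if err != nil {
    log.Fatalf("Failed to create models client: %v", err)
}

models, err := grok.ListModels(ctx, client)
if err != nil {
    log.Fatalf("Failed to list models: %v", err)
}
for _, m := range models.Data {
    fmt.Printf("%s (owned by %s)\n", m.ID, m.OwnedBy)
}

// Look up a single model by ID.
info, err := grok.GetModelInfo(ctx, client, "grok-3")
if err != nil {
    log.Fatalf("Failed to get model info: %v", err)
}
fmt.Println(info.Created)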

type Request

type Request struct {
	Auth string
	Url  string
	Body []byte
}

Request represents a valid HTTP request used to invoke the remote API.

func (Request) BuildHttpRequest

func (r Request) BuildHttpRequest(ctx context.Context) (*http.Request, error)

BuildHttpRequest creates an HTTP request with the specified parameters. It returns the constructed HTTP request or an error if creation fails. The context is used for request cancellation and logging.

type Response

type Response struct {
	ID                string           `json:"id"`                           // Unique identifier for the response
	Object            string           `json:"object"`                       // Object type, usually "chat.completion"
	Created           int64            `json:"created"`                      // Unix timestamp when the response was created
	Model             string           `json:"model"`                        // Model used for the completion
	Choices           []ResponseChoice `json:"choices"`                      // Array of completion choices
	SystemFingerprint string           `json:"system_fingerprint,omitempty"` // Fingerprint of the system configuration
	Usage             Usage            `json:"usage,omitempty"`              // Token usage information
}

Response is the structure of a chat completion response from the X.AI API.

func CreateChatCompletion

func CreateChatCompletion(ctx context.Context, client *Client, request *ChatCompletionRequest) (*Response, error)

CreateChatCompletion sends a chat completion request to the X.AI API. It returns a Response object and an error if one occurred. Context is used for request cancellation and logging.

type ResponseChoice

type ResponseChoice struct {
	Index        int             `json:"index"`              // Index of this choice
	Message      ResponseMessage `json:"message"`            // The message content
	LogProbs     *LogProbs       `json:"logprobs,omitempty"` // Log probabilities if requested
	FinishReason string          `json:"finish_reason"`      // Reason why generation stopped
}

ResponseChoice represents a single completion choice in the response

type ResponseFormat

type ResponseFormat struct {
	Type string `json:"type"` // The format type, e.g., "text" or "json_object"
}

ResponseFormat specifies the format of the model's response

type ResponseMessage

type ResponseMessage struct {
	Role      string        `json:"role"`                 // Role of the message sender, usually "assistant"
	Content   string        `json:"content"`              // The content of the message
	ToolCalls []APIToolCall `json:"tool_calls,omitempty"` // Tool calls made in this message
}

ResponseMessage represents the message in a response choice

type StreamChoice

type StreamChoice struct {
	Index        int           `json:"index"`
	Delta        StreamMessage `json:"delta"`
	LogProbs     *LogProbs     `json:"logprobs,omitempty"`
	FinishReason string        `json:"finish_reason,omitempty"`
}

StreamChoice represents a chunk of the stream response

type StreamMessage

type StreamMessage struct {
	Role      string        `json:"role,omitempty"`
	Content   string        `json:"content,omitempty"`
	ToolCalls []APIToolCall `json:"tool_calls,omitempty"`
}

StreamMessage represents a message chunk in the stream

type StreamReader

type StreamReader struct {
	// contains filtered or unexported fields
}

StreamReader handles reading from a streaming response

func CreateChatCompletionStream

func CreateChatCompletionStream(ctx context.Context, client *Client, request *ChatCompletionRequest) (*StreamReader, error)

CreateChatCompletionStream creates a streaming connection for chat completions. It returns a StreamReader that can be used to read streaming responses. Context is used for request cancellation and logging.

func (*StreamReader) Close

func (s *StreamReader) Close() error

Close closes the stream connection

func (*StreamReader) Recv

func (s *StreamReader) Recv() (*StreamResponse, error)

Recv reads the next event from the stream. It returns a StreamResponse object and an error if one occurred. Returns io.EOF when the stream ends.

type StreamResponse

type StreamResponse struct {
	ID                string         `json:"id"`
	Object            string         `json:"object"`
	Created           int64          `json:"created"`
	Model             string         `json:"model"`
	SystemFingerprint string         `json:"system_fingerprint,omitempty"`
	Choices           []StreamChoice `json:"choices"`
	Usage             *Usage         `json:"usage,omitempty"`
}

StreamResponse is the response structure for streaming chat completions.

type TokenLogProb

type TokenLogProb struct {
	Token       string             `json:"token"`        // The token string
	LogProb     float64            `json:"logprob"`      // Log probability of this token
	TopLogProbs map[string]float64 `json:"top_logprobs"` // Log probabilities of top alternatives
	Bytes       []int              `json:"bytes"`        // Token byte representation
}

TokenLogProb holds log probability information for a single token

type Tool

type Tool struct {
	Type     string   `json:"type"`     // The type of tool, usually "function"
	Function Function `json:"function"` // The function to be called
}

Tool represents a function that the model may generate JSON inputs for

type ToolCall

type ToolCall struct {
	Function struct {
		Arguments string `json:"arguments"` // Arguments to the function as a JSON string
		Name      string `json:"name"`      // Name of the function to call
	} `json:"function"`
	ID    string `json:"id"`              // Unique identifier for this tool call
	Index int    `json:"index,omitempty"` // Index of this tool call in the sequence
	Type  string `json:"type,omitempty"`  // Type of the tool call (usually "function")
}

ToolCall represents a tool call in the message. Contains information about a function to be called.

type ToolCallMessage

type ToolCallMessage struct {
	BaseMessage
	ToolCallID string `json:"tool_call_id,omitempty"` // ID of the tool call this message responds to
}

ToolCallMessage represents a message with a tool call. Used for responses from tool executions.

func NewToolCallMessage

func NewToolCallMessage(content any, toolCallID string) ToolCallMessage

NewToolCallMessage creates a new tool call message. Tool call messages represent responses from tools that were called.

type Usage

type Usage struct {
	PromptTokens     int `json:"prompt_tokens"`     // Number of tokens in the prompt
	CompletionTokens int `json:"completion_tokens"` // Number of tokens in the completion
	TotalTokens      int `json:"total_tokens"`      // Total tokens used (prompt + completion)
}

Usage provides token usage statistics for the request and response
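
Token usage can be read straight off a completed response; a small sketch (client and request as in the Quick Start example):

response, err := grok.CreateChatCompletion(ctx, client, request)
if err != nil {
    log.Fatalf("Request failed: %v", err)
}
fmt.Printf("prompt=%d completion=%d total=%d tokens\n",
    response.Usage.PromptTokens,
    response.Usage.CompletionTokens,
    response.Usage.TotalTokens,
)

For streaming responses, Usage is a pointer and, per the StreamOptions field above, is only populated when IncludeUsage is set.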
