LuaPrompt
A powerful DSL (Domain-Specific Language) for defining LLM prompts using Lua syntax.
⚠️ Experimental Project: This is a proof-of-concept and has not been validated in production environments. Use at your own risk.
Why LuaPrompt?
- Clean Syntax: Use Lua's elegant syntax for prompt definition
- IDE Support: Full syntax highlighting, linting, and autocomplete
- Type Safety: Validate prompts before deployment
- Multi-Provider: Support for OpenAI, Claude, Gemini, and more
- Programmable: Use variables, functions, loops, and conditional logic
- Modular: Split prompts into reusable modules (see the sketch below)
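For example, the "Modular" point might look like this in practice. A minimal sketch, assuming require() resolves modules relative to your prompt directory; the personas module and its contents are hypothetical:

-- personas.lua (a reusable module)
return {
    teacher = "You are a patient teacher who explains concepts step by step.",
    reviewer = "You are a strict but constructive code reviewer."
}

-- prompt.lua
local personas = require("personas")

return {
    model = "gpt-4",
    messages = {
        {role = "system", content = personas.teacher},
        {role = "user", content = "Explain recursion."}
    }
}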
Features
- ✅ Parse Lua DSL to standardized JSON format
- ✅ Support for OpenAI, Anthropic Claude, and Google Gemini
- ✅ OpenAI Structured Outputs (JSON Schema) with strict schema compliance
- ✅ Function calling / Tool usage
- ✅ Multimodal inputs (text + images)
- ✅ Validation and error checking
- ✅ Format conversion between providers
- ✅ Lua's full programming capabilities
- ✅ Standard Library - Pre-built prompts, tools, and utilities
Installation
# Clone the repository
git clone https://github.com/xleliu/luaprompt.git
cd luaprompt
# Install dependencies
go mod download
# Build the CLI tool
go build -o luaprompt ./cmd/luaprompt
# Optional: Install globally
sudo mv luaprompt /usr/local/bin/
Quick Start
1. Create a simple prompt file
Create prompt.lua:
return {
    provider = "openai",
    model = "gpt-4",
    temperature = 0.7,
    max_tokens = 1000,
    messages = {
        {
            role = "system",
            content = "You are a helpful AI assistant."
        },
        {
            role = "user",
            content = "What is the capital of France?"
        }
    }
}
2. Parse or validate the prompt
# Parse to JSON
luaprompt parse prompt.lua
# Save to file
luaprompt parse prompt.lua -o output.json
# Validate only
luaprompt validate prompt.lua
Examples
Simple Chat
return {
    model = "gpt-4",
    temperature = 0.7,
    messages = {
        {role = "system", content = "You are a helpful assistant."},
        {role = "user", content = "Hello!"}
    }
}
Function Calling
return {
    model = "gpt-4-turbo-preview",
    messages = {
        {role = "user", content = "What's the weather in SF?"}
    },
    tools = {
        {
            type = "function",
            -- "function" is a reserved word in Lua, so the DSL uses "func"
            func = {
                name = "get_weather",
                description = "Get current weather",
                parameters = {
                    type = "object",
                    properties = {
                        location = {
                            type = "string",
                            description = "City and state"
                        }
                    },
                    required = {"location"}
                }
            }
        }
    }
}
Multimodal (Vision)
return {
    model = "gpt-4-vision-preview",
    messages = {
        {
            role = "user",
            content = {
                {type = "text", text = "What's in this image?"},
                {
                    type = "image_url",
                    image_url = {
                        url = "https://example.com/image.jpg",
                        detail = "high"
                    }
                }
            }
        }
    }
}
Advanced: Using Lua Features
-- Variables and functions
local function msg(role, content)
    return {role = role, content = content}
end

local questions = {"What is AI?", "How does ML work?"}
local messages = {msg("system", "You are a teacher.")}

-- Loops
for _, q in ipairs(questions) do
    table.insert(messages, msg("user", q))
end

return {
    model = "gpt-4",
    messages = messages
}
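The same applies to conditional logic. A small sketch, where verbose is a hypothetical flag (it could equally be injected from Go via ParseFileWithVars, described below):

local verbose = false

local style = verbose
    and "Explain in detail with worked examples."
    or "Answer in one short sentence."

return {
    model = "gpt-4",
    messages = {
        {role = "system", content = "You are a teacher. " .. style},
        {role = "user", content = "What is AI?"}
    }
}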
Using the Standard Library
The standard library is embedded into the parser and available automatically:
-- Import from the embedded standard library
local prompts = require("prompts")
local system_prompts = require("prompts.system_prompts")
local tools = require("prompts.tools")
local helpers = require("prompts.helpers")

-- Use pre-built components
return {
    model = "gpt-4",
    messages = {
        prompts.system(system_prompts.assistant.helpful),
        prompts.user("What's the weather in San Francisco?")
    },
    tools = {tools.weather(), tools.get_time()}
}
Note: The standard library is compiled into the binary using Go's embed package. No external files are needed - all modules are available out of the box!
📖 See full Standard Library documentation
Using Variables
Pass runtime variables from Go code to your Lua prompts:
Lua Prompt (prompt.lua):
local prompts = require("prompts")

-- Use variables passed from Go
return {
    model = model_name or "gpt-4",
    temperature = 0.7,
    messages = {
        prompts.system("You are a helpful assistant."),
        prompts.user(string.format("Hello %s, %s", user_name, question))
    }
}
Go Code:
vars := map[string]interface{}{
    "user_name":  "Alice",
    "question":   "How are you?",
    "model_name": "gpt-4-turbo-preview",
}

prompt, err := parser.ParseFileWithVars("prompt.lua", vars)
Supported Types: string, int, int64, float64, bool, []interface{}, map[string]interface{}
See examples/with_variables for a complete example.
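As a sketch of how a nested variable might be consumed, assuming a hypothetical user variable passed from Go as a map[string]interface{} with "name" and "tags" keys:

return {
    model = "gpt-4",
    messages = {
        {role = "system", content = "You are a helpful assistant."},
        {
            role = "user",
            content = string.format("Hi, I'm %s. My interests: %s",
                user.name, table.concat(user.tags, ", "))
        }
    }
}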
Provider-Specific Features
For provider-specific features (like OpenAI's Structured Outputs, Claude's thinking mode, etc.), use the fantasy API in your Go code:
// Example: Using OpenAI structured outputs via fantasy
// Example: Using OpenAI structured outputs via fantasy
call := fantasy.Call{
    // ... standard parameters from prompt
    ProviderOptions: fantasy.ProviderOptions{
        "openai": map[string]interface{}{
            "response_format": map[string]interface{}{
                "type": "json_schema",
                "json_schema": map[string]interface{}{
                    "name":   "person",
                    "strict": true,
                    "schema": map[string]interface{}{
                        "type": "object",
                        "properties": map[string]interface{}{
                            "name": map[string]string{"type": "string"},
                            "age":  map[string]string{"type": "integer"},
                        },
                    },
                },
            },
        },
    },
}
See docs/PARAMETERS.md for more details.
Legacy Provider Examples
Note: The following Claude and Gemini examples use the older standalone syntax. With the fantasy integration, these provider-specific options should be set from Go code instead (see above).
Claude-Specific
return {
    provider = "claude",
    model = "claude-3-opus-20240229",
    max_tokens = 4096,
    system = "You are a helpful assistant.",
    messages = {
        {role = "user", content = "Hello!"}
    },
    thinking_config = {
        type = "enabled",
        budget_tokens = 1000
    }
}
Gemini-Specific
return {
    provider = "gemini",
    model = "gemini-pro",
    system = "You are a creative assistant.",
    messages = {
        {role = "user", content = "Write a poem."}
    },
    generation_config = {
        temperature = 0.9,
        top_p = 1.0,
        top_k = 32,
        max_output_tokens = 2048
    },
    safety_settings = {
        {
            category = "HARM_CATEGORY_HARASSMENT",
            threshold = "BLOCK_MEDIUM_AND_ABOVE"
        }
    }
}
Usage in Go
With Fantasy (Recommended)
LuaPrompt integrates with fantasy to provide a unified interface across multiple LLM providers:
import (
    "context"
    "fmt"
    "log"
    "os"

    "charm.land/fantasy"
    "charm.land/fantasy/providers/openrouter"

    "github.com/xleliu/luaprompt/pkg/executor"
    "github.com/xleliu/luaprompt/pkg/parser"
)

// Parse Lua prompt (includes model field)
p := parser.New("./prompts")
defer p.Close()

prompt, err := p.ParseFile("prompts/agent.lua")
if err != nil {
    log.Fatal(err)
}

// Create provider (OpenRouter, OpenAI-compatible, Bedrock, etc.)
apiKey := os.Getenv("OPENROUTER_API_KEY") // env var name is illustrative
provider, err := openrouter.New(openrouter.WithAPIKey(apiKey))
if err != nil {
    log.Fatal(err)
}

// Execute with fantasy
// The executor uses prompt.Model to select the language model
exec := executor.New(provider)
response, err := exec.Execute(context.Background(), prompt)
if err != nil {
    log.Fatal(err)
}

fmt.Println(response.Content.Text())
Benefits:
- ✅ One API for OpenAI, Anthropic, OpenRouter, Bedrock, and more
- ✅ Easy provider switching without code changes
- ✅ Built-in streaming support
- ✅ Maintained provider implementations
See examples/with_fantasy for a complete example.
Standalone (Without Fantasy)
You can also use the parser standalone to generate JSON for your own LLM client:
import (
    "log"

    "github.com/xleliu/luaprompt/pkg/parser"
)

// Create parser (standard library is automatically available)
p := parser.New("./prompts")
defer p.Close()

// Parse prompt file (the returned prompt type is defined in pkg/types)
prompt, err := p.ParseFile("prompts/agent.lua")
if err != nil {
    log.Fatal(err)
}

// Use prompt with your LLM client
response := callYourLLMClient(prompt)
See AGENTS.md for detailed integration examples.
CLI Tool (Development & Testing)
The CLI tool is included for development and testing purposes:
# Validate prompt files
luaprompt validate prompts/agent.lua
# Parse and preview
luaprompt parse prompts/agent.lua
# Output to file
luaprompt parse prompts/agent.lua -o output.json
Note: The CLI tool only parses Lua to JSON. For actual LLM execution across different providers, use the fantasy integration (see examples/with_fantasy).
Project Structure
luaprompt/
├── cmd/
│   └── luaprompt/        # CLI tool
│       └── main.go
├── pkg/
│   ├── types/            # Type definitions
│   │   └── types.go
│   ├── parser/           # Lua parser (with embedded standard library)
│   │   └── parser.go
│   └── executor/         # Fantasy integration
│       └── executor.go
├── examples/             # Example Lua files
│   ├── with_fantasy/     # Fantasy integration example
│   ├── embed_custom/     # Custom embedded modules example
│   └── *.lua             # Various prompt examples
├── docs/                 # Documentation
├── lib/                  # Lua type definitions for IDE
├── README.md             # This file
└── go.mod
DSL Specification
Core Structure
Every prompt file must return a table with these fields:
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | ✅ | Model identifier (e.g., "gpt-4") |
| messages | array | ✅ | Array of message objects |
| temperature | number | | Temperature (0-2) |
| max_tokens | number | | Maximum tokens to generate |
| top_p | number | | Nucleus sampling (0-1) |
| top_k | number | | Top-k sampling |
| presence_penalty | number | | Presence penalty |
| frequency_penalty | number | | Frequency penalty |
| tools | array | | Tool/function definitions |
| tool_choice | string/object | | Tool selection strategy |
| provider | string | | Provider hint (optional, for documentation) |
| provider_options | table | | Provider-specific options |
Message Format
{
    role = "user|assistant|system|tool",
    content = "text" or {parts...},  -- string or array of content parts
    name = "optional_name",
    tool_calls = {...},   -- For assistant messages
    tool_call_id = "id"   -- For tool messages
}
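For illustration, a hedged sketch of a tool-calling round trip, assuming the DSL mirrors the OpenAI-style shapes above (the call id and JSON payloads are made up):

messages = {
    {role = "user", content = "What's the weather in SF?"},
    {
        role = "assistant",
        content = "",
        tool_calls = {
            {
                id = "call_1",  -- hypothetical call id
                type = "function",
                func = {
                    name = "get_weather",
                    arguments = '{"location": "San Francisco, CA"}'
                }
            }
        }
    },
    {
        role = "tool",
        tool_call_id = "call_1",
        content = '{"temperature_c": 18}'
    }
}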
Content Parts (Multimodal)
content = {
    {type = "text", text = "..."},
    {
        type = "image_url",
        image_url = {
            url = "https://...",
            detail = "high"  -- low, high, auto
        }
    }
}
Tool Definition
{
    type = "function",
    func = {
        name = "function_name",
        description = "What it does",
        parameters = {
            type = "object",
            properties = {...},
            required = {...}
        }
    }
}
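Parameter schemas follow standard JSON Schema conventions. A sketch with an enum and a numeric field (the tool itself is hypothetical, and which JSON Schema keywords the parser validates is an assumption):

{
    type = "function",
    func = {
        name = "set_brightness",
        description = "Set the screen brightness",
        parameters = {
            type = "object",
            properties = {
                level = {type = "integer", description = "Brightness from 0 to 100"},
                mode = {type = "string", enum = {"day", "night", "auto"}}
            },
            required = {"level"}
        }
    }
}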
IDE Support
For full IDE support with Lua Language Server, create a .luarc.json:
{
    "diagnostics": {
        "globals": ["os", "table"]
    },
    "workspace": {
        "library": ["./lib"]
    }
}
Type Definitions
For type checking, create a type definition file:
---@class Message
---@field role "system"|"user"|"assistant"|"tool"
---@field content string|table

---@class Prompt
---@field model string
---@field messages Message[]
---@field temperature? number

---@return Prompt
return {}
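With those definitions on the workspace library path, a prompt file can then opt into checking using a standard Lua Language Server annotation:

---@type Prompt
return {
    model = "gpt-4",
    messages = {
        {role = "user", content = "Hello!"}
    }
}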
Best Practices
- Modularize: Split complex prompts into reusable modules
- Validate: Always validate before deployment
- Version Control: Keep prompts in git
- Test: Create test cases with expected outputs
- Document: Add comments explaining prompt logic
- Use Functions: Create helper functions for repeated patterns (see the sketch below)
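As an example of the last point, a sketch of a few-shot helper (the helper name and the example questions are hypothetical):

-- Expand {question, answer} pairs into alternating user/assistant turns
local function few_shot(examples)
    local out = {}
    for _, ex in ipairs(examples) do
        table.insert(out, {role = "user", content = ex[1]})
        table.insert(out, {role = "assistant", content = ex[2]})
    end
    return out
end

local messages = {{role = "system", content = "Answer with a single word."}}
for _, m in ipairs(few_shot({
    {"Capital of France?", "Paris"},
    {"Capital of Japan?", "Tokyo"}
})) do
    table.insert(messages, m)
end
table.insert(messages, {role = "user", content = "Capital of Italy?"})

return {model = "gpt-4", messages = messages}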
Error Handling
LuaPrompt provides detailed error messages:
Error: Invalid message format at examples/chat.lua:15
  Expected 'role' field in message object

Error: Temperature must be between 0 and 2, got 3.5 at examples/chat.lua:4
Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request
License
MIT License - see LICENSE file for details
Documentation
- 📖 Documentation Index - Start here!
- 🤖 AI Agents Integration Guide - For autonomous systems
- 📚 Standard Library Documentation - Built-in Lua modules
- 📝 Full Specification - Complete DSL reference
- ⚙️ Parameters Guide - Available parameters
- 📂 Example Files - Working code samples
- 📋 Changelog - Version history
- 🔗 GitHub Repository