Discover your ideal AI model. Y2A Evaluate provides a GUI for managing large language model (LLM) API providers, logs all your requests in a SQLite database, and lets you compare and score model outputs against each other, so you can find faster and more cost-effective models to use.
Key Features
LLM Proxy: Manage all your LLM API providers securely from a single source. This includes providers like OpenAI, Together, OpenRouter, Local, and more.
Logging: Keep track of all your requests with a robust SQLite database logging system.
Testing: Compare the outputs of different models and score the results against each other. This feature helps you identify faster and cheaper models.
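To make the scoring idea concrete, here is a minimal sketch of ranking several model outputs against a reference answer. The function names and the token-overlap metric are hypothetical illustrations, not Y2A Evaluate's actual scoring (the setup notes suggest it relies on OpenAI text embeddings):

```python
# Hypothetical illustration only: Y2A Evaluate's real metric is not shown
# here, so this sketch scores outputs by simple token overlap instead.

def overlap_score(candidate: str, reference: str) -> float:
    """Fraction of reference tokens that also appear in the candidate."""
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    if not ref:
        return 0.0
    return len(cand & ref) / len(ref)

def rank_outputs(outputs: dict, reference: str) -> list:
    """Rank model outputs by similarity to the reference, best first."""
    scored = {model: overlap_score(text, reference) for model, text in outputs.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    outputs = {
        "model-a": "Paris is the capital of France",
        "model-b": "I think it might be Lyon",
    }
    print(rank_outputs(outputs, "The capital of France is Paris"))
```

A higher-quality metric (embeddings, LLM-as-judge, latency, or cost per token) can be swapped in without changing the ranking structure.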
Getting Started
Follow these steps to get started with Y2A Evaluate:
Install Go: Ensure you have Go version 1.22 or higher installed on your system. If not, you can install it from the official Go website.
Install the Evaluator CLI tool: Run the following command:
go install github.com/y2a-labs/evaluate@latest
Start the server: Run the following command:
evaluate server
Add your API providers: OpenAI is required for text embedding, but all other providers are optional.
Log your requests: Point your client's base URL at the proxy and set the model name to any model from your configured providers:
from openai import OpenAI

# Point the client at the local Y2A Evaluate proxy; the API key can be any value.
client = OpenAI(
    base_url="http://localhost:3000/v1/",
    api_key="any",
)
response = client.chat.completions.create(
    model="Your model name",  # any model from your configured providers
    messages=[
        {"role": "user", "content": "How are you doing today?"},
    ],
)
print(response.choices[0].message.content)
Create a test: Convert a log of a previous request into a test, or make one from scratch.
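One way to generate logs worth converting into tests is to send the same prompt to several models through the proxy. The sketch below is a hypothetical helper, not part of Y2A Evaluate; the model names are placeholders for models from your configured providers:

```python
# Hypothetical helper: build one chat-completion payload per model so the
# same prompt can be sent to each through the Y2A Evaluate proxy.

def build_requests(models, prompt):
    """One chat-completion payload per model, all answering the same prompt."""
    return [
        {"model": model, "messages": [{"role": "user", "content": prompt}]}
        for model in models
    ]

# With the server running, each payload can be sent through the proxy so the
# request is logged and can later be converted into a test:
#
#   from openai import OpenAI
#   client = OpenAI(base_url="http://localhost:3000/v1/", api_key="any")
#   for payload in build_requests(["model-a", "model-b"], "How are you doing today?"):
#       client.chat.completions.create(**payload)
```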