# Configuration

Configure Evalytic with `evalytic.toml`, environment variables, and CLI flags.
## Precedence Order

Configuration is resolved in this order (highest priority first):

- **CLI flags** — `--judge openai/gpt-5.2`
- **Environment variables** — `GEMINI_API_KEY=...`
- **`.env` file** — auto-loaded from the current directory
- **`evalytic.toml`** — project config file
- **Defaults** — built-in defaults
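Layered resolution like this amounts to a first-match lookup over the sources in priority order. The helper below is a minimal sketch of the idea, not Evalytic's internal code (`.env` values can be thought of as merged into the environment before the lookup):

```python
def resolve(key, cli_flags, env, toml_config, defaults):
    """Return the value from the highest-priority source that defines `key`."""
    for source in (cli_flags, env, toml_config, defaults):
        if source.get(key) is not None:
            return source[key]
    return None

# A CLI flag beats both the environment and the config file:
judge = resolve(
    "judge",
    cli_flags={"judge": "openai/gpt-5.2"},
    env={},
    toml_config={"judge": "gemini-2.5-flash"},
    defaults={"judge": "gemini-2.5-flash"},
)
# judge == "openai/gpt-5.2"
```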
## `evalytic.toml`

Create an `evalytic.toml` in your project root. The easiest way is the interactive wizard:

```bash
evaly init
```
Evalytic searches for config files in:

- `./evalytic.toml` (current directory)
- `~/.evalytic/config.toml` (user home)
### Full example

```toml
# evalytic.toml
[keys]
fal = "fal_key_xxx"
gemini = "gemini_key_xxx"
openai = "sk-xxx"
anthropic = "sk-ant-xxx"

[bench]
judge = "gemini-2.5-flash"
concurrency = 4
dimensions = ["visual_quality", "prompt_adherence"]
image_size = "landscape_16_9"
seed = 42
output_dir = "./reports"

[bench.metrics]
clip_threshold = 0.18
clip_weight = 0.20
lpips_threshold = 0.40
lpips_weight = 0.20
```
## `[keys]` Section

API keys defined here are set as environment variables when Evalytic loads:

| Config Key | Environment Variable | Used By |
|---|---|---|
| `fal` | `FAL_KEY` | fal.ai image generation |
| `gemini` | `GEMINI_API_KEY` | Gemini judge |
| `openai` | `OPENAI_API_KEY` | OpenAI judge |
| `anthropic` | `ANTHROPIC_API_KEY` | Anthropic judge |
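The mapping above can be sketched as a small export step. This helper is illustrative, not Evalytic's actual loader; using `setdefault` (so existing environment variables win over `evalytic.toml`) matches the precedence order, but whether Evalytic handles collisions exactly this way is an assumption:

```python
import os

# Mapping from [keys] entries to environment variables (from the table above)
KEY_TO_ENV = {
    "fal": "FAL_KEY",
    "gemini": "GEMINI_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def export_keys(keys):
    """Export each configured API key as its environment variable,
    keeping any value already present in the environment."""
    for name, value in keys.items():
        env_var = KEY_TO_ENV.get(name)
        if env_var:
            os.environ.setdefault(env_var, value)
```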
> **Security:** Don't commit `evalytic.toml` with API keys to version control. Add it to `.gitignore`, or use environment variables / `.env` instead.
## `[bench]` Section

Default settings for the `evaly bench` command:

| Key | Type | Default | Description |
|---|---|---|---|
| `judge` | string | `"gemini-2.5-flash"` | Default VLM judge (single mode) |
| `judges` | string[] | — | Multi-judge consensus mode (2-3 judges). Overrides `judge` when set. |
| `models` | string[] | — | Default models for `evaly bench` (avoids the `-m` flag) |
| `prompts` | string | — | Default prompts file path or inline prompt |
| `concurrency` | int | `4` | Max parallel generation requests |
| `dimensions` | string[] | auto | Default dimensions to score |
| `image_size` | string | — | Default image size |
| `seed` | int | — | Fixed seed for reproducibility |
| `output_dir` | string | — | Default output directory. Each run creates a timestamped subfolder with reports and an error log. |
## `[bench.metrics]` Section

Thresholds and weights for the local CLIP/LPIPS metrics:

| Key | Type | Default | Description |
|---|---|---|---|
| `clip_threshold` | float | `0.18` | CLIP score flag threshold |
| `clip_weight` | float | `0.20` | CLIP weight in the overall score |
| `lpips_threshold` | float | `0.40` | LPIPS flag threshold |
| `lpips_weight` | float | `0.20` | LPIPS weight in the overall score |
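To make the weights concrete: with `clip_weight` and `lpips_weight` at 0.20 each, the judge's score would carry the remaining 0.60 under a simple weighted blend. How Evalytic actually combines scores, and the direction of each flag, are assumptions in this sketch:

```python
def overall_score(judge, clip, lpips, clip_weight=0.20, lpips_weight=0.20):
    """Hypothetical blend: the judge takes whatever weight the local metrics don't."""
    judge_weight = 1.0 - clip_weight - lpips_weight
    return judge_weight * judge + clip_weight * clip + lpips_weight * lpips

def metric_flags(clip, lpips, clip_threshold=0.18, lpips_threshold=0.40):
    """Flag suspicious generations: low CLIP suggests weak prompt alignment,
    high LPIPS suggests the image is perceptually far from the reference."""
    return {"low_clip": clip < clip_threshold, "high_lpips": lpips > lpips_threshold}

overall_score(0.8, 0.3, 0.2)  # 0.6*0.8 + 0.2*0.3 + 0.2*0.2 ≈ 0.58
```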
## `.env` File

Evalytic auto-loads `.env` from the current directory using python-dotenv:

```bash
# .env
FAL_KEY=fal_key_xxx
GEMINI_API_KEY=gemini_key_xxx
OPENAI_API_KEY=sk-xxx
ANTHROPIC_API_KEY=sk-ant-xxx
```
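For reference, the core of what python-dotenv does is simple. The function below is a minimal sketch of the equivalent behavior; the real library also handles quoting, `export` prefixes, and variable interpolation:

```python
import os

def load_env_file(path=".env"):
    """Parse KEY=VALUE lines into os.environ without overwriting existing values."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines that aren't assignments
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```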
## Environment Variables

| Variable | Description |
|---|---|
| `FAL_KEY` | fal.ai API key for image generation |
| `GEMINI_API_KEY` | Google Gemini API key for the judge |
| `OPENAI_API_KEY` | OpenAI API key for the judge |
| `ANTHROPIC_API_KEY` | Anthropic API key for the judge |
## Example Configurations

### Minimal (Gemini)

```toml
[keys]
fal = "fal_key_xxx"
gemini = "gemini_key_xxx"
```
### CI/CD with GPT-5.2 judge

```toml
[bench]
judge = "openai/gpt-5.2"
concurrency = 2
dimensions = ["visual_quality", "prompt_adherence", "text_rendering"]
```
### Local development with Ollama

```toml
[keys]
fal = "fal_key_xxx"

[bench]
judge = "ollama/qwen2.5-vl:7b"
seed = 42
```
### Consensus mode (multi-judge)

```toml
# Consensus scoring: 2 primary judges + optional tiebreaker
[keys]
fal = "fal_key_xxx"
gemini = "gemini_key_xxx"
openai = "sk-xxx"

[bench]
judges = ["gemini-2.5-flash", "gpt-5.2"]
# Or with explicit tiebreaker (3rd judge)
# judges = ["gemini-2.5-flash", "gpt-5.2", "claude-haiku-4-5"]
```
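The consensus logic itself is internal to Evalytic. As an illustration of the idea, two primary judges could be averaged, with the tiebreaker consulted only on strong disagreement; the disagreement threshold and the median rule below are invented for this sketch:

```python
def consensus_score(primary, tiebreaker=None, disagreement=2.0):
    """Average two primary judge scores; fall back to the three-way median
    when they disagree by `disagreement` or more and a tiebreaker is set."""
    a, b = primary
    if tiebreaker is not None and abs(a - b) >= disagreement:
        # Median of three: the tiebreaker pulls toward whichever judge it agrees with
        return sorted([a, b, tiebreaker])[1]
    return (a + b) / 2

consensus_score([8.0, 7.5])        # close agreement -> mean, 7.75
consensus_score([9.0, 4.0], 8.5)   # disagreement -> median of three, 8.5
```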
### Default models and prompts

```toml
# Saves you from typing -m and -p every time
[keys]
fal = "fal_key_xxx"
gemini = "gemini_key_xxx"

[bench]
models = ["flux-schnell", "flux-dev"]
prompts = "prompts.json"
```
With this config, `evaly bench -y` is all you need — models and prompts are loaded from the config file.
## Inspect Configuration

Use `evaly config show` to see the active configuration, which keys are loaded, and where they came from:

```bash
evaly config show
```