Configuration

Configure Evalytic with evalytic.toml, environment variables, and CLI flags.

Precedence Order

Configuration is resolved in this order (highest priority first):

  1. CLI flags — e.g. --judge openai/gpt-5.2
  2. Environment variables — e.g. GEMINI_API_KEY=...
  3. .env file — auto-loaded from current directory
  4. evalytic.toml — project config file
  5. Defaults — built-in defaults
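
The layering above can be sketched as a dict merge from lowest to highest priority (a minimal illustration of the documented rule, not Evalytic's actual loader; the layer contents are hypothetical):

```python
# Each layer overwrites the one below it; iterate lowest → highest priority.
defaults  = {"judge": "gemini-2.5-flash", "concurrency": 4}  # built-in defaults
toml_conf = {"concurrency": 2, "seed": 42}                   # evalytic.toml
env_conf  = {"judge": "ollama/qwen2.5-vl:7b"}                # .env / environment
cli_conf  = {"judge": "openai/gpt-5.2"}                      # CLI flags

resolved = {}
for layer in (defaults, toml_conf, env_conf, cli_conf):
    resolved.update(layer)

# judge comes from the CLI flag, concurrency and seed from evalytic.toml
print(resolved)
```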

evalytic.toml

Create an evalytic.toml in your project root. The easiest way is the interactive wizard:

evaly init

Evalytic searches for config files in:

  1. ./evalytic.toml (current directory)
  2. ~/.evalytic/config.toml (user home)

Full example

# evalytic.toml

[keys]
fal = "fal_key_xxx"
gemini = "gemini_key_xxx"
openai = "sk-xxx"
anthropic = "sk-ant-xxx"

[bench]
judge = "gemini-2.5-flash"
concurrency = 4
dimensions = ["visual_quality", "prompt_adherence"]
image_size = "landscape_16_9"
seed = 42
output_dir = "./reports"

# Weight VLM dimensions (default: equal)
[bench.dimension_weights]
input_fidelity = 0.5
visual_quality = 0.1

[bench.metrics]
clip_threshold = 0.18
clip_weight = 0.20
clip_range = [0.20, 0.40]
lpips_threshold = 0.40
lpips_weight = 0.20
lpips_range = [0.40, 0.95]
face_range = [0.60, 0.95]

# Override model cost or settings
[bench.model_overrides.flux-kontext]
cost = 0.06

[bench.model_overrides.my-custom-model]
endpoint = "fal-ai/my-custom/v1"
pipeline = "img2img"
cost = 0.04
image_field = "image_urls"

[keys] Section

API keys defined here are set as environment variables when Evalytic loads:

| Config Key | Environment Variable | Used By                                |
|------------|----------------------|----------------------------------------|
| fal        | FAL_KEY              | fal.ai image generation + fal/* judges |
| gemini     | GEMINI_API_KEY       | Gemini judge                           |
| openai     | OPENAI_API_KEY       | OpenAI judge                           |
| anthropic  | ANTHROPIC_API_KEY    | Anthropic judge                        |

Security: Don't commit evalytic.toml with API keys to version control. Add it to .gitignore, or use environment variables / .env instead.
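
If you do keep keys in evalytic.toml, a one-liner keeps the file out of the repository (assumes evalytic.toml sits in the repo root):

```shell
# Append to .gitignore so the key-bearing config is never committed
echo "evalytic.toml" >> .gitignore
```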

[bench] Section

Default settings for the evaly bench command:

| Key         | Type     | Default            | Description                                                                               |
|-------------|----------|--------------------|-------------------------------------------------------------------------------------------|
| judge       | string   | "gemini-2.5-flash" | Default VLM judge (single mode)                                                           |
| judges      | string[] |                    | Multi-judge consensus mode (2–3 judges). Overrides judge when set.                        |
| models      | string[] |                    | Default models for evaly bench (avoids -m flag)                                           |
| prompts     | string   |                    | Default prompts file path or inline prompt                                                |
| concurrency | int      | 4                  | Max parallel generation requests                                                          |
| dimensions  | string[] | auto               | Default dimensions to score                                                               |
| image_size  | string   |                    | Default image size                                                                        |
| seed        | int      |                    | Fixed seed for reproducibility                                                            |
| output_dir  | string   |                    | Default output directory. Each run creates a timestamped subfolder with reports and error log. |

[bench.dimension_weights] Section

Customize how VLM dimensions contribute to the overall score. By default all dimensions are weighted equally (1/n). When you specify weights, unspecified dimensions share the remaining weight equally. Weights are normalized to sum to 1.0.

# E-commerce: product shape matters most
[bench.dimension_weights]
input_fidelity = 0.5
visual_quality = 0.1
# Remaining 0.4 split equally among other active dimensions

You can also set dimension weights via the CLI: --dim-weights '{"input_fidelity": 0.5}'. CLI flags override TOML values.
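
The fill-and-normalize behavior described above can be approximated in a few lines (a sketch of the documented rule, not Evalytic's internal code):

```python
def resolve_weights(active_dims, specified):
    """Unspecified dimensions share the remaining weight equally,
    then everything is normalized to sum to 1.0 (as documented)."""
    remaining = 1.0 - sum(specified.values())
    unspecified = [d for d in active_dims if d not in specified]
    share = remaining / len(unspecified) if unspecified else 0.0
    weights = {d: specified.get(d, share) for d in active_dims}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

dims = ["input_fidelity", "visual_quality", "prompt_adherence", "text_rendering"]
# input_fidelity keeps 0.5, visual_quality 0.1, the other two split 0.4 → 0.2 each
print(resolve_weights(dims, {"input_fidelity": 0.5, "visual_quality": 0.1}))
```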

[bench.metrics] Section

Thresholds, weights, and normalization ranges for local metrics. Sharpness is always available (no torch required); CLIP, LPIPS, and face similarity require evalytic[metrics].

| Key             | Type     | Default      | Description                                          |
|-----------------|----------|--------------|------------------------------------------------------|
| clip_threshold  | float    | 0.18         | CLIP score flag threshold                            |
| clip_weight     | float    | 0.20         | CLIP weight in overall score                         |
| clip_range      | float[2] | [0.18, 0.35] | CLIP normalize range [min, max] for mapping to 0–5   |
| lpips_threshold | float    | 0.40         | LPIPS flag threshold                                 |
| lpips_weight    | float    | 0.20         | LPIPS weight in overall score                        |
| lpips_range     | float[2] | [0.40, 0.95] | LPIPS normalize range [min, max]                     |
| face_range      | float[2] | [0.60, 0.95] | Face similarity normalize range [min, max]           |
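
The [min, max] ranges map a raw metric value onto the 0–5 scale. A sketch of that mapping, assuming a linear map with clamping at the range edges (the clamping detail is an assumption, not stated in the docs):

```python
def normalize(value, lo, hi, scale=5.0):
    """Clamp value into [lo, hi], then map linearly to [0, scale]."""
    value = max(lo, min(hi, value))
    return (value - lo) / (hi - lo) * scale

# A CLIP score at the midpoint of the default [0.18, 0.35] range maps to 2.5
print(normalize(0.265, 0.18, 0.35))  # → 2.5
```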

[bench.model_overrides] Section

Override cost or settings for any model. Useful when fal.ai prices change or you're using a custom endpoint. Overrides take priority over both the built-in registry and auto-detected pricing.

# Override cost for an existing model
[bench.model_overrides.flux-kontext]
cost = 0.06

# Register a custom model
[bench.model_overrides.my-custom-model]
endpoint = "fal-ai/my-custom/v1"
pipeline = "img2img"
cost = 0.04
image_field = "image_urls"

| Key         | Type   | Description                           |
|-------------|--------|---------------------------------------|
| endpoint    | string | fal.ai endpoint path                  |
| pipeline    | string | "text2img" or "img2img"               |
| cost        | float  | USD per image (overrides auto-detect) |
| image_field | string | "image_url" or "image_urls"           |

Cost priority: model_overrides > fal.ai live pricing (auto-detected) > built-in registry defaults. Run evaly bench --list-models to see current prices.

.env File

Evalytic auto-loads .env from the current directory using python-dotenv:

# .env
FAL_KEY=fal_key_xxx
GEMINI_API_KEY=gemini_key_xxx
OPENAI_API_KEY=sk-xxx
ANTHROPIC_API_KEY=sk-ant-xxx
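
Evalytic delegates this to python-dotenv; a rough stdlib approximation of the load behavior (it skips the quoting, interpolation, and export-prefix handling that python-dotenv supports):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=VALUE lines, '#' comments and blanks skipped.
    Existing environment variables are not overwritten (python-dotenv's default)."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```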

Environment Variables

| Variable          | Description                                    |
|-------------------|------------------------------------------------|
| FAL_KEY           | fal.ai API key for image generation + fal/* judges |
| GEMINI_API_KEY    | Google Gemini API key for judge                |
| OPENAI_API_KEY    | OpenAI API key for judge                       |
| ANTHROPIC_API_KEY | Anthropic API key for judge                    |

Example Configurations

Single key (fal.ai only)

# One key for both generation and judging
[keys]
fal = "fal_key_xxx"

[bench]
judge = "fal/gemini-2.5-flash"

Two keys (fal.ai + Gemini)

[keys]
fal = "fal_key_xxx"
gemini = "gemini_key_xxx"

CI/CD with GPT-5.2 judge

[bench]
judge = "openai/gpt-5.2"
concurrency = 2
dimensions = ["visual_quality", "prompt_adherence", "text_rendering"]

Local development with Ollama

[keys]
fal = "fal_key_xxx"

[bench]
judge = "ollama/qwen2.5-vl:7b"
seed = 42

Consensus mode (multi-judge)

# Consensus via fal.ai — single key, multiple judges
[keys]
fal = "fal_key_xxx"

[bench]
judges = ["fal/gemini-2.5-flash", "fal/gpt-5.2"]

# Or with separate API keys per provider
# [keys]
# gemini = "gemini_key_xxx"
# openai = "sk-xxx"
# [bench]
# judges = ["gemini-2.5-flash", "gpt-5.2"]

Default models and prompts

# Saves you from typing -m and -p every time
[keys]
fal = "fal_key_xxx"
gemini = "gemini_key_xxx"

[bench]
models = ["flux-schnell", "flux-dev"]
prompts = "prompts.json"

With this config, evaly bench -y is all you need — models and prompts are loaded from the config file.

Inspect Configuration

Use evaly config show to see the active configuration, which keys are loaded, and where they came from:

evaly config show