
Providers

Amodal supports multiple LLM providers with a unified interface. Switch providers by changing an environment variable — no code changes needed.

Supported Providers

| Provider | Models | Auth |
| --- | --- | --- |
| Anthropic | Claude Opus, Sonnet, Haiku | `ANTHROPIC_API_KEY` |
| OpenAI | GPT-4o, GPT-4, GPT-3.5 | `OPENAI_API_KEY` |
| Google | Gemini Pro, Flash | `GOOGLE_API_KEY` |
| AWS Bedrock | Claude, Titan, Llama | AWS credentials |
| Azure OpenAI | GPT-4o, GPT-4 | `AZURE_OPENAI_API_KEY` + endpoint |

Configuration

Auto-detection

Set the relevant environment variable and Amodal auto-detects the provider:

export ANTHROPIC_API_KEY=sk-ant-...
amodal dev   # uses Anthropic automatically
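Conceptually, auto-detection amounts to checking the environment variables from the table above in order. A minimal sketch of that logic, assuming a `detectProvider` helper (this is illustrative, not Amodal's actual internals):

```typescript
// Sketch of env-based provider auto-detection. The environment variable
// names come from the table above; detectProvider and ENV_TO_PROVIDER
// are hypothetical names for illustration only.
const ENV_TO_PROVIDER: Record<string, string> = {
  ANTHROPIC_API_KEY: "anthropic",
  OPENAI_API_KEY: "openai",
  GOOGLE_API_KEY: "google",
  AZURE_OPENAI_API_KEY: "azure-openai",
};

function detectProvider(env: Record<string, string | undefined>): string | null {
  // Return the first provider whose API key is set, or null if none is.
  for (const [key, provider] of Object.entries(ENV_TO_PROVIDER)) {
    if (env[key]) return provider;
  }
  return null;
}
```

In practice you would pass `process.env`; explicit config (below) takes precedence when both are present.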

Explicit config

Specify in amodal.json:

{
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514"
}

Failover

The FailoverProvider tries providers in order, with retry logic and linear backoff between attempts:

{
  "provider": "failover",
  "providers": ["anthropic", "openai"],
  "retries": 2,
  "backoffMs": 1000
}

If the primary provider fails, the runtime automatically tries the next one.
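The retry-then-fall-through behavior can be sketched as a loop: retry each provider up to `retries` times with linearly growing delays, then move on to the next. The function and type names below are assumptions for illustration, not Amodal's real internals:

```typescript
// Illustrative failover loop with linear backoff, mirroring the
// "retries" and "backoffMs" config fields above. failoverChat and
// the Chat type are hypothetical names.
type Chat = (prompt: string) => Promise<string>;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function failoverChat(
  providers: Chat[],
  prompt: string,
  retries = 2,
  backoffMs = 1000,
): Promise<string> {
  let lastError: unknown;
  for (const chat of providers) {
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        return await chat(prompt);
      } catch (err) {
        lastError = err;
        // Linear backoff: wait (attempt + 1) * backoffMs before retrying.
        if (attempt < retries) await sleep((attempt + 1) * backoffMs);
      }
    }
    // Retries exhausted for this provider; fall through to the next one.
  }
  throw lastError;
}
```

With the example config, a request would attempt Anthropic up to three times (one initial try plus two retries) before falling back to OpenAI.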

Streaming

All providers support SSE streaming via chatStream(). The streaming interface is unified — your client code works identically regardless of which provider is active.
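Since the interface is uniform, consumer code can treat the stream the same way for every provider. A minimal sketch, assuming `chatStream()` yields text chunks as an async iterable (the exact return shape is an assumption here):

```typescript
// Hedged sketch of consuming a unified streaming interface. We assume
// chatStream(prompt) returns an AsyncIterable of text chunks; the
// actual signature may differ.
async function collectStream(
  chatStream: (prompt: string) => AsyncIterable<string>,
  prompt: string,
): Promise<string> {
  let full = "";
  for await (const chunk of chatStream(prompt)) {
    full += chunk; // append each streamed chunk as it arrives
  }
  return full;
}
```

The same consumer works unchanged whether the active provider is Anthropic, OpenAI, or any other entry in the table above.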

Multi-Model Comparison

Use amodal eval or amodal experiment to compare providers:

amodal eval --providers anthropic,openai,google

This runs the same eval suite against each provider and reports quality, latency, and cost differences.