LLM Configuration
Agent Air loads LLM provider configuration from a YAML file or environment variables. The YAML file takes precedence when it exists and contains valid provider definitions.
Configuration File Location
The configuration file path is specified by config_path() in your AgentConfig implementation:
fn config_path(&self) -> &str {
"~/.myagent/config.yaml"
}
Paths starting with ~/ are expanded to the user’s home directory. The framework does not create the configuration directory or file automatically. If the path does not exist, the framework falls back to environment variables.
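The expansion behaves like the following sketch (illustrative only; the helper name and the reliance on the HOME variable are assumptions, not the framework's actual API):

```rust
use std::env;

// Illustrative sketch of `~/` expansion: replace a leading "~/" with the
// value of $HOME. The framework's real implementation may differ.
fn expand_home(path: &str) -> String {
    if let Some(rest) = path.strip_prefix("~/") {
        if let Ok(home) = env::var("HOME") {
            return format!("{}/{}", home, rest);
        }
    }
    // Absolute and relative paths pass through unchanged
    path.to_string()
}

fn main() {
    // Prints the expanded path for the current user's $HOME
    println!("{}", expand_home("~/.myagent/config.yaml"));
}
```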
YAML File Format
The configuration file has two top-level fields:
providers:
- provider: anthropic
api_key: sk-ant-...
model: claude-sonnet-4-20250514
- provider: openai
api_key: sk-...
model: gpt-4-turbo-preview
default_provider: anthropic
providers (required)
An array of provider configurations. Each entry requires:
- provider: The provider identifier (see Supported Providers below)
- api_key: Authentication key for the provider API
- model: Model identifier to use for requests (optional for some providers)
default_provider (optional)
Specifies which provider to use when no specific provider is requested. If omitted, the first provider in the list becomes the default.
Supported Providers
Native Providers
These providers have dedicated implementations with full feature support:
| Provider | YAML Name | Default Model | Context Limit |
|---|---|---|---|
| Anthropic | anthropic | claude-sonnet-4-20250514 | 200,000 |
| OpenAI | openai | gpt-4-turbo-preview | 128,000 |
| Google | google | gemini-2.5-flash | 1,000,000 |
OpenAI-Compatible Providers
These providers use the OpenAI API format and are automatically configured with appropriate defaults:
| Provider | YAML Name | Default Model | Context Limit |
|---|---|---|---|
| Groq | groq | llama-3.3-70b-versatile | 131,072 |
| Together AI | together | meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | 131,072 |
| Fireworks AI | fireworks | accounts/fireworks/models/llama-v3p1-70b-instruct | 131,072 |
| Mistral AI | mistral | mistral-large-latest | 128,000 |
| Perplexity | perplexity | llama-3.1-sonar-large-128k-online | 128,000 |
| DeepSeek | deepseek | deepseek-chat | 64,000 |
| OpenRouter | openrouter | anthropic/claude-3.5-sonnet | 200,000 |
| xAI | xai | grok-2-latest | 131,072 |
| AI21 Labs | ai21 | jamba-1.5-large | 256,000 |
| Anyscale | anyscale | meta-llama/Meta-Llama-3.1-70B-Instruct | 128,000 |
| Cerebras | cerebras | llama3.1-70b | 128,000 |
| SambaNova | sambanova | Meta-Llama-3.1-70B-Instruct | 128,000 |
Local Providers
These providers run locally and do not require API keys:
| Provider | YAML Name | Default Model | Context Limit |
|---|---|---|---|
| Ollama | ollama | llama3.1 | 128,000 |
| LM Studio | lmstudio | local-model | 128,000 |
Configuration Examples
Single Provider
providers:
- provider: anthropic
api_key: sk-ant-api03-xxxxx
model: claude-sonnet-4-20250514
Multiple Providers
providers:
- provider: anthropic
api_key: sk-ant-api03-xxxxx
model: claude-sonnet-4-20250514
- provider: groq
api_key: gsk_xxxxx
model: llama-3.3-70b-versatile
default_provider: anthropic
Local Provider
providers:
- provider: ollama
api_key: ""
model: llama3.1
For local providers, the api_key field can be empty but must be present.
Environment Variables
When no YAML configuration file exists or when it is empty, the framework falls back to environment variables. Set the appropriate API key variable to enable a provider.
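For example, a single Anthropic provider can be enabled from the shell before starting the agent (the key value is a placeholder):

```shell
# Enable the Anthropic provider via environment variables
export ANTHROPIC_API_KEY=sk-ant-api03-xxxxx
# Optional: override the default model
export ANTHROPIC_MODEL=claude-sonnet-4-20250514
```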
Native Providers
| Provider | API Key Variable | Model Variable | Default Model |
|---|---|---|---|
| Anthropic | ANTHROPIC_API_KEY | ANTHROPIC_MODEL | claude-sonnet-4-20250514 |
| OpenAI | OPENAI_API_KEY | OPENAI_MODEL | gpt-4-turbo-preview |
| Google | GOOGLE_API_KEY | GOOGLE_MODEL | gemini-2.5-flash |
OpenAI-Compatible Providers
| Provider | API Key Variable | Model Variable |
|---|---|---|
| Groq | GROQ_API_KEY | GROQ_MODEL |
| Together AI | TOGETHER_API_KEY | TOGETHER_MODEL |
| Fireworks AI | FIREWORKS_API_KEY | FIREWORKS_MODEL |
| Mistral AI | MISTRAL_API_KEY | MISTRAL_MODEL |
| Perplexity | PERPLEXITY_API_KEY | PERPLEXITY_MODEL |
| DeepSeek | DEEPSEEK_API_KEY | DEEPSEEK_MODEL |
| OpenRouter | OPENROUTER_API_KEY | OPENROUTER_MODEL |
| xAI | XAI_API_KEY | XAI_MODEL |
| AI21 Labs | AI21_API_KEY | AI21_MODEL |
| Anyscale | ANYSCALE_API_KEY | ANYSCALE_MODEL |
| Cerebras | CEREBRAS_API_KEY | CEREBRAS_MODEL |
| SambaNova | SAMBANOVA_API_KEY | SAMBANOVA_MODEL |
Local Providers
| Provider | Host Variable |
|---|---|
| Ollama | OLLAMA_HOST |
| LM Studio | LMSTUDIO_HOST |
Setting the host variable enables the local provider. No API key is required.
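For example, to point the framework at a local Ollama instance (the URL shown is Ollama's conventional default port; adjust to your setup):

```shell
# Enable the Ollama provider by setting its host variable; no API key needed
export OLLAMA_HOST=http://localhost:11434
```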
Precedence
Environment variables are only used when:
- The YAML config file does not exist
- The YAML config file cannot be read
- The YAML config file contains no providers
If the YAML file contains valid provider configuration, environment variables are ignored. There is no merging of configuration sources.
When multiple API keys are set via environment variables, the default provider is selected in this order: Anthropic, OpenAI, Google, then OpenAI-compatible providers.
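For example, with both of the following set and no YAML file present, Anthropic becomes the default because native providers are preferred over OpenAI-compatible ones (key values are placeholders):

```shell
export ANTHROPIC_API_KEY=sk-ant-api03-xxxxx
export GROQ_API_KEY=gsk_xxxxx
# Default provider: anthropic (first in the precedence order)
```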
Available Models
Anthropic
| Model | Description |
|---|---|
| claude-sonnet-4-20250514 | Latest Sonnet model (default) |
| claude-opus-4-20250514 | Latest Opus model |
| claude-3-5-sonnet-20241022 | Claude 3.5 Sonnet |
| claude-3-opus-20240229 | Claude 3 Opus |
| claude-3-haiku-20240307 | Claude 3 Haiku (fast) |
OpenAI
| Model | Description |
|---|---|
| gpt-4-turbo-preview | GPT-4 Turbo (default) |
| gpt-4o | GPT-4o |
| gpt-4o-mini | GPT-4o Mini (fast) |
| gpt-4 | GPT-4 |
| gpt-3.5-turbo | GPT-3.5 Turbo |
Google
| Model | Description |
|---|---|
| gemini-2.5-flash | Gemini 2.5 Flash (default) |
| gemini-2.5-pro | Gemini 2.5 Pro |
| gemini-1.5-pro | Gemini 1.5 Pro |
Groq
| Model | Description |
|---|---|
| llama-3.3-70b-versatile | Llama 3.3 70B (default) |
| llama-3.1-70b-versatile | Llama 3.1 70B |
| mixtral-8x7b-32768 | Mixtral 8x7B |
For other OpenAI-compatible providers, refer to the provider’s documentation for available models.
Model-Specific Capabilities
Some features vary by model:
Streaming support:
- Anthropic: Enabled by default
- OpenAI: Disabled by default (can be enabled)
- Google: Enabled by default
- OpenAI-compatible: Varies by provider
Tool use:
- Most modern models support tool use (function calling)
- Check provider documentation for specific model capabilities
Model identifiers are not validated by the framework. Invalid model names result in errors from the provider API when making requests.
API Key Security
File Permissions
Restrict access to configuration files containing API keys:
chmod 600 ~/.myagent/config.yaml
Do Not Commit Keys
Add configuration files to .gitignore:
# Agent configuration with API keys
.myagent/config.yaml
config.yaml
Production Deployments
For production, prefer environment variables or secrets management systems:
- Container orchestration (Kubernetes secrets, Docker secrets)
- Cloud provider secrets managers (AWS Secrets Manager, GCP Secret Manager)
- CI/CD pipeline secrets
- Process managers (systemd environment files)
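As a sketch, a Kubernetes deployment might inject the key from a Secret into the environment variable the framework reads (the resource and key names here are hypothetical):

```yaml
# Hypothetical Kubernetes Secret holding the provider key
apiVersion: v1
kind: Secret
metadata:
  name: myagent-llm-keys
type: Opaque
stringData:
  anthropic-api-key: sk-ant-api03-xxxxx
---
# In the agent's Deployment pod spec, reference it as:
# env:
#   - name: ANTHROPIC_API_KEY
#     valueFrom:
#       secretKeyRef:
#         name: myagent-llm-keys
#         key: anthropic-api-key
```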
Key Formats
| Provider | Key Prefix | Example |
|---|---|---|
| Anthropic | sk-ant- | sk-ant-api03-xxxxx |
| OpenAI | sk- | sk-xxxxx |
| Groq | gsk_ | gsk_xxxxx |
| OpenRouter | sk-or- | sk-or-xxxxx |
Error Handling
Configuration errors are represented by ConfigError:
| Error | Description |
|---|---|
| ReadError | The file could not be read (missing, permissions) |
| ParseError | The file contains invalid YAML |
| UnknownProvider | A provider name is not recognized |
When configuration loading fails, the framework logs a warning and the agent starts without LLM capabilities.
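A consumer might report these variants as in the sketch below (the enum is re-declared locally for illustration; the real ConfigError's variant payloads may differ):

```rust
// Illustrative stand-in for the framework's ConfigError; the real enum's
// variant payloads may differ.
#[derive(Debug)]
enum ConfigError {
    ReadError(String),
    ParseError(String),
    UnknownProvider(String),
}

// Turn a configuration error into a human-readable warning message
fn describe(err: &ConfigError) -> String {
    match err {
        ConfigError::ReadError(path) => format!("could not read config file: {}", path),
        ConfigError::ParseError(msg) => format!("invalid YAML: {}", msg),
        ConfigError::UnknownProvider(name) => format!("unknown provider: {}", name),
    }
}

fn main() {
    let err = ConfigError::UnknownProvider("anthropc".to_string());
    // The framework logs a warning like this and starts the agent
    // without LLM capabilities.
    println!("warning: {}", describe(&err));
}
```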
Changing Configuration
Configuration is loaded once at startup. To change providers, models, or API keys:
- Update the YAML configuration file or environment variable
- Restart the agent
