OpenAI Provider
The OpenAI provider enables integration with GPT models through the Chat Completions API. It supports both synchronous and streaming message completion with full tool use capabilities. Beyond the standard OpenAI API, this provider also supports Azure OpenAI and any OpenAI-compatible API (Groq, Together, Fireworks, etc.) through configurable base URLs.
The provider handles OpenAI-specific API formats, including the function-wrapped tool format, system messages in the message array, and the choices-based response structure. Azure OpenAI uses a different URL format and authentication method, which the provider handles automatically based on configuration.
OpenAIProvider Struct
The provider is defined in src/client/providers/openai/mod.rs:
pub struct OpenAIProvider {
pub api_key: String,
pub model: String,
pub base_url: Option<String>,
}
impl OpenAIProvider {
pub fn new(api_key: String, model: String) -> Self {
Self { api_key, model, base_url: None }
}
pub fn with_base_url(mut self, url: String) -> Self {
self.base_url = Some(url);
self
}
}
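As a usage sketch, the builder chain might look like this (the struct is reproduced from above so the snippet compiles standalone; the key and URL values are placeholders):

```rust
// Reproduced from the provider definition above so this sketch is self-contained.
pub struct OpenAIProvider {
    pub api_key: String,
    pub model: String,
    pub base_url: Option<String>,
}

impl OpenAIProvider {
    pub fn new(api_key: String, model: String) -> Self {
        Self { api_key, model, base_url: None }
    }

    pub fn with_base_url(mut self, url: String) -> Self {
        self.base_url = Some(url);
        self
    }
}

fn main() {
    // Standard OpenAI: base_url stays None, so the default endpoint is used.
    let standard = OpenAIProvider::new("sk-...".into(), "gpt-4-turbo".into());
    assert!(standard.base_url.is_none());

    // OpenAI-compatible backend (here Groq) via a base URL override.
    let groq = OpenAIProvider::new("gsk_...".into(), "llama-3.1-70b-versatile".into())
        .with_base_url("https://api.groq.com/openai/v1".into());
    assert_eq!(groq.base_url.as_deref(), Some("https://api.groq.com/openai/v1"));
}
```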
Standard OpenAI API
API Configuration
| Setting | Value |
|---|---|
| Endpoint | https://api.openai.com/v1/chat/completions |
| Content-Type | application/json |
Request headers:
Authorization: Bearer <api_key>
Content-Type: application/json
LLMSessionConfig Builder
Create an OpenAI session configuration using the builder:
use agent_air::controller::LLMSessionConfig;
let config = LLMSessionConfig::openai("sk-...", "gpt-4-turbo");
The openai() method sets these defaults:
| Option | Default Value |
|---|---|
| max_tokens | 4096 |
| streaming | true |
| context_limit | 128,000 |
| compaction | Threshold (default) |
Builder Methods
let config = LLMSessionConfig::openai("sk-...", "gpt-4-turbo")
.with_max_tokens(4096)
.with_system_prompt("You are a helpful assistant.")
.with_temperature(0.7)
.with_streaming(true)
.with_context_limit(128_000);
Azure OpenAI
Azure OpenAI provides OpenAI models through Microsoft Azure with enterprise features like VPC integration, managed identity, and Azure compliance certifications.
API Configuration
| Setting | Value |
|---|---|
| Endpoint | https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions |
| Query | ?api-version={version} |
| Auth Header | api-key |
LLMSessionConfig Builder
Create an Azure OpenAI session configuration:
use agent_air::controller::LLMSessionConfig;
let config = LLMSessionConfig::azure_openai(
"your-api-key", // Azure OpenAI API key
"my-resource", // Azure resource name
"gpt-4-deployment" // Deployment name
);
The azure_openai() method sets these defaults:
| Option | Default Value |
|---|---|
| max_tokens | 4096 |
| streaming | true |
| context_limit | 128,000 |
| api_version | 2024-10-21 |
Customizing Azure Config
let config = LLMSessionConfig::azure_openai(
api_key,
"my-resource",
"gpt-4-deployment"
)
.with_azure_api_version("2024-10-21") // Override API version
.with_max_tokens(4096)
.with_system_prompt("You are a helpful assistant.")
.with_temperature(0.7);
Environment Variables for Azure
| Variable | Description |
|---|---|
| AZURE_OPENAI_API_KEY | Azure OpenAI API key |
| AZURE_OPENAI_RESOURCE | Resource name |
| AZURE_OPENAI_DEPLOYMENT | Deployment name |
| AZURE_OPENAI_API_VERSION | API version (optional) |
OpenAI-Compatible APIs
Many providers offer OpenAI-compatible APIs, allowing you to use the same interface with different backends. The provider supports custom base URLs for this purpose.
Supported Providers
| Provider | Base URL | Notes |
|---|---|---|
| Groq | https://api.groq.com/openai/v1 | Fast inference |
| Together | https://api.together.xyz/v1 | Open models |
| Fireworks | https://api.fireworks.ai/inference/v1 | Fast inference |
| Anyscale | https://api.endpoints.anyscale.com/v1 | Custom endpoints |
| Local (Ollama) | http://localhost:11434/v1 | Local models |
| LM Studio | http://localhost:1234/v1 | Local models |
LLMSessionConfig Builder
Create an OpenAI-compatible session configuration:
use agent_air::controller::LLMSessionConfig;
// Groq example
let config = LLMSessionConfig::openai_compatible(
"gsk_...", // API key
"llama-3.1-70b-versatile", // Model
"https://api.groq.com/openai/v1", // Base URL
128_000 // Context limit
);
// Together example
let config = LLMSessionConfig::openai_compatible(
"together_...",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
"https://api.together.xyz/v1",
131_072
);
// Local Ollama example
let config = LLMSessionConfig::openai_compatible(
"ollama", // Any non-empty string
"llama3.1:70b",
"http://localhost:11434/v1",
128_000
);
Using with_base_url
Alternatively, start with a standard OpenAI config and override the base URL:
let config = LLMSessionConfig::openai("gsk_...", "llama-3.1-70b-versatile")
.with_base_url("https://api.groq.com/openai/v1")
.with_context_limit(128_000);
Streaming Support
The OpenAI provider fully supports streaming via Server-Sent Events (SSE). When streaming is enabled, responses arrive incrementally as StreamEvent values:
pub enum StreamEvent {
MessageStart { message_id: String, model: String },
TextDelta(String),
ToolUse { id: String, name: String, input: Value },
MessageStop,
Error(String),
}
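In an SSE stream, each event line carries a JSON chunk after a `data: ` prefix, and OpenAI terminates the stream with `data: [DONE]`. A sketch of the line-level framing (JSON decoding of each chunk into a StreamEvent is omitted):

```rust
/// What a raw SSE line from the stream contains.
#[derive(Debug, PartialEq)]
enum SseLine<'a> {
    /// A JSON payload to decode into a streaming chunk.
    Data(&'a str),
    /// The `[DONE]` sentinel that ends the stream.
    Done,
    /// Blank keep-alive lines, comments, etc.
    Other,
}

fn parse_sse_line(line: &str) -> SseLine<'_> {
    match line.strip_prefix("data: ") {
        Some("[DONE]") => SseLine::Done,
        Some(payload) => SseLine::Data(payload),
        None => SseLine::Other,
    }
}

fn main() {
    let chunk = r#"data: {"choices":[{"delta":{"content":"Hi"}}]}"#;
    assert_eq!(
        parse_sse_line(chunk),
        SseLine::Data(r#"{"choices":[{"delta":{"content":"Hi"}}]}"#)
    );
    assert_eq!(parse_sse_line("data: [DONE]"), SseLine::Done);
    assert_eq!(parse_sse_line(""), SseLine::Other);
}
```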
Enable streaming with with_streaming(true) in the session config; the provider then sets "stream": true in the request body.
Request Format
OpenAI uses a messages array with system messages included:
{
"model": "gpt-4-turbo",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello"}
],
"max_tokens": 4096,
"stream": true
}
Role Mapping
| Generic Role | OpenAI Role |
|---|---|
| User | user |
| Assistant | assistant |
| System | system |
| Tool Result | tool |
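The mapping above is a straight one-to-one translation; a sketch (the Role enum here is illustrative, not the crate's actual type):

```rust
// Illustrative generic role type; the crate's real message types may differ.
enum Role {
    User,
    Assistant,
    System,
    ToolResult,
}

/// Maps a generic role to the string OpenAI expects in the messages array.
fn openai_role(role: &Role) -> &'static str {
    match role {
        Role::User => "user",
        Role::Assistant => "assistant",
        Role::System => "system",
        Role::ToolResult => "tool",
    }
}

fn main() {
    assert_eq!(openai_role(&Role::System), "system");
    assert_eq!(openai_role(&Role::ToolResult), "tool");
}
```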
Tool Use Format
Tools are wrapped in a function structure:
{
"tools": [{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string"}
},
"required": ["location"]
}
}
}]
}
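Wrapping a provider-agnostic tool definition into this function envelope is mechanical; a string-level sketch (a real implementation would build this with a JSON library rather than format strings):

```rust
// Wraps a tool's name, description, and JSON-schema parameters in
// OpenAI's {"type": "function", "function": {...}} envelope.
fn wrap_tool(name: &str, description: &str, parameters_json: &str) -> String {
    format!(
        r#"{{"type":"function","function":{{"name":"{name}","description":"{description}","parameters":{parameters_json}}}}}"#
    )
}

fn main() {
    let schema = r#"{"type":"object","properties":{"location":{"type":"string"}},"required":["location"]}"#;
    let wrapped = wrap_tool("get_weather", "Get weather for a location", schema);
    println!("{wrapped}");
    assert!(wrapped.contains(r#""type":"function""#));
}
```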
Tool Choice
Tool choice values differ from Anthropic:
| Value | Description |
|---|---|
| "auto" | Model decides |
| "required" | Must use at least one tool |
| "none" | Cannot use tools |
| {"type": "function", "function": {"name": "..."}} | Force a specific tool |
Response Format
OpenAI responses use the choices array structure:
{
"id": "chatcmpl-...",
"choices": [{
"message": {
"role": "assistant",
"content": "Hello! How can I help?",
"tool_calls": [...]
},
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 20
}
}
The provider extracts choices[0].message and converts it to the generic Message type.
Environment Variables
Standard OpenAI
| Variable | Description | Default |
|---|---|---|
| OPENAI_API_KEY | API key (required) | None |
| OPENAI_MODEL | Model identifier | gpt-4-turbo |
| OPENAI_BASE_URL | Custom base URL | None |
Azure OpenAI
| Variable | Description | Default |
|---|---|---|
| AZURE_OPENAI_API_KEY | API key | None |
| AZURE_OPENAI_RESOURCE | Resource name | None |
| AZURE_OPENAI_DEPLOYMENT | Deployment name | None |
| AZURE_OPENAI_API_VERSION | API version | 2024-10-21 |
Available Models
OpenAI Models
| Model | Context | Description |
|---|---|---|
| gpt-4-turbo | 128K | Latest GPT-4 Turbo |
| gpt-4o | 128K | Optimized GPT-4 |
| gpt-4o-mini | 128K | Cost-effective GPT-4 |
| gpt-4 | 8K | Standard GPT-4 |
| gpt-3.5-turbo | 16K | Fast, cost-effective |
| o1-preview | 128K | Advanced reasoning |
| o1-mini | 128K | Efficient reasoning |
Azure OpenAI Models
Azure models are accessed via deployment names. Available models depend on your Azure subscription and region.
YAML Configuration
Standard OpenAI
providers:
- provider: openai
api_key: sk-...
model: gpt-4-turbo
system_prompt: "You are a helpful assistant."
default_provider: openai
Azure OpenAI
providers:
- provider: openai
api_key: your-azure-api-key
azure_resource: my-resource
azure_deployment: gpt-4-deployment
azure_api_version: "2024-10-21"
system_prompt: "You are a helpful assistant."
default_provider: openai
OpenAI-Compatible
providers:
- provider: openai
api_key: gsk_...
model: llama-3.1-70b-versatile
base_url: https://api.groq.com/openai/v1
context_limit: 128000
system_prompt: "You are helpful."
default_provider: openai
Complete Example
use agent_air::{AgentAir, AgentConfig};
use agent_air::controller::LLMSessionConfig;
struct MyConfig;
impl AgentConfig for MyConfig {
fn config_path(&self) -> &str { ".myagent/config.yaml" }
fn default_system_prompt(&self) -> &str { "You are helpful." }
fn log_prefix(&self) -> &str { "myagent" }
fn name(&self) -> &str { "MyAgent" }
}
fn main() -> std::io::Result<()> {
let mut agent = AgentAir::new(&MyConfig)?;
// Configuration is loaded automatically from:
// 1. ~/.myagent/config.yaml (if exists)
// 2. OPENAI_API_KEY environment variable (fallback)
agent.run()
}
Error Handling
OpenAI API errors are converted to LlmError:
pub struct LlmError {
pub error_code: String,
pub error_message: String,
}
Common error codes:
| Error Code | Description |
|---|---|
| invalid_api_key | Invalid API key |
| rate_limit_exceeded | Too many requests |
| model_not_found | Invalid model identifier |
| context_length_exceeded | Input too long |
| insufficient_quota | Billing issue |
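One common use of these codes is deciding whether a failed request is worth retrying. A sketch of such a policy (the classification below is an assumption, not part of the crate):

```rust
// Hypothetical retry policy keyed on the error codes above: rate limits
// may clear after a backoff; auth, model, context, and billing errors will not.
fn is_retryable(error_code: &str) -> bool {
    error_code == "rate_limit_exceeded"
}

fn main() {
    assert!(is_retryable("rate_limit_exceeded"));
    assert!(!is_retryable("invalid_api_key"));
    assert!(!is_retryable("context_length_exceeded"));
}
```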
Comparison with Anthropic
| Feature | OpenAI | Anthropic |
|---|---|---|
| Default context | 128K | 200K |
| System message | In messages array | Dedicated field |
| Tool wrapper | {type: "function", function: {...}} | Direct tool object |
| Force tool use | "required" | "any" |
| Azure variant | Yes | No |
| Compatible APIs | Many | Few |
