OpenAI Provider

The OpenAI provider enables integration with GPT models through the Chat Completions API. It supports both synchronous and streaming completions, with full tool-use support. Beyond the standard OpenAI API, this provider also works with Azure OpenAI and any OpenAI-compatible API (Groq, Together, Fireworks, etc.) through configurable base URLs.

The provider handles OpenAI-specific API formats, including the function-wrapped tool format, system messages in the message array, and the choices-based response structure. Azure OpenAI uses a different URL format and authentication method, which the provider handles automatically based on configuration.


OpenAIProvider Struct

The provider is defined in src/client/providers/openai/mod.rs:

pub struct OpenAIProvider {
    pub api_key: String,
    pub model: String,
    pub base_url: Option<String>,
}

impl OpenAIProvider {
    pub fn new(api_key: String, model: String) -> Self {
        Self { api_key, model, base_url: None }
    }

    pub fn with_base_url(mut self, url: String) -> Self {
        self.base_url = Some(url);
        self
    }
}
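
As a self-contained sketch (the struct and impl are reproduced from above so the example compiles standalone), constructing a provider for the default endpoint versus a custom one looks like:

```rust
// Reproduced from the definition above so this sketch compiles on its own.
pub struct OpenAIProvider {
    pub api_key: String,
    pub model: String,
    pub base_url: Option<String>,
}

impl OpenAIProvider {
    pub fn new(api_key: String, model: String) -> Self {
        Self { api_key, model, base_url: None }
    }

    pub fn with_base_url(mut self, url: String) -> Self {
        self.base_url = Some(url);
        self
    }
}

fn main() {
    // Standard OpenAI: base_url stays None, so the default endpoint is used.
    let standard = OpenAIProvider::new("sk-...".into(), "gpt-4-turbo".into());
    assert!(standard.base_url.is_none());

    // OpenAI-compatible backend: override the base URL via the builder.
    let groq = OpenAIProvider::new("gsk_...".into(), "llama-3.1-70b-versatile".into())
        .with_base_url("https://api.groq.com/openai/v1".into());
    assert_eq!(groq.base_url.as_deref(), Some("https://api.groq.com/openai/v1"));
}
```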

Standard OpenAI API

API Configuration

Setting        Value
Endpoint       https://api.openai.com/v1/chat/completions
Content-Type   application/json

Request headers:

Authorization: Bearer <api_key>
Content-Type: application/json

LLMSessionConfig Builder

Create an OpenAI session configuration using the builder:

use agent_air::controller::LLMSessionConfig;

let config = LLMSessionConfig::openai("sk-...", "gpt-4-turbo");

The openai() method sets these defaults:

Option           Default Value
max_tokens       4096
streaming        true
context_limit    128,000
compaction       Threshold (default)

Builder Methods

let config = LLMSessionConfig::openai("sk-...", "gpt-4-turbo")
    .with_max_tokens(4096)
    .with_system_prompt("You are a helpful assistant.")
    .with_temperature(0.7)
    .with_streaming(true)
    .with_context_limit(128_000);

Azure OpenAI

Azure OpenAI provides OpenAI models through Microsoft Azure with enterprise features like VPC integration, managed identity, and Azure compliance certifications.

API Configuration

Setting       Value
Endpoint      https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions
Query         ?api-version={version}
Auth Header   api-key
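
Assembling the full request URL from the table above can be sketched as follows (the helper name is illustrative, not part of the crate):

```rust
// Illustrative helper: builds the Azure OpenAI chat-completions URL
// from the resource name, deployment name, and API version.
fn azure_chat_url(resource: &str, deployment: &str, api_version: &str) -> String {
    format!("https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version={api_version}")
}

fn main() {
    let url = azure_chat_url("my-resource", "gpt-4-deployment", "2024-10-21");
    assert_eq!(
        url,
        "https://my-resource.openai.azure.com/openai/deployments/gpt-4-deployment/chat/completions?api-version=2024-10-21"
    );
}
```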

LLMSessionConfig Builder

Create an Azure OpenAI session configuration:

use agent_air::controller::LLMSessionConfig;

let config = LLMSessionConfig::azure_openai(
    "your-api-key",           // Azure OpenAI API key
    "my-resource",            // Azure resource name
    "gpt-4-deployment"        // Deployment name
);

The azure_openai() method sets these defaults:

Option           Default Value
max_tokens       4096
streaming        true
context_limit    128,000
api_version      2024-10-21

Customizing Azure Config

let config = LLMSessionConfig::azure_openai(
    api_key,
    "my-resource",
    "gpt-4-deployment"
)
.with_azure_api_version("2024-10-21")  // Override API version
.with_max_tokens(4096)
.with_system_prompt("You are a helpful assistant.")
.with_temperature(0.7);

Environment Variables for Azure

Variable                   Description
AZURE_OPENAI_API_KEY       Azure OpenAI API key
AZURE_OPENAI_RESOURCE      Resource name
AZURE_OPENAI_DEPLOYMENT    Deployment name
AZURE_OPENAI_API_VERSION   API version (optional)

OpenAI-Compatible APIs

Many providers offer OpenAI-compatible APIs, allowing you to use the same interface with different backends. The provider supports custom base URLs for this purpose.

Supported Providers

Provider         Base URL                                Notes
Groq             https://api.groq.com/openai/v1          Fast inference
Together         https://api.together.xyz/v1             Open models
Fireworks        https://api.fireworks.ai/inference/v1   Fast inference
Anyscale         https://api.endpoints.anyscale.com/v1   Custom endpoints
Local (Ollama)   http://localhost:11434/v1               Local models
LM Studio        http://localhost:1234/v1                Local models
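
The table above can be captured in a small lookup helper; this is an illustrative sketch (the function and the provider keys are not part of the crate):

```rust
// Illustrative lookup of common OpenAI-compatible base URLs.
// The string keys here are hypothetical identifiers, not crate API.
fn compatible_base_url(provider: &str) -> Option<&'static str> {
    match provider {
        "groq" => Some("https://api.groq.com/openai/v1"),
        "together" => Some("https://api.together.xyz/v1"),
        "fireworks" => Some("https://api.fireworks.ai/inference/v1"),
        "anyscale" => Some("https://api.endpoints.anyscale.com/v1"),
        "ollama" => Some("http://localhost:11434/v1"),
        "lmstudio" => Some("http://localhost:1234/v1"),
        _ => None, // unknown backends need an explicit base URL
    }
}

fn main() {
    assert_eq!(compatible_base_url("groq"), Some("https://api.groq.com/openai/v1"));
    assert_eq!(compatible_base_url("unknown"), None);
}
```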

LLMSessionConfig Builder

Create an OpenAI-compatible session configuration:

use agent_air::controller::LLMSessionConfig;

// Groq example
let config = LLMSessionConfig::openai_compatible(
    "gsk_...",                           // API key
    "llama-3.1-70b-versatile",          // Model
    "https://api.groq.com/openai/v1",   // Base URL
    128_000                              // Context limit
);

// Together example
let config = LLMSessionConfig::openai_compatible(
    "together_...",
    "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
    "https://api.together.xyz/v1",
    131_072
);

// Local Ollama example
let config = LLMSessionConfig::openai_compatible(
    "ollama",                            // Any non-empty string
    "llama3.1:70b",
    "http://localhost:11434/v1",
    128_000
);

Using with_base_url

Alternatively, start with a standard OpenAI config and override the base URL:

let config = LLMSessionConfig::openai("gsk_...", "llama-3.1-70b-versatile")
    .with_base_url("https://api.groq.com/openai/v1")
    .with_context_limit(128_000);

Streaming Support

The OpenAI provider fully supports streaming via Server-Sent Events (SSE). When streaming is enabled, responses arrive incrementally as StreamEvent values:

pub enum StreamEvent {
    MessageStart { message_id: String, model: String },
    TextDelta(String),
    ToolUse { id: String, name: String, input: Value },
    MessageStop,
    Error(String),
}

Enable streaming by setting "stream": true in the request body (controlled via with_streaming(true) on the session config).
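
A consumer typically matches on each event as it arrives. The sketch below uses a simplified copy of the enum (the tool input is a plain String here rather than a JSON Value, so the example has no external dependencies):

```rust
// Simplified copy of StreamEvent: `input` is a String instead of a
// serde_json::Value so this sketch stays dependency-free.
enum StreamEvent {
    MessageStart { message_id: String, model: String },
    TextDelta(String),
    ToolUse { id: String, name: String, input: String },
    MessageStop,
    Error(String),
}

// Accumulate streamed text deltas into the final assistant message.
fn collect_text(events: Vec<StreamEvent>) -> String {
    let mut text = String::new();
    for event in events {
        match event {
            StreamEvent::TextDelta(delta) => text.push_str(&delta),
            StreamEvent::MessageStop => break,
            _ => {} // ignore start/tool/error events in this sketch
        }
    }
    text
}

fn main() {
    let events = vec![
        StreamEvent::MessageStart { message_id: "msg_1".into(), model: "gpt-4-turbo".into() },
        StreamEvent::TextDelta("Hello, ".into()),
        StreamEvent::TextDelta("world!".into()),
        StreamEvent::MessageStop,
    ];
    assert_eq!(collect_text(events), "Hello, world!");
}
```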


Request Format

OpenAI uses a messages array with system messages included:

{
  "model": "gpt-4-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"}
  ],
  "max_tokens": 4096,
  "stream": true
}

Role Mapping

Generic Role   OpenAI Role
User           user
Assistant      assistant
System         system
Tool Result    tool
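
The mapping above is a straightforward match; a minimal sketch (the Role enum here is illustrative, standing in for the crate's generic role type):

```rust
// Illustrative stand-in for the crate's generic role type.
enum Role {
    User,
    Assistant,
    System,
    ToolResult,
}

// Map the generic role onto OpenAI's `role` string.
fn openai_role(role: &Role) -> &'static str {
    match role {
        Role::User => "user",
        Role::Assistant => "assistant",
        Role::System => "system",
        Role::ToolResult => "tool",
    }
}

fn main() {
    assert_eq!(openai_role(&Role::System), "system");
    assert_eq!(openai_role(&Role::ToolResult), "tool");
}
```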

Tool Use Format

Tools are wrapped in a function structure:

{
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get weather for a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string"}
        },
        "required": ["location"]
      }
    }
  }]
}

Tool Choice

Tool choice values differ from Anthropic:

Value                                               Description
"auto"                                              Model decides
"required"                                          Must use at least one tool
"none"                                              Cannot use tools
{"type": "function", "function": {"name": "..."}}   Force specific tool

Response Format

OpenAI responses use the choices array structure:

{
  "id": "chatcmpl-...",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help?",
      "tool_calls": [...]
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20
  }
}

The provider extracts choices[0].message and converts it to the generic Message type.


Environment Variables

Standard OpenAI

Variable          Description          Default
OPENAI_API_KEY    API key (required)   None
OPENAI_MODEL      Model identifier     gpt-4-turbo
OPENAI_BASE_URL   Custom base URL      None

Azure OpenAI

Variable                   Description       Default
AZURE_OPENAI_API_KEY       API key           None
AZURE_OPENAI_RESOURCE      Resource name     None
AZURE_OPENAI_DEPLOYMENT    Deployment name   None
AZURE_OPENAI_API_VERSION   API version       2024-10-21

Available Models

OpenAI Models

Model           Context   Description
gpt-4-turbo     128K      Latest GPT-4 Turbo
gpt-4o          128K      Optimized GPT-4
gpt-4o-mini     128K      Cost-effective GPT-4
gpt-4           8K        Standard GPT-4
gpt-3.5-turbo   16K       Fast, cost-effective
o1-preview      128K      Advanced reasoning
o1-mini         128K      Efficient reasoning

Azure OpenAI Models

Azure models are accessed via deployment names. Available models depend on your Azure subscription and region.


YAML Configuration

Standard OpenAI

providers:
  - provider: openai
    api_key: sk-...
    model: gpt-4-turbo
    system_prompt: "You are a helpful assistant."

default_provider: openai

Azure OpenAI

providers:
  - provider: openai
    api_key: your-azure-api-key
    azure_resource: my-resource
    azure_deployment: gpt-4-deployment
    azure_api_version: "2024-10-21"
    system_prompt: "You are a helpful assistant."

default_provider: openai

OpenAI-Compatible

providers:
  - provider: openai
    api_key: gsk_...
    model: llama-3.1-70b-versatile
    base_url: https://api.groq.com/openai/v1
    context_limit: 128000
    system_prompt: "You are helpful."

default_provider: openai

Complete Example

use agent_air::{AgentAir, AgentConfig};

struct MyConfig;

impl AgentConfig for MyConfig {
    fn config_path(&self) -> &str { ".myagent/config.yaml" }
    fn default_system_prompt(&self) -> &str { "You are helpful." }
    fn log_prefix(&self) -> &str { "myagent" }
    fn name(&self) -> &str { "MyAgent" }
}

fn main() -> std::io::Result<()> {
    let mut agent = AgentAir::new(&MyConfig)?;

    // Configuration is loaded automatically from:
    // 1. ~/.myagent/config.yaml (if exists)
    // 2. OPENAI_API_KEY environment variable (fallback)

    agent.run()
}

Error Handling

OpenAI API errors are converted to LlmError:

pub struct LlmError {
    pub error_code: String,
    pub error_message: String,
}

Common error codes:

Error Code                Description
invalid_api_key           Invalid API key
rate_limit_exceeded       Too many requests
model_not_found           Invalid model identifier
context_length_exceeded   Input too long
insufficient_quota        Billing issue
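
As an illustration, a caller might branch on the error code; the struct below is reproduced from above, while the is_retryable helper is hypothetical (not crate API) and assumes only rate-limit errors are worth retrying:

```rust
// Struct reproduced from above so the sketch compiles standalone.
pub struct LlmError {
    pub error_code: String,
    pub error_message: String,
}

impl LlmError {
    // Hypothetical helper: of the common codes, only rate limiting is
    // transient; key, model, length, and quota errors will not succeed
    // on retry.
    pub fn is_retryable(&self) -> bool {
        self.error_code == "rate_limit_exceeded"
    }
}

fn main() {
    let rate_limited = LlmError {
        error_code: "rate_limit_exceeded".to_string(),
        error_message: "Too many requests".to_string(),
    };
    assert!(rate_limited.is_retryable());

    let too_long = LlmError {
        error_code: "context_length_exceeded".to_string(),
        error_message: "Input too long".to_string(),
    };
    assert!(!too_long.is_retryable());
}
```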

Comparison with Anthropic

Feature           OpenAI                                    Anthropic
Default context   128K                                      200K
System message    In messages array                         Dedicated field
Tool wrapper      {"type": "function", "function": {...}}   Direct tool object
Force tool use    "required"                                "any"
Azure variant     Yes                                       No
Compatible APIs   Many                                      Few