Provider Abstraction
This page documents how LLM providers are abstracted to enable a unified interface across different APIs. The abstraction allows switching between Anthropic, OpenAI, and other providers without changing application code.
LlmProvider Trait
The core abstraction is the LlmProvider trait:
pub trait LlmProvider {
fn send_msg(
&self,
client: &HttpClient,
messages: &[Message],
options: &MessageOptions,
) -> Pin<Box<dyn Future<Output = Result<Message, LlmError>> + Send>>;
fn send_msg_stream(
&self,
client: &HttpClient,
messages: &[Message],
options: &MessageOptions,
) -> Pin<Box<dyn Future<Output = Result<
Pin<Box<dyn Stream<Item = Result<StreamEvent, LlmError>> + Send>>,
LlmError
>> + Send>> {
// Default: not implemented
Box::pin(async {
Err(LlmError::new(
"NOT_IMPLEMENTED",
"Streaming not supported for this provider",
))
})
}
}
Design Decisions
| Decision | Rationale |
|---|---|
| Boxed futures | Enables async trait methods with dynamic dispatch |
| HttpClient by reference | Providers use a shared HTTP client for connection pooling |
| Default streaming implementation | Not all providers support streaming |
| Send bounds | Required for async runtime compatibility |
Provider Enum
Provider selection at configuration time:
pub enum LLMProvider {
Anthropic,
OpenAI,
}
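A common companion to this enum is a name-to-variant lookup for configuration files. The `from_name` helper below is an illustrative sketch, not part of the documented API:

```rust
#[derive(Debug, PartialEq)]
pub enum LLMProvider {
    Anthropic,
    OpenAI,
}

impl LLMProvider {
    /// Case-insensitive lookup from a config string; unknown names yield None.
    /// (Hypothetical helper, shown for illustration.)
    pub fn from_name(name: &str) -> Option<Self> {
        match name.to_ascii_lowercase().as_str() {
            "anthropic" => Some(LLMProvider::Anthropic),
            "openai" => Some(LLMProvider::OpenAI),
            _ => None,
        }
    }
}
```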
Anthropic Provider
Structure
pub struct AnthropicProvider {
pub api_key: String,
pub model: String,
}
impl AnthropicProvider {
pub fn new(api_key: String, model: String) -> Self {
Self { api_key, model }
}
}
Message Sending
impl LlmProvider for AnthropicProvider {
fn send_msg(
&self,
client: &HttpClient,
messages: &[Message],
options: &MessageOptions,
) -> Pin<Box<dyn Future<Output = Result<Message, LlmError>> + Send>> {
let client = client.clone();
let api_key = self.api_key.clone();
let model = self.model.clone();
let messages = messages.to_vec();
let options = options.clone();
Box::pin(async move {
// Build request body
let body = types::build_request_body(&messages, &options, &model)?;
// Get headers
let headers = types::get_request_headers(&api_key);
// Make API call
let response = client
.post(types::get_api_url(), &headers, &body)
.await?;
// Parse response
types::parse_response(&response)
})
}
}
Anthropic-Specific Features
System Message Handling:
// Anthropic extracts system messages to top-level field
for msg in messages {
match msg.role {
Role::System => {
let text = extract_text_content(msg);
if let Some(existing) = system_prompt.take() {
system_prompt = Some(format!("{}\n{}", existing, text));
} else {
system_prompt = Some(text);
}
}
Role::User | Role::Assistant => {
conversation_messages.push(msg);
}
}
}
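The extraction loop above can be packaged as a runnable sketch. The local `Role` and `Message` types here are simplified stand-ins (the real `Message` carries content blocks, not a plain string):

```rust
// Simplified local types for illustration only.
enum Role { System, User, Assistant }

struct Message { role: Role, text: String }

// Splits system messages out of the conversation, as Anthropic expects.
fn split_system(messages: Vec<Message>) -> (Option<String>, Vec<Message>) {
    let mut system_prompt: Option<String> = None;
    let mut conversation = Vec::new();
    for msg in messages {
        match msg.role {
            Role::System => {
                // Multiple system messages are concatenated with a newline.
                system_prompt = Some(match system_prompt.take() {
                    Some(existing) => format!("{}\n{}", existing, msg.text),
                    None => msg.text,
                });
            }
            Role::User | Role::Assistant => conversation.push(msg),
        }
    }
    (system_prompt, conversation)
}
```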
Headers:
pub fn get_request_headers(api_key: &str) -> Vec<(&'static str, String)> {
vec![
("Content-Type", "application/json".to_string()),
("x-api-key", api_key.to_string()),
("anthropic-version", "2023-06-01".to_string()),
]
}
API URL:
pub fn get_api_url() -> &'static str {
"https://api.anthropic.com/v1/messages"
}
OpenAI Provider
Structure
pub struct OpenAIProvider {
pub api_key: String,
pub model: String,
}
impl OpenAIProvider {
pub fn new(api_key: String, model: String) -> Self {
Self { api_key, model }
}
}
OpenAI-Specific Features
Headers:
pub fn get_request_headers(api_key: &str) -> Vec<(&'static str, String)> {
vec![
("Content-Type", "application/json".to_string()),
("Authorization", format!("Bearer {}", api_key)),
]
}
API URL:
pub fn get_api_url() -> &'static str {
"https://api.openai.com/v1/chat/completions"
}
Request Format Differences
Tool Choice Mapping
| Internal | Anthropic | OpenAI |
|---|---|---|
| Auto | {"type":"auto"} | "auto" |
| Any | {"type":"any"} | "required" |
| None | {"type":"none"} | "none" |
| Tool(name) | {"type":"tool","name":"..."} | {"type":"function","function":{"name":"..."}} |
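The mapping table can be expressed directly in code. This sketch builds the JSON with plain string formatting to stay dependency-free; a real implementation would use a JSON library and escape tool names properly:

```rust
// Internal tool-choice representation (mirrors the table's first column).
enum ToolChoice { Auto, Any, None, Tool(String) }

// Anthropic uses an object with a "type" discriminator.
fn anthropic_tool_choice(tc: &ToolChoice) -> String {
    match tc {
        ToolChoice::Auto => r#"{"type":"auto"}"#.to_string(),
        ToolChoice::Any => r#"{"type":"any"}"#.to_string(),
        ToolChoice::None => r#"{"type":"none"}"#.to_string(),
        ToolChoice::Tool(name) => format!(r#"{{"type":"tool","name":"{}"}}"#, name),
    }
}

// OpenAI uses bare strings for the simple cases and a nested object
// for forcing a specific function.
fn openai_tool_choice(tc: &ToolChoice) -> String {
    match tc {
        ToolChoice::Auto => r#""auto""#.to_string(),
        ToolChoice::Any => r#""required""#.to_string(),
        ToolChoice::None => r#""none""#.to_string(),
        ToolChoice::Tool(name) => {
            format!(r#"{{"type":"function","function":{{"name":"{}"}}}}"#, name)
        }
    }
}
```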
Tool Definition Format
Anthropic:
{
"name": "get_weather",
"description": "Get current weather",
"input_schema": { "type": "object", "properties": {...} }
}
OpenAI:
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather",
"parameters": { "type": "object", "properties": {...} }
}
}
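A converter between the two formats can be sketched from an internal tool definition. The `ToolDef` struct and both functions here are hypothetical; string formatting is used instead of a JSON library for brevity, so real code should escape values properly:

```rust
// Hypothetical internal tool definition; schema_json is a pre-serialized
// JSON Schema object.
struct ToolDef { name: String, description: String, schema_json: String }

// Anthropic: flat object with an "input_schema" field.
fn anthropic_tool_json(t: &ToolDef) -> String {
    format!(
        r#"{{"name":"{}","description":"{}","input_schema":{}}}"#,
        t.name, t.description, t.schema_json
    )
}

// OpenAI: wrapped under "function" with the schema named "parameters".
fn openai_tool_json(t: &ToolDef) -> String {
    format!(
        r#"{{"type":"function","function":{{"name":"{}","description":"{}","parameters":{}}}}}"#,
        t.name, t.description, t.schema_json
    )
}
```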
System Message Handling
| Provider | System Message Location |
|---|---|
| Anthropic | Top-level "system" field |
| OpenAI | Message with role: "system" |
Response Parsing
Anthropic Response
pub fn parse_response(response_body: &str) -> Result<Message, LlmError> {
let parsed: serde_json::Value = serde_json::from_str(response_body)?;
// Check for API error
if let Some(error_type) = parsed["error"]["type"].as_str() {
let error_msg = parsed["error"]["message"].as_str().unwrap_or("Unknown");
return Err(LlmError::new(error_type, error_msg));
}
// Extract content array
let content_array = &parsed["content"];
let mut content_blocks: Vec<Content> = Vec::new();
if let Some(elements) = content_array.as_array() {
for block in elements {
match block["type"].as_str() {
Some("text") => {
if let Some(text) = block["text"].as_str() {
content_blocks.push(Content::Text(text.to_string()));
}
}
Some("tool_use") => {
content_blocks.push(Content::ToolUse(ToolUse {
id: block["id"].as_str().unwrap_or("").to_string(),
name: block["name"].as_str().unwrap_or("").to_string(),
input: block["input"].to_string(),
}));
}
_ => {}
}
}
}
Ok(Message {
role: Role::Assistant,
content: content_blocks,
})
}
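For reference, a minimal response body that this parser accepts might look like the following sketch (field names taken from the parsing code above; values illustrative):

```json
{
  "content": [
    { "type": "text", "text": "Hello" },
    {
      "type": "tool_use",
      "id": "toolu_01",
      "name": "get_weather",
      "input": { "city": "Paris" }
    }
  ]
}
```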
OpenAI Response
pub fn parse_response(response_body: &str) -> Result<Message, LlmError> {
let parsed: serde_json::Value = serde_json::from_str(response_body)?;
// Check for API error
if let Some(error_msg) = parsed["error"]["message"].as_str() {
let error_type = parsed["error"]["type"].as_str().unwrap_or("api_error");
return Err(LlmError::new(error_type, error_msg));
}
// OpenAI wraps in choices[0].message
let message = &parsed["choices"][0]["message"];
let mut content_blocks: Vec<Content> = Vec::new();
// Extract text content
if let Some(text) = message["content"].as_str() {
content_blocks.push(Content::Text(text.to_string()));
}
// Extract tool calls
if let Some(tool_calls) = message["tool_calls"].as_array() {
for tc in tool_calls {
content_blocks.push(Content::ToolUse(ToolUse {
id: tc["id"].as_str().unwrap_or("").to_string(),
name: tc["function"]["name"].as_str().unwrap_or("").to_string(),
input: tc["function"]["arguments"].as_str().unwrap_or("{}").to_string(),
}));
}
}
Ok(Message {
role: Role::Assistant,
content: content_blocks,
})
}
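A corresponding minimal OpenAI response body might look like the sketch below (values illustrative). Note that `arguments` is a JSON-encoded string, not an object, which is why the parser reads it with `as_str()`:

```json
{
  "choices": [
    {
      "message": {
        "content": "Hello",
        "tool_calls": [
          {
            "id": "call_01",
            "function": {
              "name": "get_weather",
              "arguments": "{\"city\":\"Paris\"}"
            }
          }
        ]
      }
    }
  ]
}
```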
Metadata Handling
| Provider | Metadata Format |
|---|---|
| Anthropic | "metadata": {"user_id": "..."} |
| OpenAI | "user": "..." |
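The difference amounts to where an end-user identifier lands in the request body. Both functions below are illustrative sketches using plain string formatting; real code would build these fields with a JSON library:

```rust
// Anthropic nests the identifier under a "metadata" object.
fn anthropic_metadata_field(user_id: &str) -> String {
    format!(r#""metadata":{{"user_id":"{}"}}"#, user_id)
}

// OpenAI takes a top-level "user" string.
fn openai_metadata_field(user_id: &str) -> String {
    format!(r#""user":"{}""#, user_id)
}
```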
Adding New Providers
To add a new provider:
- Create a provider struct with API key and model fields
- Implement the LlmProvider trait
- Create a request builder for the provider’s API format
- Create a response parser for the provider’s response format
- Add a variant to the LLMProvider enum
- Add a case to the factory function
pub struct NewProvider {
pub api_key: String,
pub model: String,
}
impl LlmProvider for NewProvider {
fn send_msg(
&self,
client: &HttpClient,
messages: &[Message],
options: &MessageOptions,
) -> Pin<Box<dyn Future<Output = Result<Message, LlmError>> + Send>> {
// Implementation
}
fn send_msg_stream(
&self,
client: &HttpClient,
messages: &[Message],
options: &MessageOptions,
) -> Pin<Box<dyn Future<Output = Result<
Pin<Box<dyn Stream<Item = Result<StreamEvent, LlmError>> + Send>>,
LlmError
>> + Send>> {
// Implementation
}
}
Provider Selection
Providers are selected based on configuration:
match config.provider {
LLMProvider::Anthropic => {
let provider = AnthropicProvider::new(api_key, model);
LLMClient::new(Box::new(provider))
}
LLMProvider::OpenAI => {
let provider = OpenAIProvider::new(api_key, model);
LLMClient::new(Box::new(provider))
}
}
Next Steps
- HTTP & TLS - HTTP client configuration
- Streaming - Streaming implementation
- Retry Logic - Error handling and retries
