LlmError covers network issues, API rejections, and parsing failures specific to LLM APIs (Anthropic, OpenAI, etc.).

Definition

use thiserror::Error;

#[derive(Debug, Error)]
pub enum LlmError {
    /// Transport-level failure (DNS, TLS, timeout, connection reset).
    #[error("Network request failed: {0}")]
    Network(#[from] reqwest::Error),

    /// The provider returned a non-success HTTP status.
    #[error("API error {status}: {message}")]
    Api {
        status: u16,
        message: String,
    },

    /// Request or response JSON could not be (de)serialized.
    #[error("Serialization error: {0}")]
    Serialization(#[from] serde_json::Error),

    /// A streaming response ended before completion.
    #[error("Stream interrupted")]
    StreamInterrupted,

    /// Missing or malformed provider settings.
    #[error("Provider configuration invalid: {0}")]
    Config(String),
}
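A quick way to see what the #[error(...)] attributes buy you: thiserror derives a Display impl whose output matches each attribute's format string. The sketch below hand-writes the equivalent Display for a simplified version of the enum (the reqwest and serde_json variants are omitted so it compiles with no external crates); it is an illustration of the derive's effect, not the actual generated code:

```rust
use std::fmt;

// Simplified stand-in for the variants above that carry only plain data.
#[derive(Debug)]
pub enum LlmError {
    Api { status: u16, message: String },
    StreamInterrupted,
    Config(String),
}

// Roughly what thiserror's derive expands to for these variants:
// each arm mirrors the corresponding #[error(...)] format string.
impl fmt::Display for LlmError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            LlmError::Api { status, message } => {
                write!(f, "API error {status}: {message}")
            }
            LlmError::StreamInterrupted => write!(f, "Stream interrupted"),
            LlmError::Config(msg) => {
                write!(f, "Provider configuration invalid: {msg}")
            }
        }
    }
}

fn main() {
    let err = LlmError::Api { status: 429, message: "rate limited".into() };
    assert_eq!(err.to_string(), "API error 429: rate limited");
}
```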

Retry Logic and Recovery

Many LlmError values are transient (e.g., rate limits, network timeouts). The LLMClient implements automatic retry logic with exponential backoff for:

  • Network errors.
  • Api errors with status codes 429 (Too Many Requests), 500, 502, 503, and 504.

Terminal errors (like 401 Unauthorized or 400 Bad Request) are propagated immediately.
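The classification above can be sketched as a helper on the error type plus a backoff loop. This is illustrative only: the simplified enum drops the reqwest/serde variants so the sketch compiles stand-alone, and is_retryable / with_retries are hypothetical names, not the actual LLMClient API:

```rust
use std::time::Duration;

// Simplified error type; the real enum's Network variant (wrapping
// reqwest::Error) would also be classified as retryable.
#[derive(Debug)]
pub enum LlmError {
    Api { status: u16, message: String },
    Config(String),
}

impl LlmError {
    /// Hypothetical helper: transient errors are worth retrying.
    fn is_retryable(&self) -> bool {
        match self {
            // 429 plus the common gateway/server errors are transient.
            LlmError::Api { status, .. } => {
                matches!(status, 429 | 500 | 502 | 503 | 504)
            }
            // Configuration problems never fix themselves.
            LlmError::Config(_) => false,
        }
    }
}

/// Retry `op` with exponential backoff, up to `max_attempts` tries
/// (must be at least 1). Terminal errors are propagated immediately.
fn with_retries<T>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, LlmError>,
) -> Result<T, LlmError> {
    let mut delay = Duration::from_millis(250);
    for attempt in 1..=max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if e.is_retryable() && attempt < max_attempts => {
                std::thread::sleep(delay);
                delay *= 2; // exponential backoff: 250ms, 500ms, 1s, ...
            }
            Err(e) => return Err(e), // terminal, or out of attempts
        }
    }
    unreachable!("loop always returns when max_attempts >= 1")
}
```

With this shape, a 401 fails on the first attempt, while a 503 that clears up after one retry succeeds transparently.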