Controller Errors
This page documents the error types used by the LLMController and how errors are handled throughout the controller layer.
ControllerError Enum
The ControllerError enum defines errors that can occur during controller operations:
use thiserror::Error;
#[derive(Error, Debug)]
pub enum ControllerError {
/// Controller has been shutdown
#[error("Controller is shutdown")]
Shutdown,
/// Channel closed unexpectedly
#[error("Channel closed")]
ChannelClosed,
/// Send operation timed out
#[error("Send timeout after {0} seconds")]
SendTimeout(u64),
}
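Since these variants carry `#[error(...)]` display strings, callers can both match on the variant and render a message. A minimal self-contained sketch (using a hand-written `Display` impl in place of the `thiserror` derive, so it runs without the crate):

```rust
use std::fmt;

// Stand-in for ControllerError; the real type derives thiserror::Error,
// which generates an equivalent Display impl from the #[error(...)] strings.
#[derive(Debug, PartialEq)]
enum ControllerError {
    Shutdown,
    ChannelClosed,
    SendTimeout(u64),
}

impl fmt::Display for ControllerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ControllerError::Shutdown => write!(f, "Controller is shutdown"),
            ControllerError::ChannelClosed => write!(f, "Channel closed"),
            ControllerError::SendTimeout(secs) => write!(f, "Send timeout after {} seconds", secs),
        }
    }
}

fn main() {
    // Matching lets callers react differently per variant,
    // while Display gives a ready-made log message.
    let err = ControllerError::SendTimeout(5);
    assert_eq!(err.to_string(), "Send timeout after 5 seconds");
    assert_eq!(ControllerError::Shutdown.to_string(), "Controller is shutdown");
}
```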
Shutdown
Occurs when attempting to send input after the controller has been shut down:
pub async fn send_input(&self, payload: ControllerInputPayload) -> Result<(), ControllerError> {
if self.is_shutdown() {
return Err(ControllerError::Shutdown);
}
// ...
}
ChannelClosed
Occurs when the receiving end of a channel has been dropped:
match self.input_tx.send(payload).await {
Ok(()) => Ok(()),
Err(_) => Err(ControllerError::ChannelClosed),
}
SendTimeout
Occurs when sending takes longer than the allowed timeout (5 seconds by default):
const SEND_INPUT_TIMEOUT: Duration = Duration::from_secs(5);
pub async fn send_input(&self, payload: ControllerInputPayload) -> Result<(), ControllerError> {
tokio::time::timeout(SEND_INPUT_TIMEOUT, self.input_tx.send(payload))
.await
.map_err(|_| ControllerError::SendTimeout(SEND_INPUT_TIMEOUT.as_secs()))?
.map_err(|_| ControllerError::ChannelClosed)
}
AgentError Enum
The AgentError enum defines errors at the agent level:
use thiserror::Error;
#[derive(Error, Debug)]
pub enum AgentError {
/// Tool registration failed
#[error("Tool registration failed: {0}")]
ToolRegistration(String),
/// Session creation failed due to LLM error
#[error("Session creation failed: {0}")]
Session(#[from] LlmError),
/// Controller not initialized
#[error("Controller not initialized")]
ControllerNotInitialized,
/// No LLM configuration found
#[error("No LLM configuration found: {0}")]
NoConfiguration(String),
}
ToolRegistration
Occurs when the tool registration callback returns an error:
pub fn register_tools<F>(&mut self, f: F) -> Result<(), AgentError>
where
F: FnOnce(...) -> Result<Vec<LLMTool>, String>,
{
let tool_defs = f(&registry, &user_reg, &perm_reg)
.map_err(AgentError::ToolRegistration)?;
// ...
}
Session
Wraps LLM client errors during session creation:
pub fn create_initial_session(&mut self) -> Result<(i64, String, i32), AgentError> {
let session_id = self.runtime.block_on(
Self::create_session_internal(&self.controller, config, &self.tool_definitions)
)?; // LlmError auto-converts to AgentError::Session
// ...
}
ControllerNotInitialized
Occurs when attempting to use the controller before initialization:
if self.controller.is_none() {
return Err(AgentError::ControllerNotInitialized);
}
NoConfiguration
Occurs when no LLM configuration is available:
let registry = self.llm_registry.as_ref()
.ok_or_else(|| AgentError::NoConfiguration(
"No LLM configuration found. Set ANTHROPIC_API_KEY or configure config.yaml".to_string()
))?;
Error Events
Runtime errors are communicated to the TUI via ControllerEvent::Error:
pub enum ControllerEvent {
Error {
session_id: i64,
error: String,
turn_id: Option<TurnId>,
},
// ...
}
Emitting Errors
fn emit_error(&self, session_id: i64, error: &str, turn_id: Option<TurnId>) {
tracing::error!("Controller error for session {}: {}", session_id, error);
self.emit_event(ControllerEvent::Error {
session_id,
error: error.to_string(),
turn_id,
});
}
Common Error Scenarios
| Scenario | Error Message |
|---|---|
| Session not found | "Session not found" |
| Send to session failed | "Failed to send message to LLM" |
| API rate limit | "Rate limit exceeded" |
| Network error | "Network error: connection refused" |
| Context exceeded | "Context window exceeded" |
LLM Client Errors
The LlmError enum defines errors from the LLM client:
#[derive(Error, Debug)]
pub enum LlmError {
#[error("HTTP error: {0}")]
Http(#[from] reqwest::Error),
#[error("API error: {status} - {message}")]
Api { status: u16, message: String },
#[error("JSON error: {0}")]
Json(#[from] serde_json::Error),
#[error("Stream error: {0}")]
Stream(String),
#[error("Invalid configuration: {0}")]
Config(String),
#[error("Rate limit exceeded")]
RateLimit,
#[error("Context window exceeded")]
ContextExceeded,
}
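Variants like `RateLimit` and `ContextExceeded` let retry and recovery logic match on the error kind rather than parse message strings. A hypothetical sketch of how a client might classify an API response into these variants (the `classify` function and its exact status/message mapping are assumptions, not the real client code):

```rust
// Stand-in for the LlmError variants relevant to classification.
#[derive(Debug, PartialEq)]
enum LlmError {
    Api { status: u16, message: String },
    RateLimit,
    ContextExceeded,
}

// Hypothetical mapping from an API response to an error variant;
// the real client's rules may differ.
fn classify(status: u16, message: &str) -> LlmError {
    match status {
        429 => LlmError::RateLimit,
        400 if message.contains("context") => LlmError::ContextExceeded,
        _ => LlmError::Api { status, message: message.to_string() },
    }
}

fn main() {
    assert_eq!(classify(429, ""), LlmError::RateLimit);
    assert_eq!(classify(400, "context length exceeded"), LlmError::ContextExceeded);
    assert_eq!(
        classify(500, "oops"),
        LlmError::Api { status: 500, message: "oops".to_string() }
    );
}
```

Typed variants keep the retry logic (shown below under Error Recovery Patterns) independent of provider-specific message wording.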
Error Handling in Sessions
impl LLMSession {
async fn send_to_api(&self, messages: &[Message]) -> Result<(), LlmError> {
match self.client.send_message_stream(messages, &options).await {
Ok(stream) => {
self.process_stream(stream).await
}
Err(e) => {
// Emit error event
self.from_llm_tx.send(FromLLMPayload {
response_type: LLMResponseType::Error,
error: Some(e.to_string()),
..Default::default()
}).await.ok();
Err(e)
}
}
}
}
Error Recovery Patterns
Retry with Backoff
async fn send_with_retry(&self, payload: &ToLLMPayload) -> Result<(), LlmError> {
let mut attempts = 0;
let max_attempts = 3;
loop {
match self.send_internal(payload).await {
Ok(()) => return Ok(()),
Err(LlmError::RateLimit) if attempts < max_attempts => {
attempts += 1;
let delay = Duration::from_secs(2u64.pow(attempts));
tokio::time::sleep(delay).await;
}
Err(e) => return Err(e),
}
}
}
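The `2u64.pow(attempts)` schedule above waits 2, 4, then 8 seconds across the three retries. A small standalone check of that schedule (the `backoff_secs` helper is illustrative, not part of the codebase):

```rust
// Backoff schedule used by send_with_retry: delay doubles per retry attempt.
fn backoff_secs(attempt: u32) -> u64 {
    2u64.pow(attempt)
}

fn main() {
    let delays: Vec<u64> = (1..=3).map(backoff_secs).collect();
    assert_eq!(delays, vec![2, 4, 8]); // total worst-case wait: 14 seconds
}
```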
Graceful Degradation
async fn execute_tool(&self, request: ToolRequest) -> ToolResult {
match self.execute_internal(&request).await {
Ok(content) => ToolResult {
status: ToolResultStatus::Success,
content,
error: None,
..Default::default()
},
Err(e) => ToolResult {
status: ToolResultStatus::Error,
content: String::new(),
error: Some(e.to_string()),
..Default::default()
},
}
}
Tool errors do not crash the agent; they are returned to the LLM which can decide how to proceed.
Error Logging
All errors are logged with context:
// Structured logging with context
tracing::error!(
session_id = session_id,
error = %e,
turn_id = ?turn_id,
"Failed to process message"
);
// With spans for context
let span = tracing::error_span!("handle_input", session_id, turn = ?turn_id);
let _guard = span.enter();
tracing::error!("Session not found");
User-Facing Error Messages
Error events are converted to user-friendly messages in the TUI:
fn handle_error(&mut self, session_id: i64, error: &str) {
// Log technical details
tracing::error!("Error for session {}: {}", session_id, error);
// Display user-friendly message
let user_message = match error {
e if e.contains("Rate limit") => "Rate limit reached. Please wait a moment.",
e if e.contains("Context") => "Conversation too long. Consider clearing history.",
e if e.contains("Network") => "Connection error. Check your internet.",
_ => error,
};
self.chat_view.add_error(user_message);
}
Error Propagation
From LLM to TUI
LlmError
↓ (in LLMSession)
FromLLMPayload { response_type: Error, error: Some(...) }
↓ (in LLMController)
ControllerEvent::Error { error: "..." }
↓ (convert_controller_event_to_ui_message)
UiMessage::Error { error: "..." }
↓ (in App)
Display error in ChatView
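The `ControllerEvent::Error` → `UiMessage::Error` hop can be sketched as a straight field-by-field conversion. A minimal stand-in for `convert_controller_event_to_ui_message` (the `UiMessage` field names beyond `error` are assumptions based on the flow above):

```rust
// Stand-ins for the event and UI message types in the propagation chain.
enum ControllerEvent {
    Error { session_id: i64, error: String },
}

#[derive(Debug, PartialEq)]
enum UiMessage {
    Error { session_id: i64, error: String },
}

// Sketch of the Error arm of convert_controller_event_to_ui_message;
// the real function handles every ControllerEvent variant.
fn convert_controller_event_to_ui_message(event: ControllerEvent) -> UiMessage {
    match event {
        ControllerEvent::Error { session_id, error } => UiMessage::Error { session_id, error },
    }
}

fn main() {
    let ui = convert_controller_event_to_ui_message(ControllerEvent::Error {
        session_id: 1,
        error: "Rate limit exceeded".to_string(),
    });
    assert_eq!(
        ui,
        UiMessage::Error { session_id: 1, error: "Rate limit exceeded".to_string() }
    );
}
```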
From Controller to Caller
ControllerError
↓ (in send_input)
Result<(), ControllerError>
↓ (in InputRouter)
Log error, continue running
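The "log error, continue running" step can be sketched as a small helper: a failed `send_input` is recorded but never stops the routing loop. The `handle_send_result` name and shape are illustrative assumptions, not the real `InputRouter` API:

```rust
#[derive(Debug)]
enum ControllerError {
    Shutdown,
}

// Log-and-continue pattern: a send failure is surfaced to the logs,
// but the caller's loop keeps running either way.
fn handle_send_result(result: Result<(), ControllerError>) -> bool {
    if let Err(e) = result {
        eprintln!("send_input failed: {:?}", e); // real code uses tracing::error!
    }
    true // keep routing
}

fn main() {
    assert!(handle_send_result(Err(ControllerError::Shutdown)));
    assert!(handle_send_result(Ok(())));
}
```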
Testing Error Handling
#[tokio::test]
async fn test_shutdown_error() {
let controller = LLMController::new(None);
controller.shutdown().await;
let result = controller.send_input(ControllerInputPayload::control(
1,
ControlCmd::Clear,
)).await;
assert!(matches!(result, Err(ControllerError::Shutdown)));
}
#[tokio::test]
async fn test_session_not_found() {
let controller = Arc::new(LLMController::new(None));
// Don't create a session
// Send input to non-existent session
controller.send_input(ControllerInputPayload::data(
999,
"Hello",
TurnId::new_user_turn(1),
)).await.ok();
// Error event should be emitted (verify via mock event_func)
}
Next Steps
- LLMController - Controller overview
- Response Handling - How errors in responses are handled
- Logger Setup - Error logging configuration
