Response Handling

This page documents how LLM responses flow from sessions through the controller, including streaming, tool execution, and event emission.

Response Flow Overview

┌─────────────────┐    FromLLMPayload    ┌─────────────────┐
│   LLMSession    │─────────────────────▶│  LLMController  │
│                 │   from_llm_tx.send() │                 │
│  (streaming)    │                      │ handle_llm_     │
└─────────────────┘                      │ response()      │
                                         └────────┬────────┘
                                                  │
                    ┌─────────────────────────────┼─────────────────────────────┐
                    │                             │                             │
                    ▼                             ▼                             ▼
           ┌─────────────────┐          ┌─────────────────┐          ┌─────────────────┐
           │  Emit Event     │          │ Execute Tools   │          │ Track Tokens    │
           │  (TextChunk,    │          │                 │          │                 │
           │   Complete)     │          │                 │          │                 │
           └─────────────────┘          └─────────────────┘          └─────────────────┘

FromLLMPayload

Every response from an LLM session reaches the controller as a FromLLMPayload:

pub struct FromLLMPayload {
    pub session_id: i64,
    pub response_type: LLMResponseType,
    pub text: String,                    // Text delta (TextChunk)
    pub tool_use: Option<ToolUseInfo>,   // Single tool (ToolUseStart, ToolUse)
    pub tool_uses: Vec<ToolUseInfo>,     // Multiple tools (ToolBatch)
    pub is_complete: bool,
    pub error: Option<String>,           // Error details (Error)
    pub model: String,
    pub message_id: String,
    pub content_index: usize,            // Position of the content block in the message
    pub stop_reason: Option<String>,     // Why the response ended (Complete)
    pub input_tokens: i64,               // Usage metrics (TokenUpdate)
    pub output_tokens: i64,
    pub turn_id: Option<TurnId>,         // Correlates all events within one turn
}

LLMResponseType

pub enum LLMResponseType {
    StreamStart,     // Response begins
    TextChunk,       // Streamed text delta
    ToolUseStart,    // Tool block begins (streaming)
    ToolInputDelta,  // Incremental tool JSON input
    ToolUse,         // Single tool use complete
    ToolBatch,       // Multiple tools complete
    Complete,        // Response finished
    Error,           // Error occurred
    TokenUpdate,     // Token usage metrics
}
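
As an illustration (the values here are invented for this example), a single streamed text delta might arrive as:

// Illustrative only -- field values are made up.
let payload = FromLLMPayload {
    session_id: 42,
    response_type: LLMResponseType::TextChunk,
    text: "Hello, ".to_string(),
    tool_use: None,
    tool_uses: Vec::new(),
    is_complete: false,
    error: None,
    model: "example-model".to_string(),
    message_id: "msg_01".to_string(),
    content_index: 0,
    stop_reason: None,
    input_tokens: 0,
    output_tokens: 0,
    turn_id: None, // normally Some(TurnId) during an active turn
};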

Main Response Handler

async fn handle_llm_response(&self, payload: FromLLMPayload) {
    let session_id = payload.session_id;
    let turn_id = payload.turn_id.clone();

    // Track token usage for TokenUpdate events
    if payload.response_type == LLMResponseType::TokenUpdate {
        if let Some(session) = self.session_mgr.get_session_by_id(session_id).await {
            self.token_usage.increment(
                session_id,
                session.model(),
                payload.input_tokens,
                payload.output_tokens,
            ).await;
        }
    }

    // Dispatch on response type (most variants emit a ControllerEvent)
    match payload.response_type {
        LLMResponseType::StreamStart => {
            self.handle_stream_start(payload).await;
        }
        LLMResponseType::TextChunk => {
            self.handle_text_chunk(payload).await;
        }
        LLMResponseType::ToolUseStart => {
            self.handle_tool_use_start(payload).await;
        }
        LLMResponseType::ToolInputDelta => {
            // Internal only, no event emitted
        }
        LLMResponseType::ToolUse => {
            self.handle_tool_use(payload).await;
        }
        LLMResponseType::ToolBatch => {
            self.handle_tool_batch(payload).await;
        }
        LLMResponseType::Complete => {
            self.handle_complete(payload).await;
        }
        LLMResponseType::Error => {
            self.handle_error(payload).await;
        }
        LLMResponseType::TokenUpdate => {
            self.handle_token_update(payload).await;
        }
    }
}

Handling StreamStart

async fn handle_stream_start(&self, payload: FromLLMPayload) {
    self.emit_event(ControllerEvent::StreamStart {
        session_id: payload.session_id,
        message_id: payload.message_id,
        model: payload.model,
        turn_id: payload.turn_id,
    });
}

StreamStart is typically silent in the TUI: it is converted to an empty System message rather than rendered directly.
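
A minimal sketch of that conversion, assuming the TUI holds a message list and a Message::system constructor (both names are hypothetical):

// Hypothetical TUI-side handling: StreamStart carries nothing renderable,
// so it becomes an empty System message that later TextChunks extend.
ControllerEvent::StreamStart { session_id, .. } => {
    messages.push(Message::system(session_id, String::new()));
}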

Handling TextChunk

async fn handle_text_chunk(&self, payload: FromLLMPayload) {
    self.emit_event(ControllerEvent::TextChunk {
        session_id: payload.session_id,
        text: payload.text,
        turn_id: payload.turn_id,
    });
}

TextChunk events are emitted for each streamed text delta. The TUI appends these to build the complete response.
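
A minimal consumer-side sketch, assuming one growing string buffer per session (the buffer map is invented for this example):

use std::collections::HashMap;

// Hypothetical consumer state: accumulate deltas per session.
let mut buffers: HashMap<i64, String> = HashMap::new();

// Append each streamed delta to that session's buffer.
if let ControllerEvent::TextChunk { session_id, text, .. } = event {
    buffers.entry(session_id).or_default().push_str(&text);
}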

Handling ToolUseStart

async fn handle_tool_use_start(&self, payload: FromLLMPayload) {
    if let Some(tool) = payload.tool_use {
        self.emit_event(ControllerEvent::ToolUseStart {
            session_id: payload.session_id,
            tool_id: tool.id,
            tool_name: tool.name,
            turn_id: payload.turn_id,
        });
    }
}

ToolUseStart gives the UI early notification that a tool call is coming, before the tool's JSON input has finished streaming.
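
A UI might react by showing a placeholder keyed by tool ID until the input and result arrive; a hedged sketch (the pending_tools map is invented):

// Hypothetical UI reaction: mark the tool as running as soon as it starts.
ControllerEvent::ToolUseStart { tool_id, tool_name, .. } => {
    pending_tools.insert(tool_id, format!("Running {}...", tool_name));
}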

Handling ToolUse (Single Tool)

async fn handle_tool_use(&self, payload: FromLLMPayload) {
    let session_id = payload.session_id;
    let turn_id = payload.turn_id.clone();

    if let Some(tool_info) = payload.tool_use {
        // Get display configuration from registry
        let (display_name, display_title) = self.get_tool_display(&tool_info).await;

        // Emit ToolUse event for UI
        self.emit_event(ControllerEvent::ToolUse {
            session_id,
            tool: tool_info.clone(),
            display_name: display_name.clone(),
            display_title: display_title.clone(),
            turn_id: turn_id.clone(),
        });

        // Build tool request
        let input: HashMap<String, serde_json::Value> =
            serde_json::from_value(tool_info.input.clone()).unwrap_or_default();

        let request = ToolRequest {
            tool_use_id: tool_info.id,
            tool_name: tool_info.name,
            display_name,
            input,
        };

        // Execute tool
        self.tool_executor.execute(
            session_id,
            turn_id,
            request,
            self.cancel_token.clone(),
        ).await;
    }
}
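
Note that unwrap_or_default() means a malformed (non-object) tool input silently becomes an empty map rather than an error. For illustration:

use std::collections::HashMap;

// Illustrative only: the same conversion the handler performs.
let raw = serde_json::json!({ "query": "rust channels" });
let input: HashMap<String, serde_json::Value> =
    serde_json::from_value(raw).unwrap_or_default();
assert_eq!(input["query"], serde_json::json!("rust channels"));

// A non-object input parses to an empty map instead of failing.
let empty: HashMap<String, serde_json::Value> =
    serde_json::from_value(serde_json::json!("just a string")).unwrap_or_default();
assert!(empty.is_empty());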

Handling ToolBatch (Multiple Tools)

async fn handle_tool_batch(&self, payload: FromLLMPayload) {
    let session_id = payload.session_id;
    let turn_id = payload.turn_id.clone();

    let mut requests = Vec::new();

    // Emit individual ToolUse events and build requests
    for tool_info in payload.tool_uses {
        let (display_name, display_title) = self.get_tool_display(&tool_info).await;

        // Emit ToolUse event for UI
        self.emit_event(ControllerEvent::ToolUse {
            session_id,
            tool: tool_info.clone(),
            display_name: display_name.clone(),
            display_title: display_title.clone(),
            turn_id: turn_id.clone(),
        });

        let input: HashMap<String, serde_json::Value> =
            serde_json::from_value(tool_info.input.clone()).unwrap_or_default();

        requests.push(ToolRequest {
            tool_use_id: tool_info.id,
            tool_name: tool_info.name,
            display_name,
            input,
        });
    }

    // Execute all tools in parallel
    self.tool_executor.execute_batch(
        session_id,
        turn_id,
        requests,
        self.cancel_token.clone(),
    ).await;
}
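
The executor's internals are not shown on this page. A minimal sketch of one way to run a batch in parallel, assuming tokio tasks and a result channel (result_tx and run_tool are hypothetical):

use tokio::sync::mpsc;

// Hypothetical internals: spawn one task per request so tools run
// concurrently; each task reports back over a channel as it finishes.
for request in requests {
    let tx: mpsc::Sender<ToolResult> = result_tx.clone();
    tokio::spawn(async move {
        let result = run_tool(request).await;
        let _ = tx.send(result).await;
    });
}

A channel-based design like this would line up with the per-tool delivery shown under "Individual Tool Result Events" below.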

Getting Tool Display Configuration

async fn get_tool_display(&self, tool_info: &ToolUseInfo) -> (Option<String>, Option<String>) {
    let registry = self.tool_registry();

    if let Some(tool) = registry.get(&tool_info.name).await {
        let config = tool.display_config();
        let input: HashMap<String, serde_json::Value> =
            serde_json::from_value(tool_info.input.clone()).unwrap_or_default();

        let display_name = Some(config.display_name.clone());
        let display_title = Some((config.display_title)(&input));

        (display_name, display_title)
    } else {
        (None, None)
    }
}
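
The config is consumed as a static name plus a title function applied to the parsed input. Inferred from that usage alone, its shape is roughly (a sketch, not the actual definition):

// Inferred shape only -- the real definition may differ.
pub struct ToolDisplayConfig {
    // Fixed human-readable name, e.g. "Web Search".
    pub display_name: String,
    // Builds a per-invocation title from the tool's input,
    // e.g. |input| format!("Searching: {:?}", input.get("query")).
    pub display_title: fn(&HashMap<String, serde_json::Value>) -> String,
}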

Handling Complete

async fn handle_complete(&self, payload: FromLLMPayload) {
    self.emit_event(ControllerEvent::Complete {
        session_id: payload.session_id,
        stop_reason: payload.stop_reason,
        turn_id: payload.turn_id,
    });
}

The stop_reason indicates why the response ended; a consumer can branch on it as sketched after this list:

  • "end_turn": Natural completion
  • "tool_use": Waiting for tool results
  • "max_tokens": Hit token limit

Handling Error

async fn handle_error(&self, payload: FromLLMPayload) {
    self.emit_event(ControllerEvent::Error {
        session_id: payload.session_id,
        error: payload.error.unwrap_or_else(|| "Unknown error".to_string()),
        turn_id: payload.turn_id,
    });
}

Handling TokenUpdate

async fn handle_token_update(&self, payload: FromLLMPayload) {
    let context_limit = self.session_mgr
        .get_session_by_id(payload.session_id).await
        .map(|s| s.context_limit())
        .unwrap_or(0);

    self.emit_event(ControllerEvent::TokenUpdate {
        session_id: payload.session_id,
        input_tokens: payload.input_tokens,
        output_tokens: payload.output_tokens,
        context_limit,
    });
}
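
Downstream, context_limit lets a UI show how full the context window is. A hypothetical consumer-side calculation:

// Sketch only: percentage of the context window currently in use.
let used = input_tokens + output_tokens;
let pct = if context_limit > 0 {
    (used as f64 / context_limit as f64) * 100.0
} else {
    0.0
};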

Tool Batch Result Handling

When all tools in a batch complete, the results are sent back to the LLM:

async fn handle_tool_batch_result(&self, batch: ToolBatchResult) {
    let session_id = batch.session_id;

    let Some(session) = self.session_mgr.get_session_by_id(session_id).await else {
        tracing::error!("Session {} not found for tool results", session_id);
        return;
    };

    // Extract compact summaries for context management
    let mut compact_summaries = HashMap::new();
    let tool_results: Vec<ToolResultInfo> = batch.results
        .iter()
        .map(|result| {
            if let Some(ref summary) = result.compact_summary {
                compact_summaries.insert(result.tool_use_id.clone(), summary.clone());
            }
            ToolResultInfo {
                tool_use_id: result.tool_use_id.clone(),
                content: result.error.clone().unwrap_or_else(|| result.content.clone()),
                is_error: result.error.is_some(),
            }
        })
        .collect();

    // Send tool results back to LLM
    let llm_payload = ToLLMPayload {
        request_type: LLMRequestType::ToolResult,
        content: String::new(),
        tool_results,
        options: None,
        turn_id: batch.turn_id,
        compact_summaries,
    };

    session.send(llm_payload).await;
}
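
None of the result types are defined on this page. Reconstructed from the usage above, they look roughly like this (field sets may be incomplete, and the element type name is assumed):

// Inferred shapes only.
pub struct ToolBatchResult {
    pub session_id: i64,
    pub turn_id: Option<TurnId>,
    pub results: Vec<BatchToolResult>, // each with tool_use_id, content,
                                       // error, and compact_summary fields
}

pub struct ToolResultInfo {
    pub tool_use_id: String,
    pub content: String, // the tool's output, or the error text on failure
    pub is_error: bool,
}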

Individual Tool Result Events

Individual tool results emit events for real-time UI feedback:

// In the select! loop
tool_result = tool_result_guard.recv() => {
    if let Some(result) = tool_result {
        self.emit_event(ControllerEvent::ToolResult {
            session_id: result.session_id,
            tool_use_id: result.tool_use_id,
            tool_name: result.tool_name,
            display_name: result.display_name,
            status: result.status,
            content: result.content,
            error: result.error,
            turn_id: result.turn_id,
        });
    }
}

Response Event Sequence

A typical response with tools produces this event sequence:

1. StreamStart          - Response begins
2. TextChunk (n times)  - "I'll search for that..."
3. ToolUse              - web_search requested
4. Complete             - stop_reason: "tool_use"
5. ToolResult           - web_search completed
6. StreamStart          - Continued response
7. TextChunk (n times)  - "Based on the results..."
8. Complete             - stop_reason: "end_turn"
9. TokenUpdate          - Final usage stats

Error Recovery

When errors occur during response handling:

// API errors become Error events
LLMResponseType::Error => {
    self.emit_event(ControllerEvent::Error {
        session_id,
        error: payload.error.unwrap_or_default(),
        turn_id,
    });
}

// Tool execution errors are captured in ToolResult
ToolResult {
    status: ToolResultStatus::Error,
    error: Some("Tool failed: ...".to_string()),
    ...
}

The TUI displays errors to the user and the conversation can continue.

Next Steps