System Prompts

The system prompt establishes the agent’s behavior and persona when communicating with an LLM. It provides instructions that guide how the model responds to user input.

Setting the System Prompt

System prompts are defined through the default_system_prompt() method in your AgentConfig implementation:

use agent_air::agent::AgentConfig;

struct MyAgentConfig;

impl AgentConfig for MyAgentConfig {
    fn default_system_prompt(&self) -> &str {
        "You are a helpful coding assistant. Provide clear, accurate answers. \
         When showing code, include comments explaining the logic."
    }

    fn config_path(&self) -> &str { "~/.myagent/config.yaml" }
    fn log_prefix(&self) -> &str { "myagent" }
    fn name(&self) -> &str { "MyAgent" }
}

This prompt is applied to all configured LLM providers.

Programmatic Configuration

For programmatic configuration, use the with_system_prompt() builder method on LLMSessionConfig:

use agent_air::controller::LLMSessionConfig;

let config = LLMSessionConfig::anthropic("api-key", "claude-sonnet-4-20250514")
    .with_system_prompt("You are a technical documentation writer.");

How Providers Handle System Prompts

LLM providers represent the system prompt differently in their API requests, but the framework abstracts these differences:

Anthropic: The system prompt is sent as a dedicated field separate from the conversation messages.

OpenAI and OpenAI-compatible providers: The system prompt is included as a system message at the start of the conversation.

Google: The system prompt is sent as system instructions.

You do not need to handle these differences. The framework formats the system prompt correctly for each provider.
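As an illustration of the three shapes (this is hand-built JSON for clarity, not the framework's actual serialization code; field names follow each provider's public REST API), the same prompt lands in a different place in each request:

```rust
// Sketch only: builds each provider's request body as a plain string.
// A real client would use a JSON library and proper escaping.

// Anthropic: top-level "system" field, separate from the messages array.
fn anthropic_body(system: &str, user: &str) -> String {
    format!(r#"{{"system":"{system}","messages":[{{"role":"user","content":"{user}"}}]}}"#)
}

// OpenAI and compatible APIs: the prompt is the first message, role "system".
fn openai_body(system: &str, user: &str) -> String {
    format!(
        r#"{{"messages":[{{"role":"system","content":"{system}"}},{{"role":"user","content":"{user}"}}]}}"#
    )
}

// Google: a dedicated system-instruction field alongside "contents".
fn google_body(system: &str, user: &str) -> String {
    format!(
        r#"{{"system_instruction":{{"parts":[{{"text":"{system}"}}]}},"contents":[{{"role":"user","parts":[{{"text":"{user}"}}]}}]}}"#
    )
}

fn main() {
    let sys = "You are a helpful assistant.";
    assert!(anthropic_body(sys, "hi").contains(r#""system":"#));
    assert!(openai_body(sys, "hi").contains(r#""role":"system""#));
    assert!(google_body(sys, "hi").contains("system_instruction"));
    println!("ok");
}
```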

Writing Effective System Prompts

System prompts typically include several components:

Identity and Role

Define who or what the agent is:

You are an expert software engineer specializing in Rust and systems programming.

Behavioral Instructions

Specify how the agent should respond:

Provide concise answers. Ask clarifying questions when requirements are ambiguous.
Always explain your reasoning before providing code.

Output Format Requirements

Define formatting expectations:

When showing code, use markdown code blocks with language annotations.
Use bullet points for lists of options or steps.

Constraints

Set boundaries on behavior:

Do not make assumptions about the user's environment. Ask when uncertain.
If you don't know something, say so rather than guessing.
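Putting the four components together, a complete prompt can be assembled from its parts. The component text below is example content, not a framework requirement:

```rust
// Illustrative: composing identity, behavior, format, and constraints
// into a single system prompt, separated by blank lines.
const IDENTITY: &str = "You are an expert software engineer specializing in Rust.";
const BEHAVIOR: &str =
    "Provide concise answers. Ask clarifying questions when requirements are ambiguous.";
const FORMAT: &str =
    "When showing code, use markdown code blocks with language annotations.";
const CONSTRAINTS: &str =
    "If you don't know something, say so rather than guessing.";

fn full_prompt() -> String {
    [IDENTITY, BEHAVIOR, FORMAT, CONSTRAINTS].join("\n\n")
}

fn main() {
    let prompt = full_prompt();
    assert!(prompt.starts_with("You are"));
    // Four components joined by three blank-line separators.
    assert_eq!(prompt.matches("\n\n").count(), 3);
    println!("{prompt}");
}
```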

Multiline Prompts

For longer system prompts, use Rust's line continuation (a trailing backslash inside a string literal):

fn default_system_prompt(&self) -> &str {
    "You are a helpful coding assistant.\n\
     \n\
     Guidelines:\n\
     - Write clean, readable code\n\
     - Include error handling\n\
     - Add comments for complex logic"
}

Or use a raw string for better readability:

fn default_system_prompt(&self) -> &str {
    r#"You are a helpful coding assistant.

Guidelines:
- Write clean, readable code
- Include error handling
- Add comments for complex logic"#
}
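The two forms produce identical strings: a trailing backslash in a regular string literal consumes the newline and any leading whitespace on the following line, so only the explicit \n escapes remain as line breaks. A quick check:

```rust
// The trailing `\` swallows the real newline plus the indentation that
// follows it, leaving only the explicit \n escapes in the result.
fn escaped_form() -> &'static str {
    "Guidelines:\n\
     - Write clean, readable code\n\
     - Include error handling"
}

fn raw_form() -> &'static str {
    r#"Guidelines:
- Write clean, readable code
- Include error handling"#
}

fn main() {
    assert_eq!(escaped_form(), raw_form());
    println!("ok");
}
```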

Empty System Prompts

If the system prompt is empty or not set, the LLM uses its default behavior without additional system-level instructions. This is valid but may result in less focused responses.

Best Practices

Be specific: Vague prompts produce inconsistent results. Clearly define the agent’s role and expected behavior.

Keep it focused: Long, complex prompts can confuse the model. Prioritize the most important instructions.

Test iteratively: Refine your prompt based on actual agent behavior. Small changes can have significant effects.

Consider context: The system prompt is included in every request, consuming tokens. Balance detail with token efficiency.
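To reason about that per-request cost, a rough rule of thumb is about four characters per token for English text; actual counts depend on the provider's tokenizer, so treat this heuristic as an estimate only:

```rust
// Rough heuristic: ~4 characters per token for English prose.
// Real token counts vary by provider and tokenizer.
fn approx_tokens(prompt: &str) -> usize {
    prompt.len().div_ceil(4)
}

fn main() {
    let prompt = "You are a helpful coding assistant. Provide clear, accurate answers.";
    // This short prompt adds roughly this many tokens to every request.
    println!("~{} tokens per request", approx_tokens(prompt));
    assert!(approx_tokens(prompt) > 0);
}
```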