An agent is the central building block in NexAU. It wraps an LLM with a system prompt and a collection of tools, then runs an iterative loop — calling the LLM, executing tools, and updating context — until it produces a final answer.
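The loop described above can be sketched in plain Python. This is a minimal, framework-free illustration of the call-the-LLM / execute-tools / update-context cycle, not NexAU's actual implementation; the `run_agent`, `fake_llm`, and stub tool below are hypothetical stand-ins.

```python
# Minimal sketch of the agent loop: call the LLM, run requested tools,
# append results to the context, and stop at a final answer.
def run_agent(llm, tools, user_message, max_iterations=10):
    context = [{"role": "user", "content": user_message}]
    for _ in range(max_iterations):
        reply = llm(context)                 # call the LLM
        context.append(reply)                # record the reply in context
        if "tool_call" not in reply:
            return reply["content"]          # final answer: stop the loop
        name, args = reply["tool_call"]
        result = tools[name](**args)         # execute the requested tool
        context.append({"role": "tool", "content": result})
    return "max iterations reached"

# Stub LLM: first asks for a tool, then answers using the tool result.
def fake_llm(context):
    if not any(m["role"] == "tool" for m in context):
        return {"role": "assistant", "content": "",
                "tool_call": ("add", {"a": 2, "b": 3})}
    return {"role": "assistant",
            "content": f"The sum is {context[-1]['content']}"}

print(run_agent(fake_llm, {"add": lambda a, b: a + b}, "What is 2 + 3?"))
# prints: The sum is 5
```

Real agents replace `fake_llm` with a model call and `tools` with schema-bound functions, but the control flow is the same.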

Creating an agent

You can define an agent entirely in Python, or load its configuration from a YAML file.
Import Agent, AgentConfig, LLMConfig, and Tool from nexau, then build a config and instantiate the agent.
import os
from nexau import Agent, AgentConfig, LLMConfig, Tool
from nexau.archs.tool.builtin.web_tools import google_web_search, web_fetch

# Load tool schemas and bind them to Python functions
web_search_tool = Tool.from_yaml("tools/WebSearch.yaml", binding=google_web_search)
web_read_tool = Tool.from_yaml("tools/WebRead.yaml", binding=web_fetch)

# Configure the LLM
llm_config = LLMConfig(
    model=os.getenv("LLM_MODEL"),
    base_url=os.getenv("LLM_BASE_URL"),
    api_key=os.getenv("LLM_API_KEY"),
)

# Build the agent
agent_config = AgentConfig(
    name="research_agent",
    system_prompt="You are a research agent. Use web_search and web_read to find information.",
    tools=[web_search_tool, web_read_tool],
    llm_config=llm_config,
)
agent = Agent(config=agent_config)

Running an agent

Once you have an Agent instance, call .run() to execute it synchronously or .run_async() for async contexts.
response = agent.run("What's the latest news about AI developments?")
print(response)
Both methods accept an optional context dict (for system prompt templating) and an event_handlers list for streaming callbacks.
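To illustrate how a context dict can fill a templated system prompt, here is a sketch using standard Jinja2 semantics directly; the template variables (`team`, `date`) are illustrative, and the exact variables NexAU exposes to templates are not specified here.

```python
from jinja2 import Template

# Hypothetical Jinja2 system prompt; variable names are illustrative.
prompt_template = Template(
    "You are a research agent for {{ team }}. Today is {{ date }}."
)

# The same shape of dict you would pass as `context` to .run().
context = {"team": "Acme Labs", "date": "2024-01-01"}
print(prompt_template.render(**context))
# prints: You are a research agent for Acme Labs. Today is 2024-01-01.
```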

AgentConfig reference

name (str, required)
A unique identifier for this agent. Used in logs, traces, and multi-agent routing.

system_prompt (str | list)
The instructions given to the LLM at the start of every run. Can be a plain string, a path to a file (system_prompt_type: "file"), or a Jinja2 template (system_prompt_type: "jinja").

system_prompt_type (str, default: "string")
Controls how system_prompt is interpreted. One of "string", "file", or "jinja".

llm_config (LLMConfig)
The LLM provider and model settings. See LLM Config for all options. If omitted, NexAU reads LLM_MODEL, LLM_BASE_URL, and LLM_API_KEY from the environment.

tools (list[Tool], default: [])
The tools this agent can call. Each Tool combines a YAML schema with a Python function via Tool.from_yaml(...). See Tools.

tool_call_mode (str, default: "structured")
How the agent invokes tools. "structured" uses native function-calling APIs. "xml" uses XML-in-text for models that don't support native tool use.

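The general idea behind "xml" mode can be sketched as follows. The tag names and structure here are hypothetical, chosen only to show how a tool call can be embedded in plain model text; NexAU's actual wire format is not documented in this section.

```python
import xml.etree.ElementTree as ET

# Hypothetical model output in "xml" mode; the <tool_call> format is
# illustrative, not NexAU's actual format.
model_output = """Let me search for that.
<tool_call>
  <name>web_search</name>
  <arg key="query">latest AI news</arg>
</tool_call>"""

# Extract the XML fragment embedded in the text and parse it.
start = model_output.index("<tool_call>")
end = model_output.index("</tool_call>") + len("</tool_call>")
call = ET.fromstring(model_output[start:end])

tool_name = call.findtext("name")
args = {a.get("key"): a.text for a in call.findall("arg")}
print(tool_name, args)
# prints: web_search {'query': 'latest AI news'}
```

The trade-off: "structured" relies on the provider's function-calling API, while "xml" only requires the model to emit parseable text, at the cost of more fragile parsing.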
max_context_tokens (int, default: 128000)
The token budget for the agent's conversation context. When the context approaches this limit, context compaction middleware (if configured) trims history.

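A hypothetical sketch of the trimming idea, dropping the oldest non-system messages until the history fits the budget. NexAU's actual compaction middleware is not shown here, and word count stands in for a real tokenizer.

```python
def trim_history(messages, max_tokens,
                 count_tokens=lambda m: len(m["content"].split())):
    # Drop the oldest non-system messages until the history fits the budget.
    # Word count approximates tokens for illustration only.
    messages = list(messages)
    while sum(count_tokens(m) for m in messages) > max_tokens:
        for i, m in enumerate(messages):
            if m["role"] != "system":
                del messages[i]
                break
        else:
            break  # only system messages remain; nothing more to trim
    return messages

history = [
    {"role": "system", "content": "You are a research agent."},
    {"role": "user", "content": "first question about something old"},
    {"role": "assistant", "content": "an old answer"},
    {"role": "user", "content": "newest question"},
]
trimmed = trim_history(history, max_tokens=10)
print([m["role"] for m in trimmed])
# prints: ['system', 'assistant', 'user']
```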
max_iterations (int, default: 100)
The maximum number of LLM + tool-call cycles per run. The agent stops and returns its current answer when this limit is reached.

skills (list[Skill], default: [])
Skills extend the agent's capabilities by providing lazily-loaded tool descriptions. See the Skills guide.

middlewares (list, default: [])
Middleware instances that intercept model calls and tool executions. See the Middleware guide.

tracers (list[BaseTracer], default: [])
Tracer instances for observability. See the Observability guide.

mcp_servers (list[dict], default: [])
MCP (Model Context Protocol) server configurations. Each entry requires name and type ("stdio", "http", or "sse"). See the MCP guide.
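Based on the required fields above, mcp_servers entries might look like the following. Only name and type are documented as required here; the command, args, and url keys are assumptions about the typical shape of stdio and http server configs, and the values are illustrative, not real servers.

```python
# Illustrative mcp_servers entries; only "name" and "type" are documented
# as required, and the remaining keys are assumed for this sketch.
mcp_servers = [
    {
        "name": "filesystem",
        "type": "stdio",
        # A stdio server is typically launched as a subprocess (assumed shape).
        "command": "npx",
        "args": ["@modelcontextprotocol/server-filesystem", "/tmp"],
    },
    {
        "name": "remote_tools",
        "type": "http",
        # An http/sse server is typically addressed by URL (assumed shape).
        "url": "https://example.com/mcp",
    },
]

# Sanity-check the documented required fields.
for server in mcp_servers:
    assert {"name", "type"} <= server.keys()
    assert server["type"] in ("stdio", "http", "sse")
```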