Creating an agent
You can define an agent entirely in Python, or load its configuration from a YAML file.

- Python: import Agent, AgentConfig, LLMConfig, and Tool from nexau, then build a config and instantiate the agent.
- YAML: define the same configuration in a YAML file and load it.

Running an agent
Once you have an Agent instance, call .run() to execute it synchronously, or .run_async() for async contexts.
Both .run() and .run_async() accept a context dict (used for system prompt templating) and an event_handlers list for streaming callbacks.
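This section doesn't show nexau's exact signatures, so the following is a minimal sketch of the calling pattern using a stand-in Agent class; the real run()/run_async() parameters may differ.

```python
import asyncio

# Stand-in for nexau's Agent, only to illustrate the sync/async calling
# pattern described above; the real class and signatures may differ.
class Agent:
    def run(self, message, context=None, event_handlers=None):
        # Synchronous entry point: blocks until the run finishes.
        for handler in event_handlers or []:
            handler({"event": "run_started"})
        user = (context or {}).get("user", "anonymous")
        return f"[{user}] echo: {message}"

    async def run_async(self, message, context=None, event_handlers=None):
        # Async entry point for callers already inside an event loop.
        return self.run(message, context, event_handlers)

agent = Agent()
events = []

# Synchronous call, passing a templating context and a streaming callback.
sync_result = agent.run("Hello", context={"user": "Ada"},
                        event_handlers=[events.append])

# Asynchronous call driven by asyncio.
async_result = asyncio.run(agent.run_async("Hello"))
```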
AgentConfig reference
A unique identifier for this agent. Used in logs, traces, and multi-agent routing.
The instructions given to the LLM at the start of every run. Can be a plain string, a path to a file (system_prompt_type: "file"), or a Jinja2 template (system_prompt_type: "jinja").

Controls how system_prompt is interpreted. One of "string", "file", or "jinja".

The LLM provider and model settings. See LLM Config for all options. If omitted, NexAU reads LLM_MODEL, LLM_BASE_URL, and LLM_API_KEY from the environment.

The tools this agent can call. Each Tool combines a YAML schema with a Python function via Tool.from_yaml(...). See Tools.

How the agent invokes tools. "structured" uses native function-calling APIs; "xml" uses XML-in-text for models that don't support native tool use.

The token budget for the agent's conversation context. When the context approaches this limit, context compaction middleware (if configured) trims history.
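As a rough illustration of the difference between the two modes, here is what an XML-in-text tool call might look like and how a framework could parse it. The tag and attribute names NexAU actually uses in "xml" mode are not specified here; they are assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML-in-text tool call: in "xml" mode the model writes the
# call into its text output instead of using a native function-calling
# API. Tag and attribute names below are made up for illustration.
reply = '<tool_call name="get_weather"><arg name="city">Paris</arg></tool_call>'

node = ET.fromstring(reply)
tool_name = node.attrib["name"]
args = {a.attrib["name"]: a.text for a in node.findall("arg")}
```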
The maximum number of LLM + tool-call cycles per run. The agent stops and returns its current answer when this limit is reached.
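The iteration budget can be pictured as a simple loop: alternate LLM calls and tool executions until the model stops requesting tools or the budget runs out. This is a simplified sketch, not NexAU's actual implementation.

```python
# Simplified run loop governed by a max-iterations limit. llm_step is a
# hypothetical callable standing in for one LLM call; it returns the
# current answer and whether the model wants another tool cycle.
def run(llm_step, max_iterations=3):
    answer = None
    for _ in range(max_iterations):
        answer, wants_tool = llm_step(answer)
        if not wants_tool:   # model produced a final answer
            break
    return answer            # current answer, even if the budget was hit

# Toy model that requests a tool on every call, so the iteration budget
# is the only thing that stops the loop.
calls = []
def always_wants_tools(prev):
    calls.append(prev)
    return f"draft {len(calls)}", True

result = run(always_wants_tools, max_iterations=3)
```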
Skills extend the agent’s capabilities by providing lazily-loaded tool descriptions. See the Skills guide.
Middleware instances that intercept model calls and tool executions. See the Middleware guide.
Tracer instances for observability. See the Observability guide.
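Putting the reference above together, a YAML agent configuration might look like the following. The source describes each field's meaning but not its exact key name, so every key below is an illustrative guess, not a schema.

```yaml
# Hypothetical AgentConfig in YAML; key names are assumptions.
name: support-bot                  # unique identifier for logs, traces, routing
system_prompt: prompts/support.j2  # here, a path to a Jinja2 template
system_prompt_type: jinja          # one of "string", "file", "jinja"
llm:                               # omit to fall back to the LLM_MODEL,
  model: gpt-4o                    # LLM_BASE_URL, LLM_API_KEY env vars
tool_call_mode: structured         # or "xml" for models without native tool use
max_context_tokens: 128000         # token budget before compaction trims history
max_iterations: 10                 # LLM + tool-call cycles per run
```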