NexAU is a general-purpose Python agent framework. You define an agent’s tools, LLM configuration, and behavior — in Python or YAML — then run it locally, as an HTTP API, or as an interactive CLI. NexAU handles the agentic loop, tool dispatch, context management, and observability so you can focus on what the agent should do.
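To make the agentic loop concrete, here is a minimal sketch of the kind of loop NexAU runs for you: ask the LLM, dispatch any tool call it requests, feed the result back, and repeat until the model produces a final answer. This is plain Python with a mock LLM, not NexAU's actual API; all names (`run_agent`, `mock_llm`, the message shapes) are illustrative.

```python
# Conceptual sketch of an agentic loop — illustrative only, not NexAU's API.
def run_agent(llm, tools, task):
    """Repeatedly query the LLM; dispatch tool calls until a final answer."""
    messages = [{"role": "user", "content": task}]
    while True:
        reply = llm(messages)
        if reply["type"] == "final":
            return reply["content"]
        # Tool call: dispatch to the bound function, feed the result back.
        result = tools[reply["name"]](**reply["arguments"])
        messages.append({"role": "tool", "content": str(result)})

# Mock LLM: first requests a word count, then answers using the tool result.
def mock_llm(messages):
    if messages[-1]["role"] == "tool":
        return {"type": "final",
                "content": f"Word count: {messages[-1]['content']}"}
    return {"type": "tool", "name": "count_words",
            "arguments": {"text": "hello agent world"}}

tools = {"count_words": lambda text: len(text.split())}
answer = run_agent(mock_llm, tools, "How many words?")
# answer == "Word count: 3"
```

A real framework adds schema validation, error handling, and context management around this loop; the control flow, however, is exactly this shape.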

Key capabilities

Agents — Configure agents with a system prompt, tool set, LLM, and middleware. NexAU drives the tool-call loop until the task is complete.

Tools — Equip agents with built-in tools (web search, file I/O, shell execution) or write your own Python functions. Tool definitions decouple the schema (YAML) from the implementation (Python function), so you can swap bindings without changing the definition.

LLM integration — Connect to any OpenAI-compatible provider: OpenAI, Anthropic, Ollama, or a custom endpoint. Configure model, temperature, token limits, and API type per agent.

Middleware — Intercept the agent loop at each step. Use built-in middleware for logging, context compaction, and caching, or write your own hooks.

Skills — Inject reusable context and tool documentation into agents at runtime. Compatible with the Claude Skills format.

Multi-agent — Compose agents into teams. A main agent can delegate subtasks to sub-agents, each with its own tools and LLM.

Transports — Expose your agent over HTTP/SSE or stdio. Run it as a server, integrate it into a product, or interact with it from the CLI.
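The schema/implementation decoupling mentioned under Tools can be sketched in a few lines: the schema describes the tool, a separate binding maps the tool's name to a Python function, and the dispatcher only ever looks at the name. The schema dict, function, and `dispatch` helper below are all hypothetical, not NexAU's documented interfaces.

```python
# Conceptual sketch of schema/implementation decoupling — names are
# illustrative, not NexAU's actual API.
import json

# Tool schema, as it might appear in YAML (shown as a dict for brevity).
TOOL_SCHEMA = {
    "name": "add_numbers",
    "description": "Add two numbers and return the sum.",
    "parameters": {"a": "number", "b": "number"},
}

# Implementation: a plain Python function, independent of the schema above.
def add_numbers(a: float, b: float) -> float:
    return a + b

# Binding: maps the schema's name to an implementation. Swapping this entry
# changes behavior without touching the schema the LLM sees.
BINDINGS = {"add_numbers": add_numbers}

def dispatch(tool_call: str) -> float:
    """Dispatch a JSON tool call the way an agent loop would."""
    call = json.loads(tool_call)
    fn = BINDINGS[call["name"]]
    return fn(**call["arguments"])

result = dispatch('{"name": "add_numbers", "arguments": {"a": 2, "b": 3}}')
# result == 5
```

Because the LLM only ever sees the schema, rebinding `add_numbers` to a different function (a remote call, a cached version) is invisible to the agent's prompt.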

Where to go next

Quick Start

Build and run your first agent in under five minutes.

Agents

Understand agent configuration and the execution model.

Tools

Use built-in tools and create custom ones.

Multi-Agent

Coordinate multiple agents for complex workflows.