This guide walks you through installing NexAU and running a research agent that can search the web and read pages.
1. Install NexAU

Install from GitHub, pinned to the v0.4.1 release, over SSH (this requires an SSH key configured with GitHub). Python 3.12 or later is required.
pip install git+ssh://git@github.com/nex-agi/NexAU.git@v0.4.1
Also install python-dotenv so you can load environment variables from a .env file:
pip install python-dotenv
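Since the install requires Python 3.12 or later, it can be worth confirming which interpreter pip will target before installing. A quick stdlib-only check (this only inspects the interpreter; it does not assume anything about the nexau package itself):

```python
import sys

# NexAU requires Python 3.12 or later; check the running interpreter.
ok = sys.version_info >= (3, 12)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}:",
      "supported" if ok else "upgrade required")
```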
2. Set up environment variables

Create a .env file in your project directory:
.env
LLM_MODEL="gpt-4o"
LLM_BASE_URL="https://api.openai.com/v1"
LLM_API_KEY="your-api-key-here"
SERPER_API_KEY="your-serper-api-key-here"
LLM_BASE_URL accepts any OpenAI-compatible endpoint — OpenAI, Anthropic, Ollama, or your own proxy. SERPER_API_KEY is required for the web search tool. Get one at serper.dev.
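python-dotenv handles the loading for you, but the file format is simple. A minimal stdlib-only sketch of what reading a .env file amounts to (the parse_env helper is made up for illustration, not part of python-dotenv or NexAU):

```python
import os

def parse_env(path):
    """Read KEY=VALUE lines from a .env file, skipping blanks and comments."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"')
    return values

# Demonstrate with a throwaway file.
with open("demo.env", "w") as f:
    f.write('LLM_MODEL="gpt-4o"\nLLM_API_KEY="your-api-key-here"\n')

env = parse_env("demo.env")
os.environ.update(env)  # make the values visible to os.getenv
print(os.environ["LLM_MODEL"])  # gpt-4o
os.remove("demo.env")
```

In practice, python-dotenv's `dotenv run` CLI (used in step 5) does this loading for you before your script starts.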
3. Define your tools

NexAU decouples tool schemas (YAML) from their implementations (Python). Create a tools/ directory and add two YAML files:
tools/WebSearch.yaml
type: tool
name: WebSearch
description: >-
  Search the web for information.
  Returns search results with titles, URLs, and snippets.
input_schema:
  type: object
  properties:
    query:
      type: string
      description: Search query string
    num_results:
      type: integer
      default: 10
      description: Number of results to return
  required:
    - query
  additionalProperties: false
tools/WebRead.yaml
type: tool
name: WebRead
description: >-
  Fetch and read content from a web URL.
  Returns page title and extracted text content.
input_schema:
  type: object
  properties:
    url:
      type: string
      description: URL to fetch
  required:
    - url
  additionalProperties: false
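The input_schema blocks are standard JSON Schema, which lets the runtime reject malformed tool calls before they reach your tool implementation. A minimal stdlib sketch of the two constraints these schemas encode, required keys and additionalProperties: false (check_args is a hypothetical helper for illustration, not part of NexAU):

```python
def check_args(schema, args):
    """Enforce the two constraints the schemas above use:
    every key in `required` must be present, and with
    additionalProperties: false, keys outside `properties` are rejected."""
    missing = [k for k in schema.get("required", []) if k not in args]
    extra = []
    if schema.get("additionalProperties", True) is False:
        extra = [k for k in args if k not in schema["properties"]]
    return not missing and not extra

web_search_schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}, "num_results": {"type": "integer"}},
    "required": ["query"],
    "additionalProperties": False,
}

print(check_args(web_search_schema, {"query": "fusion energy"}))  # True
print(check_args(web_search_schema, {"num_results": 5}))          # False: query missing
print(check_args(web_search_schema, {"query": "x", "page": 2}))   # False: unknown key
```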
4. Create the agent

Create research_agent.py in the same directory:
research_agent.py
import os

from nexau import Agent, AgentConfig, LLMConfig, Tool
from nexau.archs.tool.builtin import google_web_search, web_fetch

tools = [
    Tool.from_yaml("tools/WebSearch.yaml", binding=google_web_search),
    Tool.from_yaml("tools/WebRead.yaml", binding=web_fetch),
]

agent_config = AgentConfig(
    name="research_agent",
    system_prompt=(
        "You are a research assistant. "
        "Use WebSearch to find relevant sources, "
        "then WebRead to read the most promising ones. "
        "Synthesize the information into a clear answer and cite your sources."
    ),
    llm_config=LLMConfig(
        model=os.getenv("LLM_MODEL"),
        base_url=os.getenv("LLM_BASE_URL"),
        api_key=os.getenv("LLM_API_KEY"),
        api_type="openai_chat_completion",
        temperature=0.5,
        max_tokens=4096,
    ),
    tools=tools,
)

agent = Agent(config=agent_config)

result = agent.run("What are the latest developments in fusion energy research?")
print(result)
Your project directory should look like this:
my-agent/
├── .env
├── research_agent.py
└── tools/
    ├── WebSearch.yaml
    └── WebRead.yaml
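Note that os.getenv returns None for unset variables, which would otherwise surface later as a confusing error inside LLMConfig. A small fail-fast check near the top of research_agent.py makes a missing key obvious (a sketch; missing_env is a made-up helper, and the variable names are the ones from the .env above):

```python
import os

def missing_env(names):
    """Return the subset of names that are unset or empty in the environment."""
    return [n for n in names if not os.getenv(n)]

REQUIRED = ["LLM_MODEL", "LLM_BASE_URL", "LLM_API_KEY", "SERPER_API_KEY"]

problems = missing_env(REQUIRED)
if problems:
    # In your real script, raise SystemExit here instead of printing.
    print("Missing environment variables:", ", ".join(problems))
```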
5. Run the agent

Use dotenv run (installed with python-dotenv) to load your .env file automatically:
dotenv run python research_agent.py
6. See the output

The agent searches the web, reads relevant pages, and returns a synthesized answer:
Recent developments in fusion energy include the National Ignition Facility
achieving ignition in December 2022, where the energy output exceeded the
laser energy delivered to the target for the first time...

Sources:
- https://www.energy.gov/...
- https://www.nature.com/...
As the agent works, it calls tools in a loop — searching, fetching, and reasoning — until it has enough information to answer.
Add import logging; logging.basicConfig(level=logging.INFO) at the top of your script to see each tool call and LLM turn printed to the console as the agent runs.

Next steps