A tool is a Python function paired with a JSON Schema that tells the LLM what the function does and what arguments it accepts. When the LLM decides to use a tool, NexAU validates the arguments against the schema, calls the function, and feeds the result back into the conversation.
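Conceptually, the validate-then-call step looks like the sketch below. The names here are hypothetical, not NexAU's actual API, and the hand-rolled argument check is a simplified stand-in for full JSON Schema validation:

```python
def run_tool_call(func, schema: dict, arguments: dict):
    """Sketch of the validate-then-call loop (not NexAU's internals)."""
    props = schema.get("properties", {})
    missing = [k for k in schema.get("required", []) if k not in arguments]
    extras = [k for k in arguments if k not in props]
    if missing or extras:
        # A validation failure is fed back to the LLM as an error message.
        return f"Invalid arguments: missing={missing}, unexpected={extras}"
    return func(**arguments)  # the return value re-enters the conversation

schema = {
    "type": "object",
    "properties": {"expression": {"type": "string"}},
    "required": ["expression"],
}
ok = run_tool_call(lambda expression: str(len(expression)), schema,
                   {"expression": "10 + 5*2"})
print(ok)  # "8" (length of the expression string)
```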

Built-in tools

NexAU ships with ready-to-use tools for common tasks. Import the Python function, create a YAML schema for it (see examples/deep_research/tools for reference schemas), and bind them together with Tool.from_yaml.

File tools

Module: nexau.archs.tool.builtin.file_tools
read_file: Read text files with pagination.
read_visual_file: Read image and video files for multimodal LLMs (requires a vision model).
write_file: Write content to a file.
replace: Replace text within a file.
apply_patch: Apply a unified diff patch to a file.
glob: Find files by glob pattern.
list_directory: List files in a directory.
read_many_files: Read multiple files at once.
search_file_content: Search for text across files.

Web tools

Module: nexau.archs.tool.builtin.web_tools
google_web_search: Search the web via Google (requires a Serper API key).
web_fetch: Fetch and parse text content from a URL.

Shell tools

Module: nexau.archs.tool.builtin.shell_tools
run_shell_command: Execute a shell command and return stdout/stderr.

Session tools

Module: nexau.archs.tool.builtin.session_tools
write_todos: Write a structured to-do list for the current task.
complete_task: Mark a task as complete.
save_memory: Persist a value to the agent’s memory store.
ask_user: Pause and ask the user a question.

Using built-in tools

from nexau import Agent, AgentConfig, Tool
from nexau.archs.tool.builtin.session_tools import write_todos
from nexau.archs.tool.builtin.web_tools import google_web_search, web_fetch

web_search_tool = Tool.from_yaml("tools/WebSearch.yaml", binding=google_web_search)
web_read_tool = Tool.from_yaml("tools/WebRead.yaml", binding=web_fetch)
todo_write_tool = Tool.from_yaml("tools/TodoWrite.tool.yaml", binding=write_todos)

agent_config = AgentConfig(
    name="web_agent",
    tools=[web_search_tool, web_read_tool, todo_write_tool],
    # ... llm_config, system_prompt, etc.
)
agent = Agent(config=agent_config)

Creating custom tools

You can give your agent any capability by writing a custom tool. The process has three steps: write the Python function, create the YAML schema, then bind them together.
Step 1: Define the Python function

Write a regular Python function with type hints. The docstring tells the LLM how to use the tool.
# my_tools/calculator.py

def simple_calculator(expression: str) -> str:
    """
    Evaluates a simple mathematical expression.
    Supports addition (+), subtraction (-), multiplication (*), and division (/).

    Args:
        expression: The mathematical expression to evaluate (e.g., "10 + 5*2").

    Returns:
        The result of the calculation as a string, or an error message.
    """
    try:
        # eval with builtins disabled; illustration only, not safe for
        # untrusted input in production.
        result = eval(expression, {"__builtins__": None}, {})
        return str(result)
    except Exception as e:
        return f"Error: {e}"
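A quick sanity check of the function above (restated here so the snippet runs on its own):

```python
# Identical to simple_calculator in my_tools/calculator.py above.
def simple_calculator(expression: str) -> str:
    try:
        return str(eval(expression, {"__builtins__": None}, {}))
    except Exception as e:
        return f"Error: {e}"

print(simple_calculator("10 + 5*2"))  # 20
print(simple_calculator("10 / 0"))    # Error: division by zero
```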
Step 2: Create the YAML schema

The YAML file defines the tool’s name, description, and input schema. The description is what the LLM reads to decide whether to call this tool.
tools/SimpleCalculator.tool.yaml
type: tool
name: SimpleCalculator
description: >-
  A tool to evaluate simple mathematical expressions like "10 + 5*2".
  It supports addition, subtraction, multiplication, and division.

input_schema:
  type: object
  properties:
    expression:
      type: string
      description: The mathematical string to evaluate.
  required:
    - expression
  additionalProperties: false
  $schema: "http://json-schema.org/draft-07/schema#"
The input_schema follows JSON Schema draft-07. Set additionalProperties: false to reject unexpected arguments before they reach your function.
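To see what additionalProperties: false buys you, here is a quick check of the SimpleCalculator schema using the third-party jsonschema package (not required by NexAU, which performs its own validation; shown purely for illustration):

```python
from jsonschema import Draft7Validator

# The input_schema from SimpleCalculator.tool.yaml, as a Python dict.
schema = {
    "type": "object",
    "properties": {
        "expression": {
            "type": "string",
            "description": "The mathematical string to evaluate.",
        },
    },
    "required": ["expression"],
    "additionalProperties": False,
}
validator = Draft7Validator(schema)

print(validator.is_valid({"expression": "10 + 5*2"}))            # True
print(validator.is_valid({"expression": "1+1", "mode": "hex"}))  # False: extra key
print(validator.is_valid({}))                                    # False: missing key
```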
Step 3: Bind the tool to your agent

Load the YAML schema, bind the Python function, and add the tool to your AgentConfig.
from nexau import Agent, AgentConfig, Tool
from my_tools.calculator import simple_calculator

calculator_tool = Tool.from_yaml(
    "tools/SimpleCalculator.tool.yaml",
    binding=simple_calculator,
)

agent_config = AgentConfig(
    name="calculator_agent",
    tools=[calculator_tool],
    # ... llm_config, system_prompt, etc.
)
agent = Agent(config=agent_config)

Frontend-friendly output with returnDisplay

When a tool returns a dict, you can include a returnDisplay field to show a short summary in the UI without sending that text to the LLM.
import json

def my_search_tool(query: str) -> dict:
    results = perform_search(query)  # your own search implementation
    return {
        "content": json.dumps(results),                    # Sent to the LLM
        "returnDisplay": f"Found {len(results)} results",  # Shown in the UI only
    }
NexAU automatically strips returnDisplay before forwarding the result to the LLM, so it doesn’t consume context tokens. All built-in tools follow this pattern.
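The stripping step is conceptually just a key filter. A sketch of the idea (not NexAU's actual code):

```python
def split_tool_result(result):
    """Separate the UI-only summary from the payload sent to the LLM (sketch)."""
    if isinstance(result, dict) and "returnDisplay" in result:
        display = result["returnDisplay"]
        llm_payload = {k: v for k, v in result.items() if k != "returnDisplay"}
        return llm_payload, display
    return result, None  # non-dict results pass through unchanged

payload, display = split_tool_result(
    {"content": "[...]", "returnDisplay": "Found 3 results"}
)
print(payload)   # {'content': '[...]'}  -- only this reaches the LLM
print(display)   # Found 3 results      -- UI only, costs no context tokens
```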

Preset parameters with extra_kwargs

Use extra_kwargs to fix certain arguments at bind time so the LLM never needs to supply them — useful for API keys, base URLs, or any configuration that shouldn’t vary per call.
import os

my_tool = Tool.from_yaml(
    "tools/MyService.yaml",
    binding=call_service,
    extra_kwargs={
        "base_url": "https://api.example.com",
        "api_key": os.getenv("SERVICE_KEY"),
    },
)
Call-time arguments with the same name override preset values. The reserved keys agent_state and global_storage cannot be used in extra_kwargs.
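The override behavior can be illustrated with plain dicts (a sketch of the documented semantics; NexAU's implementation may differ):

```python
preset = {"base_url": "https://api.example.com", "timeout": 30}  # extra_kwargs
llm_args = {"query": "status", "timeout": 5}                     # from the LLM call

# Call-time arguments win on name collisions:
final_kwargs = {**preset, **llm_args}
print(final_kwargs)
# {'base_url': 'https://api.example.com', 'timeout': 5, 'query': 'status'}
```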

Deferred tool loading

When your agent has many tools, sending all their schemas to the LLM on every turn wastes context tokens. Mark infrequently used tools with defer_loading: true to keep them out of the LLM’s view until needed.

How it works:
  1. Tools with defer_loading: true are registered but excluded from the LLM’s tool list.
  2. NexAU automatically adds a built-in ToolSearch tool to the agent.
  3. When the LLM calls ToolSearch, matching deferred tools are injected into the tool list.
  4. From the next turn, the LLM can call the injected tools directly.
Tool YAML
tools/SlackSendMessage.yaml
type: tool
name: SlackSendMessage
description: "Send a message to a Slack channel"
defer_loading: true       # Not sent to LLM until searched
search_hint: "slack chat" # Optional — improves search relevance

input_schema:
  type: object
  properties:
    channel:
      type: string
    message:
      type: string
  required: [channel, message]
  additionalProperties: false
Agent YAML
tools:
  - name: read_file
    yaml_path: ./tools/ReadFile.yaml
    binding: nexau.archs.tool.builtin.file_tools:read_file
    # No defer_loading — always available

  - name: slack_send
    yaml_path: ./tools/SlackSendMessage.yaml
    binding: my_tools.slack:send_message
    defer_loading: true
    search_hint: "slack chat messaging"
Use deferred loading for MCP integrations, specialized APIs, or any tool the agent rarely needs. Tools with large schemas benefit the most.