LLMConfig tells NexAU which model to use and how to call it. Pass an instance to AgentConfig.llm_config or define it inside your agent YAML.
Key fields
- `model`: The model name as expected by the provider API (e.g. "gpt-4o", "claude-3-5-sonnet-20241022", "llama3.2"). If omitted, NexAU reads LLM_MODEL, OPENAI_MODEL, or MODEL from the environment (in that priority order).
- `base_url`: The API base URL (e.g. "https://api.openai.com/v1"). If omitted, NexAU reads OPENAI_BASE_URL, BASE_URL, or LLM_BASE_URL from the environment.
- `api_key`: The API key for authentication. If omitted, NexAU reads LLM_API_KEY, OPENAI_API_KEY, API_KEY, or ANTHROPIC_API_KEY from the environment (in that priority order).
- Protocol: The protocol used to call the model. Defaults to "openai_chat_completion", which works for OpenAI, Anthropic-compatible proxies, Ollama, and most other providers. Set to "gemini_rest" for direct Gemini REST API access.
- Temperature: Sampling temperature (0.0–2.0). Lower values produce more deterministic output; higher values produce more varied output. If omitted, the provider default applies.
- Max tokens: Maximum number of tokens to generate per response. If omitted, the provider default applies.
- Timeout: Request timeout in seconds. Defaults to the provider client's default.
- Retries: Number of times to retry a failed request before raising an error.
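The environment-variable fallbacks above follow a fixed priority order. A minimal sketch of that lookup logic in plain Python (the helper name and structure are illustrative, not NexAU's actual code):

```python
import os

# Hypothetical helper mirroring the documented lookup order; NexAU's
# internal resolution code is not shown here, only its documented behaviour.
def resolve(explicit, env_vars):
    """Return the explicit value, else the first set environment variable."""
    if explicit is not None:
        return explicit
    for name in env_vars:
        value = os.environ.get(name)
        if value:
            return value
    return None

os.environ["OPENAI_MODEL"] = "gpt-4o"
os.environ["LLM_MODEL"] = "llama3.2"
# LLM_MODEL outranks OPENAI_MODEL, so it wins even though both are set.
print(resolve(None, ["LLM_MODEL", "OPENAI_MODEL", "MODEL"]))  # llama3.2
```

An explicit value always beats the environment, which is why setting a field in code or YAML overrides any variable.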
Provider examples
- OpenAI
- Anthropic
- Local / Ollama
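Sketches of each setup, assuming LLMConfig is importable from a nexau package and accepts the field names described above (exact keyword names beyond model, base_url, and api_key are assumptions):

```python
from nexau import LLMConfig  # import path is an assumption

# OpenAI: the default "openai_chat_completion" protocol works as-is.
openai_cfg = LLMConfig(
    model="gpt-4o",
    base_url="https://api.openai.com/v1",
    api_key="sk-placeholder",  # or rely on OPENAI_API_KEY in the environment
)

# Anthropic via an OpenAI-compatible proxy endpoint.
anthropic_cfg = LLMConfig(
    model="claude-3-5-sonnet-20241022",
    base_url="https://your-proxy.example/v1",  # hypothetical proxy URL
    api_key="placeholder",
)

# Local Ollama server exposing its OpenAI-compatible endpoint.
ollama_cfg = LLMConfig(
    model="llama3.2",
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # Ollama accepts any non-empty key
)
```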
Loading credentials from environment variables
Keep credentials out of source code by reading them from the environment. If you omit model, base_url, and api_key, NexAU reads them automatically from the environment variables listed above. You can also be explicit:
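For example, a sketch that wires the variables in by hand (assuming LLMConfig is importable from a nexau package):

```python
import os
from nexau import LLMConfig  # import path is an assumption

config = LLMConfig(
    model=os.environ["LLM_MODEL"],
    base_url=os.environ.get("OPENAI_BASE_URL"),
    api_key=os.environ["LLM_API_KEY"],
)
```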
Alternatively, keep the values in a .env file and load them with python-dotenv:
.env
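A .env file with placeholder values (values shown are illustrative, not real credentials):

```
LLM_MODEL=gpt-4o
LLM_API_KEY=your-key-here
OPENAI_BASE_URL=https://api.openai.com/v1
```

Then load it at startup so NexAU can pick the values up from the environment:

```python
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # copies the .env entries into os.environ
```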
YAML configuration
When you define an agent in YAML, put llm_config in a nested block. You can include any LLMConfig field here; typically you include everything except the API key, which you set in Python after loading the file.
agents/my_agent.yaml
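A hypothetical agent file; the keys under llm_config mirror the fields described above, and names other than model and base_url are assumptions:

```yaml
name: my_agent
llm_config:
  model: gpt-4o
  base_url: https://api.openai.com/v1
  temperature: 0.2
```

After loading the file, attach the key in Python (loader function shown here is assumed, not NexAU's documented API):

```python
import os

config = load_agent_config("agents/my_agent.yaml")  # hypothetical loader
config.llm_config.api_key = os.environ["LLM_API_KEY"]
```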