LLMConfig tells NexAU which model to use and how to call it. Pass an instance to AgentConfig.llm_config or define it inside your agent YAML.
from nexau import LLMConfig

llm_config = LLMConfig(
    model="gpt-4o",
    base_url="https://api.openai.com/v1",
    api_key="sk-...",
    temperature=0.5,
    max_tokens=4096,
)

Key fields

model (str)
The model name as expected by the provider API (e.g. "gpt-4o", "claude-3-5-sonnet-20241022", "llama3.2"). If omitted, NexAU reads LLM_MODEL, OPENAI_MODEL, or MODEL from the environment (in that priority order).

base_url (str)
The API base URL (e.g. "https://api.openai.com/v1"). If omitted, NexAU reads OPENAI_BASE_URL, BASE_URL, or LLM_BASE_URL from the environment (in that priority order).

api_key (str)
The API key for authentication. If omitted, NexAU reads LLM_API_KEY, OPENAI_API_KEY, API_KEY, or ANTHROPIC_API_KEY from the environment (in that priority order).

api_type (str, default: "openai_chat_completion")
The protocol used to call the model. The default, "openai_chat_completion", works for OpenAI, Anthropic-compatible proxies, Ollama, and most other providers. Set to "gemini_rest" for direct Gemini REST API access.

temperature (float)
Sampling temperature (0.0–2.0). Lower values produce more deterministic output; higher values produce more varied output. If omitted, the provider default applies.

max_tokens (int)
Maximum number of tokens to generate per response. If omitted, the provider default applies.

timeout (float)
Request timeout in seconds. If omitted, the provider client's default applies.

max_retries (int, default: 3)
Number of times to retry a failed request before raising an error.
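The reliability fields above can be tuned together. A minimal sketch (the specific values here are illustrative, not recommendations):

```python
from nexau import LLMConfig

llm_config = LLMConfig(
    model="gpt-4o",
    timeout=30.0,    # give up on a single request after 30 seconds
    max_retries=5,   # retry a failed request up to 5 times before raising
)
```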

Provider examples

OpenAI
import os
from nexau import LLMConfig

llm_config = LLMConfig(
    model="gpt-4o",
    base_url="https://api.openai.com/v1",
    api_key=os.getenv("OPENAI_API_KEY"),
    temperature=0.7,
    max_tokens=4096,
)
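A local Ollama server speaks the same OpenAI-compatible protocol, so the default api_type works unchanged. A sketch assuming Ollama's default endpoint (http://localhost:11434/v1):

```python
from nexau import LLMConfig

llm_config = LLMConfig(
    model="llama3.2",
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # Ollama ignores the key; a placeholder satisfies clients that require one
)
```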

Loading credentials from environment variables

Keep credentials out of source code by reading them from the environment. If you omit model, base_url, and api_key, NexAU reads them automatically from the environment variables listed above. You can also be explicit:
import os
from nexau import LLMConfig

llm_config = LLMConfig(
    model=os.getenv("LLM_MODEL"),
    base_url=os.getenv("LLM_BASE_URL"),
    api_key=os.getenv("LLM_API_KEY"),
)
Store your values in a .env file and load them with python-dotenv:
.env
LLM_MODEL="gpt-4o"
LLM_BASE_URL="https://api.openai.com/v1"
LLM_API_KEY="sk-..."
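With the .env file above in place, loading can be a one-liner, since NexAU picks these variables up automatically. A sketch assuming the python-dotenv package is installed:

```python
from dotenv import load_dotenv  # pip install python-dotenv
from nexau import LLMConfig

load_dotenv()  # reads .env from the current working directory into os.environ

# LLM_MODEL, LLM_BASE_URL, and LLM_API_KEY are now set,
# so model, base_url, and api_key can all be omitted:
llm_config = LLMConfig()
```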

YAML configuration

When you define an agent in YAML, put llm_config as a nested block. You can include any LLMConfig field here — typically everything except the API key, which you set in Python after loading the file.
agents/my_agent.yaml
type: agent
name: my_agent
llm_config:
  temperature: 0.7
  max_tokens: 4096
  api_type: openai_chat_completion
Then load the YAML in Python and attach credentials from the environment:
import os
from pathlib import Path
from nexau import Agent, AgentConfig, LLMConfig

agent_config = AgentConfig.from_yaml(Path("agents/my_agent.yaml"))
agent_config.llm_config = LLMConfig(
    model=os.getenv("LLM_MODEL"),
    base_url=os.getenv("LLM_BASE_URL"),
    api_key=os.getenv("LLM_API_KEY"),
)

agent = Agent(config=agent_config)
Overriding llm_config after from_yaml is the recommended pattern for keeping model settings in YAML (temperature, max_tokens) while credentials stay in environment variables.