NexAU supports Jinja2 templating in system prompts. Pass variables at call time via agent.run(..., context={...}) and reference them in your prompt with {{ variable }}.

Basic example

system_prompt = """
Date: {{ date }}
User: {{ username }}
Task: {{ task_description }}

You are an assistant with access to the following environment:
{% for key, value in env_info.items() %}
- {{ key }}: {{ value }}
{% endfor %}
"""

response = agent.run(
    "Complete the project setup",
    context={
        "date": "2024-01-01",
        "username": "alice",
        "task_description": "Set up the CI pipeline",
        "env_info": {
            "working_dir": "/home/alice/project",
            "python_version": "3.12",
        },
    },
)
The system prompt is rendered with the context dict before the first LLM call. Any valid Jinja2 syntax works: conditionals, loops, filters, and macros.
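As an illustration of conditionals and filters, the snippet below renders a prompt with jinja2 directly, the same engine NexAU uses; the prompt text and variable names are made up for the example, and NexAU performs an equivalent render internally:

```python
from jinja2 import Template

# Illustrative sketch only: a prompt fragment with a filter (upper)
# and a conditional, rendered the way a context dict would render it.
prompt = Template(
    "User: {{ username | upper }}\n"
    "{% if is_admin %}You may change settings.{% else %}You have read-only access.{% endif %}"
)
rendered = prompt.render(username="alice", is_admin=False)
print(rendered)  # User: ALICE
                 # You have read-only access.
```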

Setting the prompt type

Control how NexAU interprets your system prompt with system_prompt_type in your agent config:
name: my_agent
llm_config:
  model: gpt-4o
system_prompt_type: "jinja"   # render as Jinja2 template
system_prompt: |
  Date: {{ date }}
  User: {{ username }}
  You are a helpful assistant.
Value       Behavior
"string"    Prompt is used as-is (default)
"jinja"     Prompt is rendered as a Jinja2 template using the context dict
File path   Loads the prompt from the specified file, then applies the selected type
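To see what the "jinja" setting produces, the system_prompt from the config above can be rendered by hand with jinja2; this is a sketch of the documented behavior, not NexAU's internal code:

```python
from jinja2 import Template

# The system_prompt from the YAML config above, rendered the way the
# "jinja" prompt type renders it with a context dict at run time.
yaml_prompt = "Date: {{ date }}\nUser: {{ username }}\nYou are a helpful assistant."
rendered = Template(yaml_prompt).render(date="2024-01-01", username="alice")
print(rendered)  # Date: 2024-01-01
                 # User: alice
                 # You are a helpful assistant.
```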

Passing context in async runs

The same context argument works with run_async:
response = await agent.run_async(
    "Generate the weekly report",
    context={
        "date": "2024-01-08",
        "username": "bob",
        "task_description": "Weekly engineering summary",
        "env_info": {
            "working_dir": "/home/bob/reports",
            "python_version": "3.12",
        },
    },
)
Keep your templates focused on the values that change per call. Static content, such as base instructions, belongs in the non-templated part of the system prompt.