# Agents Module
SDK reference for autonomous agent invocation
The agents module lets you invoke agents programmatically from workflows.
## Import

```python
from bifrost import agents
```

## Methods

### agents.run()

Execute an agent autonomously and wait for the result.
```python
async def run(
    agent_name: str,
    input: dict | None = None,
    *,
    output_schema: dict | None = None,
    timeout: int = 1800,
) -> dict | str
```

#### Parameters

| Parameter | Type | Description |
|---|---|---|
| `agent_name` | `str` | Name of the agent to run |
| `input` | `dict` | Structured input data for the agent |
| `output_schema` | `dict` | JSON Schema for the expected output; the agent will conform its response to it |
| `timeout` | `int` | Maximum seconds to wait (default 1800 = 30 minutes) |
#### Returns

- If `output_schema` is provided: a `dict` matching the schema
- Otherwise: a `str` (the agent’s final text response)
#### Raises

- `RuntimeError` — if the agent run fails
- `ValueError` — if the agent is not found
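In a workflow you will often want to map both error types to a fallback value. The sketch below is runnable on its own because it substitutes a stub (`fake_run`) for `agents.run`; `run_with_fallback` and `fake_run` are illustrative helpers, not part of the SDK — only the exception types mirror the list above.

```python
import asyncio

async def run_with_fallback(run, agent_name, fallback, **kwargs):
    """Call an agents.run-style coroutine, mapping both error types to a fallback."""
    try:
        return await run(agent_name, **kwargs)
    except ValueError:      # agent not found
        return fallback
    except RuntimeError:    # the run itself failed
        return fallback

# Stub standing in for agents.run, for illustration only
async def fake_run(agent_name, input=None, **kwargs):
    if agent_name != "ticket-classifier":
        raise ValueError(f"unknown agent: {agent_name}")
    return "normal"

print(asyncio.run(run_with_fallback(fake_run, "no-such-agent", fallback="unclassified")))
# prints: unclassified
```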
## Examples

```python
from bifrost import workflow, agents

@workflow
async def classify_ticket(description: str):
    """Use an AI agent to classify a support ticket."""
    result = await agents.run(
        "ticket-classifier",
        input={"description": description},
        output_schema={
            "type": "object",
            "properties": {
                "priority": {"type": "string", "enum": ["low", "normal", "high", "urgent"]},
                "category": {"type": "string"},
                "summary": {"type": "string"},
            },
        },
        timeout=120,
    )
    return result  # {"priority": "high", "category": "billing", "summary": "..."}

@workflow
async def generate_report():
    """Use an agent to generate a natural language report."""
    report = await agents.run(
        "report-writer",
        input={"report_type": "weekly", "department": "engineering"},
    )
    return {"report": report}  # report is a string
```

## How It Works
1. The SDK sends a request to create an agent run
2. The run is queued and picked up by a worker
3. The agent loads its system prompt, tools, and knowledge sources
4. An LLM tool-calling loop executes: LLM → tool → LLM → …
5. The loop ends when the agent produces a final answer or hits budget limits
6. The result is returned to the calling workflow
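The tool-calling loop described above can be sketched in plain Python. This is illustrative only — `call_llm`, `TOOLS`, and `agent_loop` are stand-ins, not SDK or platform APIs; the real worker is internal to the platform.

```python
# Stand-in tool registry: one tool the "agent" can call
TOOLS = {"lookup_order": lambda args: {"status": "shipped"}}

def call_llm(messages):
    # Stand-in for a real LLM: first it requests a tool call,
    # then (once it sees a tool result) it produces a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_order", "args": {"order_id": 42}}
    return {"final": "Your order has shipped."}

def agent_loop(user_input, max_iterations=50):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_iterations):
        reply = call_llm(messages)
        if "final" in reply:                          # agent produced a final answer
            return reply["final"]
        result = TOOLS[reply["tool"]](reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": str(result)})
    return None  # iteration budget hit before a final answer

print(agent_loop("Where is my order?"))  # prints: Your order has shipped.
```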
## Budget Limits

Agent runs are bounded by two limits set on the agent configuration:
| Limit | Description | Default | Max |
|---|---|---|---|
| `max_iterations` | Max LLM call cycles per run | 50 | 200 |
| `max_token_budget` | Max total tokens per run | 100,000 | 1,000,000 |
If either limit is reached, the run completes with status `budget_exceeded`.

An optional per-agent `max_run_timeout` can also be configured in the UI to set a hard time limit on runs.
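How the two limits bound a run can be illustrated with a small sketch. `run_with_budget` and its `steps` iterable are hypothetical stand-ins for the platform's internal bookkeeping, not SDK APIs; only the defaults and the `budget_exceeded` status come from the table above.

```python
def run_with_budget(steps, max_iterations=50, max_token_budget=100_000):
    """Consume (tokens_used, final_answer_or_None) pairs, one per LLM cycle,
    stopping when an answer arrives or either budget limit is hit."""
    tokens = 0
    for iteration, (used, answer) in enumerate(steps, start=1):
        tokens += used
        if answer is not None:
            return {"status": "completed", "answer": answer}
        if iteration >= max_iterations or tokens >= max_token_budget:
            return {"status": "budget_exceeded"}
    return {"status": "completed", "answer": None}

# A run that never produces a final answer exhausts its iteration budget:
endless = ((1_000, None) for _ in range(999))
print(run_with_budget(endless, max_iterations=3))  # prints: {'status': 'budget_exceeded'}
```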
## Delegation

When an agent has Delegated Agents configured, it automatically receives `delegate_to_<agent_name>` tools. During execution, the agent can delegate tasks to child agents:

```python
# Parent agent automatically gets delegation tools like:
# delegate_to_child_agent(task: str) -> str
# These are called by the LLM, not by your workflow code
```

Delegation is bounded by:
- Max depth: 5 levels of nested delegation
- Timeout: 600 seconds per delegation call
- Child tracking: each delegation creates a child `AgentRun` linked via `parent_run_id`
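The depth bound and parent tracking can be sketched as follows. Everything here is a hypothetical stand-in — `delegate`, the `runs` list, and the `split:` task convention are invented for illustration; only `MAX_DEPTH = 5` and the `parent_run_id` link reflect the bounds above.

```python
MAX_DEPTH = 5
runs = []  # stand-in for the stored AgentRun records

def delegate(task, parent_run_id=None, depth=0):
    """Record a child run for each delegation, refusing to nest past MAX_DEPTH."""
    if depth >= MAX_DEPTH:
        raise RuntimeError("max delegation depth exceeded")
    run_id = len(runs) + 1
    runs.append({"id": run_id, "parent_run_id": parent_run_id, "task": task})
    if task.startswith("split:"):  # this child agent delegates further
        return delegate(task[len("split:"):], parent_run_id=run_id, depth=depth + 1)
    return f"done: {task}"

print(delegate("split:split:summarize"))       # prints: done: summarize
print([r["parent_run_id"] for r in runs])      # prints: [None, 1, 2]
```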
## Cancellation and Rerun

Agent runs support cancellation and rerun via the API:

```python
import requests

# Cancel a running agent
requests.post(f"{api_url}/api/agent-runs/{run_id}/cancel", headers=headers)

# Rerun a completed/failed/cancelled run
response = requests.post(f"{api_url}/api/agent-runs/{run_id}/rerun", headers=headers)
new_run = response.json()
```

## See Also
- Agents and Chat — Agent configuration and chat
- AI Tools — Making workflows callable as agent tools
- AI Module — Direct LLM completions