
Using AI in Workflows

AI completions, streaming, and structured outputs

Use the ai module for LLM completions in your workflows.

from bifrost import workflow, ai

@workflow
async def summarize(text: str):
    response = await ai.complete(f"Summarize: {text}")
    return {"summary": response.content}

The response includes:

  • content - Generated text
  • input_tokens - Tokens in prompt
  • output_tokens - Tokens in response
  • model - Model used
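These usage fields make per-call cost accounting straightforward. A small sketch (the per-million-token rates below are placeholders, not real provider pricing):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 2.50, output_rate: float = 10.00) -> float:
    # Rates are USD per million tokens -- placeholder values, not real pricing.
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a call that used 1,200 prompt tokens and 300 completion tokens
cost = estimate_cost(1200, 300)
```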

For multi-turn conversations or system prompts:

response = await ai.complete(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Kubernetes."}
    ]
)

Or use the system parameter:

response = await ai.complete(
    "Explain Kubernetes.",
    system="You are a helpful assistant."
)
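Multi-turn state is just a growing message list: append each user message and assistant reply, then pass the list as messages on the next call. A minimal sketch (the helper names are illustrative, not part of the Bifrost API):

```python
def build_history() -> list[dict]:
    # Seed the conversation with a system prompt.
    return [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(history: list[dict], user_text: str, assistant_text: str) -> None:
    # Append the user's message and the assistant's reply in order,
    # so the next call sees the full conversation.
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})

history = build_history()
add_turn(history, "Explain Kubernetes.", "Kubernetes orchestrates containers.")
# history can now be passed as messages= for the next turn
```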

For long responses, stream tokens as they’re generated:

from bifrost import workflow, ai

@workflow
async def stream_response(prompt: str):
    chunks = []
    async for chunk in ai.stream(prompt):
        chunks.append(chunk.content)
        if chunk.done:
            print(f"Total tokens: {chunk.input_tokens + chunk.output_tokens}")
    return {"response": "".join(chunks)}
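The accumulate-then-join pattern above works against any async chunk source. Here a stub generator stands in for ai.stream, with a hypothetical Chunk shape matching the fields used above:

```python
import asyncio
from dataclasses import dataclass

# Minimal stand-in for the chunk objects a stream yields (hypothetical shape).
@dataclass
class Chunk:
    content: str
    done: bool = False

async def fake_stream(text: str):
    # Stand-in for ai.stream: yields the text word by word.
    words = text.split()
    for i, word in enumerate(words):
        sep = " " if i < len(words) - 1 else ""
        yield Chunk(content=word + sep, done=(i == len(words) - 1))

async def collect(stream) -> str:
    # Same accumulate-then-join pattern as the workflow above.
    chunks = []
    async for chunk in stream:
        chunks.append(chunk.content)
    return "".join(chunks)

result = asyncio.run(collect(fake_stream("hello streaming world")))
```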

Get responses as Pydantic models:

from pydantic import BaseModel
from bifrost import workflow, ai

class TicketAnalysis(BaseModel):
    priority: str
    category: str
    summary: str

@workflow
async def analyze_ticket(description: str):
    analysis = await ai.complete(
        f"Analyze this ticket: {description}",
        response_format=TicketAnalysis
    )
    return analysis.model_dump()
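Conceptually, structured output amounts to validating the model's JSON reply against the schema. A sketch with plain Pydantic, no Bifrost required (how Bifrost performs this internally is not specified here):

```python
from pydantic import BaseModel

class TicketAnalysis(BaseModel):
    priority: str
    category: str
    summary: str

# What an LLM might return as raw text when asked for JSON
raw = '{"priority": "high", "category": "billing", "summary": "Duplicate charge"}'

# Parse and validate in one step; raises ValidationError on a bad reply.
analysis = TicketAnalysis.model_validate_json(raw)
```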

Override the default model for specific calls:

# Use a specific model
response = await ai.complete(
    "Complex task requiring reasoning",
    model="gpt-4o"
)

# Or Anthropic
response = await ai.complete(
    "Creative writing task",
    model="claude-3-5-sonnet-latest"
)

Include context from your knowledge store:

response = await ai.complete(
    "What is our refund policy?",
    knowledge=["policies", "faq"]  # Namespace names
)

Bifrost automatically:

  1. Searches the specified namespaces
  2. Includes relevant documents as context
  3. Generates a response grounded in your data

See Knowledge Bases for setup.
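The retrieval step can be pictured as a search over namespaced document collections. This toy keyword scorer is only an illustration; Bifrost's actual retrieval method is not specified in this doc:

```python
import re

def tokens(text: str) -> set[str]:
    # Lowercase word tokens, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def search_namespaces(query: str, store: dict, selected: list[str],
                      top_k: int = 2) -> list[str]:
    """Toy retrieval: rank documents in the selected namespaces by
    word overlap with the query."""
    terms = tokens(query)
    candidates = [doc for ns in selected for doc in store.get(ns, [])]
    return sorted(candidates,
                  key=lambda doc: len(terms & tokens(doc)),
                  reverse=True)[:top_k]

store = {
    "policies": ["Our refund policy: refunds within 30 days of purchase.",
                 "Shipping is free on orders over $50."],
    "faq": ["To reset your password, use the account settings page."],
}
docs = search_namespaces("What is our refund policy?", store, ["policies", "faq"])
# docs would then be prepended to the prompt as grounding context
```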

ai.complete accepts the following parameters:

Parameter         Type       Description
prompt            str        Simple prompt (alternative to messages)
messages          list       Chat messages with role/content
system            str        System prompt
response_format   type       Pydantic model for structured output
knowledge         list[str]  Knowledge namespaces for RAG
model             str        Override default model
org_id            str        Organization context (auto-set in workflows)
Handle provider errors explicitly so a failed completion doesn't crash the workflow:

from bifrost import workflow, ai

@workflow
async def safe_complete(prompt: str):
    try:
        response = await ai.complete(prompt)
        return {"result": response.content}
    except Exception as e:
        return {"error": str(e)}
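Beyond returning the error, transient provider failures (rate limits, timeouts) are often worth retrying. A generic backoff sketch, where the flaky function is a stand-in for ai.complete:

```python
import asyncio

async def with_retries(fn, attempts: int = 3, base_delay: float = 0.01):
    """Retry an async callable with exponential backoff.
    Re-raises the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return await fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}

async def flaky():
    # Stand-in for ai.complete: fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = asyncio.run(with_retries(flaky))
```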