# Using AI in Workflows

AI completions, streaming, and structured outputs.

Use the `ai` module for LLM completions in your workflows.
## Basic Completion

```python
from bifrost import workflow, ai

@workflow
async def summarize(text: str):
    response = await ai.complete(f"Summarize: {text}")
    return {"summary": response.content}
```

The response includes:

- `content` - Generated text
- `input_tokens` - Tokens in the prompt
- `output_tokens` - Tokens in the response
- `model` - Model used
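The token counts make it easy to track spend per call. A minimal sketch of that arithmetic (the per-million-token prices below are hypothetical placeholders, not real provider rates):

```python
# Estimate the cost of one completion from the response's token counts.
# Prices are hypothetical placeholders; substitute your provider's rates.
PRICE_PER_MTOK = {"input": 2.50, "output": 10.00}  # USD per 1M tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (
        input_tokens / 1_000_000 * PRICE_PER_MTOK["input"]
        + output_tokens / 1_000_000 * PRICE_PER_MTOK["output"]
    )

# e.g. estimate_cost(response.input_tokens, response.output_tokens)
print(estimate_cost(1200, 300))
```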
## Message Format

For multi-turn conversations or system prompts:

```python
response = await ai.complete(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Kubernetes."},
    ]
)
```

Or use the `system` parameter:

```python
response = await ai.complete(
    "Explain Kubernetes.",
    system="You are a helpful assistant.",
)
```
## Streaming Responses

For long responses, stream tokens as they’re generated:
```python
from bifrost import workflow, ai

@workflow
async def stream_response(prompt: str):
    chunks = []
    async for chunk in ai.stream(prompt):
        chunks.append(chunk.content)
        if chunk.done:
            print(f"Total tokens: {chunk.input_tokens + chunk.output_tokens}")
    return {"response": "".join(chunks)}
```
## Structured Outputs

Get responses as Pydantic models:
```python
from pydantic import BaseModel
from bifrost import workflow, ai

class TicketAnalysis(BaseModel):
    priority: str
    category: str
    summary: str

@workflow
async def analyze_ticket(description: str):
    analysis = await ai.complete(
        f"Analyze this ticket: {description}",
        response_format=TicketAnalysis,
    )
    return analysis.model_dump()
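```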
## Model Overrides

Override the default model for specific calls:
```python
# Use a specific model
response = await ai.complete(
    "Complex task requiring reasoning",
    model="gpt-4o",
)

# Or Anthropic
response = await ai.complete(
    "Creative writing task",
    model="claude-3-5-sonnet-latest",
)
```
## With Knowledge Base (RAG)

Include context from your knowledge store:
```python
response = await ai.complete(
    "What is our refund policy?",
    knowledge=["policies", "faq"],  # Namespace names
)
```

Bifrost automatically:
- Searches the specified namespaces
- Includes relevant documents as context
- Generates a response grounded in your data
See Knowledge Bases for setup.
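The retrieval step can be pictured as scoring documents in the selected namespaces against the question and prepending the best matches as context. A deliberately naive, stdlib-only sketch of those steps (real knowledge bases use vector search, and the namespace contents here are invented):

```python
# Naive illustration of RAG grounding: score docs by word overlap with the
# question, then prepend the top matches as context. Real retrieval uses
# vector embeddings; the documents below are made-up examples.
NAMESPACES = {
    "policies": ["Refund policy: refunds are issued within 14 days of purchase."],
    "faq": ["Shipping takes 3-5 business days."],
}

def retrieve(question: str, namespaces: list[str], top_k: int = 2) -> list[str]:
    words = set(question.lower().split())
    docs = [d for ns in namespaces for d in NAMESPACES.get(ns, [])]
    scored = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def grounded_prompt(question: str, namespaces: list[str]) -> str:
    context = "\n".join(retrieve(question, namespaces))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("What is the refund policy?", ["policies", "faq"]))
```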
## Parameters

| Parameter | Type | Description |
|---|---|---|
| `prompt` | `str` | Simple prompt (alternative to `messages`) |
| `messages` | `list` | Chat messages with role/content |
| `system` | `str` | System prompt |
| `response_format` | `type` | Pydantic model for structured output |
| `knowledge` | `list[str]` | Knowledge namespaces for RAG |
| `model` | `str` | Override the default model |
| `org_id` | `str` | Organization context (auto-set in workflows) |
## Error Handling

Wrap calls in try/except to handle provider or network failures:

```python
from bifrost import workflow, ai

@workflow
async def safe_complete(prompt: str):
    try:
        response = await ai.complete(prompt)
        return {"result": response.content}
    except Exception as e:
        return {"error": str(e)}
```
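For transient failures such as rate limits or timeouts, retrying with exponential backoff is often more useful than returning the error. A sketch of that pattern (generic asyncio, not a built-in Bifrost feature; `call` stands in for something like `lambda: ai.complete(prompt)`):

```python
import asyncio

async def complete_with_retry(call, attempts: int = 3, base_delay: float = 1.0):
    # Retry a coroutine-returning callable, doubling the delay each attempt.
    for attempt in range(attempts):
        try:
            return await call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last error
            await asyncio.sleep(base_delay * 2 ** attempt)

# Usage (hypothetical): await complete_with_retry(lambda: ai.complete(prompt))
```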
## Next Steps

- Knowledge Bases - RAG with vector search
- Agents and Chat - Conversational AI