LLM Configuration

Configure AI providers for workflows

Configure OpenAI or Anthropic as your LLM provider to enable AI features in workflows.

| Provider  | Models                             |
| --------- | ---------------------------------- |
| OpenAI    | GPT-4o, GPT-4, GPT-3.5 Turbo       |
| Anthropic | Claude 3.5 Sonnet, Claude Opus 4.5 |
  1. Navigate to Settings > AI Configuration

  2. Select your provider (OpenAI or Anthropic)

  3. Enter your API key

  4. Click Test Connection to verify and fetch available models

  5. Save the configuration

For knowledge base / RAG features, configure embeddings:

  1. In Settings > AI Configuration, scroll to Embeddings

  2. Choose whether to reuse the same provider as the LLM or configure one separately

  3. For OpenAI, embeddings use text-embedding-3-small (1536 dimensions)
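Once documents are embedded, RAG retrieval typically ranks knowledge-base chunks by vector similarity; cosine similarity is a common choice. A minimal sketch of that ranking step (the helper function and toy vectors below are illustrative, not part of the Bifrost API, and real vectors would have 1536 dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors stand in for real 1536-dimensional embeddings.
query = [1.0, 0.0, 1.0]
chunks = {
    "chunk-a": [1.0, 0.1, 0.9],  # similar direction to the query
    "chunk-b": [0.0, 1.0, 0.0],  # orthogonal to the query
}
best = max(chunks, key=lambda name: cosine_similarity(query, chunks[name]))
```

Here `best` is `"chunk-a"`, the chunk whose embedding points in nearly the same direction as the query.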

Once configured, use AI in your workflows:

```python
from bifrost import ai, workflow

@workflow
async def summarize_ticket(description: str):
    response = await ai.complete(f"Summarize this ticket: {description}")
    return {"summary": response.content}
```

Override the default model per-request:

```python
response = await ai.complete(
    "Complex analysis task",
    model="gpt-4o",  # or "claude-3-5-sonnet-latest"
)
```

Bifrost tracks all AI usage:

  • Input/output tokens per call
  • Cost calculation based on configured pricing
  • Aggregation by workflow, conversation, and organization
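The aggregation above amounts to a group-by over per-call usage records. A minimal sketch of grouping token counts by workflow (the record shape and field names here are hypothetical, not Bifrost's actual schema):

```python
from collections import defaultdict

# Illustrative usage records; the field names are hypothetical.
records = [
    {"workflow": "summarize_ticket", "input_tokens": 1200, "output_tokens": 80},
    {"workflow": "summarize_ticket", "input_tokens": 900, "output_tokens": 60},
    {"workflow": "triage", "input_tokens": 400, "output_tokens": 30},
]

# Sum input/output tokens per workflow.
totals: dict[str, dict[str, int]] = defaultdict(
    lambda: {"input_tokens": 0, "output_tokens": 0}
)
for record in records:
    totals[record["workflow"]]["input_tokens"] += record["input_tokens"]
    totals[record["workflow"]]["output_tokens"] += record["output_tokens"]
```

The same grouping applied to a conversation or organization key yields the other two aggregation levels.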

View usage reports in Settings > Usage Reports.

Configure per-model pricing in Settings > AI Pricing:

| Field        | Description                |
| ------------ | -------------------------- |
| Provider     | OpenAI or Anthropic        |
| Model        | Model display name         |
| Input Price  | Cost per 1M input tokens   |
| Output Price | Cost per 1M output tokens  |
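Given the per-1M-token prices in this table, the cost of a call is `input_tokens / 1,000,000 × input price + output_tokens / 1,000,000 × output price`. A quick sketch of that calculation (the prices in the example are placeholders, not actual provider rates):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost of one call given per-1M-token prices, as configured in AI Pricing."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Placeholder prices: $2.50 per 1M input tokens, $10.00 per 1M output tokens.
cost = estimate_cost(12_000, 800, 2.50, 10.00)  # 0.03 + 0.008 = 0.038
```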