Add AI providers
Configure AI provider API keys at the organization or project level. Organization-level providers are available across all projects. Project-level providers override organization-level keys for that specific project, allowing you to isolate API usage, manage separate billing, or use different credentials per project. You can configure providers in settings or inline from playgrounds and prompt pages for faster setup. See Manage organizations for organization-level configuration or Manage projects for project-level configuration.
Supported providers
Standard providers include:
- OpenAI (GPT-4o, GPT-4o-mini, o4-mini, etc.).
- Anthropic (Claude 4 Sonnet, Claude 3.5 Sonnet, etc.).
- Google (Gemini 2.5 Flash, Gemini 2.5 Pro, etc.).
- AWS Bedrock (Claude, Llama, Mistral models).
- Azure OpenAI Service.
- Third-party providers (Together AI, Fireworks, Groq, Replicate, etc.).
Add custom providers
Braintrust supports custom AI providers, allowing you to integrate any AI model or endpoint into your evaluation and tracing workflows. See Custom providers for details.
Load balance across providers
Configure multiple API keys for the same model to automatically load balance requests:
- Add your primary provider key (e.g., OpenAI).
- Add Azure OpenAI as a custom provider for the same models.
- The proxy automatically distributes requests across both.
Load balancing provides:
- Resilience if one provider is down.
- Higher effective rate limits.
- Geographic distribution.
Set up for self-hosted
For self-hosted deployments, configure proxy URLs:
- Go to Settings.
- Under Organization, select API URL.
- Enter your URLs:
- API URL: Main API endpoint.
- Proxy URL: AI Proxy endpoint (usually <API_URL>/v1/proxy).
- Realtime URL: Realtime API endpoint.
- Click Save.
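For reference, the default relationship between these URLs can be sketched in a few lines (the hostname below is hypothetical; substitute your own deployment's API URL):

```python
# Hypothetical self-hosted API URL -- substitute your deployment's URL.
api_url = "https://braintrust.example.com"

# The AI Proxy endpoint usually lives under the API URL at /v1/proxy.
proxy_url = f"{api_url.rstrip('/')}/v1/proxy"
print(proxy_url)  # https://braintrust.example.com/v1/proxy
```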
Access the proxy
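Because the proxy speaks the OpenAI-compatible wire format, applications can send chat-completion requests directly to the configured proxy URL. A minimal sketch using only the Python standard library (the URL, model name, and API key below are placeholder assumptions):

```python
import json
import urllib.request

# Assumed values -- replace with your configured proxy URL and API key.
PROXY_URL = "https://api.braintrust.dev/v1/proxy"  # or <API_URL>/v1/proxy when self-hosted
API_KEY = "YOUR_BRAINTRUST_API_KEY"

# Build an OpenAI-style chat-completions request body.
body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode()

req = urllib.request.Request(
    f"{PROXY_URL}/chat/completions",
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send the request; the proxy routes it
# to whichever configured provider serves the requested model.
print(req.full_url)
```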
Users and applications access the proxy through configured endpoints.
Monitor proxy usage
Track proxy usage across your organization:
- Create a project for proxy logs.
- Enable logging by setting the x-bt-parent header when calling the proxy.
- View logs in the Logs page.
- Create dashboards to track usage, costs, and errors.
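The logging step above amounts to adding one request header. A sketch, assuming the hosted proxy URL and a `project_id:`-style parent value (the identifiers are placeholders; substitute your own project identifier):

```python
import json
import urllib.request

PROXY_URL = "https://api.braintrust.dev/v1/proxy"  # assumed hosted proxy URL
API_KEY = "YOUR_BRAINTRUST_API_KEY"  # placeholder credential
PARENT = "project_id:YOUR_PROJECT_ID"  # placeholder; names the project that receives the logs

req = urllib.request.Request(
    f"{PROXY_URL}/chat/completions",
    data=json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        # With this header set, the proxy logs the request to the given project.
        "x-bt-parent": PARENT,
    },
)
# urllib.request.urlopen(req) would send the request and record a log entry
# you can then inspect on the Logs page.
```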
Next steps
- Use the AI Proxy for detailed usage instructions
- Manage organizations to configure AI providers
- Deploy prompts that use the proxy
- Monitor deployments to track proxy usage