Bifrost


The fastest way to build AI applications that never go down

Bifrost is a high-performance AI gateway that connects you to 10+ providers (OpenAI, Anthropic, Bedrock, and more) through a single API. Get automatic failover, load balancing, and zero-downtime deployments in under 30 seconds.

🚀 Just launched: Native MCP (Model Context Protocol) support for seamless tool integration
⚡ Performance: Adds only 11 µs latency while handling 5,000+ RPS
🛡️ Reliability: 100% uptime with automatic provider failover

⚡ Quickstart (30 seconds)


Go from zero to production-ready AI gateway in under a minute. Here's how:

What You Need

  • Any AI provider API key (OpenAI, Anthropic, Bedrock, etc.)
  • Node.js 18+ installed (or use Docker instead; see the Docker installation guide)
  • 20 seconds of your time ⏰

Using Bifrost HTTP Transport

📖 For detailed setup guides with multiple providers, advanced configuration, and language examples, see the Quick Start Documentation

Step 1: Start Bifrost

# 🔧 Run the Bifrost binary
npx @maximhq/bifrost

Step 2: Open the built-in web interface and configure Bifrost

# 🖥️ Open the web interface in your browser
open http://localhost:8080

# Or simply open http://localhost:8080 manually in your browser

Step 3: Test it works

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "Hello from Bifrost! 🌈"}
    ]
  }'

🎉 Boom! You're done!

Your AI gateway is now running with a beautiful web interface. You can:

  • 🖥️ Configure everything visually - No more JSON files!
  • 📊 Monitor requests in real-time - See logs, analytics, and metrics
  • 🔄 Add providers and MCP clients on-the-fly - Scale and failover without restarts
  • 🚀 Drop into existing code - Zero changes to your OpenAI/Anthropic apps

Want more? See our Complete Setup Guide for multi-provider configuration, failover strategies, and production deployment.
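Prefer to call the gateway from code rather than curl? Here is a minimal Go sketch using only the standard library. It sends the same request as the curl example above; localhost:8080 and the model name are simply the quickstart defaults, so adjust them to match your setup.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Same payload as the curl example above; use any provider/model you
    // have configured in Bifrost.
    payload := map[string]any{
        "model": "openai/gpt-4o-mini",
        "messages": []map[string]string{
            {"role": "user", "content": "Hello from Bifrost!"},
        },
    }
    body, err := json.Marshal(payload)
    if err != nil {
        panic(err)
    }

    // Bifrost's HTTP transport exposes an OpenAI-style chat completions route.
    resp, err := http.Post("http://localhost:8080/v1/chat/completions",
        "application/json", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    out, _ := io.ReadAll(resp.Body)
    fmt.Println(string(out))
}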



✨ Features

  • 🖥️ Built-in Web UI: Visual configuration, real-time monitoring, and analytics dashboard - no config files needed
  • 🚀 Zero-Config Startup & Easy Integration: Start immediately with dynamic provider configuration, or integrate existing SDKs by simply updating the base_url - one line of code to get running
  • 🔄 Multi-Provider Support: Integrate with OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, and more through a single API
  • 🛡️ Fallback Mechanisms: Automatically retry failed requests with alternative models or providers
  • 🔑 Dynamic Key Management: Rotate and manage API keys efficiently with weighted distribution
  • ⚡ Connection Pooling: Optimize network resources for better performance
  • 🎯 Concurrency Control: Manage rate limits and parallel requests effectively
  • 🔌 Flexible Transports: Multiple transports for easy integration into your infra
  • 🏗️ Plugin First Architecture: No callback hell, simple addition/creation of custom plugins
  • 🛠️ MCP Integration: Built-in Model Context Protocol (MCP) support for external tool integration and execution
  • ⚙️ Custom Configuration: Offers granular control over pool sizes, network retry settings, fallback providers, and network proxy configurations
  • 📊 Built-in Observability: Native Prometheus metrics out of the box, no wrappers, no sidecars, just drop it in and scrape
  • 🔧 SDK Support: Bifrost is available as a Go package, so you can use it directly in your own applications

πŸ—οΈ Repository Structure

Bifrost is built with a modular architecture:

bifrost/
├── ci/                   # CI/CD pipeline scripts and npx configuration
│
├── core/                 # Core functionality and shared components
│   ├── providers/        # Provider-specific implementations
│   ├── schemas/          # Interfaces and structs used in Bifrost
│   └── bifrost.go        # Main Bifrost implementation
│
├── docs/                 # Documentation for Bifrost's configuration and contribution guides
│   └── ...
│
├── tests/                # All test setups related to /core and /transports
│   └── ...
│
├── transports/           # Interface layers (HTTP, gRPC, etc.)
│   ├── bifrost-http/     # HTTP transport implementation
│   └── ...
│
├── ui/                   # UI files for the web interface of the HTTP transport
│   └── ...
│
└── plugins/              # Plugin implementations
    ├── maxim/
    └── ...

The system uses a provider-agnostic approach with well-defined interfaces, making it easy to extend to new AI providers. All interfaces are defined in core/schemas/ and can be used as a reference for contributions.


🚀 Getting Started

There are three ways to use Bifrost - choose the one that fits your needs:

1. As a Go Package (Core Integration)

For direct integration into your Go applications. Provides maximum performance and control.

📖 2-Minute Go Package Setup

Quick example:

go get github.com/maximhq/bifrost/core

2. As an HTTP API (Transport Layer)

For language-agnostic integration and microservices architecture.

📖 30-Second HTTP Transport Setup

Quick example:

npx @maximhq/bifrost

3. As a Drop-in Replacement (Zero Code Changes)

Replace existing OpenAI/Anthropic APIs without changing your application code.

📖 1-Minute Drop-in Integration

Quick example:

- base_url = "https://api.openai.com"
+ base_url = "http://localhost:8080/openai"
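As a concrete illustration, here is what the swap looks like in a Go app built on the community go-openai SDK (github.com/sashabaranov/go-openai). The SDK choice and exact route are assumptions for this sketch (verify whether your deployment expects a /v1 suffix); the point is that the only line that changes from a stock setup is the BaseURL.

package main

import (
    "context"
    "fmt"
    "os"

    openai "github.com/sashabaranov/go-openai"
)

func main() {
    cfg := openai.DefaultConfig(os.Getenv("OPENAI_API_KEY"))
    // The only change from a stock setup: point the client at Bifrost
    // instead of api.openai.com. Check the exact path against your
    // Bifrost deployment.
    cfg.BaseURL = "http://localhost:8080/openai"

    client := openai.NewClientWithConfig(cfg)
    resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
        Model: "gpt-4o-mini",
        Messages: []openai.ChatCompletionMessage{
            {Role: openai.ChatMessageRoleUser, Content: "Hello via the drop-in route!"},
        },
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Choices[0].Message.Content)
}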

📊 Performance

Bifrost adds virtually zero overhead to your AI requests. In our sustained 5,000 RPS benchmark (see full methodology in docs/benchmarks.md), the gateway added only 11 µs of overhead per request – that's less than 0.001% of a typical GPT-4o response time.

Translation: Your users won't notice Bifrost is there, but you'll sleep better knowing your AI never goes down.

Metric                                    t3.medium   t3.xlarge   Δ
Added latency (Bifrost overhead)          59 µs       11 µs       -81 %
Success rate @ 5k RPS                     100 %       100 %       No failed requests
Avg. queue wait time                      47 µs       1.67 µs     -96 %
Avg. request latency (incl. provider)     2.12 s      1.61 s      -24 %

🔑 Key Performance Highlights

  • Perfect Success Rate – 100% request success rate on both instance types, even at 5k RPS.
  • Tiny Total Overhead – < 15 µs additional latency per request on average.
  • Efficient Queue Management – just 1.67 µs average wait time on the t3.xlarge test.
  • Fast Key Selection – ~10 ns to pick the right weighted API key (see the sketch below).
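For intuition on what "weighted API key" selection means, here is a generic weighted-random selection sketch in Go. It illustrates the standard technique the feature describes; it is not Bifrost's actual implementation, and the type names are made up for illustration.

package main

import (
    "fmt"
    "math/rand"
)

// apiKey is a hypothetical type for illustration; Bifrost's real key
// structures live in core/schemas.
type apiKey struct {
    Value  string
    Weight float64
}

// pickKey returns one key with probability proportional to its weight,
// i.e. standard weighted-random selection.
func pickKey(keys []apiKey) apiKey {
    total := 0.0
    for _, k := range keys {
        total += k.Weight
    }
    r := rand.Float64() * total
    for _, k := range keys {
        r -= k.Weight
        if r <= 0 {
            return k
        }
    }
    return keys[len(keys)-1] // guard against floating-point rounding
}

func main() {
    keys := []apiKey{{Value: "key-a", Weight: 0.7}, {Value: "key-b", Weight: 0.3}}
    fmt.Println(pickKey(keys).Value)
}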

Bifrost is deliberately configurable so you can dial the speed ↔ memory trade-off:

Config Knob                 Effect
initial_pool_size           How many objects are pre-allocated. Higher = faster, more memory
buffer_size & concurrency   Queue depth and max parallel workers (can be set per provider)
Retry / Timeout             Tune aggressiveness for each provider to meet your SLOs

Choose higher settings (like the t3.xlarge profile above) for raw speed, or lower ones (t3.medium) for reduced memory footprint – or find the sweet spot for your workload.

Need more numbers? Dive into the full benchmark report for breakdowns of every internal stage (JSON marshalling, HTTP call, parsing, etc.), hardware sizing guides, and tuning tips.


📚 Documentation

Everything you need to master Bifrost, from 30-second setup to production-scale deployments.

🚀 I want to get started (2 minutes)
🎯 I want to understand what Bifrost can do
⚙️ I want to deploy this to production
📱 I'm migrating from another tool

💬 Need Help?

🔗 Join our Discord for:

  • ❓ Quick setup assistance and troubleshooting
  • 💡 Best practices and configuration tips
  • 🤝 Community discussions and support
  • 🚀 Real-time help with integrations

🛠️ Development & Build Requirements

Cross-Platform Compilation with CGO

Bifrost is built with CGO enabled to ensure optimal performance across different architectures. To cross-compile Bifrost from source for all supported platforms, you'll need to install the following toolchains via Homebrew:

Required Homebrew Packages

# Install minimal cross-compilation toolchains for all target platforms
brew install FiloSottile/musl-cross/musl-cross mingw-w64

Supported Target Platforms

The build system supports the following platform/architecture combinations:

  • macOS: darwin/amd64, darwin/arm64 (native compilation)
  • Linux: linux/amd64, linux/arm64 (via musl-cross)
  • Windows: windows/amd64 (via mingw-w64)

Compiler Details

Platform   Architecture   C Compiler               C++ Compiler             Package Source
Linux      amd64          x86_64-linux-musl-gcc    x86_64-linux-musl-g++    musl-cross
Linux      arm64          aarch64-linux-musl-gcc   aarch64-linux-musl-g++   musl-cross
Windows    amd64          x86_64-w64-mingw32-gcc   x86_64-w64-mingw32-g++   mingw-w64
macOS      amd64/arm64    Native system compiler   Native system compiler   Xcode Command Line Tools

Building from Source

Once you have the required toolchains installed, you can build Bifrost using the provided build script:

# Build for all platforms
./ci/scripts/go-executable-build.sh bifrost-http ./dist/apps/bifrost "" ./transports/bifrost-http

# The script will automatically detect and use the appropriate cross-compilers
# for each target platform

The build script includes:

  • Static linking for Linux builds (using musl libc for maximum compatibility)
  • CGO support for all platforms to ensure optimal performance
  • Automatic compiler detection and validation before building

Prerequisites for Building

  1. Go 1.21+ - Required for building the application
  2. Cross-compilation toolchains - Install via the Homebrew packages above
  3. Git - For cloning and version management

Note: The build process uses fully static linking for Linux builds to ensure maximum compatibility across different distributions. Windows builds use mingw-w64 for cross-compilation from macOS/Linux environments.


🤝 Contributing

See our Contributing Guide for detailed information on how to contribute to Bifrost. We welcome contributions of all kinds: bug fixes, features, documentation improvements, and new ideas. Feel free to open an issue, and once it's assigned, submit a Pull Request.


📄 License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Built with ❤️ by Maxim