ConduitLLM is a modular .NET platform providing a unified, OpenAI-compatible API gateway for multiple LLM providers. Its architecture is designed for extensibility, robust configuration, and a seamless developer experience, including support for multimodal (vision) models and advanced routing.
ConduitLLM
├── ConduitLLM.Configuration # Central configuration and entity management
├── ConduitLLM.Core # Core business logic, interfaces, and routing
├── ConduitLLM.Examples # Example integrations and usage
├── ConduitLLM.Http # OpenAI-compatible HTTP API (REST endpoints)
├── ConduitLLM.Providers # Provider integrations (OpenAI, Anthropic, Gemini, etc.)
├── ConduitLLM.Tests # Automated and integration tests
└── ConduitLLM.WebUI # Blazor-based admin/configuration interface
### ConduitLLM.Configuration
- Stores provider credentials, model mappings, and global settings
- Manages database schema for configuration and usage tracking
### ConduitLLM.Core
- Defines LLM API models (including multimodal/vision content; message content uses `object?` to support both plain text and multimodal objects)
- Implements routing, provider selection, and spend/budget logic
- Interfaces for extensibility (custom providers, routing strategies)
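Because message content is typed as `object?`, a single request body can carry either a plain string or a list of content parts. A minimal sketch of the two shapes in the OpenAI chat format (the image URL is a placeholder, not a real asset):

```python
# Plain-text message: "content" is a string.
text_message = {"role": "user", "content": "Describe this image."}

# Multimodal (vision) message: "content" is a list of typed parts.
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}

def is_multimodal(message: dict) -> bool:
    """A message is multimodal when its content is a list of parts."""
    return isinstance(message.get("content"), list)
```

A flexible content type lets the same DTO deserialize both shapes without separate endpoints.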
### ConduitLLM.Http
- Exposes OpenAI-compatible endpoints (`/v1/chat/completions`, `/v1/models`, etc.)
- Handles authentication (virtual keys), rate limiting, and error handling
- Middleware for request tracking, usage, and spend enforcement, with detailed usage and analytics logging (including per-virtual-key and per-model usage)
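Because the endpoints follow the OpenAI wire format, a request is just a standard chat completion call with the virtual key as the Bearer token. A sketch of building such a request, assuming a hypothetical local deployment at `localhost:5000` and a placeholder key:

```python
import json

CONDUIT_BASE_URL = "http://localhost:5000"   # assumed local deployment
VIRTUAL_KEY = "vk_example_key"               # placeholder virtual key

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, bytes]:
    """Build the URL, headers, and JSON body for a chat completion call.
    The virtual key travels as a Bearer token, exactly as with the OpenAI API."""
    url = f"{CONDUIT_BASE_URL}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {VIRTUAL_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body
```

Any HTTP client (or an existing OpenAI SDK pointed at the gateway's base URL) can then send this request unchanged.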
### ConduitLLM.Providers
- Integrates with multiple LLM providers (OpenAI, Anthropic, Gemini, Cohere, etc.)
- Maps generic model names to provider-specific models
- Supports multimodal (vision) and streaming APIs where available (including flexible object-based content)
- Handles provider-specific request/response formatting
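The model-mapping step can be pictured as a lookup from a client-facing alias to a (provider, provider model ID) pair. The aliases and model IDs below are illustrative examples, not Conduit's actual mapping table:

```python
# Illustrative alias table; real mappings are configured per deployment.
MODEL_MAPPINGS = {
    "my-gpt4":    ("openai", "gpt-4-turbo"),
    "my-claude":  ("anthropic", "claude-3-opus-20240229"),
    "my-gemini":  ("gemini", "gemini-1.5-pro"),
}

def resolve_model(alias: str) -> tuple[str, str]:
    """Resolve a client-facing alias to (provider, provider_model_id)."""
    try:
        return MODEL_MAPPINGS[alias]
    except KeyError:
        raise ValueError(f"No mapping configured for model '{alias}'")
```

Clients only ever see the alias, so the underlying provider or model can be swapped without changing client code.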
### ConduitLLM.WebUI
- Modern Blazor web app for admin/configuration
- Organized navigation (Core, Configuration, Keys & Costs, System)
- Pages for provider setup, model mapping, routing, virtual key management, and usage analytics
- Real-time notifications for budget, key status, and system health
The router enables intelligent, flexible distribution of requests across model deployments:
- DefaultLLMRouter: Implements routing strategies (simple, random, round-robin), health checks, fallback logic, retry with backoff, and streaming support.
- RouterConfig: Stores routing strategy, model deployments, and fallback settings.
- RouterService: CRUD for deployments and fallback rules; hot-reloadable via WebUI.
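The strategies named above can be sketched as a small selector over a list of deployments, skipping any marked unhealthy. This is a toy illustration of the behavior, not the `DefaultLLMRouter` implementation; deployment names and the strategy keywords are assumptions:

```python
import itertools
import random

class SimpleRouter:
    """Toy router: 'simple' picks the first healthy deployment,
    'random' picks uniformly, 'round-robin' cycles through them."""

    def __init__(self, deployments: list[str], strategy: str = "round-robin"):
        self.deployments = deployments
        self.strategy = strategy
        self.healthy = set(deployments)          # health checks update this set
        self._cycle = itertools.cycle(deployments)

    def mark_unhealthy(self, name: str) -> None:
        self.healthy.discard(name)

    def select(self) -> str:
        candidates = [d for d in self.deployments if d in self.healthy]
        if not candidates:
            raise RuntimeError("No healthy deployments available")
        if self.strategy == "simple":
            return candidates[0]
        if self.strategy == "random":
            return random.choice(candidates)
        # round-robin: advance the cycle until a healthy deployment comes up
        while True:
            d = next(self._cycle)
            if d in self.healthy:
                return d
```

Fallback logic and retry-with-backoff would wrap `select()`: on provider failure, mark the deployment unhealthy and select again.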
The virtual key system manages API access, budgets, and usage tracking:
- Virtual Key Entity: Tracks spend, rate limits (RPM/RPD), expiration, and status.
- Middleware: Validates keys, enforces spend/rate limits, logs usage.
- Notification System: Alerts for budget, expiration, and usage anomalies.
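The per-request checks the middleware performs can be sketched as a simple authorize step over a key's state. Field names here are illustrative, not the actual virtual key entity schema:

```python
from dataclasses import dataclass

@dataclass
class VirtualKey:
    """Toy model of a virtual key's enforcement-relevant state."""
    budget: float                 # remaining spend allowance
    rpm_limit: int                # requests per minute
    requests_this_minute: int = 0
    enabled: bool = True

def authorize(key: VirtualKey, estimated_cost: float) -> bool:
    """Return True and record usage if the request may proceed."""
    if not key.enabled:
        return False              # key disabled or expired
    if key.requests_this_minute >= key.rpm_limit:
        return False              # RPM limit reached
    if estimated_cost > key.budget:
        return False              # would exceed remaining budget
    key.requests_this_minute += 1
    key.budget -= estimated_cost
    return True
```

A real implementation would also reset the minute window, enforce RPD, and emit the budget/expiration notifications mentioned above.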
The WebUI is a .NET Blazor web application for system configuration:
- Database Context: Stores all configuration and usage data.
- Pages: Home/Dashboard, Configuration, Routing, Chat, Virtual Keys, Model Costs, System Info, etc.
- Navigation: Logical grouping for Core, Configuration, Keys & Costs, System.
Request Flow:
- Client sends request (OpenAI-compatible, with virtual key)
- System authenticates and validates key, enforces limits
- Router selects model/provider based on mapping and strategy
- Request is formatted and sent to provider
- Response is returned (streaming or standard)
- Usage and spend are tracked; notifications triggered as needed
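The steps above compose into a simple pipeline; a sketch with each stage as an injected callable (the stage names are this sketch's, not Conduit's actual interfaces):

```python
def handle_request(request: dict, authenticate, route, call_provider, track_usage):
    """Sketch of the request flow: authenticate, route, call, track."""
    key = authenticate(request["virtual_key"])     # validate key, enforce limits
    deployment = route(request["model"])           # router picks a deployment
    response = call_provider(deployment, request)  # provider-specific call
    track_usage(key, request, response)            # spend tracking, notifications
    return response
```

Keeping each stage behind an interface is what lets routing strategies and providers be swapped without touching the pipeline.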
Configuration Flow:
- Admin configures providers, models, routing via WebUI
- Configuration is stored in the database and hot-reloaded
- Virtual keys and budgets are managed from the UI
Security:
- Master key for privileged/admin operations
- Virtual key spend and rate limit enforcement
- Secure storage of provider and virtual keys
- Comprehensive request logging and audit trails
Dependencies and integrations:
- LLM provider APIs (OpenAI, Anthropic, Gemini, etc.) via HTTP/HTTPS
- Database for configuration and usage
- Blazor WebUI for management and analytics
- Notification system for user/system alerts
Key features:
- OpenAI API compatibility: Use existing OpenAI SDKs and tools
- Multimodal/vision support for compatible models/providers
- Easily add new providers, models, or routing strategies
- Designed for containerization and scalable deployment (official public Docker images available)