Thank you for your interest in KeyCompute. Contributions are welcome across code, documentation, bug reports, feature requests, and project feedback.
This guide is written for the current repository layout and workflow. If a command here conflicts with the codebase, follow the codebase and open a documentation fix.
- Fix bugs or improve existing behavior
- Add tests for uncovered scenarios
- Improve documentation and examples
- Add or refine provider integrations
- Report bugs, edge cases, or usability issues
- Propose new features or architectural improvements
- Read README.md for the product overview and local development commands.
- Search existing issues and pull requests before starting duplicate work.
- Keep changes focused. Small, reviewable pull requests are much easier to merge.
- Backend: Rust, Axum, Tokio
- Frontend: Dioxus 0.7
- Database: PostgreSQL 16+
- Cache / rate limiting: Redis 7+
- Rust stable toolchain
- Docker and Docker Compose for local services
- `dioxus-cli` for frontend development
Install the Dioxus CLI if you plan to work on the web frontend:
```sh
curl -sSL http://dioxus.dev/install.sh | sh
```

This is the easiest way to get the project running end to end.
```sh
git clone https://github.com/keycompute/keycompute.git
cd keycompute
cp .env.example .env
# Edit .env and replace all placeholder secrets before real deployments
docker compose up -d
```

Use this setup when you want a faster edit / run loop.
- Copy the local config template:

  ```sh
  cp config.toml.example config.toml
  ```

- Start the local dependencies:

  ```sh
  docker compose up -d postgres redis
  ```

- Start the backend:

  ```sh
  cargo run -p keycompute-server --features redis
  ```

- Start the web frontend in another terminal:

  ```sh
  API_BASE_URL=http://localhost:3000 dx serve --package web --platform web --addr 0.0.0.0
  ```

Notes:
- The backend automatically runs embedded SQLx migrations on startup. You do not need a separate migration binary for normal development.
- `config.toml` is intended for local development. Environment variables override values from `config.toml`.
- If you work on password reset emails or public invite links, set `APP_BASE_URL` explicitly.
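The precedence described above (environment variable, then `config.toml`, then a built-in default) can be sketched as a tiny Rust helper. This is illustrative only, not the real `keycompute-config` logic:

```rust
// Resolve a setting with the precedence: env var > config.toml value > default.
// `env_value` stands in for std::env::var, `config_value` for the parsed file.
fn resolve(env_value: Option<&str>, config_value: Option<&str>, default: &str) -> String {
    env_value
        .or(config_value)
        .unwrap_or(default)
        .to_string()
}

fn main() {
    // An APP_BASE_URL exported in the shell wins over the file value.
    assert_eq!(
        resolve(Some("https://staging.example"), Some("http://localhost:3000"), "http://localhost:3000"),
        "https://staging.example"
    );
    // With no environment override, the config.toml value is used.
    assert_eq!(
        resolve(None, Some("http://localhost:3000"), "http://localhost:8080"),
        "http://localhost:3000"
    );
    // With neither, the built-in default applies.
    assert_eq!(resolve(None, None, "http://localhost:8080"), "http://localhost:8080");
    println!("ok");
}
```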
```
keycompute/
├── crates/
│   ├── keycompute-server/        # Axum HTTP service entrypoint
│   ├── keycompute-db/            # Database access and embedded migrations
│   ├── keycompute-auth/          # Authentication and authorization
│   ├── keycompute-routing/       # Model and account routing
│   ├── keycompute-billing/       # Billing and settlement
│   ├── keycompute-distribution/  # Referral distribution
│   ├── keycompute-runtime/       # Runtime state and store backends
│   ├── keycompute-config/        # Config loading and validation
│   ├── keycompute-observability/ # Logging and metrics
│   ├── keycompute-emailserver/   # Email delivery
│   ├── llm-gateway/              # Provider execution gateway
│   └── llm-provider/             # Provider adapters
├── packages/
│   ├── web/                      # Dioxus web app
│   ├── ui/                       # Shared UI components
│   ├── client-api/               # Client API package and tests
│   ├── desktop/                  # Dioxus desktop app
│   └── mobile/                   # Dioxus mobile app
├── nginx/                        # Reverse proxy config
├── docker-compose.yml
└── .github/workflows/            # CI checks
```
Please run the same core checks used by CI before opening a pull request.
```sh
cargo fmt --all --check
cargo clippy --workspace --exclude desktop --exclude mobile --all-targets --all-features --future-incompat-report -- -D warnings
cargo test --lib --workspace --exclude desktop --exclude mobile --verbose
cargo test --package client-api --tests --verbose
cargo test --package integration-tests --tests --verbose
cargo build --workspace --exclude desktop --exclude mobile --verbose
```

If your change touches desktop or mobile, run the relevant package commands in addition to the shared workspace checks.
- Follow the existing crate boundaries and dependency direction.
- Prefer small, composable changes over broad refactors.
- Add or update tests when behavior changes.
- Keep logging and error messages actionable.
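"Actionable" errors tell the reader what failed and what to do next. A minimal sketch (the error type and messages are hypothetical, not from the codebase):

```rust
use std::fmt;

// Hypothetical error type for illustration only.
#[derive(Debug)]
struct ConfigError {
    key: String,
    hint: String,
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Actionable: name what failed and suggest a concrete next step.
        write!(f, "missing config key `{}`: {}", self.key, self.hint)
    }
}

fn main() {
    let err = ConfigError {
        key: "APP_BASE_URL".into(),
        hint: "set it in config.toml or export APP_BASE_URL before starting the server".into(),
    };
    // "missing config key `APP_BASE_URL`: set it in config.toml or export
    // APP_BASE_URL before starting the server" — versus an unhelpful "bad config".
    println!("{err}");
}
```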
- This repository uses Dioxus 0.7. Do not introduce older Dioxus APIs.
- Keep shared UI logic in `packages/ui` when it is platform-agnostic.
- Keep web-specific dependencies and behavior in `packages/web`.
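"Platform-agnostic" logic means pure code with no web-, desktop-, or mobile-only dependencies. A hypothetical example of the kind of helper that could live in `packages/ui` (the function name and behavior are illustrative, not from the repository):

```rust
// Pure formatting helper: no DOM, no platform APIs, usable from web,
// desktop, and mobile alike.
fn format_token_count(tokens: u64) -> String {
    match tokens {
        0..=999 => tokens.to_string(),
        1_000..=999_999 => format!("{:.1}k", tokens as f64 / 1_000.0),
        _ => format!("{:.1}M", tokens as f64 / 1_000_000.0),
    }
}

fn main() {
    assert_eq!(format_token_count(950), "950");
    assert_eq!(format_token_count(12_300), "12.3k");
    assert_eq!(format_token_count(2_500_000), "2.5M");
    println!("ok");
}
```

Anything that reaches for `web-sys`, browser storage, or window handles belongs in the platform package instead.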
- Add new migration files under `crates/keycompute-db/src/migrations/`.
- Update the relevant data models and query code in `crates/keycompute-db/src/models/`.
- Verify the server still boots cleanly, because migrations run during startup.
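A toy sketch of how embedded, name-ordered migrations are typically applied at startup. This assumes `keycompute-db` runs migration files in lexicographic order; the file names and SQL below are hypothetical, not real repository files:

```rust
// Apply embedded migrations in name order and return an application log.
// Real migration runners also track which migrations already ran.
fn apply_migrations(mut migrations: Vec<(&str, &str)>) -> Vec<String> {
    migrations.sort_by(|a, b| a.0.cmp(b.0));
    migrations
        .into_iter()
        .map(|(name, _sql)| format!("applied {name}"))
        .collect()
}

fn main() {
    // Declaration order does not matter; the sorted name decides run order.
    let log = apply_migrations(vec![
        ("0002_add_api_keys.sql", "CREATE TABLE api_keys (id BIGINT);"),
        ("0001_init.sql", "CREATE TABLE users (id BIGINT);"),
    ]);
    assert_eq!(log[0], "applied 0001_init.sql");
    assert_eq!(log[1], "applied 0002_add_api_keys.sql");
    println!("ok");
}
```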
When adding a new LLM provider:
- Create or update the provider crate under `crates/llm-provider/`.
- Implement the traits from `keycompute-provider-trait`.
- Register the provider in `llm-gateway` and any required server wiring.
- Add tests for request mapping, error handling, and any provider-specific behavior.
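The adapter shape looks roughly like the sketch below. The `Provider` trait here is a stand-in: the real interface lives in `keycompute-provider-trait`, and its actual methods and types will differ:

```rust
// Stand-in trait; the real one comes from keycompute-provider-trait.
trait Provider {
    fn name(&self) -> &'static str;
    // Map an internal prompt into the provider's wire format.
    fn map_request(&self, prompt: &str) -> String;
}

// Hypothetical adapter, i.e. the kind of type added under crates/llm-provider/.
struct ExampleProvider;

impl Provider for ExampleProvider {
    fn name(&self) -> &'static str {
        "example"
    }
    fn map_request(&self, prompt: &str) -> String {
        // A real adapter would build the provider's actual JSON body here.
        format!("{{\"model\":\"example-1\",\"prompt\":{prompt:?}}}")
    }
}

fn main() {
    let provider = ExampleProvider;
    assert_eq!(provider.name(), "example");
    // Request-mapping behavior like this is what the provider tests should cover.
    assert!(provider.map_request("hi").contains("\"prompt\":\"hi\""));
    println!("ok");
}
```

Registration in `llm-gateway` would then hand requests to this type through the shared trait object.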
- Use clear commit messages that explain what changed and why.
- Keep one logical change per commit whenever practical.
- Prefer English commit messages for consistency across the repository.
Example:
```
feat: add DeepSeek provider streaming support

- implement provider client
- normalize streaming chunks
- add tests and update docs
```
- Fork the repository and create a branch from `main`.
- Make your changes and run the relevant checks.
- Push your branch.
- Open a pull request with a clear description.
- The code is formatted with `cargo fmt`
- `cargo clippy` passes for the affected scope
- Relevant tests were added or updated
- Relevant test suites pass locally
- Documentation was updated when behavior or setup changed
Include:
- What changed
- Why it changed
- How it was tested
- Any follow-up work or known limitations
Screenshots or API examples are helpful for UI and behavior changes.
Please include:
- A clear description of the problem
- Steps to reproduce
- Expected behavior and actual behavior
- Environment details such as OS, Rust version, and how you started the app
- Relevant logs, traces, screenshots, or error messages
Please describe:
- The use case
- The expected behavior
- Why the feature would be valuable
- Any proposed implementation direction if you already have one
- Use GitHub Issues for bug reports and feature discussions
- Use pull requests for concrete code and documentation changes
- If you find inaccurate docs, documentation-only pull requests are welcome
By contributing to this repository, you agree that your contributions will be licensed under the same MIT License that covers the project.