A full-stack "live race day" F1 application that displays real-time race data, car positions, and leaderboards.
RaceTime sources data from OpenF1 and provides:
- Live track view with car positions and movement trails
- Real-time leaderboard (P1-P22) with gap, team, and tyre compound
- Smooth car movement via a client-side adaptive playback buffer
- Countdown screen to the next scheduled race
The app supports two ingest modes. During development, a dummy poller generates fake race data using the full 22-driver 2026 season grid. With a premium (sponsor) OpenF1 subscription, an MQTT worker connects to OpenF1's MQTT broker for true real-time GPS data.
In both modes the downstream contract is identical: a Redis queue of the 15 most recent snapshots, consumed by stateless API replicas, interpolated server-side for smooth animation, and streamed to the browser via SSE.
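The rolling-queue contract can be illustrated with an in-memory stand-in for the Redis list operations (the real services issue LPUSH/LTRIM against `live:snapshots`; the `SnapshotQueue` class below is only a sketch of those semantics, not repo code):

```python
from collections import deque

MAX_SNAPSHOTS = 15  # rolling window size used throughout this README

class SnapshotQueue:
    """In-memory stand-in for the Redis list semantics:
    LPUSH live:snapshots <json>  followed by  LTRIM live:snapshots 0 14."""

    def __init__(self, maxlen: int = MAX_SNAPSHOTS):
        self._items: deque = deque(maxlen=maxlen)

    def lpush(self, snapshot: dict) -> None:
        # LPUSH prepends: index 0 is always the newest snapshot.
        # deque's maxlen drops the oldest entry, mirroring LTRIM.
        self._items.appendleft(snapshot)

    def lrange(self, start: int = 0, stop: int = -1) -> list:
        # Mirrors LRANGE semantics: stop == -1 means "to the end".
        items = list(self._items)
        return items if stop == -1 else items[start:stop + 1]
```

New clients reading the full range on connect is what gives the reconnection resilience described above: the last 15 snapshots are always available regardless of which API replica serves the request.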
- `api` (replicas=3): Stateless FastAPI service. Reads snapshots from Redis, runs them through a spline interpolation layer, and streams interpolated frames to connected browsers via SSE (`/api/live/stream`). Also exposes `/api/health`, `/api/drivers`, and `/api/schedule`. Polls Redis on each SSE tick.
- `frontend`: React Vite SPA. On load, fetches driver colours (`/api/drivers`) and next race info (`/api/schedule`). Before a live session, shows a countdown to the next race. When live, opens an `EventSource` connection to the SSE endpoint, buffers received snapshots in an adaptive local playback queue, and renders them oldest-first for smooth car movement. Supports 22 circuits (full 2026 calendar).
- `redis`: Shared cache and source of truth. Stores a rolling queue of the 15 most recent snapshots (LPUSH + LTRIM). Acts as the handoff point between the ingest worker and the API, and provides reconnection resilience — new clients receive the last 15 snapshots immediately on connect. Also caches the schedule (`static:schedule`, 12h TTL) and a heartbeat key (`live:heartbeat`, 10s TTL).
Ingest (one of):
- `poller` (replicas=1) — Dev mode (`--profile dev`): Generates fake race data for the 22-driver 2026 F1 grid every 1.25s and pushes it to the Redis snapshot queue. Reads a circuit SVG to map lap progress to normalized coordinates; includes a 6-position trail per driver.
- `mqtt-worker` (replicas=1) — Premium tier (`--profile premium`): Subscribes to OpenF1's MQTT broker, assembles race state from GPS location messages, normalizes coordinates using pre-built circuit bounds, and flushes a snapshot to Redis at up to 2 Hz. Enabled via `docker-compose --profile premium up`.
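A minimal sketch of the dev poller's position logic, substituting a parametric circle for the real circuit-SVG path lookup (function names and the circle mapping are illustrative, not repo code):

```python
import math

TRAIL_LEN = 6  # positions kept per driver, oldest → newest

def fake_position(lap_progress: float) -> tuple:
    """Stand-in for the SVG-path lookup: map lap progress in [0, 1)
    onto a circular "circuit" inside normalized [0, 1] coordinates."""
    angle = 2 * math.pi * (lap_progress % 1.0)
    return 0.5 + 0.4 * math.cos(angle), 0.5 + 0.4 * math.sin(angle)

def tick(trail: list, lap_progress: float) -> list:
    """Advance one driver by one poll tick, keeping the last 6 positions."""
    trail = trail + [fake_position(lap_progress)]
    return trail[-TRAIL_LEN:]
```

Running `tick` once per 1.25s poll per driver yields both the current `x_norm`/`y_norm` (the trail's last entry) and the 6-position trail the frontend draws.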
Interpolator:
`interpolator.py` — Lives inside the API process. Sits between the Redis snapshot queue and the SSE output. Buffers a rolling window of 8 snapshots and fits a natural cubic spline per driver using numpy. Emits interpolated frames at ~0.24s intervals (~4 fps effective output), smoothing the ~2 Hz ingest rate into fluid browser animation.
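A self-contained sketch of such a natural cubic spline fit using only numpy (a dense tridiagonal solve for the knot second derivatives; the repo's `interpolator.py` may differ in detail):

```python
import numpy as np

def natural_cubic_spline(t, y):
    """Return a callable evaluating the natural cubic spline through (t, y).
    "Natural" means the second derivative is zero at both endpoints."""
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    n = len(t) - 1
    h = np.diff(t)
    # Linear system for the second derivatives M[0..n]; M[0] = M[n] = 0.
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        b[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, b)  # fine for an 8-snapshot window

    def evaluate(x):
        x = np.asarray(x, dtype=float)
        i = np.clip(np.searchsorted(t, x) - 1, 0, n - 1)
        dt, ds = t[i + 1] - x, x - t[i]
        return (M[i] * dt**3 + M[i + 1] * ds**3) / (6 * h[i]) \
            + (y[i] / h[i] - M[i] * h[i] / 6) * dt \
            + (y[i + 1] / h[i] - M[i + 1] * h[i] / 6) * ds

    return evaluate
```

Per driver, one spline is fitted for x and one for y over the 8-snapshot window, then both are evaluated at the ~0.24s output frame times.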
REST Client:
`openf1.py` — Thin async HTTP client (httpx) used by `mqtt_worker.py` during its REST bootstrap phase to fetch the current session, drivers, positions, and laps from the OpenF1 REST API before MQTT deltas start arriving.
DATA SOURCE
[dev] dummy poller → fake positions every 1.25s (22 drivers, trail included)
[premium] OpenF1 MQTT broker (mqtt.openf1.org:8883)
~3.7 Hz GPS per car → mqtt_worker.py normalizes & assembles
│ flush capped at 2 Hz (SNAPSHOT_INTERVAL = 0.5s)
▼ LPUSH + LTRIM
┌──────────────
│ REDIS
│ live:snapshots (list, last 15)
│ live:heartbeat (string, 10s TTL)
│ static:schedule (string, 12h TTL)
│ • Decouples ingest from API
│ • Survives API restarts
│ • Single source of truth for
│ all API replicas
└──────────────
│ read on each SSE tick
▼
interpolator.py (inside API process)
• Buffers 8-snapshot rolling window
• Fits natural cubic spline per driver
• Emits frames at ~0.24s intervals
│
▼
FastAPI GET /api/live/stream
(SSE — long-lived HTTP, server pushes)
│ text/event-stream
▼
Browser EventSource
• Receives interpolated snapshots
• Adaptive playback queue (target depth: 15)
• Adjusts frame interval (220–280ms) to drain queue
• Renders oldest-first → smooth animation
• Auto-reconnects on drop
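The browser's adaptive pacing in the last stage reduces to picking a render interval from the buffer depth. A sketch of one plausible mapping (the linear ramp is an assumption; the real client may use a different curve):

```python
TARGET_DEPTH = 15          # snapshots the client tries to keep buffered
MIN_MS, MAX_MS = 220, 280  # render-interval bounds from the diagram above

def playback_interval_ms(queue_depth: int) -> float:
    """Render faster when the queue is deep (drain it toward the target),
    slower when it runs low (let it refill). Illustrative linear mapping."""
    if queue_depth >= TARGET_DEPTH:
        return float(MIN_MS)
    if queue_depth <= 0:
        return float(MAX_MS)
    return MAX_MS - (MAX_MS - MIN_MS) * (queue_depth / TARGET_DEPTH)
```

Because frames are consumed oldest-first at this adjusted cadence, short network stalls are absorbed by the buffer instead of freezing the cars on screen.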
| Method | Path | Description |
|---|---|---|
| GET | `/api/health` | Redis connectivity, heartbeat freshness, snapshot staleness → `{status, redis, heartbeat}` |
| GET | `/api/live/stream` | SSE stream of interpolated snapshots (`text/event-stream`) |
| GET | `/api/drivers` | Current driver list with team name and colour |
| GET | `/api/schedule` | Next race info: circuit, date, session name, `is_live` flag |
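The SSE protocol behind `/api/live/stream` is plain text: each message is a `data:` line followed by a blank line. A sketch of the server side (the `next_frame` callable is hypothetical; the real endpoint pulls from the interpolator):

```python
import asyncio
import json

FRAME_INTERVAL = 0.24  # seconds between pushed frames (~4 fps)

def sse_frame(snapshot: dict) -> str:
    """Serialize one snapshot as a text/event-stream message:
    a single `data:` line terminated by a blank line."""
    return f"data: {json.dumps(snapshot)}\n\n"

async def frame_stream(next_frame):
    """Async generator suitable for FastAPI's StreamingResponse with
    media_type='text/event-stream': push, sleep, repeat."""
    while True:
        yield sse_frame(next_frame())
        await asyncio.sleep(FRAME_INTERVAL)
```

On the client, `EventSource` parses these frames and delivers each `data:` payload as a message event, with automatic reconnection built into the browser API.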
Premium switchover:
docker-compose --profile premium up

Everything downstream (Redis → interpolator → SSE → browser) is untouched.
When running with a premium OpenF1 subscription, the mqtt-worker replaces the dummy poller and follows this pipeline:
- Reads `OPENF1_USERNAME` and `OPENF1_PASSWORD` from environment / secrets
- POSTs via `openf1.py` to `https://api.openf1.org/token` to obtain an access token and expiry (typically 3600s)
- Refreshes at ~55 minutes; retries with exponential backoff on failure; falls back to full re-auth on token rejection
On startup (and on restart), the worker performs a one-time REST fetch via `openf1.py` of:
- Current session metadata
- Driver list
- Latest positions and lap data
This ensures the in-memory state has a valid base before MQTT deltas arrive.
- Connects to `mqtt.openf1.org:8883` (MQTT over TLS)
- Authenticates with `OPENF1_USERNAME` as username and the OAuth2 token as MQTT password
- Subscribes to live topics: `v1/location`, `v1/laps`, `v1/sessions`, `v1/drivers`
- Reconnects with exponential backoff and re-subscribes on every reconnect
The worker maintains a current race state map:
- Latest position per driver (from `v1/location`)
- Latest lap/position data per driver (from `v1/laps`)
- Current session metadata (from `v1/sessions`)
- Driver metadata (from `v1/drivers`)
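One way to sketch that state-map update (the payload field names below are assumptions for illustration, not the documented OpenF1 message schema):

```python
import json

# Topics whose payloads are keyed per driver in the state map.
PER_DRIVER_TOPICS = {"v1/location", "v1/laps", "v1/drivers"}

def apply_message(state: dict, topic: str, payload: bytes) -> dict:
    """Merge one MQTT message into the in-memory race-state map.
    Session metadata is replaced whole; per-driver topics keep only the
    latest message per driver_number."""
    data = json.loads(payload)
    if topic == "v1/sessions":
        state["session"] = data
    elif topic in PER_DRIVER_TOPICS:
        bucket = state.setdefault(topic.rsplit("/", 1)[1], {})
        bucket[data["driver_number"]] = data
    return state
```

Keeping only the latest message per driver per topic is what makes snapshot assembly cheap: each flush just reads the current map rather than replaying a message history.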
On `v1/location` messages, the worker assembles a Snapshot and pushes it to the Redis list (`LPUSH live:snapshots` + `LTRIM` to keep the last 15). Flushes are rate-limited to 2 Hz (0.5s minimum interval). The snapshot contains:
- `timestamp`
- `session` (circuit, name, session_key)
- `positions` — per driver: `driver_number`, `driver_code`, `x_norm`, `y_norm`, plus a `trail` of the last 6 normalized positions (oldest → newest)
- `leaderboard` — position, gap, team, tyre compound per driver
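The repo defines the snapshot as Pydantic models; a dependency-free dataclass sketch of the same shape (field comments restate the list above):

```python
from dataclasses import dataclass, field

@dataclass
class DriverPosition:
    driver_number: int
    driver_code: str
    x_norm: float
    y_norm: float
    # Last 6 normalized (x, y) pairs, oldest → newest.
    trail: list = field(default_factory=list)

@dataclass
class Snapshot:
    timestamp: float
    session: dict     # circuit, name, session_key
    positions: list   # list of DriverPosition
    leaderboard: list # per driver: position, gap, team, tyre compound
```

Because both the dummy poller and the MQTT worker emit this one shape, the API, interpolator, and frontend never need to know which ingest mode produced a snapshot.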
- Writes `live:heartbeat` to Redis with a 10s TTL on each healthy flush cycle
- `/api/health` checks Redis connectivity, heartbeat freshness, and snapshot staleness → `{status: "ok"|"stale"|"degraded", redis: "ok"|"down", heartbeat: "ok"|"missing"}`
- During periods with no active F1 session, the worker writes an explicit "no active session" status so the frontend can display an appropriate state rather than stale data
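The `/api/health` decision might combine those three checks roughly as follows (the precedence and the staleness threshold are assumptions, chosen to match the response shape above):

```python
STALE_AFTER = 10.0  # seconds; matches the live:heartbeat TTL

def health_status(redis_ok: bool, heartbeat_ok: bool,
                  snapshot_age: float) -> dict:
    """Fold the three checks into the /api/health response shape."""
    if not redis_ok:
        # Without Redis nothing else can be checked.
        return {"status": "degraded", "redis": "down", "heartbeat": "missing"}
    if not heartbeat_ok or snapshot_age > STALE_AFTER:
        # Redis reachable, but ingest has gone quiet.
        return {"status": "stale", "redis": "ok",
                "heartbeat": "ok" if heartbeat_ok else "missing"}
    return {"status": "ok", "redis": "ok", "heartbeat": "ok"}
```

Tying the staleness threshold to the heartbeat TTL keeps the two signals consistent: if the worker misses a flush cycle, both go unhealthy together.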
- Docker & Docker Compose
- Python 3.11+
- Node.js (for frontend development)
- Clone the repository
- Start the services with the dev ingest worker:
docker-compose --profile dev up
The default profile (`docker-compose up`) starts only `redis`, `api`, and `frontend` — no data will be generated without `--profile dev`.
- Access the app at http://localhost:5173
- Access the API at http://localhost:8000
- Check health: http://localhost:8000/api/health
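The profile behaviour above maps onto Compose `profiles` roughly like this (an illustrative fragment with build/image details omitted, not the repo's actual file):

```yaml
services:
  redis:
    image: redis:7-alpine
  api:
    depends_on: [redis]
  frontend:
    depends_on: [api]
  poller:
    profiles: ["dev"]        # started only by: docker-compose --profile dev up
    depends_on: [redis]
  mqtt-worker:
    profiles: ["premium"]    # started only by: docker-compose --profile premium up
    depends_on: [redis]
```

Services without a `profiles` key always start, which is why a bare `docker-compose up` brings up the stack with no ingest worker.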
- Snapshot schema (Pydantic models)
- Redis snapshot caching
- Live snapshot API endpoints (`/api/health`, `/api/live/snapshot`)
- Dummy poller service (generates fake race data)
- Track visualization component
- Leaderboard component
- API polling integration
- Add `trail` field to `DriverPosition` schema (backend + frontend types)
- `circuit_bounds.py` — GPS normalization using `scripts/output/bounds.json`
- Redis snapshot queue (LPUSH + LTRIM, last 15) replacing single key
- SSE endpoint `GET /api/live/stream` replacing polling
- Frontend: swap `setInterval` → `EventSource` with client-side playback queue
- `mqtt_worker.py` — full MQTT worker shell (ready for credentials)
- Update dummy poller to populate trail data
- Docker Compose `premium` profile for `mqtt-worker` service
- K8s manifests for all services
- Traefik ingress routing
- GitHub Actions workflows
- Automated linting, testing, and Docker image builds
- GHCR push with semantic tagging
- Prometheus metrics collection
- Grafana dashboards
- Wire MQTT credentials into `mqtt_worker.py`
- REST bootstrap on startup (session, drivers, positions)
- MQTT ingest with reconnect logic
- `live:heartbeat` key with TTL
- Staleness and heartbeat checks in `/api/health`
- No-active-session handling
- Redis running locally on `localhost:6379`
- Python 3.11+ with dependencies installed
- Start Redis (if not running):

  ```sh
  docker run -d -p 6379:6379 redis:7-alpine
  ```

- Install dependencies:

  ```sh
  cd backend
  pip install -r requirements.txt
  ```

- Start the API server:

  ```sh
  cd backend
  PYTHONPATH=. python -m uvicorn app.main:app --port 8000
  ```

- Start the poller (in a new terminal):

  ```sh
  cd backend
  PYTHONPATH=. python -m app.poller
  ```

- Verify endpoints:

  ```sh
  # Health check (should return {"status":"ok","redis":"ok","heartbeat":"ok"})
  curl http://localhost:8000/api/health
  # Live snapshot (should return race data JSON)
  curl http://localhost:8000/api/live/snapshot
  ```

- Run tests:

  ```sh
  cd backend
  PYTHONPATH=. python -m pytest tests/ -v
  ```
- Backend: FastAPI, Python 3.11, Redis
- Frontend: React 19, Vite, TypeScript
- Data Ingest: MQTT (aiomqtt) for premium, circuit-path simulation for dev
- Infrastructure: Docker, Kubernetes (kind), Traefik
- Monitoring: Prometheus, Grafana
- CI/CD: GitHub Actions, GHCR