This repository contains the same demo site implemented in multiple frameworks and deployed to Cloudflare (free plan).
Framework implementations included:
- React + Vite (pre-rendered MPA) – `react-vite`
- React + Vite (client-only SPA) – `react-spa`
- Astro
- Next.js (OpenNext on Workers)
- TanStack Start
- SvelteKit
- Qwik (Qwik City)
- Solid (SolidJS + Vite)
The app is intentionally hybrid:
- SPA-like section (`/chart`) – TradingView-ish: interactive canvas chart, symbol switching without a full page reload.
- “App pages” section (`/stays`, `/stays/:id`) – Airbnb-ish: listing index + listing detail pages.
- SSG blog (`/blog`, `/blog/:slug`) – blog index + post pages.
All apps use the same dataset (a shared workspace package) so the UX and content stay comparable.
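Because the dataset is a workspace package, every app reads the same content from one source. The import below is illustrative only; the package name and exports are assumptions, not the actual API of `packages/dataset`.

```ts
// Hypothetical usage of the shared dataset package (names are assumptions).
import { listings, posts, generatePriceSeries } from "@demo/dataset";

// Same content everywhere: listings back /stays, posts back /blog,
// and the price series generator feeds the /chart page.
console.log(listings.length, posts.length);
console.log(generatePriceSeries("AAPL").slice(0, 5));
```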
For each framework deployment we measure the following (synthetically, in a controlled browser; a minimal collection sketch follows this list):
- TTFB-ish document timing (from the Navigation Timing API)
- Initial load (DOMContentLoaded/load, LCP, etc., where available)
- Repeat view / subsequent load (reload within the same browser context)
- Client CPU + memory (CDP Performance metrics + JS heap)
- Chart interaction latency (symbol/timeframe switch + draw time on `/chart`)
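Concretely, a single measurement pass can be collected with Playwright plus a CDP session along these lines. This is a minimal sketch, not the actual `bench/` runner; the function name and the exact fields returned are assumptions.

```ts
import { chromium } from "playwright";

// Sketch: one navigation, read Navigation Timing from the page and
// JS heap size from the CDP Performance domain.
async function collectOnce(url: string) {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();

  // The CDP Performance domain exposes runtime metrics such as JSHeapUsedSize.
  const cdp = await context.newCDPSession(page);
  await cdp.send("Performance.enable");

  await page.goto(url, { waitUntil: "load" });

  // Navigation Timing API: TTFB-ish document timing plus load milestones.
  const nav = await page.evaluate(() => {
    const entry = performance.getEntriesByType("navigation")[0] as PerformanceNavigationTiming;
    return {
      ttfb: entry.responseStart,
      domContentLoaded: entry.domContentLoadedEventEnd,
      load: entry.loadEventEnd,
    };
  });

  const { metrics } = await cdp.send("Performance.getMetrics");
  const jsHeapBytes = metrics.find((m) => m.name === "JSHeapUsedSize")?.value;

  await browser.close();
  return { ...nav, jsHeapBytes };
}

collectOnce("https://react.example.com/").then(console.log);
```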
The benchmark runner lives in `bench/`.
See `docs/metrics-glossary.md` for definitions and metric sources.
Notes:
- Browser-based “response time” includes network + TLS + CDN edge variance.
- To reduce noise, the runner runs multiple iterations and summarizes medians (a minimal summarization sketch follows these notes).
- You can run against local `wrangler dev` or against `*.workers.dev`.
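As a rough illustration of that summarization step (not the runner's actual code; the function name and URL are placeholders), timing a URL over several iterations and reporting the median looks like this:

```ts
// Illustrative only: repeat a timed fetch and report the median, which is
// less sensitive to outliers than the mean. Uses Node.js 20+ global fetch.
async function medianFetchTime(url: string, iterations = 10): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fetch(url); // includes network + TLS + CDN edge variance, as noted above
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const mid = Math.floor(samples.length / 2);
  return samples.length % 2 ? samples[mid] : (samples[mid - 1] + samples[mid]) / 2;
}

medianFetchTime("https://react.example.com/stays").then((ms) =>
  console.log(`median: ${ms.toFixed(1)} ms`),
);
```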
Prerequisites:

- Node.js 20+
- pnpm (recommended) or npm/yarn
- Cloudflare account (free plan)
- Wrangler CLI (`pnpm add -g wrangler` or use `npx wrangler`)
Install and build:

```
pnpm install
pnpm -r build
```

Each app has its own dev / preview scripts. Examples:

```
pnpm -C apps/react-vite dev
pnpm -C apps/astro dev
pnpm -C apps/sveltekit dev
```

Each app has a deploy script that calls `wrangler deploy` (or the framework’s Cloudflare adapter command).
Example:
```
pnpm -C apps/react-vite deploy
```

Edit `bench/bench.config.json` to point to your deployed URLs.
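For orientation, “point to your deployed URLs” means something along the lines of the sketch below; the keys are hypothetical, so check `bench/bench.config.json` itself for the real schema.

```jsonc
// Hypothetical shape only -- the actual config may use different keys.
{
  "targets": {
    "react-vite": "https://react.example.com",
    "astro": "https://astro.example.com",
    "sveltekit": "https://sveltekit.example.com"
  },
  "iterations": 10
}
```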
Then run:

```
pnpm -C bench run
```

Profiles:
- `--profile parity` (forces chart data fetches to `no-store`; see the sketch below)
- `--profile idiomatic` (uses framework defaults)
- `--profile mobile-cold` (fast-4g throttling + CPU slowdown, warmup disabled)
- `--profile both` (default)
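The intent of `parity` is that every framework pays the same network cost for chart data rather than benefiting from its default caching. A minimal sketch of the idea (the endpoint path and wiring are illustrative assumptions, not the runner's actual mechanism):

```ts
// Illustrative only: under a "parity" run, chart data is fetched with
// cache: "no-store" so no framework wins by caching more aggressively;
// under "idiomatic" the framework/browser defaults are left alone.
export async function loadPrices(symbol: string, parity: boolean) {
  const init: RequestInit = parity ? { cache: "no-store" } : {};
  const res = await fetch(`/api/prices?symbol=${encodeURIComponent(symbol)}`, init);
  return res.json();
}
```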
This produces:
- `bench/results.v2.json`
- `bench/results.v2.md`
Throughput (concurrency) check:
```
pnpm bench:load -- --path /stays --duration 15000 --concurrency 50
```

Repo layout:

- `packages/dataset` – shared content (listings + blog posts + price series generator)
- `packages/ui` – tiny shared CSS + helpers (optional)
- `apps/*` – one app per framework
- `bench/` – Playwright benchmark runner
For more stable comparisons:
- Use the same custom domain pattern (one per framework), e.g. `react.example.com`, `next.example.com`, ...
- Disable Cloudflare features that can distort measurements (e.g. Rocket Loader).
- Run benchmarks from the same machine/network.
- Run at least 10 iterations and compare medians.
License: MIT