A complete end-to-end demonstration of sending OpenTelemetry traces and logs from a frontend and backend application through an OpenTelemetry Collector to Sentry.
This demo shows the complete telemetry flow:
```
┌──────────────┐        ┌──────────────┐        ┌──────────────────┐        ┌───────────┐
│   Frontend   │───────▶│   Backend    │───────▶│  OTEL Collector  │───────▶│  Sentry   │
│   (React)    │  HTTP  │  (Express)   │  OTLP  │   (Aggregator)   │  OTLP  │   (APM)   │
│   + OTEL     │        │   + OTEL     │        │    (Exporter)    │        │           │
└──────────────┘        └──────────────┘        └──────────────────┘        └───────────┘
        │                                                 ▲
        │                                                 │
        └─────────────────────────────────────────────────┘
                             OTLP (traces)
```
- ✅ Frontend tracing: React SPA with OpenTelemetry browser instrumentation
- ✅ Backend tracing: Node.js/Express with automatic HTTP instrumentation
- ✅ Backend logging: Structured logs via OpenTelemetry Logs API
- ✅ Distributed tracing: W3C Trace Context propagation between frontend and backend
- ✅ OTLP export: All telemetry sent via OTLP/HTTP protocol
- ✅ Collector pipeline: OpenTelemetry Collector receives, processes, and exports to Sentry
- ✅ No Sentry SDKs: Pure OpenTelemetry implementation (vendor-neutral)
- ✅ Docker Compose: One command to run everything
**Frontend (`frontend/`)**
- React 18 + TypeScript + Vite
- OpenTelemetry browser SDK
- Automatic instrumentation: document load, fetch API
- Exports traces to the collector via OTLP/HTTP

**Backend (`backend/`)**
- Node.js 20 + Express + TypeScript
- OpenTelemetry Node.js SDK
- Automatic instrumentation: HTTP, Express
- Exports traces and logs to the collector via OTLP/HTTP

**OpenTelemetry Collector (`otel-collector/`)**
- Official `otel/opentelemetry-collector` image
- Receives: OTLP (HTTP on port 4318, gRPC on port 4317)
- Processes: batching, resource attributes
- Exports: to Sentry via OTLP/HTTP
1. **Page Load**: The frontend generates a document load span
2. **User Action**: The user clicks a button to call the backend
3. **Frontend Span**: Fetch instrumentation creates a span with trace context
4. **Context Propagation**: W3C headers (`traceparent`, `tracestate`) are sent to the backend
5. **Backend Span**: Express instrumentation creates a child span linked to the frontend trace
6. **Backend Logs**: Structured logs are emitted with trace correlation
7. **OTLP Export**: Both apps send telemetry to the collector
8. **Collector Processing**: The collector batches and enriches telemetry
9. **Sentry Export**: The collector sends everything to Sentry via OTLP
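The receive → batch → export pipeline lives in `otel-collector/otel-collector-config.yaml`. A minimal sketch of what such a config can look like (the exact exporter names and env-var syntax are assumptions; check the repo's actual file):

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  otlphttp:
    endpoint: ${env:SENTRY_OTLP_ENDPOINT}
    headers:
      x-sentry-auth: ${env:SENTRY_AUTH_HEADER}
  logging: {}   # prints telemetry to stdout for local debugging

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp, logging]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp, logging]
```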
- Docker and Docker Compose
- A Sentry account with a project configured for OpenTelemetry
1. **Get your Sentry OTLP credentials:**
   - Go to Sentry Project Settings → Client Keys (DSN)
   - Look for the "OpenTelemetry (OTLP)" section
   - Copy your endpoint and authentication header

2. **Copy the environment template:**

   ```bash
   cp env.example .env
   ```

3. **Edit `.env` and add your Sentry configuration:**

   ```bash
   SENTRY_OTLP_ENDPOINT=https://o123456.ingest.sentry.io/api/7891011/integration/otlp
   SENTRY_AUTH_HEADER=sentry sentry_key=abc123def456
   ```
**Finding Your Sentry Credentials:**

- **OTLP Endpoint**
  - Format: `https://o[ORG_ID].ingest.sentry.io/api/[PROJECT_ID]/integration/otlp`
  - Don't include `/v1/logs` or `/v1/traces`; the collector adds these automatically
- **Authentication Header**
  - Format: `sentry sentry_key=YOUR_PUBLIC_KEY`
  - Extract the public key from your DSN (the part before the `@`)
  - DSN format: `https://PUBLIC_KEY@oORG_ID.ingest.sentry.io/PROJECT_ID`
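Both values are mechanical transformations of the DSN, so the mapping can be sketched in code. A hypothetical helper (not part of this repo) assuming the DSN shape documented above:

```typescript
// Hypothetical helpers, not part of the demo: derive the OTLP auth header
// and endpoint from a DSN of the form
//   https://PUBLIC_KEY@oORG_ID.ingest.sentry.io/PROJECT_ID

function sentryAuthHeaderFromDsn(dsn: string): string {
  const url = new URL(dsn);
  // The DSN "username" (everything before the @) is the public key
  return `sentry sentry_key=${url.username}`;
}

function sentryOtlpEndpointFromDsn(dsn: string): string {
  const url = new URL(dsn);
  // Hostname is e.g. o123456.ingest.sentry.io; the path holds the project ID
  const projectId = url.pathname.replace("/", "");
  return `https://${url.hostname}/api/${projectId}/integration/otlp`;
}

console.log(sentryAuthHeaderFromDsn("https://abc123@o111.ingest.sentry.io/222"));
console.log(sentryOtlpEndpointFromDsn("https://abc123@o111.ingest.sentry.io/222"));
```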
```bash
# Build and start all services
docker-compose up --build

# Or run in detached mode
docker-compose up --build -d
```

This will start:
- Frontend: http://localhost:5173
- Backend (Node.js): http://localhost:4000
- Backend (Rails): http://localhost:5000
- Collector: HTTP on port 4318, gRPC on port 4317
1. Open http://localhost:5173 in your browser

2. Click the test buttons to exercise the Node.js backend:
   - Health Check: Simple successful request
   - Fetch Data: Successful API call with data
   - Slow Request: 1-second delay to see span timing
   - Trigger Error: Intentional error to test error tracking

3. Test the Rails backend endpoints:

   ```bash
   # Ruby data
   curl http://localhost:5000/api/ruby-data

   # Slow Ruby endpoint
   curl http://localhost:5000/api/ruby-slow

   # Distributed tracing: Rails → Node.js
   curl http://localhost:5000/api/call-node
   ```
- Go to your Sentry project(s)
- Navigate to Performance (or Traces) to see distributed traces
- Look for traces with spans from:
  - `demo-frontend` service (React)
  - `demo-backend` service (Node.js)
  - `demo-rails-backend` service (Ruby) - polyglot tracing!
- Distributed Trace Example: When Rails calls Node.js (`/api/call-node`), you'll see a single trace spanning both:

  ```
  Trace abc123:
  ├─ [Rails] GET /api/call-node
  │  └─ [Rails] call-node-backend
  │     └─ [Node] HTTP GET /api/data
  │        └─ [Node] fetch-data
  ```

- Check Logs (if available) to see correlated backend logs
- Verify trace IDs match across all services
```
otel-sentry-demo/
├── docker-compose.yml              # Orchestrates all services
├── env.example                     # Sentry configuration template
├── README.md                       # This file
│
├── backend/                        # Node.js backend
│   ├── Dockerfile
│   ├── package.json
│   ├── tsconfig.json
│   └── src/
│       ├── index.ts                # Entry point (imports tracing first)
│       ├── tracing.ts              # OpenTelemetry tracing setup
│       ├── logging.ts              # OpenTelemetry logging setup
│       └── app.ts                  # Express app with instrumented routes
│
├── frontend/                       # React frontend
│   ├── Dockerfile
│   ├── package.json
│   ├── tsconfig.json
│   ├── vite.config.ts
│   ├── index.html
│   └── src/
│       ├── main.tsx                # Entry point (initializes OTEL)
│       ├── otel-setup.ts           # OpenTelemetry browser setup
│       └── App.tsx                 # UI with API call buttons
│
└── otel-collector/                 # OpenTelemetry Collector
    └── otel-collector-config.yaml  # Collector configuration
```
**Backend (local development):**

```bash
cd backend
npm install
npm run dev
```

**Frontend (local development):**

```bash
cd frontend
npm install
npm run dev
```

**Collector (local):**

```bash
docker run -p 4317:4317 -p 4318:4318 \
  -v $(pwd)/otel-collector/otel-collector-config.yaml:/etc/otel-collector-config.yaml \
  -e SENTRY_OTLP_ENDPOINT="your-endpoint" \
  -e SENTRY_AUTH_HEADER="sentry sentry_key=your-key" \
  otel/opentelemetry-collector:latest \
  --config=/etc/otel-collector-config.yaml
```

```bash
# View collector logs to see telemetry flowing through
docker-compose logs -f otel-collector

# View backend logs
docker-compose logs -f backend

# View all logs
docker-compose logs -f
```
1. **Check collector health:**

   ```bash
   curl http://localhost:13133/
   ```

2. **Verify the backend is sending telemetry:**
   - Check backend logs for "✅ OpenTelemetry initialized"
   - Look for OTLP export messages in collector logs

3. **Verify the frontend is sending telemetry:**
   - Open browser DevTools → Console
   - Look for "✅ OpenTelemetry instrumentation initialized"
   - Check the Network tab for requests to `localhost:4318/v1/traces`

4. **Test without Sentry:**
   - The collector config includes a `logging` exporter
   - Telemetry is printed to collector stdout for debugging
   - This works even if Sentry credentials are invalid
Each user action creates a distributed trace with spans like:

```
Trace: user-action: /api/slow
├─ fetch GET http://localhost:4000/api/slow   [demo-frontend]
│  └─ HTTP GET /api/slow                      [demo-backend]
│     └─ slow-operation                       [demo-backend]
└─ Total: ~1.2s
```

Span attributes you'll see:
- `service.name`: `demo-frontend` or `demo-backend`
- `service.environment`: `demo`
- `http.method`, `http.url`, `http.status_code`
- Custom attributes like `demo.sleep_duration_ms`
Backend logs appear in Sentry with attributes like:
- `severityText`: INFO, WARN, ERROR
- `body`: Log message
- `service.name`: `demo-backend`
- Custom attributes: `endpoint`, `duration_ms`, etc.
- Trace correlation: Logs include trace IDs so you can jump from log → trace
When you click "Trigger Error":
- An error span is created with `status: ERROR`
- An exception is recorded on the span
- An error-level log is emitted
- All three are correlated by trace ID
- You'll see the full stack trace in Sentry
The frontend's `FetchInstrumentation` is configured to propagate trace context:

```typescript
new FetchInstrumentation({
  propagateTraceHeaderCorsUrls: [/localhost:4000/],
})
```

This injects `traceparent` and `tracestate` headers into fetch requests. The backend's auto-instrumentation extracts these headers and creates child spans.
Both apps define identical resource attributes so Sentry can group them:

```typescript
{
  'service.name': 'demo-frontend', // or 'demo-backend'
  'service.environment': 'demo',
  'service.version': '1.0.0',
}
```

- Backend → Collector: `http://otel-collector:4318` (internal Docker network)
- Frontend → Collector: `http://localhost:4318` (browser → host → container)
- Collector → Sentry: configured via the `SENTRY_OTLP_ENDPOINT` env var
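These hops are wired by Docker Compose networking: on the internal network the collector is reachable by its service name, while the browser can only reach it through the published host port. A sketch of the relevant `docker-compose.yml` pieces (the env var shown is the standard OTEL one and an assumption about the repo's actual file):

```yaml
services:
  otel-collector:
    image: otel/opentelemetry-collector:latest
    ports:
      - "4317:4317"   # OTLP gRPC, published to the host
      - "4318:4318"   # OTLP HTTP, published so the browser can hit localhost:4318

  backend:
    build: ./backend
    environment:
      # "otel-collector" resolves via Docker's internal DNS to the service above
      OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4318
    depends_on:
      - otel-collector
```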
This demo intentionally does not use `@sentry/browser` or `@sentry/node`. Everything is pure OpenTelemetry. This demonstrates:
- Vendor neutrality (can switch from Sentry to Honeycomb/Datadog/etc. by just changing collector config)
- Standard OTLP protocol
- Collector-based architecture (centralized config)
1. **Check Sentry credentials:**
   - Verify the `SENTRY_OTLP_ENDPOINT` format
   - Verify `SENTRY_AUTH_HEADER` has the correct auth token
   - Check the Sentry docs for your specific plan's OTLP endpoint format

2. **Check collector logs:**

   ```bash
   docker-compose logs otel-collector
   ```

   - Look for export errors
   - Verify data is being received (you should see detailed logs)

3. **Verify your Sentry project supports OTLP:**
   - Some Sentry plans may have limited OTLP support
   - Check your plan's features
   - Make sure the project is configured for OpenTelemetry (not just error tracking)
- Make sure the backend is running: `curl http://localhost:4000/health`
- Check CORS settings in `backend/src/app.ts`
- Verify `VITE_BACKEND_URL` is correct in the frontend environment
- Check collector health: `curl http://localhost:13133/`
- Verify that ports 4317 and 4318 are exposed
- Check firewall/network settings
- Verify `FetchInstrumentation` has the correct `propagateTraceHeaderCorsUrls`
- Check browser DevTools → Network → request headers for `traceparent`
- Ensure backend auto-instrumentation is enabled (imported in `index.ts` before the app)
This is a demo project, but feel free to:
- Report issues
- Suggest improvements
- Add additional instrumentation examples
- Improve documentation
MIT License - feel free to use this demo for learning and teaching.
Questions? Check the inline comments in the code for detailed explanations of how each part works.
Next Steps:
- Explore the code to understand the OTEL setup
- Modify `backend/src/app.ts` to add custom spans/logs
- Experiment with different collector exporters (Jaeger, Prometheus, etc.)
- Add metrics collection (currently only traces and logs)
- Implement sampling strategies in the collector