Low-latency WebSocket signaling backend that brokers WebRTC handshakes for Flow-Like, the visual workflow platform for building explainable automations. This service keeps peers discoverable, exchanges SDP offers/answers, and fans out ICE candidates so Flow-Like sessions come online in real time.
- Bun-based WebSocket server purpose-built for WebRTC signaling workloads.
- Redis-backed pub/sub fanout so ICE/SDP payloads reach every worker instantly.
- Topic-scoped rooms for per-session messaging with presence-derived global participant counts.
- Minimal message schema (`subscribe`, `unsubscribe`, `publish`, `ping`) tuned for SDP, ICE, and control pings.
- Graceful shutdown that drops presence keys and prevents ghost participants.
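The four message types above could be modeled in TypeScript roughly as follows. This is a sketch inferred from the examples later in this README; the server's actual type definitions may differ.

```typescript
// Approximate wire schema for client -> server messages. Only "type",
// "topic", and "data" are documented; everything else is an assumption.
type ClientMessage =
	| { type: "subscribe"; topic: string }
	| { type: "unsubscribe"; topic: string }
	| { type: "publish"; topic: string; data: unknown }
	| { type: "ping" };

// Messages travel as JSON text frames over the WebSocket.
const subscribe: ClientMessage = { type: "subscribe", topic: "session:abcd" };
console.log(JSON.stringify(subscribe));
// → {"type":"subscribe","topic":"session:abcd"}
```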
- Bun v1.3.1 or newer (runtime + package manager).
- A reachable Redis instance (local or managed).
- Node.js ≥ 18 if you plan to run the Node-based smoke tests under `tests/`.
```shell
bun install
```

All behavior is controlled via environment variables. Defaults are sensible for local development:
| Variable | Description | Default |
|---|---|---|
| `PORT` | HTTP/WebSocket port to listen on | `4444` |
| `REDIS_URL` | Redis connection string | `redis://127.0.0.1:6379` |
| `SIGNAL_CHANNEL` | Redis pub/sub channel used for fanout | `signal:publish` |
| `NODE_ID` | Override the generated node identifier (useful for debugging) | random UUID |
Presence bookkeeping lives under `topic:presence:{topic}` hashes in Redis. Each node stores a per-topic connection count keyed by its `NODE_ID` and refreshes the TTL every minute.
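Given that hash layout, a cluster-wide participant count for a topic can be derived by summing every node's connection count. The sketch below assumes the hash maps node IDs to stringified counts, as Redis hash values are strings; the actual aggregation code in this repo may differ.

```typescript
// presence: the contents of a `topic:presence:{topic}` hash,
// e.g. as returned by HGETALL — { nodeId: connectionCount }.
function globalParticipants(presence: Record<string, string>): number {
	return Object.values(presence).reduce((sum, count) => sum + Number(count), 0);
}

// Two workers holding 3 and 2 connections yield a global count of 5.
console.log(globalParticipants({ "node-a": "3", "node-b": "2" }));
// → 5
```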
```shell
REDIS_URL="redis://localhost:6379" bun run server.ts
```

Once started, connect a WebSocket client to `ws://localhost:4444/`. The root path upgrades to WebSocket; any other path responds with a plain-text health check (`okay`).
Published messages are enriched server-side before fanout:
```jsonc
{
	"type": "publish",
	"topic": "project:123",
	"data": { "cursor": [120, 48] },
	"clients": 5,           // global subscriber count gathered from Redis presence keys
	"_origin": "node-uuid"  // source node identifier
}
```

Flow-Like wraps device-specific data inside the `data` field. A typical signaling exchange for a session `session:abcd` looks like:
```json
{ "type": "publish", "topic": "session:abcd", "data": { "kind": "offer", "sdp": "v=0..." } }
{ "type": "publish", "topic": "session:abcd", "data": { "kind": "answer", "sdp": "v=0..." } }
{ "type": "publish", "topic": "session:abcd", "data": { "kind": "candidate", "candidate": "candidate:0 1 UDP ..." } }
```

Anything placed inside `data` is relayed untouched; the server adds only the top-level `clients` and `_origin` fields, which help peers gauge cluster-wide presence.
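On the receiving side, a peer would parse each frame and branch on `data.kind`. The dispatcher below is a sketch built from the documented envelope; the handler strings are illustrative placeholders, not part of this repo.

```typescript
// Envelope as it arrives after server-side enrichment.
type Enriched = {
	type: "publish";
	topic: string;
	data: { kind: "offer" | "answer" | "candidate"; [key: string]: unknown };
	clients?: number;  // added by the server
	_origin?: string;  // added by the server
};

function dispatch(raw: string): string {
	const msg = JSON.parse(raw) as Enriched;
	switch (msg.data.kind) {
		case "offer":
			return `apply remote offer for ${msg.topic}`;
		case "answer":
			return `apply remote answer for ${msg.topic}`;
		case "candidate":
			return `add ICE candidate for ${msg.topic}`;
		default:
			throw new Error("unknown signaling payload");
	}
}

console.log(
	dispatch('{"type":"publish","topic":"session:abcd","data":{"kind":"offer","sdp":"v=0..."},"clients":2,"_origin":"node-1"}'),
);
// → apply remote offer for session:abcd
```

In a real client, the `offer`/`answer` branches would feed `RTCPeerConnection.setRemoteDescription` and the `candidate` branch `addIceCandidate`.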
Production deployments typically run multiple Bun workers behind PM2 with sticky sessions enabled so each connection stays pinned to a worker. The provided ecosystem.config.cjs boots one Bun process per CPU and wires in sensible defaults:
```shell
pm2 start ecosystem.config.cjs
```

Ensure your load balancer forwards `X-Forwarded-For` headers so PM2's sticky mode can route reconnections predictably.
The `tests/` folder contains Node.js scripts that exercise fanout, multi-worker delivery, and presence accounting. In separate terminals:
- Start the signaling server (`bun run server.ts`).
- Execute a test script, for example:

```shell
node tests/test-fanout.mjs
node tests/test-multi-worker.mjs
node tests/test-comprehensive.mjs
```
Each script prints expectations to stdout and exits non-zero if a behavior fails.
For fully automated deployment on a fresh Linux host (Ubuntu/Debian, RHEL, CentOS, Fedora), run:
```shell
sudo bash scripts/provision-linux.sh signaling.example.com
```

The script installs Redis, Node.js, Bun, pm2, and nginx; syncs the repository into `/opt/ws-signaling`; launches the service under pm2; configures nginx as a WebSocket reverse proxy; saves the pm2 process list; and wires pm2 to restart on boot. Pass your public domain as the first argument and optionally override `APP_USER`, `WORK_DIR`, or `NODE_MAJOR` via environment variables. After provisioning, issue TLS certificates (for example with certbot) and point DNS to the server.
- The server automatically calls `publish` on close to decrement topic presence counts; always allow the graceful shutdown handler to run (SIGINT/SIGTERM).
- Redis connectivity issues are logged with descriptive tags (`[Redis]`, `[Presence]`, `[Publish]`) to simplify scraping.
- Health checks can target any non-root path (e.g., `GET /healthz` responds with `okay`).
Flow-Like helps teams design, observe, and ship visual workflows with full data lineage. Visit flow-like.com to see how live collaboration, explainable traces, and reusable nodes come together—the signaling service in this repo keeps everyone seeing the same flow in real time.