A high-performance Nostr relay server built with Deno and OpenSearch, optimized for high-throughput event processing, real-time data delivery, and full-text search.
This relay server combines the lightweight efficiency of Deno with the search power of OpenSearch to deliver a scalable Nostr infrastructure solution with native NIP-50 full-text search support. The architecture prioritizes performance, reliability, and operational simplicity.
- Deno Runtime: Modern JavaScript/TypeScript runtime with native HTTP server capabilities (`deno serve` with 16 parallel instances)
- Redis Queues: High-performance in-memory queues for message passing and coordination
  - `nostr:relay:queue`: Raw client messages awaiting processing
  - `nostr:events:queue`: Validated events awaiting batch insertion
  - `nostr:responses:{connId}`: Responses from workers to specific connections
- Relay Workers: N parallel processes that validate events and handle Nostr protocol logic
- Storage Workers: Dedicated batch processors that pull validated events from Redis and insert into OpenSearch using bulk API
- OpenSearch Database: Full-text search engine optimized for fast queries, full-text search (NIP-50), and comprehensive tag indexing
- WebSocket Protocol: Real-time bidirectional communication with Nostr clients
- Prometheus Metrics: Comprehensive monitoring and performance tracking using prom-client
```
Nostr Clients
    ↓ (WebSocket)
Deno Server (16 instances)
    ↓ (Queue raw messages)
Redis: nostr:relay:queue
    ↓ (Pull & process)
Relay Workers (N parallel) ← Validate in parallel
    ↓ (Queue validated events)
Redis: nostr:events:queue
    ↓ (Batch pull)
Storage Workers (M parallel)
    ↓ (Bulk insert - up to 1000 events/batch)
OpenSearch
```
This architecture solves the validation bottleneck by:
- Parallel validation: N relay workers process and validate events concurrently
- Fast message queueing: WebSocket server queues raw messages without blocking (microseconds)
- Batch storage: Storage workers pull 1000 validated events and insert in one OpenSearch bulk request
- Horizontal scaling: 16 Deno instances + N relay workers + M storage workers = massive parallelism
- Shared state via Redis: Workers coordinate through Redis (subscriptions, responses)
- Optimized indexing: All tags are indexed for fast queries, including multi-letter tags
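The batch-storage step above can be sketched as follows. This is an illustrative helper (names like `buildBulkBody` are not the relay's actual API) showing how a storage worker might chunk validated events and serialize each batch into an OpenSearch `_bulk` request body — NDJSON with one action line plus one document line per event:

```typescript
// Sketch of how a storage worker might build an OpenSearch _bulk body.
// `NostrEvent`, `chunk`, and `buildBulkBody` are illustrative names.
interface NostrEvent {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
}

const MAX_BATCH = 1000; // events per bulk request, as described above

// Split a queue drain into batches of at most MAX_BATCH events.
function chunk<T>(items: T[], size: number = MAX_BATCH): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// NDJSON body: an index action (using the event id as the document _id)
// followed by the event document, one pair per event.
function buildBulkBody(events: NostrEvent[], index = "nostr-events"): string {
  return events
    .map((e) =>
      JSON.stringify({ index: { _index: index, _id: e.id } }) + "\n" +
      JSON.stringify(e)
    )
    .join("\n") + "\n";
}
```

Each batch would then be POSTed to the cluster's `_bulk` endpoint with `Content-Type: application/x-ndjson`.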
- NIP-01: Basic protocol flow, event validation, and filtering
- NIP-09: Event deletion requests (kind 5 events)
- NIP-11: Relay information document
- NIP-50: Full-text search with advanced sort modes (hot, top, controversial, rising)
- NIP-86: Relay management API with NIP-98 authentication
- NIP-98: HTTP authentication for management API
- Intelligent Rate Limiting: Per-connection limits prevent abuse while maintaining throughput
- Query Optimization: Automatic timeouts, size limits, and result caps protect system resources
- Fast Validation: Rapid rejection of invalid events to minimize processing overhead
- Optimized Queries: OpenSearch provides sub-millisecond queries with proper indexing
- Full-Text Search: Native NIP-50 support with relevance scoring and fuzzy matching
- Advanced Sort Modes: NIP-50 extensions for `sort:hot`, `sort:top`, `sort:controversial`, and `sort:rising` to discover trending and popular content
- Comprehensive Tag Indexing: All tags indexed for fast filtering, including multi-letter tags
- Bulk Insert: Storage workers use OpenSearch bulk API for high-throughput writes
- Event Age Filtering: Configurable age-based filtering prevents broadcasting of stale events to subscribers, with special handling for ephemeral events
- Event Deletion: NIP-09 deletion requests processed automatically, with pubkey verification and timestamp-based deletion for addressable events
- Connection Management: Robust resource limits and cleanup prevent memory leaks
- Graceful Shutdown: Ensures all buffered events are persisted before termination
- Error Handling: Comprehensive error recovery and logging for operational visibility
- Prometheus Metrics: Industry-standard metrics using prom-client library for monitoring and alerting
- Health Endpoints: Real-time system status and diagnostic information
- Structured Logging: Clear, actionable log messages for troubleshooting
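The event age filtering described above can be sketched as a small decision function. This is an illustrative helper under stated assumptions (`broadcastDecision` and its result shape are not the relay's actual API); it mirrors the documented rules: stale regular events are stored but not broadcast, while stale ephemeral events (kinds 20000-29999) are rejected outright:

```typescript
// Sketch of the age-filtering decision; names are illustrative.
const BROADCAST_MAX_AGE: number = 300; // seconds; 0 disables age filtering

function isEphemeral(kind: number): boolean {
  return kind >= 20000 && kind <= 29999;
}

// Returns whether an event should be broadcast to subscribers, and
// whether an ephemeral event should be rejected with a false OK.
function broadcastDecision(
  kind: number,
  created_at: number,
  now: number,
): { broadcast: boolean; reject: boolean } {
  if (BROADCAST_MAX_AGE === 0) return { broadcast: true, reject: false };
  const fresh = now - created_at <= BROADCAST_MAX_AGE;
  if (isEphemeral(kind)) {
    // Ephemeral events are never stored; stale ones are rejected.
    return { broadcast: fresh, reject: !fresh };
  }
  // Regular events are always stored, but stale ones are not broadcast.
  return { broadcast: fresh, reject: false };
}
```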
- Deno 1.40 or later
- OpenSearch server (local or remote)
- Redis server (local or remote)
Clone the repository and navigate to the project directory:
```
git clone <repository-url>
cd otherstuff-relay
```

Create a `.env` file based on `.env.example`:

```
cp .env.example .env
```

Configure the following environment variables:
```
PORT=8000  # HTTP server port

# OpenSearch connection URL
# Format: http://[host]:[port] or https://[host]:[port]
OPENSEARCH_URL=http://localhost:9200
# Examples:
# OPENSEARCH_URL=http://localhost:9200
# OPENSEARCH_URL=https://opensearch.example.com:9200

# OpenSearch authentication (optional)
# Leave blank if no authentication is required
OPENSEARCH_USERNAME=
OPENSEARCH_PASSWORD=

# Redis connection URL
# Format: redis://[user[:password]@]host[:port][/database]
REDIS_URL=redis://localhost:6379
# Examples:
# REDIS_URL=redis://:password@localhost:6379
# REDIS_URL=redis://localhost:6379/0

# Maximum age of events to broadcast to subscribers (in seconds)
# Events older than this will not be broadcast to subscribers
# Ephemeral events (kind 20000-29999) are never stored, only broadcast
# Ephemeral events that are too old will be rejected with a false OK message
# Set to 0 to disable age filtering
# Default: 300 (5 minutes)
BROADCAST_MAX_AGE=300

# Comma-separated list of admin pubkeys (hex format) authorized to use the management API
# Leave empty to disable the management API
ADMIN_PUBKEYS=pubkey1,pubkey2,pubkey3

# Relay URL for NIP-98 authentication (optional)
# If set, this URL will be used for validating the 'u' tag in NIP-98 auth events
# instead of the actual request URL. Useful when the relay is behind a reverse proxy
# or load balancer where the internal URL differs from the public URL.
# Example: RELAY_URL=https://relay.example.com/
RELAY_URL=
```

Before running the server, initialize the OpenSearch index:

```
deno task migrate
```

This creates the `nostr-events` index with optimized mappings for:
- Fast tag filtering (all tags indexed)
- Full-text search on content (NIP-50)
- Time-based queries
- Relevance scoring
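As a rough sketch, the mappings created by the migration look something like the following object. The field names match the index layout documented later in this README; the exact analyzer and shard settings used by the actual migration script are an assumption and may differ:

```typescript
// Illustrative sketch of the nostr-events index mappings; the real
// migration script's settings (analyzers, shards) may differ.
const nostrEventsMappings = {
  properties: {
    id: { type: "keyword" },       // event ID, used as the document _id
    pubkey: { type: "keyword" },   // author's public key
    created_at: { type: "long" },  // Unix timestamp, for time-range queries
    kind: { type: "integer" },
    content: { type: "text" },     // full-text searchable (NIP-50)
    sig: { type: "keyword" },
    tags: { type: "keyword" },     // original tag structure
    // Dedicated single-letter tag fields for fast filtering
    tag_e: { type: "keyword" },
    tag_p: { type: "keyword" },
    tag_a: { type: "keyword" },
    tag_d: { type: "keyword" },
    tag_t: { type: "keyword" },
    tag_r: { type: "keyword" },
    tag_g: { type: "keyword" },
    // Generic storage for all other (including multi-letter) tags
    tags_flat: {
      type: "nested",
      properties: {
        name: { type: "keyword" },
        value: { type: "keyword" },
      },
    },
    indexed_at: { type: "long" },  // indexing timestamp
  },
};
```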
Find the most popular events based on how many times they are referenced by other events via `e` tags:
```
# Most popular events in the last 24 hours
deno task trends --duration 24h

# Most popular events in the last 7 days, top 50
deno task trends --duration 7d --limit 50

# Most popular kind 1 (text notes) events in the last 7 days
deno task trends --duration 7d --target-kinds 1

# Most popular events referenced by kind 6 (reposts) and kind 7 (reactions)
deno task trends --duration 24h --source-kinds 6,7

# Most popular kind 1 events referenced by kind 1 events (notes citing notes)
deno task trends --duration 7d --source-kinds 1 --target-kinds 1

# Most popular events between specific dates
deno task trends --since 2025-11-01 --until 2025-11-15

# Most popular events since a specific timestamp
deno task trends --since 1700000000

# Just show IDs and counts (faster)
deno task trends --duration 24h --no-event-data

# Show all options
deno task trends --help
```

Simple (recommended) - Run everything with one command:
```
deno task start
```

This starts:

- 1 web server (16 Deno instances via `deno serve`)
- 16 relay workers (configurable via `NUM_RELAY_WORKERS` env var)
- 2 storage workers (configurable via `NUM_STORAGE_WORKERS` env var)
Manual - Run processes separately:

Terminal 1 - Relay worker(s):

```
deno task relay-worker
```

Terminal 2 - Storage worker(s):

```
deno task storage-worker
```

Terminal 3 - Web server:

```
deno task server
```

Adjust the number of worker processes in `.env`:

```
NUM_RELAY_WORKERS=64   # Run 64 relay workers for parallel validation
NUM_STORAGE_WORKERS=4  # Run 4 storage workers for higher write throughput
```

Or set them when running:

```
NUM_RELAY_WORKERS=8 NUM_STORAGE_WORKERS=4 deno task start
```

The OpenSearch index is optimized for Nostr event patterns with comprehensive tag indexing and full-text search:
The `nostr-events` index includes:

Core Fields:

- `id` (keyword) - Event ID, used as document ID
- `pubkey` (keyword) - Author's public key
- `created_at` (long) - Unix timestamp
- `kind` (integer) - Event kind
- `content` (text) - Full-text searchable content with custom analyzer
- `sig` (keyword) - Event signature
- `tags` (keyword array) - Original tag structure

Optimized Tag Fields:

- `tag_e` (keyword array) - Event references
- `tag_p` (keyword array) - Pubkey references
- `tag_a` (keyword array) - Address references
- `tag_d` (keyword array) - Identifier tags
- `tag_t` (keyword array) - Hashtags
- `tag_r` (keyword array) - URL references
- `tag_g` (keyword array) - Geohash tags

Generic Tag Storage:

- `tags_flat` (nested) - All other tags indexed as `{name, value}` pairs

Metadata:

- `indexed_at` (long) - Indexing timestamp
- Tag queries: O(log n) lookup using keyword fields
- Multi-letter tags: Fully supported via nested `tags_flat` structure
- Full-text search: Native NIP-50 support with relevance scoring
- Time-range queries: Optimized with `created_at` field
- Bulk inserts: Up to 1000 events per batch using bulk API

The index is automatically created by running `deno task migrate`.
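As an illustration of how these tag fields support filtering, a Nostr filter's tag conditions might map onto OpenSearch query clauses roughly like this. The `tagClause` helper is hypothetical (not the relay's actual query builder): single-letter tags use their dedicated `tag_<letter>` keyword field, and multi-letter tags fall back to the nested `tags_flat` structure:

```typescript
// Sketch: translate one Nostr tag filter (e.g. "#e": [...]) into an
// OpenSearch query clause. `tagClause` is an illustrative name.
type Clause = Record<string, unknown>;

function tagClause(tagName: string, values: string[]): Clause {
  // Dedicated keyword fields exist for e, p, a, d, t, r, g tags.
  if (tagName.length === 1 && "epadtrg".includes(tagName)) {
    return { terms: { [`tag_${tagName}`]: values } };
  }
  // Multi-letter tags are matched against the nested {name, value} pairs.
  return {
    nested: {
      path: "tags_flat",
      query: {
        bool: {
          must: [
            { term: { "tags_flat.name": tagName } },
            { terms: { "tags_flat.value": values } },
          ],
        },
      },
    },
  };
}
```

A filter like `{"#e": ["abc…"]}` would thus become a cheap `terms` query on `tag_e`, while `{"#imeta": […]}` goes through the nested path.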
- URL: `ws://localhost:8000/`
- Protocol: Nostr WebSocket protocol
- Purpose: Real-time event streaming and client communication
- URL: `GET /`
- Headers: `Accept: application/nostr+json`
- Response: JSON object with relay capabilities and metadata
- Purpose: Inform clients about relay features, limitations, and policies
Example:

```
curl -H "Accept: application/nostr+json" http://localhost:8000/
```

Response includes:
- Relay name, description, and contact info
- Supported NIPs
- Server limitations (max message size, subscriptions, etc.)
- Event retention policies
- Whether writes are restricted (allowlist/banlist active)
- URL: `POST /`
- Headers: `Content-Type: application/nostr+json+rpc`, `Authorization: Nostr <base64-encoded-auth-event>`
- Purpose: Remote relay administration (ban/allow pubkeys, events, kinds, etc.)
- Auth: Requires NIP-98 authentication with admin pubkey
See Admin Guide for details.
- URL: `GET /health`
- Response: JSON object with system status
- Purpose: Service health monitoring and load balancer checks
- URL: `GET /metrics`
- Format: Prometheus text format
- Purpose: Performance monitoring and alerting
- Event Ingestion: 10,000+ events/second with parallel validation
- Validation: Scales linearly with number of relay workers
- Bulk Inserts: 1000 events per batch using OpenSearch bulk API
- Query Response: Sub-100ms for most queries with proper indexing
- Full-Text Search: Native relevance scoring with fuzzy matching
- Concurrent Connections: 10,000+ simultaneous WebSocket connections
- Memory Usage: ~50MB per worker process
- CPU Utilization: Distributed across relay workers for parallel validation
- Database Connections: Pooled connections per storage worker
- Queue Latency: < 1ms for message queueing, ~10ms for response delivery
- Index Refresh: 5-second refresh interval balances write performance with search freshness
- Parallel Validation: N relay workers process events concurrently
- Horizontal Scaling: Multiple server instances + workers behind load balancer
- Database Scaling: OpenSearch cluster support for high availability and sharding
- Storage: Time-based sharding available for efficient data management
- Tag Indexing: All tags fully indexed for O(log n) lookups
This relay implements NIP-09 event deletion requests (kind 5 events). Deletion is handled automatically and transparently:
- Automatic Processing: When a kind 5 deletion event is inserted, the relay automatically deletes referenced events before storing the deletion event
- Event References (`e` tags): Deletes all referenced events that have the same pubkey as the deletion request
- Addressable References (`a` tags): Deletes all versions of the addressable event up to the deletion request's `created_at` timestamp
- Pubkey Verification: Only events with matching pubkeys are deleted (authors can only delete their own events)
- Deletion Events Preserved: The deletion events themselves are stored and broadcast indefinitely
- Re-insertion Prevention: Deleted events cannot be re-inserted - the relay checks for deletion events before accepting any event
```
{
  "kind": 5,
  "pubkey": "<author-pubkey>",
  "tags": [
    ["e", "<event-id-to-delete>"],
    ["e", "<another-event-id>"],
    ["a", "30023:<author-pubkey>:my-article"],
    ["k", "1"],
    ["k", "30023"]
  ],
  "content": "these posts were published by accident"
}
```

Automatic Deletion (Recommended):
- Simply insert a kind 5 event using `event()` or `eventBatch()`
- Deletions are processed automatically

Manual Deletion:

- Use `remove(filters)` to delete events matching specific filters
- Useful for administrative cleanup or custom deletion logic
- Deletion is not guaranteed across all relays - this is a best-effort mechanism
- Clients may have already received and stored the deleted events
- Deleted events are prevented from being re-inserted
- The relay continues to broadcast deletion events to help propagate deletions
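The pubkey and timestamp rules from the deletion section above can be sketched as a single predicate. This is an illustrative helper (`mayDelete` and `StoredEvent` are not the relay's internal API), assuming the standard `kind:pubkey:d-identifier` format for `a` tag values:

```typescript
// Sketch of the NIP-09 deletion rules; names are illustrative.
interface StoredEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
}

// Decide whether `target` is deletable under a kind-5 `deletion` request.
function mayDelete(deletion: StoredEvent, target: StoredEvent): boolean {
  // Authors can only delete their own events.
  if (target.pubkey !== deletion.pubkey) return false;

  for (const [name, value] of deletion.tags) {
    // "e" tags: delete the directly referenced event.
    if (name === "e" && value === target.id) return true;

    // "a" tags: delete all versions of the addressable event created
    // at or before the deletion request's created_at timestamp.
    if (name === "a") {
      const [kindStr, pubkey, d] = value.split(":");
      const targetD = target.tags.find((t) => t[0] === "d")?.[1] ?? "";
      if (
        Number(kindStr) === target.kind &&
        pubkey === target.pubkey &&
        d === targetD &&
        target.created_at <= deletion.created_at
      ) return true;
    }
  }
  return false;
}
```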
This relay implements NIP-86 for remote relay management via an authenticated HTTP API. Administrators can ban/allow pubkeys, events, kinds, and IPs, as well as configure relay metadata.
Set admin pubkeys in your .env file:
```
# Comma-separated list of admin pubkeys (hex format)
ADMIN_PUBKEYS=pubkey1,pubkey2,pubkey3

# Optional: Set relay URL for NIP-98 authentication
# Useful when behind a reverse proxy or load balancer
RELAY_URL=https://relay.example.com/

# Optional: Set initial relay metadata
RELAY_NAME=My Nostr Relay
RELAY_DESCRIPTION=A high-performance relay
RELAY_ICON=https://example.com/icon.png
```

Note about `RELAY_URL`: When your relay is behind a reverse proxy or load
balancer, the internal request URL (e.g., http://localhost:8000/) will differ
from the public URL (e.g., https://relay.example.com/). Set RELAY_URL to the
public URL so that NIP-98 auth events can use the correct URL in their u tag.
All management API requests require NIP-98 HTTP authentication:
- Create a kind 27235 event with:
  - `u` tag: Full relay URL (use the configured `RELAY_URL` if set, e.g., `https://relay.example.com/`)
  - `method` tag: `POST`
  - `payload` tag: SHA256 hash of request body (hex)
- Base64 encode the event
- Send it in the `Authorization: Nostr <base64-event>` header
- Your pubkey must be in the `ADMIN_PUBKEYS` list
Pubkey Management:
- `banpubkey` - Ban a pubkey from publishing events
- `listbannedpubkeys` - List all banned pubkeys
- `allowpubkey` - Add pubkey to allowlist (optional feature)
- `listallowedpubkeys` - List allowlisted pubkeys
Event Management:
- `banevent` - Ban a specific event by ID
- `allowevent` - Remove event from ban list
- `listbannedevents` - List all banned events
Kind Filtering:
- `allowkind` - Add kind to allowlist
- `disallowkind` - Remove kind from allowlist
- `listallowedkinds` - List allowed kinds
IP Blocking:
- `blockip` - Block an IP address
- `unblockip` - Remove IP from blocklist
- `listblockedips` - List blocked IPs
Relay Metadata:
- `changerelayname` - Update relay name
- `changerelaydescription` - Update relay description
- `changerelayicon` - Update relay icon URL
Discovery:
- `supportedmethods` - List all supported methods
Using nostr-tools and `fetch`:

```
import { finalizeEvent, generateSecretKey } from "nostr-tools";
import { createHash } from "node:crypto";

const sk = generateSecretKey();
const body = JSON.stringify({
  method: "banpubkey",
  params: ["<pubkey-to-ban>", "spam"],
});

const authEvent = finalizeEvent({
  kind: 27235,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["u", "http://localhost:8000/"],
    ["method", "POST"],
    ["payload", createHash("sha256").update(body).digest("hex")],
  ],
  content: "",
}, sk);

const response = await fetch("http://localhost:8000/", {
  method: "POST",
  headers: {
    "Content-Type": "application/nostr+json+rpc",
    "Authorization": `Nostr ${btoa(JSON.stringify(authEvent))}`,
  },
  body,
});

const result = await response.json();
console.log(result); // { result: true }
```

Banlists (Blocklists):
- Events from banned pubkeys are rejected
- Banned events cannot be inserted
- Banned IPs cannot connect (future feature)
Allowlists (Whitelists):
- If allowlist is empty, all pubkeys/kinds are allowed
- If allowlist has entries, only those pubkeys/kinds are allowed
- Useful for private/invite-only relays
Filter Priority:
- Check banned pubkeys → reject if banned
- Check allowlist → reject if not in allowlist (when allowlist is configured)
- Check banned events → reject if banned
- Check allowed kinds → reject if not in allowlist (when allowlist is configured)
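The filter priority above can be sketched as one check function. This is an illustrative sketch (`Moderation` and `checkEvent` are hypothetical names, not the relay's internals), applying the rules in the documented order and treating empty allowlists as "everything allowed":

```typescript
// Sketch of the moderation filter priority; names are illustrative.
interface Moderation {
  bannedPubkeys: Set<string>;
  allowedPubkeys: Set<string>; // empty = all pubkeys allowed
  bannedEvents: Set<string>;
  allowedKinds: Set<number>;   // empty = all kinds allowed
}

// Returns null if the event is accepted, otherwise a rejection reason,
// checking the rules in the documented priority order.
function checkEvent(
  mod: Moderation,
  ev: { id: string; pubkey: string; kind: number },
): string | null {
  if (mod.bannedPubkeys.has(ev.pubkey)) return "pubkey is banned";
  if (mod.allowedPubkeys.size > 0 && !mod.allowedPubkeys.has(ev.pubkey)) {
    return "pubkey not in allowlist";
  }
  if (mod.bannedEvents.has(ev.id)) return "event is banned";
  if (mod.allowedKinds.size > 0 && !mod.allowedKinds.has(ev.kind)) {
    return "kind not allowed";
  }
  return null; // accepted
}
```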
- Only pubkeys in `ADMIN_PUBKEYS` can access the management API
- All requests must have valid NIP-98 authentication
- Auth events must be within 60 seconds of current time
- Payload hash is verified for POST requests
- All management operations are logged with admin pubkey
This relay implements advanced NIP-50 search extensions for discovering trending and popular content.
Supported sort modes:
- `sort:hot` - Recent events with high engagement (recency + popularity)
- `sort:top` - Most referenced events (all-time or within time range)
- `sort:controversial` - Events with mixed positive/negative reactions
- `sort:rising` - Recently created events gaining engagement quickly
Example queries:
```
// Find the hottest bitcoin discussions
{"kinds": [1], "search": "sort:hot bitcoin", "limit": 50}

// Top events from the last 24 hours
{"kinds": [1], "since": 1700000000, "search": "sort:top", "limit": 100}

// Rising vegan content
{"kinds": [1], "search": "sort:rising vegan", "limit": 50}
```

- NIP-86 & NIP-11 Implementation - Technical details of the NIP-86 and NIP-11 implementations
- Admin Guide - How to manage your relay using the NIP-86 API
- NIP-86 Example Client - Working example of a NIP-86 management client
- NIP-11 Query Script - Simple script to query relay information
```
lib/
├── config.ts            # Environment configuration
├── metrics.ts           # Prometheus metrics collection
├── opensearch.ts        # OpenSearch relay implementation with NIP-50 and NIP-09 support
└── opensearch.test.ts   # Tests for OpenSearch relay

scripts/
├── migrate.ts           # Database migration script
└── start.ts             # Process manager to run all components

services/
├── server.ts            # HTTP server and WebSocket handling
├── relay-worker.ts      # Relay worker for parallel message processing & validation
└── storage-worker.ts    # Storage worker for batch OpenSearch inserts
```
- Fork the repository
- Create a feature branch
- Implement your changes with appropriate tests
- Ensure all formatting checks pass
- Submit a pull request with a clear description
- TypeScript: Strict type checking enabled
- Formatting: Deno formatter for consistent code style
- Documentation: Comprehensive inline documentation
- Testing: Unit tests for critical functionality
```
FROM denoland/deno:latest

WORKDIR /app
COPY . .

EXPOSE 8000
CMD ["deno", "task", "start"]
```

- Reverse Proxy: Use Nginx or similar for SSL termination
- Monitoring: Configure Prometheus and Grafana for metrics visualization
- Logging: Implement centralized log aggregation
- Backups: Regular OpenSearch snapshots for disaster recovery
- Index Management: Configure index lifecycle policies for data retention
- Security: Enable OpenSearch security features and authentication in production
This project is licensed under the AGPLv3 License. See the LICENSE file for details.
For issues, questions, or contributions, please open an issue on the GitHub repository or contact the maintainers.
