diff --git a/packages/web/docs/src/app/product-updates/(posts)/2026-01-07-persisted-documents-l2-cache/page.mdx b/packages/web/docs/src/app/product-updates/(posts)/2026-01-07-persisted-documents-l2-cache/page.mdx
new file mode 100644
index 00000000000..880d5951ca9
--- /dev/null
+++ b/packages/web/docs/src/app/product-updates/(posts)/2026-01-07-persisted-documents-l2-cache/page.mdx
@@ -0,0 +1,96 @@
+---
+title: Layer 2 Cache for Persisted Documents
+description: Add a distributed cache layer between in-memory cache and CDN for persisted documents.
+date: 2026-01-07
+authors: [adam]
+---
+
+You can now configure a Layer 2 (L2) cache for persisted documents, reducing CDN calls and improving
+performance in serverless and multi-instance deployments.
+
+## The Problem
+
+The in-memory LRU cache (L1) for persisted documents has limitations:
+
+- **Cache lost on restart**: Serverless cold starts, instance restarts, and scaling events clear the
+  cache
+- **No shared state**: Each instance maintains its own cache, leading to redundant CDN calls
+- **Scale constraints**: Large supergraphs with many queries can hit LRU cache limits quickly
+- **No CDN resilience**: If the CDN is unavailable, there's no fallback layer
+
+Teams with high-scale requirements often needed to build custom persisted document implementations
+to use Redis or other distributed caches.
+
+## The Solution
+
+Add a distributed cache (Redis, Memcached, etc.) as an L2 layer between the in-memory cache and CDN:
+
+**L1 (memory) → L2 (Redis) → CDN**
+
+```typescript
+import { createYoga } from 'graphql-yoga'
+import { createClient } from 'redis'
+import { useHive } from '@graphql-hive/yoga'
+
+const redis = createClient({ url: 'redis://localhost:6379' })
+await redis.connect()
+
+const yoga = createYoga({
+  plugins: [
+    useHive({
+      experimental__persistedDocuments: {
+        cdn: {
+          endpoint: 'https://cdn.graphql-hive.com/artifacts/v1/',
+          accessToken: ''
+        },
+        layer2Cache: {
+          cache: {
+            get: key => redis.get(`hive:pd:${key}`),
+            set: (key, value, opts) =>
+              redis.set(`hive:pd:${key}`, value, opts?.ttl ? { EX: opts.ttl } : {})
+          },
+          ttlSeconds: 3600,
+          notFoundTtlSeconds: 60
+        }
+      }
+    })
+  ]
+})
+```
+
+## Hive Gateway
+
+For Hive Gateway, enable caching with just TTL options:
+
+```typescript
+persistedDocuments: {
+  type: 'hive',
+  endpoint: 'https://cdn.graphql-hive.com/artifacts/v1/',
+  token: '',
+  cacheTtlSeconds: 3600,
+  cacheNotFoundTtlSeconds: 60
+}
+```
+
+Or via CLI:
+
+```bash
+hive-gateway supergraph \
+  --hive-persisted-documents-cache-ttl 3600 \
+  --hive-persisted-documents-cache-not-found-ttl 60
+```
+
+## Features
+
+| Option                                           | Description                                                  |
+| ------------------------------------------------ | ------------------------------------------------------------ |
+| `ttlSeconds` / `cacheTtlSeconds`                 | TTL for found documents                                      |
+| `notFoundTtlSeconds` / `cacheNotFoundTtlSeconds` | TTL for not-found documents (negative caching). Default: 60s |
+| `waitUntil`                                      | Register cache writes in serverless environments             |
+
+- Apollo Server automatically uses `ctx.cache` as the L2 cache when available.
+- Hive Gateway automatically uses the gateway cache when TTL options are provided.
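+
+The `waitUntil` option is aimed at serverless runtimes that may suspend as soon as the response is
+sent, which can cut off background cache writes. Below is a minimal, hypothetical sketch for a
+Cloudflare Workers-style runtime; the `HIVE_PD_CACHE` KV binding, the per-request Yoga instance, and
+the exact `waitUntil` signature are illustrative assumptions, not part of the documented API:
+
+```typescript
+// Hypothetical sketch: Workers KV as the L2 store, with cache writes kept alive
+// via the execution context. KVNamespace and ExecutionContext come from
+// @cloudflare/workers-types.
+import { createYoga } from 'graphql-yoga'
+import { useHive } from '@graphql-hive/yoga'
+import schema from './schema.js'
+
+interface Env {
+  HIVE_PD_CACHE: KVNamespace
+  HIVE_CDN_ACCESS_TOKEN: string
+}
+
+export default {
+  fetch(request: Request, env: Env, ctx: ExecutionContext) {
+    // Yoga is built per request here only so the cache config can close over `ctx`.
+    const yoga = createYoga({
+      schema,
+      plugins: [
+        useHive({
+          experimental__persistedDocuments: {
+            cdn: {
+              endpoint: 'https://cdn.graphql-hive.com/artifacts/v1/',
+              accessToken: env.HIVE_CDN_ACCESS_TOKEN
+            },
+            layer2Cache: {
+              cache: {
+                // Workers KV used as the shared L2 store (assumed binding name)
+                get: key => env.HIVE_PD_CACHE.get(key),
+                set: (key, value, opts) =>
+                  env.HIVE_PD_CACHE.put(key, value, opts?.ttl ? { expirationTtl: opts.ttl } : {})
+              },
+              ttlSeconds: 3600,
+              // Keep the worker alive until the cache write settles (assumed signature)
+              waitUntil: promise => ctx.waitUntil(promise)
+            }
+          }
+        })
+      ]
+    })
+    return yoga.fetch(request, env, ctx)
+  }
+}
+```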
+
+---
+
+- [Learn more about App Deployments](/docs/schema-registry/app-deployments)
diff --git a/packages/web/docs/src/content/schema-registry/app-deployments.mdx b/packages/web/docs/src/content/schema-registry/app-deployments.mdx
index 91f45a5b997..8abdf9c13a1 100644
--- a/packages/web/docs/src/content/schema-registry/app-deployments.mdx
+++ b/packages/web/docs/src/content/schema-registry/app-deployments.mdx
@@ -336,6 +336,56 @@ const yoga = createYoga({
 
 For further configuration options, please refer to the
 [Hive Client API reference](/docs/api-reference/client#persisted-documents).
 
+#### Layer 2 Cache (Optional)
+
+For serverless environments or multi-instance deployments, you can add a Layer 2 (L2) cache between
+the in-memory cache and the CDN. This is useful when the in-memory cache is lost between
+invocations or to share cached documents across server instances.
+
+```typescript filename="Persisted Documents with L2 Cache (Redis)" {14-24}
+import { createYoga } from 'graphql-yoga'
+import { createClient } from 'redis'
+import { useHive } from '@graphql-hive/yoga'
+import schema from './schema.js'
+
+const redis = createClient({ url: 'redis://localhost:6379' })
+await redis.connect()
+
+const yoga = createYoga({
+  schema,
+  plugins: [
+    useHive({
+      enabled: false,
+      experimental__persistedDocuments: {
+        cdn: {
+          endpoint: 'https://cdn.graphql-hive.com/artifacts/v1/',
+          accessToken: ''
+        },
+        layer2Cache: {
+          cache: {
+            get: key => redis.get(`hive:pd:${key}`),
+            set: (key, value, opts) =>
+              redis.set(`hive:pd:${key}`, value, opts?.ttl ? { EX: opts.ttl } : {})
+          },
+          ttlSeconds: 3600, // 1 hour for found documents
+          notFoundTtlSeconds: 60 // 1 minute for not-found (negative caching)
+        }
+      }
+    })
+  ]
+})
+```
+
+The lookup flow is: **L1 (memory) -> L2 (Redis/external) -> CDN**
+
+| Option               | Description                                                                                     |
+| -------------------- | ----------------------------------------------------------------------------------------------- |
+| `cache.get`          | Async function to get a value from the cache. Returns `null` for a cache miss.                 |
+| `cache.set`          | Async function to set a value in the cache. Receives an optional `ttl` option.                 |
+| `ttlSeconds`         | TTL in seconds for successfully found documents.                                                |
+| `notFoundTtlSeconds` | TTL in seconds for not-found documents (negative caching). Set to `0` to disable. Default: `60` |
+| `waitUntil`          | Optional function for serverless environments to ensure cache writes complete.                 |
+
 
 {/* Apollo Server */}
 
@@ -372,6 +422,44 @@ const server = new ApolloServer({
 
 For further configuration options, please refer to the
 [Hive Client API reference](/docs/api-reference/client#persisted-documents).
 
+#### Layer 2 Cache (Optional)
+
+Apollo Server automatically uses the server's context cache for L2 caching if available. You can
+also configure a custom L2 cache:
+
+```typescript filename="Persisted Documents with L2 Cache" {10-20}
+import { createClient } from 'redis'
+import { ApolloServer } from '@apollo/server'
+import { useHive } from '@graphql-hive/apollo'
+import schema from './schema.js'
+
+const redis = createClient({ url: 'redis://localhost:6379' })
+await redis.connect()
+
+const server = new ApolloServer({
+  schema,
+  plugins: [
+    useHive({
+      experimental__persistedDocuments: {
+        cdn: {
+          endpoint: 'https://cdn.graphql-hive.com/artifacts/v1/',
+          accessToken: ''
+        },
+        layer2Cache: {
+          cache: {
+            get: key => redis.get(`hive:pd:${key}`),
+            set: (key, value, opts) =>
+              redis.set(`hive:pd:${key}`, value, opts?.ttl ? { EX: opts.ttl } : {})
+          },
+          ttlSeconds: 3600,
+          notFoundTtlSeconds: 60
+        }
+      }
+    })
+  ]
+})
+```
+
 {/* Apollo Router */}