
Feature Request: Integrate Cachify with the Built-In Rate Limiter #18

@ahmadnasriya

Description


Summary

The built-in rate limiter currently uses a simple in-memory store for tracking requests, which means all rate-limit state is lost on restart. Cachify, on the other hand, already provides a powerful caching layer that supports persistent storage, live updates, and backup/restore.
Integrating the rate limiter with Cachify could make request limits survive cold restarts and benefit from Cachify’s efficient and resilient data layer.


🧩 Problem

Right now, the rate limiter:

  • Stores counters purely in memory (volatile).
  • Loses all rate limit state when the server restarts.
  • Has no built-in backup or restore mechanism.
  • Starts counting from zero after a restart, which can let clients briefly burst past their limits and leads to inconsistent behavior across clustered instances.

Meanwhile, Cachify:

  • Can persist data to disk.
  • Watches for changes and updates files in real time.
  • Supports reliable restore and backup workflows.

💡 Proposed Solution

Allow the rate limiter to use Cachify as its backing store instead of its primitive in-memory map.

Integration ideas (a rough driver sketch follows the list):

  1. Introduce a new rate limiter storage driver.

  2. The rate limiter could then:

    • Store limit counters in Cachify’s managed cache.
    • Rely on Cachify’s persistence layer for backup and restore.
    • Automatically reload limit states after restarts.
    • Optionally share rate-limit data across multiple HyperCloud instances.
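
A rough TypeScript sketch of what such a driver could look like. Everything below is illustrative: the RateLimiterStore contract, the CachifyLike shape, and the class and method names are assumptions made for this example, not Cachify's or HyperCloud's actual API.

```ts
// Hypothetical contract the rate limiter would depend on instead of its internal map.
interface RateLimiterStore {
    /** Increments the counter for a key and returns the new count. */
    increment(key: string, windowMs: number): Promise<number>;
    /** Clears the counter for a key. */
    reset(key: string): Promise<void>;
}

// Assumed minimal shape of the Cachify client used here; the real API may differ.
interface CachifyLike {
    get(key: string): Promise<number | undefined>;
    set(key: string, value: number, options?: { ttl?: number }): Promise<void>;
    delete(key: string): Promise<void>;
}

// Cachify-backed driver: counters live in the managed cache, so they survive
// restarts and can be shared between instances if the cache itself is shared.
class CachifyRateLimiterStore implements RateLimiterStore {
    constructor(
        private readonly cache: CachifyLike,
        private readonly prefix = 'ratelimit:'
    ) {}

    async increment(key: string, windowMs: number): Promise<number> {
        const cacheKey = this.prefix + key;
        const next = ((await this.cache.get(cacheKey)) ?? 0) + 1;
        // Refreshing the TTL on every hit gives sliding-window behaviour;
        // the entry expires once the key has been quiet for a full window.
        await this.cache.set(cacheKey, next, { ttl: windowMs });
        return next;
    }

    async reset(key: string): Promise<void> {
        await this.cache.delete(this.prefix + key);
    }
}
```

A fixed-window variant would set the TTL only when a counter is first created, and a production driver would want an atomic increment to avoid read-modify-write races between instances; both are left out here to keep the sketch short.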

⚙️ Implementation Notes

  • The integration should remain optional and backwards-compatible.
  • Cachify’s storage adapter should expose simple get/set/delete primitives.
  • Cachify can handle expiration and cleanup based on rate limiter TTLs.
  • The system should still allow memory-only mode for lightweight deployments (see the in-memory driver sketched below).
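
To show how memory-only mode could stay the default without any breaking change, here is a matching in-memory driver against the same hypothetical RateLimiterStore contract from the sketch above (repeated here so the snippet stands alone):

```ts
// Same hypothetical contract as in the sketch above.
interface RateLimiterStore {
    increment(key: string, windowMs: number): Promise<number>;
    reset(key: string): Promise<void>;
}

// Default driver: volatile and dependency-free, suitable for lightweight deployments.
class MemoryRateLimiterStore implements RateLimiterStore {
    private readonly counters = new Map<string, { count: number; expiresAt: number }>();

    async increment(key: string, windowMs: number): Promise<number> {
        const now = Date.now();
        const entry = this.counters.get(key);
        if (!entry || entry.expiresAt <= now) {
            // No counter yet, or the previous window has expired: start a fresh one.
            this.counters.set(key, { count: 1, expiresAt: now + windowMs });
            return 1;
        }
        entry.count += 1;
        return entry.count;
    }

    async reset(key: string): Promise<void> {
        this.counters.delete(key);
    }
}
```

Because both drivers implement the same contract, enabling the Cachify-backed store becomes a configuration choice rather than an API change.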

🧭 Benefits

  • Persistence: Rate limits survive cold restarts.
  • Resilience: Backup and restore become trivial through Cachify’s existing tooling.
  • Scalability: Easier to share rate-limit state between instances.
  • Consistency: Unified caching logic across all framework layers.

🧱 Related

Labels

  • enhancement: New feature or request.
  • feature request: Request a feature.
  • memory management: Improvements related to managing memory usage effectively.
  • refactor: Code changes that improve readability or maintainability.