
Conversation

@ishandhanani
Collaborator

Summary

Add a `medium` field to the `BlockStored` and `BlockRemoved` event classes in `kv_events.py` to indicate which storage tier a block resides in. This enables KV-aware routers to track SGLang KV blocks across all hi-cache storage tiers (L1/GPU, L2/Host, L3/Storage).

Changes

  • Add MEDIUM_GPU, MEDIUM_CPU_TIER1, MEDIUM_CPU_TIER2 constants to kv_events.py
  • Add optional medium: Optional[str] field to BlockStored and BlockRemoved classes
  • Update radix_cache.py to emit events with medium="GPU" for L1 cache operations

Compatibility

  • Field is optional (None by default) for backward compatibility
  • Uses vLLM-compatible values: "GPU" (L1), "CPU_TIER1" (L2), "CPU_TIER2" (L3)
  • Events serialize correctly with msgpack
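Under the stated assumptions, the shape of the change looks roughly like the sketch below. Plain dataclasses stand in for the actual event structs, and field names other than `medium` follow vLLM's event schema rather than being confirmed from SGLang's `kv_events.py`:

```python
from dataclasses import dataclass
from typing import List, Optional

# Medium values for storage tiers (vLLM-compatible string constants)
MEDIUM_GPU = "GPU"              # L1: device memory
MEDIUM_CPU_TIER1 = "CPU_TIER1"  # L2: pinned host memory
MEDIUM_CPU_TIER2 = "CPU_TIER2"  # L3: remote storage backend

@dataclass
class BlockStored:
    block_hashes: List[int]
    parent_block_hash: Optional[int]
    token_ids: List[int]
    block_size: int
    lora_id: Optional[int] = None
    # New field: which storage tier the block resides in.
    # Defaults to None so existing consumers keep working.
    medium: Optional[str] = None

@dataclass
class BlockRemoved:
    block_hashes: List[int]
    medium: Optional[str] = None

event = BlockStored(block_hashes=[0x1234], parent_block_hash=None,
                    token_ids=[1, 2, 3], block_size=16, medium=MEDIUM_GPU)
```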

Test Plan

  • All existing unit tests pass (test_radix_cache_unit.py - 35 tests)
  • Verified msgpack serialization/deserialization works correctly
  • Verified events are emitted with medium="GPU" via ZMQ subscriber
  • Backward compatibility maintained (field is optional)
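The backward-compatibility claim can be sketched with a small round-trip, where plain dicts stand in for the msgpack wire format and `BlockRemoved` is reduced to the fields relevant here:

```python
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class BlockRemoved:
    block_hashes: List[int]
    medium: Optional[str] = None  # new, optional field

# New producer: the serialized payload carries the medium
payload = asdict(BlockRemoved(block_hashes=[42], medium="GPU"))

# Old producer: a payload without `medium` still deserializes,
# with the field defaulting to None
legacy_event = BlockRemoved(**{"block_hashes": [42]})
```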

@gemini-code-assist
Contributor

Summary of Changes

Hello @ishandhanani, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the KV cache event system by introducing a medium field to BlockStored and BlockRemoved events. This field allows for tracking the specific storage tier (e.g., GPU, CPU_TIER1, CPU_TIER2) where a KV block resides, which is crucial for KV-aware routers to manage blocks across hierarchical caches. The changes maintain backward compatibility by making the new field optional and align with existing vLLM conventions for storage tier identification.

Highlights

  • New medium field for KV events: Introduced an optional medium field to both BlockStored and BlockRemoved event classes in kv_events.py to specify the storage tier of the KV block. This field defaults to None for backward compatibility.
  • Storage tier constants: Added new constants MEDIUM_GPU, MEDIUM_CPU_TIER1, and MEDIUM_CPU_TIER2 to kv_events.py to represent different storage tiers, compatible with vLLM's tiering scheme.
  • Radix cache event emission: Updated radix_cache.py to emit BlockStored and BlockRemoved events with medium="GPU" for operations involving the L1 (GPU) cache, ensuring that the storage tier is correctly tracked for GPU-resident blocks.


Changelog
  • python/sglang/srt/disaggregation/kv_events.py
    • Added MEDIUM_GPU, MEDIUM_CPU_TIER1, and MEDIUM_CPU_TIER2 constants.
    • Added medium: Optional[str] = None to the BlockStored class.
    • Added medium: Optional[str] = None to the BlockRemoved class.
  • python/sglang/srt/mem_cache/radix_cache.py
    • Imported the MEDIUM_GPU constant.
    • Modified _record_store_event to pass medium=MEDIUM_GPU when creating BlockStored events.
    • Modified _record_remove_event to pass medium=MEDIUM_GPU when creating BlockRemoved events.
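The changelog items above can be sketched as follows. The real `_record_store_event` in SGLang has a different signature, and the class fields here are illustrative; only the `medium=MEDIUM_GPU` tagging reflects the actual change:

```python
from typing import List, Optional

MEDIUM_GPU = "GPU"

class BlockStored:
    def __init__(self, block_hashes: List[int],
                 parent_block_hash: Optional[int],
                 token_ids: List[int], block_size: int,
                 medium: Optional[str] = None):
        self.block_hashes = block_hashes
        self.parent_block_hash = parent_block_hash
        self.token_ids = token_ids
        self.block_size = block_size
        self.medium = medium

events = []  # stand-in for the scheduler's event queue

def _record_store_event(block_hashes, parent_hash, token_ids, page_size):
    # The L1 radix cache only holds GPU-resident blocks, so every
    # event it emits is tagged with the GPU medium.
    events.append(BlockStored(
        block_hashes=block_hashes,
        parent_block_hash=parent_hash,
        token_ids=token_ids,
        block_size=page_size,
        medium=MEDIUM_GPU,
    ))

_record_store_event([0xBEEF], None, [1, 2, 3], 16)
```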
Activity
  • No human activity (comments, reviews, or progress updates) has been recorded on this pull request since its creation.

@gemini-code-assist (bot) left a comment
Code Review

This pull request introduces a medium field to KV cache events (BlockStored and BlockRemoved) to track the storage tier of KV blocks. This is a valuable addition for monitoring and managing blocks across different storage tiers. The implementation is correct and maintains backward compatibility by making the new field optional.

My main suggestion is to use a string-based Enum for the storage medium types instead of module-level constants. This would improve type safety and make the code more self-documenting about the allowed values for the medium field. I've added specific comments with code suggestions to implement this.

@ishandhanani
Collaborator Author

ishandhanani commented Feb 4, 2026

We're intentionally using simple string constants instead of an Enum here to match vLLM's API exactly. vLLM uses the same pattern (MEDIUM_GPU = "GPU", medium: str | None field type) rather than Enums in their kv_events implementation.

This compatibility is important because Dynamo's KV-aware router parses these events and expects the exact string values ("GPU", "CPU_TIER1", "CPU_TIER2") as documented in vLLM. Using an Enum could potentially change serialization behavior with msgspec/msgpack.

@stmatengss
Collaborator

Good catch! LGTM

@stmatengss
Collaborator

Using an Enum could potentially change serialization behavior with msgspec/msgpack.

Got it. Enums can be unfriendly to (de)serialization.
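The serialization concern can be illustrated with the standard-library JSON encoder. Here `json` stands in for msgpack/msgspec; exact Enum handling varies by encoder, which is part of the risk being avoided:

```python
import json
from enum import Enum

MEDIUM_GPU = "GPU"  # plain string constant, matching vLLM's pattern

class Medium(Enum):
    GPU = "GPU"

# A plain string constant serializes with no extra machinery
encoded = json.dumps({"medium": MEDIUM_GPU})  # -> '{"medium": "GPU"}'

# A bare Enum member needs encoder-specific handling
try:
    json.dumps({"medium": Medium.GPU})
    enum_serialized = True
except TypeError:
    enum_serialized = False  # stdlib json refuses it outright
```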


```python
# Medium values for storage tiers (compatible with vLLM)
MEDIUM_GPU = "GPU"
MEDIUM_CPU_TIER1 = "CPU_TIER1"
```
Contributor

Does CPU_TIER1 mean HiRadixCache? And does CPU_TIER2 mean the remote CPU memory pool?

Collaborator Author

It's a bit confusing, actually.

  • CPU_TIER1 is actually L2 storage: the host_to_device and device_to_host buffers in HiRadixCache (pinned memory)
  • CPU_TIER2 is actually L3 storage: the remote storage backend

I'm not sure why vLLM does it this way. I'm open to changing this, by the way.
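To summarize the naming discussed above, a hypothetical lookup table (not part of the PR):

```python
# vLLM medium string -> hi-cache tier it corresponds to, per the thread above
MEDIUM_TO_TIER = {
    "GPU": "L1 (device memory, radix cache)",
    "CPU_TIER1": "L2 (pinned host buffers in HiRadixCache)",
    "CPU_TIER2": "L3 (remote storage backend)",
}
```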
