
[HiCache]: Support DeepSeek v32 cpu offloading#17415

Merged
xiezhq-hermann merged 7 commits into sgl-project:main from hzh0425:hicache/support_v32_cpu
Feb 3, 2026
Conversation

@hzh0425
Collaborator

@hzh0425 hzh0425 commented Jan 20, 2026

Motivation

There have been many recent requests for HiCache to support DeepSeek v32.

Modifications

Added NSATokenToKVPoolHost to support CPU offloading.

Storage offloading will be supported in the next pull request.
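To make the shape of the change concrete, here is a hypothetical, simplified sketch (plain Python lists standing in for tensors; class and method names are illustrative, not the actual sglang API) of why an NSA host pool extends an MLA-style host pool: NSA models carry an extra indexer buffer that must be offloaded to the host alongside the KV data.

```python
# Illustrative sketch only -- NOT the real NSATokenToKVPoolHost.
class MLAHostPoolSketch:
    """Host-side pool holding one KV vector per token slot."""

    def __init__(self, num_tokens, kv_dim):
        self.kv_buffer = [[0.0] * kv_dim for _ in range(num_tokens)]

    def store_kv(self, idx, kv_vec):
        self.kv_buffer[idx] = list(kv_vec)


class NSAHostPoolSketch(MLAHostPoolSketch):
    """NSA models also need the indexer key cache on the host."""

    def __init__(self, num_tokens, kv_dim, index_dim):
        super().__init__(num_tokens, kv_dim)
        self.indexer_buffer = [[0.0] * index_dim for _ in range(num_tokens)]

    def store_token(self, idx, kv_vec, index_vec):
        # KV data and indexer data are offloaded together.
        self.store_kv(idx, kv_vec)
        self.indexer_buffer[idx] = list(index_vec)


pool = NSAHostPoolSketch(num_tokens=4, kv_dim=8, index_dim=2)
pool.store_token(1, [1.0] * 8, [0.5, 0.5])
print(pool.indexer_buffer[1])  # [0.5, 0.5]
```

In the actual implementation the transfers are paged device/host tensor copies; this sketch only shows why the two buffers must move together.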

Accuracy Tests

Launch command

nohup python3 -m sglang.launch_server \
      --model-path /home/t4/models/deepSeek-v32-real/ \
      --trust-remote-code \
      --port 8000 \
      --host 0.0.0.0 \
      --context-length 65536 \
      --chunked-prefill-size 65536 \
      --tp-size 8 \
      --chat-template examples/chat_template/tool_chat_template_deepseekv32.jinja \
      --page-size 64 \
      --mem-fraction-static 0.8 \
      --enable-hierarchical-cache \
      --hicache-size 50 \
      --hicache-io-backend direct \
      --hicache-mem-layout page_first_direct \
      --hicache-write-policy write_through > sglang.out &

gsm8k test command
python benchmark/gsm8k/bench_sglang.py --port 8000 --num-questions 500 --num-shots 48 --parallel 100

First Time gsm8k

Accuracy: 0.960
Invalid: 0.000
Latency: 82.898 s
Output throughput: 575.288 token/s

After flushing the device cache while retaining the CPU cache, the second GSM8K run gives:

Accuracy: 0.958
Invalid: 0.000
Latency: 77.394 s
Output throughput: 617.464 token/s

The second run continuously triggers loads from the CPU cache; here are some logs:
(screenshot of CPU cache-load logs)

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
@hzh0425 hzh0425 force-pushed the hicache/support_v32_cpu branch from cc4151c to b6688cd on January 20, 2026 at 12:20
@gemini-code-assist
Contributor

Summary of Changes

Hello @hzh0425, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the sglang framework's hicache capabilities by enabling CPU offloading for DeepSeek v32 models. It achieves this by integrating a new host-side memory pool specifically designed to handle the unique KV cache and indexer buffer requirements of NSA models. This allows for more efficient memory utilization and broader model support, directly responding to community demand.

Highlights

  • DeepSeek v32 CPU Offloading: This pull request introduces support for CPU offloading for DeepSeek v32 models within the hicache system, addressing multiple user requests.
  • New Host Memory Pool: A new class, NSATokenToKVPoolHost, has been added to manage and transfer KV cache data, including specialized indexer buffers, between the device (GPU) and host (CPU) for Non-Standard Attention (NSA) models like DeepSeek v32.
  • Flexible KV Cache Dimension: The HostKVCache class has been updated to allow overriding the KV cache dimension, providing more flexibility for different model architectures.
  • Unit Tests for NSA Host Transfer: New unit tests have been added to test_nsa_pool_host_unit.py to verify the correct functionality of device-to-host indexer and KV cache data transfers for NSATokenToKVPoolHost using both kernel and direct I/O backends.
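The round-trip idea behind those unit tests can be sketched abstractly: fill a mock "device" buffer with known values, copy selected pages into a "host" buffer, and verify that exactly those pages arrived. A minimal illustration (plain lists instead of GPU/CPU tensors; all names are made up for this sketch):

```python
# Hypothetical round-trip sketch of a device->host page transfer check.
PAGE_SIZE = 4

def transfer_pages(device_buf, host_buf, page_indices):
    # Copy whole pages at a time, as a page-first host layout would.
    for p in page_indices:
        start = p * PAGE_SIZE
        host_buf[start:start + PAGE_SIZE] = device_buf[start:start + PAGE_SIZE]

device = list(range(16))   # 4 pages of mock "KV" data: 0..15
host = [0] * 16            # host buffer starts zeroed
transfer_pages(device, host, [1, 3])
print(host)  # [0, 0, 0, 0, 4, 5, 6, 7, 0, 0, 0, 0, 12, 13, 14, 15]
```

The real tests do this with tensors for both the KV cache and the indexer buffer, once per IO backend (kernel and direct).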



@hzh0425
Collaborator Author

hzh0425 commented Jan 20, 2026

/tag-and-rerun-ci

@huangtingwei9988 huangtingwei9988 self-assigned this Jan 20, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for CPU offloading for DeepSeek v3.2 by introducing NSATokenToKVPoolHost. The changes are well-structured, with the new class inheriting from a refactored and more extensible MLATokenToKVPoolHost. The inclusion of unit tests for the new functionality is a great practice. My review includes a couple of suggestions to improve code clarity and logging verbosity.

@hzh0425 hzh0425 changed the title Support v32 cpu offloading [HiCache]: Support DeepSeek v32 cpu offloading Jan 20, 2026
@hzh0425
Collaborator Author

hzh0425 commented Jan 21, 2026

@huangtingwei9988 @xiezhq-hermann could you take a look?

# Conflicts:
#	python/sglang/srt/mem_cache/hiradix_cache.py
@qmzznbxhl

I ran a private accuracy check: measure accuracy once, call the flush_cache API, then measure again. The score did not drop noticeably.

The HiRadixCache.reset logic should be adjusted to clear only HBM cache, not DRAM cache. Here is the version I’m proposing:

def reset(self):
    TreeNode.counter = 0
    # During construction, root_node is not initialized yet.
    if not hasattr(self, "root_node"):
        super().reset()
        return

    self.cache_controller.reset()
    # Keep host cache and tree structure; clear only device-side state.
    self.evictable_size_ = 0
    self.protected_size_ = 0
    stack = [self.root_node]
    while stack:
        node = stack.pop()
        node.value = None
        if node is not self.root_node:
            node.lock_ref = 0
        for child in node.children.values():
            stack.append(child)
    self.root_node.value = []
    self.root_node.host_value = self.root_node.host_value or []
    self.root_node.lock_ref = 1
    self.ongoing_write_through.clear()
    self.ongoing_load_back.clear()
    self.ongoing_prefetch.clear()
    self.ongoing_backup.clear()
    self._record_all_cleared_event()
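To illustrate the intent of that reset: the device-side `value` is dropped while `host_value` survives, so a later request can still load KV data back from DRAM. A minimal standalone demo of that invariant (hypothetical `Node` class, not the real `TreeNode`):

```python
# Hypothetical Node standing in for the radix-tree node; only the fields
# touched by the reset traversal are modeled.
class Node:
    def __init__(self, value=None, host_value=None):
        self.value = value            # device (HBM) cache reference
        self.host_value = host_value  # host (DRAM) cache reference
        self.lock_ref = 0
        self.children = {}

root = Node(value=[], host_value=[])
child = Node(value=["hbm-kv"], host_value=["dram-kv"])
root.children["c"] = child

# Same traversal shape as the proposed reset: clear device state only.
stack = [root]
while stack:
    node = stack.pop()
    node.value = None
    if node is not root:
        node.lock_ref = 0
    stack.extend(node.children.values())
root.value = []

print(child.value, child.host_value)  # None ['dram-kv']
```

After the walk, the tree structure and host values are intact, which is what lets the second GSM8K run above hit the CPU cache.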

I also ran a small experiment processing about 30k tokens in total:

Case 1: HiCache disabled, hit rate 16.1%, TTFT 1.14s
Case 2: HiCache enabled, IO transfer code commented out, hit rate 26.9%, host hit rate 9.80%, TTFT 0.974s
Ignoring IO transfer overhead, HiCache gives about a 0.166s TTFT improvement in this experiment.


@hzh0425 hzh0425 added the ready-to-merge The PR is ready to merge after the CI is green. label Feb 2, 2026
@xiezhq-hermann xiezhq-hermann merged commit 1805943 into sgl-project:main Feb 3, 2026
131 of 150 checks passed
charlesHsuGG pushed a commit to charlesHsuGG/sglang that referenced this pull request Feb 5, 2026
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
sfiisf pushed a commit to sfiisf/sglang that referenced this pull request Feb 5, 2026
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>

Labels

high priority · ready-to-merge (The PR is ready to merge after the CI is green.) · run-ci

4 participants