
Conversation

@HolyFalafel HolyFalafel commented Dec 3, 2025

Currently disables quantization of matmul and kv_cache ops in dynamic quantization mode

Contributor

Copilot AI left a comment


Pull request overview

This PR configures dynamic quantization to exclude specific matrix multiplication and key-value cache operations from quantization. The changes prevent quantization of attention mechanism components that may be sensitive to quantization effects.

Key Changes:

  • Added a blocklist to the quantization configuration
  • Excluded Matmul and KVCache operation types from quantization
  • Excluded specific attention-related operations (matmul_qk, matmul_av, k_cache, v_cache) by name
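Put together, the blocklist might look like the following fragment. This is a minimal sketch only: the key names (`blocklist`, `types`, `names`) are assumptions modeled on Intel Neural Compressor-style quantization configs, not the PR's actual diff.

```python
# Hypothetical sketch of the blocklist portion of the quantization config.
# Key names are assumptions, not taken from the PR's actual diff.
quant_config = {
    "blocklist": {
        # Exclude whole op types from dynamic quantization.
        "types": ["Matmul", "KVCache"],
        # Exclude specific attention-related ops by name.
        "names": ["matmul_qk", "matmul_av", "k_cache", "v_cache"],
    },
}
```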



github-actions bot commented Dec 3, 2025

✅ CI Passed

All checks passed successfully against the following vllm commit:
3a7751485b71ce5ef927e4aa03b28602cb90811c

@xuechendi
Collaborator

Is it for performance reasons?

@HolyFalafel
Author

Is it for performance reasons?

No. Dynamic quantization for KVCache and Matmul is a new feature that we don't want enabled by default.
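The effect of such an opt-out can be sketched as a simple gate over op names and types. This is a hypothetical illustration, not the actual vLLM/INC implementation; the `should_quantize` helper and its fields are assumptions.

```python
# Minimal sketch of how a name/type blocklist might gate quantization
# decisions. The helper and its structure are hypothetical, modeled only
# on the exclusions described in this PR.
BLOCKLIST = {
    "types": {"Matmul", "KVCache"},
    "names": {"matmul_qk", "matmul_av", "k_cache", "v_cache"},
}

def should_quantize(op_name: str, op_type: str, blocklist=BLOCKLIST) -> bool:
    """Return False for ops excluded by type or by exact name."""
    if op_type in blocklist["types"]:
        return False
    if op_name in blocklist["names"]:
        return False
    return True
```

With this gate, attention matmuls and KV-cache ops are left unquantized while other ops still go through dynamic quantization.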


github-actions bot commented Dec 4, 2025

✅ CI Passed

All checks passed successfully against the following vllm commit:
899e2ef558e7345b99bc0d53c2e1c60ffdca7470


github-actions bot commented Dec 8, 2025

🚧 CI Blocked

The main CI workflow was not started for the following reason:

Your branch is behind the base branch. Please merge or rebase to get the latest changes.

