
@lvyonghuan

  • Detect Turing GPUs and use safe LoRA reset method
  • Replace reset_lora() with update_lora_params({}) for Turing
  • Add CUDA synchronization and fallback handling
  • Maintain backward compatibility for non-Turing GPUs

Problem

Removing or switching LoRA models on the RTX 2080 Ti and other Turing-architecture GPUs raised an 'invalid configuration argument' error at misc_kernels.cu:281. This PR fixes that error.

Solution

  • Detect Turing GPUs using is_turing() utility
  • Use model.update_lora_params({}) instead of model.reset_lora() for Turing GPUs
  • Add CUDA synchronization for stability
  • Implement multi-level fallback handling
  • Maintain backward compatibility for non-Turing GPUs
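The solution steps above can be sketched roughly as follows. This is a hedged illustration, not the merged code: `safe_reset_lora` and the exact fallback order are assumptions based on the bullet points, while `model.update_lora_params({})`, `model.reset_lora()`, and the `is_turing()` utility are the calls named in this PR.

```python
import torch


def is_turing(device: int = 0) -> bool:
    # Turing GPUs (RTX 20xx / GTX 16xx) report compute capability 7.5.
    major, minor = torch.cuda.get_device_capability(device)
    return (major, minor) == (7, 5)


def safe_reset_lora(model, turing=None) -> None:
    """Reset LoRA weights, working around the Turing kernel-launch bug."""
    if turing is None:
        turing = torch.cuda.is_available() and is_turing()
    if turing:
        try:
            # On Turing, reset_lora() triggers "invalid configuration
            # argument" in misc_kernels.cu; loading an empty LoRA
            # parameter dict resets the weights without that kernel path.
            model.update_lora_params({})
        except Exception:
            # Last-resort fallback: attempt the original reset path anyway.
            model.reset_lora()
    else:
        # Non-Turing GPUs keep the original behavior.
        model.reset_lora()
    if torch.cuda.is_available():
        # Synchronize so any asynchronous CUDA error surfaces here, not later.
        torch.cuda.synchronize()
```

Passing `turing` explicitly makes the dispatch testable without a GPU; in practice the flag would come from the `is_turing()` check.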

Tested on an RTX 2080 Ti 22 GB.

Fixes #418

@bluearraygame-ops bluearraygame-ops mentioned this pull request Sep 26, 2025


Development

Successfully merging this pull request may close these issues.

[Bug] On a 2080 Ti, loading a LoRA with the nunchaku node raises an error
