Conversation

@Bluear7878 (Contributor)

Closes #479

This PR introduces support for applying LoRA weights to the Qwen-Image model in ComfyUI.

Known issues
Please be aware that there is a known limitation in this implementation: the CPU offload mode does not currently work when a LoRA is applied.

I am actively working on resolving this issue and plan to release a fix in a follow-up PR soon.

@NielsGx commented Oct 7, 2025

I just tested this (with your other PR for nunchaku itself) and it seems to work fine (see #479 (comment))
(I had to modify this utils.py so it says it supports v1.0.1, though; this seems to be an error on the main branch of ComfyUI-nunchaku too.)

This doesn't seem to support LoRAs that have trigger words, though? @Bluear7878
The Nunchaku Qwen-Image LoRA Loader node doesn't have clip as an input and output (unlike the default Load LoRA node).

@NielsGx commented Oct 7, 2025

Hmm it seems torch.compile is breaking this PR:
[screenshot of the error]
using https://civitai.com/models/1994714/breast-slider-qwen?modelVersionId=2257815
workflow: Qwen-Image.json

LoRAs work fine when I don't use torch.compile 😎
But torch.compile is a nice speed boost 😩

@NielsGx commented Oct 7, 2025

It seems LoRAs don't unload when I bypass the LoRA loader node.

I need to manually unload the model (the ComfyUI-Manager button) to be able to unload any LoRA.
Otherwise, if I run a workflow with one LoRA and then another workflow with a different one, they all load on top of each other (even if only one LoRA loader node is active at a time).

This is a pretty big issue: it behaves unpredictably, requires workarounds, and is painful for end users.

Fixes needed, from most important to least:

  • Fix LoRA unloading (check whether the LoRA selection has changed since the last run and, if so, fully reload the model plus LoRAs; see the rough sketch after this list)
  • Add support for LoRAs with trigger words (patch clip)
  • Fix torch.compile support (not required to make this usable, but would be nice, as it is yet another ~15% speedup on my end)
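
For the first point, something along these lines might work. This is only a rough Python sketch of the idea, not the PR's actual code; `reload_base_weights` and `apply_lora` are hypothetical stand-ins for whatever reload/apply hooks the Nunchaku model wrapper exposes:

```python
# Rough sketch: remember the last LoRA selection and force a full reload of the
# base weights whenever it changes, so LoRAs from previous runs never stack.

class LoraSelectionTracker:
    def __init__(self):
        self._last_selection = None

    def needs_reload(self, loras):
        """Return True when the active (name, strength) set differs from the previous run."""
        selection = tuple(sorted(loras))
        changed = selection != self._last_selection
        self._last_selection = selection
        return changed


_tracker = LoraSelectionTracker()

def apply_loras(model, loras):
    # `reload_base_weights` / `apply_lora` are hypothetical hooks, not real API names.
    if _tracker.needs_reload(loras):
        model.reload_base_weights()           # drop anything fused in earlier runs
        for name, strength in loras:
            model.apply_lora(name, strength)  # re-apply only the current selection
    return model
```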

@isaac-mcfadyen

> As the node Nunchaku Qwen-Image LoRA Loader doesn't have clip as input and output (unlike default Load LoRA node)

I don't believe the text encoder is ever fine-tuned when training a Qwen Image LoRA, so it doesn't need to be passed through the LoRA Apply node (meaning trigger words will work with this PR).

You can see this by looking at the weights of Qwen Image LoRAs that use trigger words on HuggingFace:

https://huggingface.co/flymy-ai/qwen-image-realism-lora/blob/main/flymy_realism.safetensors

Notice that all of the weights are for the diffusion model. The text encoder isn't modified, even though a trigger word is required. Also the workflow uses the LoraLoaderModelOnly node in Comfy, and the official ComfyUI workflows also use LoraLoaderModelOnly when applying LoRAs.

Compare to SDXL:

https://huggingface.co/artificialguybr/StudioGhibli.Redmond-V2/blob/main/StudioGhibli.Redmond-StdGBRRedmAF-StudioGhibli.safetensors

Note that lora_te weights are present (text encoder). Also note that Comfy needs the regular Load LoRA node for this one.
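
If anyone wants to verify a particular LoRA file locally, here is a minimal sketch (assuming the `safetensors` package is installed and the usual kohya/diffusers-style key prefixes; the file name is just the example from above) that counts keys by which component they target:

```python
from collections import Counter
from safetensors import safe_open

def lora_key_targets(path: str) -> Counter:
    """Count LoRA keys by the component they target, based on common key prefixes."""
    counts = Counter()
    with safe_open(path, framework="pt") as f:
        for key in f.keys():
            if key.startswith("lora_te"):     # kohya-style text encoder keys
                counts["text encoder"] += 1
            elif key.startswith(("lora_unet", "transformer", "diffusion_model")):
                counts["diffusion model"] += 1
            else:
                counts["other: " + key.split(".")[0]] += 1
    return counts

# The Qwen-Image LoRA above should report only diffusion-model keys,
# while the SDXL example also reports text-encoder (lora_te*) keys.
print(lora_key_targets("flymy_realism.safetensors"))
```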
