Qwen image edit doesn't always work on AMD HIP/ROCm #133
Replies: 9 comments
I recently reverted a few changes that were causing some memory issues. Can you please try again, and if you get the same results, please post a workflow and I will attempt to replicate your issue. Cheers!
My workflow: [screenshot]
Settings: [screenshot]
The memory situation when it doesn't work: [screenshot]
The memory situation when it works: [screenshot]

I tried the update, but unfortunately it didn't change anything. I can tell that the GPU actually crashes.
I think this is relevant. On the Power LoRA: [screenshot]
And later on the KSampler: [screenshot]
Hey, @wasd-tech - I am @italbar over on Discord if you are on that platform and are interested in likely more timely interactions. Still trying to get your workflow going. The CLIP .gguf I am using is giving the workflow fits. I'll post once I can get a standard generation to run.
Of course I can join a Discord, just leave the link.

You need another file to use Qwen2.5-VL with the GGUF format: https://huggingface.co/QuantStack/Qwen-Image-Edit-GGUF

I'm afraid this is a problem exclusively related to AMD. If you come to the same conclusion, do not hesitate to ask me to run tests and other checks.
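For anyone who wants to fetch those files from a script instead of the web UI, here is a minimal sketch using the `huggingface_hub` library; the filename below is a placeholder, so check the repository's file listing for the quant you actually want:

```python
# Minimal sketch, assuming huggingface_hub is installed
# (pip install huggingface_hub). The filename is illustrative only;
# browse the repo's "Files" tab for the actual quant names.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="QuantStack/Qwen-Image-Edit-GGUF",
    filename="Qwen_Image_Edit-Q4_K_M.gguf",  # hypothetical filename
)
print("Saved to:", model_path)
```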
Hey, @wasd-tech I know you have seen some variable results on your end. I am happy to keep this issue open if there is something we can work on that looks like a MultiGPU-specific issue. 😸
@pollockjj Thanks a lot for the help 👍. After some testing, it seems that Qwen image edit is the problem and not the Power LoRA, so I changed the title of the issue.
Since this seems to be a problem specific to my AMD GPU (ROCm fails to allocate and use that much memory; only with a virtual VRAM value of about 10 does it work), I suggest leaving this open for future reference, so people won't open multiple issues about the same thing. I will keep testing and see if anything changes, maybe with the major release of ROCm 7.0 or with different versions of PyTorch. @pollockjj Are you okay with this?
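For future re-tests across ROCm and PyTorch versions, a small sketch like the following (plain PyTorch, nothing MultiGPU-specific) prints the versions involved and the free/total VRAM that the runtime reports. On ROCm builds of PyTorch, the `torch.cuda` API is reused for HIP devices, and `torch.version.hip` is set instead of `torch.version.cuda`:

```python
# Quick environment report for re-testing across ROCm/PyTorch versions.
import torch

print("PyTorch:", torch.__version__)
print("HIP/ROCm:", torch.version.hip)        # None on non-ROCm builds
print("Device:", torch.cuda.get_device_name(0))

free_b, total_b = torch.cuda.mem_get_info(0)  # free/total VRAM in bytes
print(f"VRAM free/total: {free_b / 2**30:.1f} / {total_b / 2**30:.1f} GiB")
```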
Migrating this to a discussion; that way it remains an open, ongoing resource and can stay open indefinitely. Hope that works, @wasd-tech.


Original post:

Hi! I'm a huge fan of this project; it's helped me with video generation on my RX 9070 XT. Recently, I created a workflow for Qwen image editing and wanted to use DistorchLoader to load the GGUF. The problem is that, when it comes to processing the prompt, it loads the CLIP but also allocates memory for the model, because Power LoRA takes both the model and the CLIP as input. This behaviour completely breaks the workflow: instead of using the memory it has already allocated, it tries to reload the model. Ultimately, at the KSampler, it just hangs indefinitely.
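To capture "the memory situation" as text rather than screenshots, one option is a small poller run in a second terminal while the workflow executes. This is a sketch, assuming `rocm-smi` is on PATH and that its `--showmeminfo` flag matches recent builds; adjust if your version differs:

```python
# Minimal VRAM poller: run alongside ComfyUI to log memory over time,
# so the moment the KSampler hangs shows up in the log.
import subprocess
import time

while True:
    out = subprocess.run(
        ["rocm-smi", "--showmeminfo", "vram"],
        capture_output=True, text=True,
    ).stdout
    # Keep only the lines reporting VRAM usage to keep the log compact.
    for line in out.splitlines():
        if "Used" in line or "Total" in line:
            print(time.strftime("%H:%M:%S"), line.strip())
    time.sleep(2)
```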