[enhancement]: Allow stable diffusion to use multi-GPU resources to speed up single image generation #8377

@Neosettler

Description

Is there an existing issue for this?

  • I have searched the existing issues

Contact Details

No response

What should this feature add?

Data parallelism has taken off in multiple areas for Stable Diffusion. VRAM pooling across several GPUs is the most exciting option in my opinion, but it may require a significant code refactor. On the other hand, there is low-hanging fruit that seems worth the effort, like this one:

"In this code sample, i refactored the txt2img pipeline, but other pipelines such as img2img are similar concept. in a standard pipeline, diffusers need to generated the text guidance latents vector and unguidance latents for generate one single images, this logic is related to CFG_Scale settings, this step requires the text_encoder and unet to work together, but actually these 2 images can be generated in parallel from different GPU resources and pull then together into one GPU to sum the result."

https://github.com/hellojixian/StableDiffusionParallelPipeline
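To illustrate the idea (this is a minimal sketch, not code from the linked repository): the classifier-free guidance step normally runs the conditional and unconditional UNet passes as one batched forward on a single GPU, but they could instead run on two GPUs and be combined afterwards. The helper below assumes `unet_gpu0` and `unet_gpu1` are two copies of the same diffusers UNet placed on `cuda:0` and `cuda:1` (e.g. via `copy.deepcopy(pipe.unet).to("cuda:1")`); the function name and setup are hypothetical.

```python
import torch

def cfg_step_two_gpus(unet_gpu0, unet_gpu1, latents, t,
                      cond_emb, uncond_emb, guidance_scale):
    """Hypothetical sketch: run the conditional and unconditional UNet
    passes on separate GPUs, then combine them on GPU 0 for CFG."""
    # Conditional pass on cuda:0, unconditional pass on cuda:1.
    # CUDA kernel launches are asynchronous, so the two forward passes
    # can overlap until the cross-device copy below forces a sync.
    noise_cond = unet_gpu0(
        latents.to("cuda:0"), t,
        encoder_hidden_states=cond_emb.to("cuda:0"),
    ).sample
    noise_uncond = unet_gpu1(
        latents.to("cuda:1"), t,
        encoder_hidden_states=uncond_emb.to("cuda:1"),
    ).sample

    # Pull the unconditional result back to GPU 0 and apply the usual
    # classifier-free guidance combination.
    noise_uncond = noise_uncond.to("cuda:0")
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)
```

In a full pipeline the scheduler step and VAE decode would presumably stay on GPU 0; the linked repository applies the same splitting idea to the whole txt2img loop.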

Hopefully, this steers the developers towards supporting more multi-GPU features in the future.

Alternatives

No response

Additional Content

No response
