Conversation
- Add Vulkan backend version to backend_versions.json
- Implement Vulkan backend download logic in sd_server.cpp
- Support Windows, Linux, and macOS platforms
- Follow a similar pattern to the existing ROCm backend
- Update test expectations to include vulkan in supported backends
- Add vulkan to all relevant hardware profiles in server_system_info.py
- Vulkan is now available alongside cpu/rocm where appropriate
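The per-platform download logic described above can be sketched as a simple platform-to-asset lookup. This is a minimal, hypothetical sketch: the function name, URL layout, and asset names are illustrative assumptions, not the actual sd_server.cpp implementation.

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical sketch: assemble a Vulkan backend download URL per platform.
// The URL scheme and asset suffixes here are placeholders.
std::string vulkan_asset_url(const std::string& platform,
                             const std::string& version) {
    static const std::map<std::string, std::string> suffix = {
        {"windows", "bin-win-vulkan-x64.zip"},
        {"linux",   "bin-linux-vulkan-x64.zip"},
        {"macos",   "bin-macos-vulkan-arm64.zip"},
    };
    auto it = suffix.find(platform);
    if (it == suffix.end())
        throw std::runtime_error("unsupported platform: " + platform);
    return "https://example.com/sd-cpp/" + version + "/sd-" + it->second;
}
```

The same table-driven shape would mirror how the existing ROCm backend entry is structured, which is presumably why the PR description calls it a "similar pattern".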
Forgot to add vulkan checks to CI workflow
    {"amd_dgpu", {"gfx110X", "gfx120X"}},
    }},

    // stable-diffusion.cpp - Vulkan backend (cross-platform GPU)
Should Vulkan be the default over ROCm if it's required for complex models to work?
That makes sense.
FYI: #1089 has a bunch of general fixes/improvements for SD support that will make this PR easier to test.
@ramkrishna2910 when testing this PR make sure to use … Everything else will silently fall back to CPU, regardless of defaults. #1089 fixes that.
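The silent-fallback failure mode mentioned above can be sketched as follows. All names here are hypothetical illustrations, not the project's actual API; the point is that an unavailable backend should at least produce a warning before falling back to CPU.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch: pick the requested backend if it is available,
// otherwise warn loudly and fall back to "cpu" instead of doing so silently.
std::string select_backend(const std::string& requested,
                           const std::vector<std::string>& available) {
    for (const auto& b : available) {
        if (b == requested) return requested;  // requested backend is usable
    }
    std::cerr << "warning: backend '" << requested
              << "' unavailable, falling back to cpu\n";
    return "cpu";
}
```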
@makn87amd is there a minimum driver requirement?
    }},

    // stable-diffusion.cpp - Vulkan backend (cross-platform GPU)
    {"sd-cpp", "vulkan", {"windows"}, {
Please add Linux Vulkan: sd-master-636d3cb-bin-Linux-Ubuntu-24.04-x86_64-vulkan.zip
@makn87amd given that we are still seeing issues for this on gfx1150, should we wait for that to be resolved? On Halos, in any case, we would prefer ROCm over Vulkan.
I think that's perfectly reasonable. |
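The preference discussed above (ROCm over Vulkan on gfx1150/Halo parts) could be expressed as a small arch-based ordering. This is a hypothetical helper, not the actual hardware-profile code; the gfx prefix check is an assumption for illustration.

```cpp
#include <string>

// Hypothetical sketch: prefer ROCm on gfx115x (Strix Halo) where Vulkan
// still has open issues; otherwise use Vulkan as the GPU backend.
std::string preferred_gpu_backend(const std::string& gfx_arch) {
    if (gfx_arch.rfind("gfx115", 0) == 0) return "rocm";  // Halo: ROCm first
    return "vulkan";
}
```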
Add Vulkan support to sd-cpp.
Vulkan seems to be more robust for SOTA models and may offer different performance compared with ROCm.