Conversation

@limam-B commented Nov 23, 2025

What does this PR do?

Adds the NVIDIA CDI device (nvidia.com/gpu=all) to WSL2 container creation to enable actual GPU access. Previously, WSL2 containers had the GPU environment variables set but no GPU device mounted, so inference ran on the CPU even though the "GPU Inference" badge was shown.
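Roughly, the change adds a CDI device entry next to the existing /dev/dxg passthrough in the WSL2 code path. A minimal sketch (the field names follow the existing devices.push() calls quoted later in this thread; the PathInContainer and CgroupPermissions values are illustrative assumptions, not necessarily the exact diff):

// Minimal local type matching the fields used by the existing devices.push() calls.
interface DeviceSpec {
  PathOnHost: string;
  PathInContainer: string;
  CgroupPermissions: string;
}

const devices: DeviceSpec[] = [];

// Existing WSL2 passthrough: exposes the GPU hardware to the container.
devices.push({
  PathOnHost: '/dev/dxg',
  PathInContainer: '/dev/dxg',
  CgroupPermissions: '', // assumed value
});

// New in this PR: a CDI device name. Podman resolves it against the generated
// CDI spec (e.g. /etc/cdi/nvidia.yaml) and injects all NVIDIA GPUs plus the
// driver libraries described by that spec.
devices.push({
  PathOnHost: 'nvidia.com/gpu=all',
  PathInContainer: 'nvidia.com/gpu=all', // assumed to mirror PathOnHost
  CgroupPermissions: '',                 // assumed value
});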

Screenshot / video of UI

No UI changes, backend fix only

What issues does this PR fix or reference?

Fixes #3431

How to test this PR?

1. Windows 11 + WSL2 with an NVIDIA GPU and drivers installed
2. Install the NVIDIA Container Toolkit in WSL2 and generate the CDI config (nvidia-ctk cdi generate)
3. Enable "Experimental GPU" in AI Lab settings
4. Create a new service with any model
5. In WSL2, run nvidia-smi - it should show GPU usage and a llama-server process
6. Verify the container devices: podman inspect <container-id> | grep -A5 Devices - it should show the nvidia.com/gpu device (not empty)

Type: 'bind',
});

devices.push({
Collaborator
question: This is a flag, not a path to share; what is the rationale for doing that?

@limam-B (Author) commented Nov 26, 2025

This is a CDI (Container Device Interface) device identifier. Podman uses nvidia.com/gpu=all as a CDI spec name to automatically mount all NVIDIA GPU devices.
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
(screenshot)

@limam-B (Author)

Based on that link - I'm not an expert - I believe the Podman Devices array accepts CDI device names like nvidia.com/gpu=all in PathOnHost, so when Podman sees that format, it automatically resolves it via CDI and mounts all GPU devices.
This is the same pattern used for Linux (as in the screenshot above).
The alternative would be the --device CLI flag, but since we're using the API, this is the equivalent approach.

@jeffmaury requested a review from axel7083 on November 26, 2025 15:39
@bmahabirbu requested a review from ScrewTSW on November 30, 2025 20:23
@bmahabirbu (Contributor) commented

Can confirm that adding nvidia.com/gpu=all works (mimicking what the NVIDIA docs do and what RamaLama uses), but we already have this as part of the driver enablement:

devices.push({
            PathOnHost: '/dev/dxg',
            PathInContainer: '/dev/dxg',

Will investigate more, but NVIDIA GPU passthrough was working via WSL before and no code relating to the GPU has changed. I have a hunch it could be something else.

I know that the QE team was recently checking out GPU stuff and would appreciate their knowledge on this too! Thank you!

@limam-B (Author) commented Dec 1, 2025

(quoting @bmahabirbu's comment above)

Thanks for testing.

Regarding "nvidia gpu passthrough was working via wsl before" - that was with ai-lab-playground-chat-cuda which was based on cuda-ubi9-python-3.9 (full CUDA workbench with libraries embedded):

"labels": {
    "org.opencontainers.image.base.digest": "sha256:b5f15b03e09a5a4193bad4b6027d20098dcc694b82ddb618c22d09f2b8a7723e",
    "org.opencontainers.image.base.name": "quay.io/opendatahub/workbench-images:cuda-ubi9-python-3.9-20231206"
}

https://github.com/containers/podman-desktop-extension-ai-lab-playground-images/pkgs/container/podman-desktop-extension-ai-lab-playground-images%2Fai-lab-playground-chat-cuda/413979254?tag=e85acc66a1849a0c6841cb6d7aa8982e8d1aaa88

The switch to ramalama/cuda-llama-server in e34d59f changed this - the new image expects CDI injection.

  • /dev/dxg - provides GPU hardware access
  • nvidia.com/gpu=all (CDI) - provides the CUDA software stack

The old image had the CUDA stack baked in; the new one doesn't.

@bmahabirbu (Contributor) commented

Ah thanks for the in-depth explanation that makes sense!

@axel7083 (Contributor) left a comment

We have a pretty old issue #1824 on detecting the NVIDIA CDI.

As of today, we do some magic 🪄 trick to let the container access the GPU on WSL, which is not ideal but works for all users, even when they do not have CDI installed.

I am okay with this change, if it does not cause errors for users that do not have it.

});

devices.push({
PathOnHost: 'nvidia.com/gpu=all',
Contributor

question: what happens if the podman machine does not have the NVIDIA CDI installed?

@limam-B (Author) commented Dec 1, 2025

I guess that if CDI isn't configured, Podman will fail to resolve nvidia.com/gpu=all and the container won't start.
But users enabling GPU support should have the nvidia-container-toolkit installed, which generates the CDI spec.
Maybe we should add a check like the Linux case does with isNvidiaCDIConfigured() (see the sketch below)?
I'll confirm by reproducing this scenario and post more later.

@limam-B (Author) commented

Test Scenario: What happens without CDI?

Check the current CDI status in the Podman machine:

podman machine ssh cat /etc/cdi/nvidia.yaml

The file exists, so CDI is configured.

Temporarily disable CDI:

  1. SSH into the Podman machine

podman machine ssh

  2. Disable/back up the CDI config

sudo mv /etc/cdi/nvidia.yaml /etc/cdi/nvidia.yaml.disabled

  3. Exit the SSH session

exit


Test Results

Inference server with [ GPU ENABLED | no CDI ] in AI Lab.

(screenshot)

Inference server with [ GPU DISABLED | no CDI ] in AI Lab.

(screenshot)

Why this behavior is correct:

Thanks to the conditional checks in the code:

(screenshots of the conditional checks)

The CDI device is only added when GPU is explicitly enabled in settings.
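For reference, a sketch of the kind of gating the screenshots show (variable names here are illustrative, not the actual code):

declare const gpuEnabled: boolean; // "Experimental GPU" setting in AI Lab
declare const isWSL: boolean;      // running against a WSL-based podman machine
declare const devices: { PathOnHost: string; PathInContainer: string; CgroupPermissions: string }[];

// The CDI device is only pushed when GPU support is explicitly enabled on WSL,
// so CPU-only users never hit the CDI code path.
if (gpuEnabled && isWSL) {
  devices.push({
    PathOnHost: 'nvidia.com/gpu=all',
    PathInContainer: 'nvidia.com/gpu=all',
    CgroupPermissions: '',
  });
}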


Conclusion:

This is the correct behavior since RamaLama requires CDI:
https://github.com/containers/ramalama/blob/main/docs/ramalama-cuda.7.md

  • CPU mode is unaffected (no CDI device is added when GPU is disabled)
  • GPU mode gives a clear error when CDI is missing
  • GPU mode works when CDI is properly configured
  • RamaLama requires CDI (documented in the link above)
  • The AI Lab extension now requires CDI as well (documentation update suggested below)

Background:

The "magic trick" in #1824 worked with the old ai-lab-playground-chat-cuda image (CUDA embedded).
RamaLama images expect CDI injection instead , this change happened in e34d59f.

We should update the AI Lab documentation to mention CDI is required for WSL GPU support. Maybe?
