
update the main branch for 2502 release #504

Merged
merged 15 commits into NVIDIA:main
Feb 26, 2025

Conversation

nvliyuan
Collaborator

Update the main branch for the 25.02 release.
Please create a merge commit, not a squash merge.

nvauto and others added 15 commits November 25, 2024 06:14
* add license header and check workflow

Signed-off-by: YanxuanLiu <[email protected]>

* correct xml comment format

Signed-off-by: YanxuanLiu <[email protected]>

---------

Signed-off-by: YanxuanLiu <[email protected]>
[auto-merge] branch-24.12 to branch-25.02 [skip ci] [bot]
[auto-merge] branch-24.12 to branch-25.02 [skip ci] [bot]
Follow-up of NVIDIA/spark-rapids-common#22,

to avoid having to update action details in multiple `spark-rapids*` repos in the
future.

Signed-off-by: Peixin Li <[email protected]>
… (NVIDIA#483)

### Support for running DL Inference notebooks on CSP environments.
- Refactored the Triton sections to use PyTriton, a Python API for the
Triton Inference Server that avoids Docker. Once this PR is merged, the
Triton sections no longer need to be skipped in the CI pipeline
@YanxuanLiu .
- Updated the notebooks with instructions to run on Databricks/Dataproc.
- Updated the Torch notebooks with best practices for ahead-of-time TensorRT
compilation.
- Cleaned up the README, removing the instructions to start Jupyter with
PySpark (we need a cell to attach to the standalone cluster for CI/CD anyway,
so this should reduce confusion for users).

Notebook outputs are saved from running locally, but all notebooks were
tested on Databricks/Dataproc.

---------

Signed-off-by: Rishi Chandra <[email protected]>
Pin torch<=2.5.1 until torch-tensorrt has a version compatible with torch
2.6.
Added a troubleshooting note on updating libstdc++ to meet the Triton server
binary requirements; the issue can occur if the default conda channel is
used and is not up to date.
Removed ipykernel from the requirements since it is already pulled in by
jupyterlab (and it slows pip down a lot).
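The pin and the libstdc++ note above could translate into something like the following (a sketch only; the repo's actual requirements file and conda channels may differ):

```shell
# Pin torch until torch-tensorrt publishes a build compatible with torch 2.6
# (constraint taken from the commit message above):
pip install "torch<=2.5.1"

# If the Triton server binary fails due to an outdated libstdc++ (e.g. a
# GLIBCXX version error), updating it from conda-forge is one common fix:
conda install -c conda-forge libstdcxx-ng
```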

---------

Signed-off-by: Rishi Chandra <[email protected]>
Consolidate the util functions into a server manager class to simplify
usage. (Note that the notebooks were rerun, but only the Triton utility
invocations changed.)
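A hypothetical shape for such a manager class (names and behavior invented for illustration; the repo's actual Triton utilities differ):

```python
class ServerManager:
    """Hypothetical sketch: consolidate start/stop helpers behind one object,
    so notebooks interact with a single entry point (often via `with`)."""

    def __init__(self, model_name):
        self.model_name = model_name
        self.running = False

    def __enter__(self):
        # The real code would launch the Triton server process here.
        self.running = True
        return self

    def __exit__(self, *exc):
        # ...and shut the server down here, even if the body raised.
        self.running = False
        return False

with ServerManager("my_model") as mgr:
    assert mgr.running  # server is "up" inside the block
```

Using a context manager guarantees the server is torn down even when a notebook cell fails partway through.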

Also, on Dataproc the utils file needs to be copied to the driver's root
directory instead of the same directory as the notebooks, since sc.addPyFile
on Dataproc only accepts absolute paths from root.
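The constraint above can be sketched as follows (the path and the utils filename are hypothetical; `sc` would be the live SparkContext in the notebook):

```python
import os

# Hypothetical location: on Dataproc, copy the utils file to the driver's
# root directory and pass an absolute path, since sc.addPyFile there only
# accepts absolute paths from root.
utils_path = "/root/server_utils.py"  # hypothetical filename

assert os.path.isabs(utils_path)      # a relative path would be rejected

# In the notebook (requires a live SparkContext):
# sc.addPyFile(utils_path)
```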

---------

Signed-off-by: Rishi Chandra <[email protected]>
Add deepseek-r1 and gemma-7b LLM batch inference notebooks.
Updated the CSP instructions, since these notebooks require >20 GB of GPU RAM
(A10/L4).

---------

Signed-off-by: Rishi Chandra <[email protected]>
Added a Qwen notebook, mainly to demonstrate how to leverage system
prompts/chat templates for batch inference. It also serves as a non-gated
and faster alternative to Gemma.
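As a rough illustration, Qwen-style chat models consume ChatML-formatted prompts; the helper below is a hand-rolled sketch of what `tokenizer.apply_chat_template` produces, not the notebook's actual code:

```python
def to_chatml(messages):
    # Minimal ChatML formatter (sketch): each turn is wrapped in
    # <|im_start|>role ... <|im_end|> markers, and the prompt ends with an
    # open assistant turn for the model to complete.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize this review in one sentence."},
])
```

In practice the tokenizer's own `apply_chat_template` should be used, since each model family ships its template with its tokenizer config.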

---------

Signed-off-by: Rishi Chandra <[email protected]>
This PR updates the plugin version to v25.02.1.
Signed-off-by: liyuan <[email protected]>

Signed-off-by: liyuan <[email protected]>
Use streaming to avoid saving an entire Huggingface dataset to disk for
large datasets. Updated the diagram to clarify the client/server
interaction.
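The idea behind streaming can be shown with a plain generator (a stand-in sketch; the notebooks would use `datasets.load_dataset(..., streaming=True)`, which yields examples lazily instead of materializing the dataset on disk):

```python
from itertools import islice

def stream_examples():
    # Stand-in for an iterable (streaming) dataset: records are produced
    # on demand rather than downloaded and stored up front.
    i = 0
    while True:
        yield {"id": i, "text": f"example {i}"}
        i += 1

# Consume only the records actually needed for this batch.
batch = list(islice(stream_examples(), 3))
```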

---------

Signed-off-by: Rishi Chandra <[email protected]>
@nvliyuan nvliyuan merged commit 3264e1c into NVIDIA:main Feb 26, 2025
3 of 4 checks passed