Commit: cleanups
Signed-off-by: Rishi Chandra <[email protected]>
rishic3 committed Feb 20, 2025
1 parent 13ff674 commit 54ac6bc
Showing 3 changed files with 12 additions and 10 deletions.
@@ -6,11 +6,11 @@
"source": [
"<img src=\"http://developer.download.nvidia.com/notebooks/dlsw-notebooks/tensorrt_torchtrt_efficientnet/nvidia_logo.png\" width=\"90px\">\n",
"\n",
"# PySpark LLM Inference: DeepSeek-R1\n",
"# PySpark LLM Inference: DeepSeek-R1 Reasoning Q/A\n",
"\n",
"In this notebook, we demonstrate distributed batch inference with [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1), using open weights on Huggingface.\n",
"\n",
"We use [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) as demonstration. DeepSeek's distilled models are based on open-source LLMs (such as Llama/Qwen), and are fine-tuned using samples generated by DeepSeek-R1 to perform multi-step reasoning tasks.\n",
"We use [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) as demonstration. DeepSeek's distilled models are based on open-source LLMs (such as Llama/Qwen), and are fine-tuned using samples generated by DeepSeek-R1. We'll show how to use the model to reason through word problems.\n",
"\n",
"**Note:** Running this model on GPU with 16-bit precision requires **~18GB** of GPU RAM. Make sure your instances have sufficient GPU capacity."
]
@@ -261,6 +261,7 @@
"outputs": [],
"source": [
"import os\n",
"import pandas as pd\n",
"import datasets\n",
"from datasets import load_dataset\n",
"datasets.disable_progress_bars()"
@@ -330,7 +331,7 @@
"source": [
"#### Load DataFrame\n",
"\n",
"Load the Orca Math Word Problems dataset from Huggingface and store in a Spark Dataframe."
"Load the first 500 samples of the [Orca Math Word Problems dataset](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k) from Huggingface and store in a Spark Dataframe."
]
},
{
@@ -339,8 +340,8 @@
"metadata": {},
"outputs": [],
"source": [
"dataset = load_dataset(\"microsoft/orca-math-word-problems-200k\", split=\"train[:1%]\")\n",
"dataset = dataset.to_pandas()[\"question\"]"
"dataset = load_dataset(\"microsoft/orca-math-word-problems-200k\", split=\"train\", streaming=True)\n",
"dataset = pd.Series([sample[\"question\"] for sample in dataset.take(500)])"
]
},
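The change above swaps the eager `train[:1%]` slice (which would download the full 200k-row split and then keep ~2,000 rows) for a streaming load that fetches only the first 500 records. A minimal sketch of the resulting step, assuming an active SparkSession and a single `question` column; the `spark` and `df` names are illustrative and not part of this diff:

```python
import pandas as pd
from datasets import load_dataset
from pyspark.sql import SparkSession

# Stream the dataset: records are fetched lazily, so take(500)
# pulls only the first 500 samples over the network.
dataset = load_dataset(
    "microsoft/orca-math-word-problems-200k", split="train", streaming=True
)
questions = pd.Series([sample["question"] for sample in dataset.take(500)])

# Illustrative follow-up (assumed, not shown in this diff): store the
# sampled questions in a Spark DataFrame for distributed inference.
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(questions.to_frame("question"))
```

This also explains the new `import pandas as pd` in the imports cell: the streamed samples are collected into a `pd.Series` instead of calling `.to_pandas()` on a fully downloaded dataset.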
{
@@ -6,11 +6,11 @@
"source": [
"<img src=\"http://developer.download.nvidia.com/notebooks/dlsw-notebooks/tensorrt_torchtrt_efficientnet/nvidia_logo.png\" width=\"90px\">\n",
"\n",
"# PySpark LLM Inference: Gemma-7b\n",
"# PySpark LLM Inference: Gemma-7b Code Comprehension\n",
"\n",
"In this notebook, we demonstrate distributed inference with the Google [Gemma-7b-instruct](https://huggingface.co/google/gemma-7b-it) LLM, using open-weights on Huggingface.\n",
"\n",
"The Gemma-7b-instruct is an instruction-fine-tuned version of the Gemma-7b base model.\n",
"The Gemma-7b-instruct is an instruction-fine-tuned version of the Gemma-7b base model. We'll show how to use the model to perform code comprehension tasks.\n",
"\n",
"**Note:** Running this model on GPU with 16-bit precision requires **~18 GB** of GPU RAM. Make sure your instances have sufficient GPU capacity."
]
@@ -200,6 +200,7 @@
"outputs": [],
"source": [
"import os\n",
"import pandas as pd\n",
"import datasets\n",
"from datasets import load_dataset\n",
"datasets.disable_progress_bars()"
@@ -269,7 +270,7 @@
"source": [
"#### Load DataFrame\n",
"\n",
"Load the code comprehension dataset from Huggingface and store in a Spark Dataframe."
"Load the first 500 samples of the [Code Comprehension dataset](https://huggingface.co/datasets/imbue/code-comprehension) from Huggingface and store in a Spark Dataframe."
]
},
{
@@ -278,8 +279,8 @@
"metadata": {},
"outputs": [],
"source": [
"dataset = load_dataset(\"imbue/code-comprehension\", split=\"train[:1%]\")\n",
"dataset = dataset.to_pandas()[\"question\"]"
"dataset = load_dataset(\"imbue/code-comprehension\", split=\"train\", streaming=True)\n",
"dataset = pd.Series([sample[\"question\"] for sample in dataset.take(500)])"
]
},
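The Gemma notebook receives the identical streaming change. For context, here is a hypothetical sketch of how a `question` DataFrame like the one built above is typically fed to distributed inference in Spark via `predict_batch_udf`; the model id comes from the notebook's title, but the pipeline setup and generation parameters below are assumptions, not taken from this diff:

```python
import torch
from pyspark.ml.functions import predict_batch_udf
from pyspark.sql.types import StringType

def make_predict_fn():
    # Load the model once per executor (assumed setup, not from this diff).
    from transformers import pipeline
    pipe = pipeline("text-generation", model="google/gemma-7b-it",
                    device_map="auto", torch_dtype=torch.bfloat16)

    def predict(inputs):
        # `inputs` arrives as a numpy array of strings for a StringType column.
        outputs = pipe(list(inputs), max_new_tokens=256, return_full_text=False)
        return [out[0]["generated_text"] for out in outputs]

    return predict

generate = predict_batch_udf(make_predict_fn, return_type=StringType(), batch_size=8)
answers = df.withColumn("answer", generate("question"))
```

Loading the model inside `make_predict_fn` keeps one copy per executor rather than serializing weights with the UDF, which matters for a model that needs ~18 GB of GPU RAM at 16-bit precision.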
{
[The third changed file could not be displayed.]
