
Commit 0f57a95

Add tutorials for Deploying ComfyUI on Serverless + Running SmolLM3 on Pods with JupyterLab (#342)
1 parent 846ac01 commit 0f57a95

File tree

8 files changed: +599 additions, -299 deletions


docs.json

Lines changed: 8 additions & 18 deletions
@@ -231,11 +231,11 @@
 "tab": "Tutorials",
 "groups": [
 {
-"group": "Introduction",
+"group": "Get started",
 "pages": [
 "tutorials/introduction/overview",
 {
-"group": "Containers",
+"group": "Intro to containers",
 "pages": [
 "tutorials/introduction/containers/overview",
 "tutorials/introduction/containers",
@@ -250,6 +250,7 @@
 "group": "Serverless",
 "pages": [
 "tutorials/serverless/run-your-first",
+"tutorials/serverless/comfyui",
 "tutorials/serverless/generate-sdxl-turbo",
 "tutorials/serverless/run-gemma-7b",
 "tutorials/serverless/run-ollama-inference"
@@ -258,22 +259,19 @@
 {
 "group": "Pods",
 "pages": [
-"tutorials/pods/comfyui",
 "tutorials/pods/run-your-first",
+"tutorials/pods/comfyui",
 "tutorials/pods/run-fooocus",
 "tutorials/pods/run-ollama",
 "tutorials/pods/build-docker-images",
 "tutorials/pods/fine-tune-llm-axolotl"
 ]
 },
 {
-"group": "SDKs",
+"group": "Python SDK",
 "pages": [
-{
-"group": "Python",
-"pages": [
 {
-"group": "Get Started",
+"group": "Get started",
 "pages": [
 "tutorials/sdks/python/get-started/introduction",
 "tutorials/sdks/python/get-started/prerequisites",
@@ -282,7 +280,7 @@
 ]
 },
 {
-"group": "Serverless Handler",
+"group": "Serverless handler",
 "pages": [
 "tutorials/sdks/python/101/hello",
 "tutorials/sdks/python/101/local-server-testing",
@@ -293,25 +291,17 @@
 ]
 },
 {
-"group": "Handler Examples",
+"group": "Handler examples",
 "pages": [
 "tutorials/sdks/python/102/huggingface-models",
 "tutorials/sdks/python/102/stable-diffusion-text-to-image"
 ]
 }
 ]
-}
-]
 },
 {
 "group": "Migrations",
 "pages": [
-{
-"group": "Banana",
-"pages": [
-"tutorials/migrations/banana/overview"
-]
-},
 {
 "group": "Cog",
 "pages": [

pods/connect-to-a-pod.mdx

Lines changed: 8 additions & 3 deletions
@@ -16,9 +16,11 @@ To connect using the web terminal:
 
 1. Navigate to the [Pods page](https://console.runpod.io/pods) in the Runpod console.
 2. Expand the desired Pod and select **Connect**.
-3. Select **Open Web Terminal**.
-
-This will open a new tab in your browser with a web terminal session.
+3. If your web terminal is **Stopped**, click **Start**.
+<Tip>
+If clicking **Start** does nothing, try refreshing the page.
+</Tip>
+4. Click **Open Web Terminal** to open a new tab in your browser with a web terminal session.
 
 ## JupyterLab connection
 
@@ -30,6 +32,9 @@ To connect to JupyterLab (if it's available on your Pod):
 2. Once the Pod is running, navigate to the [Pods page](https://console.runpod.io/pods) in the Runpod console.
 3. Find the Pod you created and click the **Connect** button. If it's grayed out, your Pod hasn't finished starting up yet.
 4. In the window that opens, under **HTTP Services**, look for a link to **Jupyter Lab** (or a similarly named service on the configured HTTP port, often 8888). Click this link to open the JupyterLab workspace in your browser.
+<Tip>
+If the JupyterLab tab displays a blank page for more than a minute or two, try restarting the Pod and opening it again.
+</Tip>
 5. Once in JupyterLab, you can create new notebooks (e.g., under **Notebook**, select **Python 3 (ipykernel)**), upload files, and run code interactively.
 
 ## SSH terminal connection
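The JupyterLab link the console exposes under **HTTP Services** follows Runpod's HTTP proxy convention. As a hedged sketch (the URL pattern `https://<pod-id>-<port>.proxy.runpod.net` is an assumption based on that convention, and `pod_http_url` is a hypothetical helper, not a Runpod SDK function), you could construct the link yourself, e.g. to open JupyterLab on port 8888 directly:

```python
# Hypothetical helper that builds the proxy URL for an HTTP service on a
# Pod. Assumes Runpod's proxy convention https://<pod-id>-<port>.proxy.runpod.net;
# verify against the link shown in the console for your Pod.
def pod_http_url(pod_id: str, port: int = 8888) -> str:
    if not (1 <= port <= 65535):
        raise ValueError(f"invalid port: {port}")
    return f"https://{pod_id}-{port}.proxy.runpod.net"

# "abc123xyz" is a placeholder Pod ID, not a real Pod.
print(pod_http_url("abc123xyz"))  # https://abc123xyz-8888.proxy.runpod.net
```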

tutorials/introduction/overview.mdx

Lines changed: 27 additions & 58 deletions
@@ -1,68 +1,37 @@
 ---
-title: "Overview"
+title: "Featured tutorials"
+sidebarTitle: "Featured tutorials"
+description: "Learn how to build and deploy AI applications on Runpod with step-by-step guides."
 ---
 
-Learn how to build and deploy applications on the Runpod platform with this set of tutorials. Covering tools, technologies, and deployment methods, including Containers, Docker, and Serverless implementation.
+<Note>
+This page includes our most recently tested and updated tutorials. Many of our old guides are out of date and include deprecated instructions—we're actively working on updating them.
+</Note>
 
-## Serverless
+This section includes tutorials that will help you build and deploy specialized AI applications on the Runpod platform, covering basic concepts and advanced implementations.
 
-Explore how to run and deploy AI applications using Runpod's Serverless platform.
+## Get started
 
-### GPUs
+If you're new to Runpod, start with these foundational tutorials to understand the platform and deploy your first application:
 
-* [Generate images with SDXL Turbo](/tutorials/serverless/generate-sdxl-turbo): Learn how to build a web application using Runpod's Serverless Workers and SDXL Turbo from Stability AI, a fast text-to-image model, and send requests to an Endpoint to generate images from text-based inputs.
-* [Run Google's Gemma model](/tutorials/serverless/run-gemma-7b): Deploy Google's Gemma model on Runpod's vLLM Worker, create a Serverless Endpoint, and interact with the model using OpenAI APIs and Python.
-* [Run your first serverless endpoint with Stable Diffusion](/tutorials/serverless/run-your-first): Use Runpod's Stable Diffusion v1 inference endpoint to generate images, set up your serverless worker, start a job, check job status, and retrieve results.
+<CardGroup cols={2}>
+<Card title="Create an image generation endpoint with Serverless" href="/tutorials/serverless/run-your-first" icon="cloud-bolt" iconType="solid">
+Deploy a Stable Diffusion endpoint and generate your first AI image using Serverless.
+</Card>
+<Card title="Run LLM inference on Pods with JupyterLab" href="/tutorials/pods/run-your-first" icon="text-size" iconType="solid">
+Launch JupyterLab on a GPU Pod and run LLM inference using the Python `transformers` library.
+</Card>
+</CardGroup>
 
-### CPUs
+## Deploy ComfyUI
 
-* [Run an Ollama Server on a Runpod CPU](/tutorials/serverless/run-ollama-inference): Set up and run an Ollama server on Runpod CPU for inference with this step-by-step tutorial.
+Learn how to deploy ComfyUI on Serverless or Pods and generate images with text-to-image models.
 
-## Pods
-
-Discover how to leverage Runpod Pods to run and manage your AI applications.
-
-### GPUs
-
-* [Fine tune an LLM with Axolotl on Runpod](/tutorials/pods/fine-tune-llm-axolotl): Learn how to fine-tune large language models with Axolotl on Runpod, a streamlined workflow for configuring and training AI models with GPU resources, and explore examples for LLaMA2, Gemma, LLaMA3, and Jamba.
-* [Run Fooocus in Jupyter Notebook](/tutorials/pods/run-fooocus): Learn how to run Fooocus, an open-source image generating model, in a Jupyter Notebook and launch the Gradio-based interface in under 5 minutes, with minimal requirements of 4GB Nvidia GPU memory and 8GB system memory.
-* [Build Docker Images on Runpod with Bazel](/tutorials/pods/build-docker-images): Learn how to build Docker images on Runpod using Bazel, a powerful build tool for creating consistent and efficient builds.
-* [Set up Ollama on your GPU Pod](/tutorials/pods/run-ollama): Set up Ollama, a powerful language model, on a GPU Pod using Runpod, and interact with it through HTTP API requests, harnessing the power of GPU acceleration for your AI projects.
-* [Run your first Fast Stable Diffusion with Jupyter Notebook](/tutorials/pods/run-your-first): Deploy a Jupyter Notebook to Runpod and generate your first image with Stable Diffusion in just 20 minutes, requiring Hugging Face user access token, Runpod infrastructure, and basic familiarity with the platform.
-
-## Containers
-
-Understand the use of Docker images and containers within the Runpod ecosystem.
-
-* [Persist data outside of containers](/tutorials/introduction/containers/persist-data): Learn how to persist data outside of containers by creating named volumes, mounting volumes to data directories, and accessing persisted data from multiple container runs and removals in Docker.
-* [Containers overview](/tutorials/introduction/containers/overview): Discover the world of containerization with Docker, a platform for isolated environments that package applications, frameworks, and libraries into self-contained containers for consistent and reliable deployment across diverse computing environments.
-* [Dockerfile](/tutorials/introduction/containers/create-dockerfiles): Learn how to create a Dockerfile to customize a Docker image and use an entrypoint script to run a command when the container starts, making it a reusable and executable unit for deploying and sharing applications.
-* [Docker commands](/tutorials/introduction/containers/docker-commands): Runpod enables BYOC development with Docker, providing a reference sheet for commonly used Docker commands, including login, images, containers, Dockerfile, volumes, network, and execute.
-
-## Integrations
-
-Explore how to integrate Runpod with other tools and platforms like OpenAI, SkyPilot, and Charm's Mods.
-
-### OpenAI
-
-* [Overview](/tutorials/migrations/openai/overview): Use the OpenAI SDK to integrate with your Serverless Endpoints.
-
-### SkyPilot
-
-* [Running Runpod on SkyPilot](/integrations/skypilot): Learn how to deploy Pods from Runpod using SkyPilot.
-
-### Mods
-
-* [Running Runpod on Mods](/integrations/mods): Learn to integrate into Charm's Mods tool chain and use Runpod as the Serverless Endpoint.
-
-## Migration
-
-Learn how to migrate from other tools and technologies to Runpod.
-
-### Cog
-
-* [Cog Migration](/tutorials/migrations/cog/overview): Migrate your Cog model from [Replicate.com](https://www.replicate.com) to Runpod by following this step-by-step guide, covering setup, model identification, Docker image building, and serverless endpoint creation.
-
-### Banana
-
-* [Banana migration](/tutorials/migrations/banana/overview): Quickly migrate from Banana to Runpod with Docker, leveraging a bridge between the two environments for a seamless transition. Utilize a Dockerfile to encapsulate your environment and deploy existing projects to Runpod with minimal adjustments.
+<CardGroup cols={2}>
+<Card title="Generate images with ComfyUI on Serverless" href="/tutorials/serverless/comfyui" icon="brackets-curly" iconType="solid">
+Deploy ComfyUI on Serverless and generate images using JSON workflows.
+</Card>
+<Card title="Generate images with ComfyUI on Pods" href="/tutorials/pods/comfyui" icon="images" iconType="solid">
+Deploy ComfyUI on a GPU Pod and generate images using the ComfyUI web interface.
+</Card>
+</CardGroup>
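The featured "Create an image generation endpoint with Serverless" tutorial ends with sending a request to the deployed endpoint. As a minimal sketch of that request shape, built but not sent (the endpoint ID and API key are placeholders, and the `prompt` input field is an assumption for a text-to-image worker since input schemas vary by worker):

```python
# Sketch of a Runpod Serverless request: POST to the endpoint's /runsync
# route with a JSON body whose payload lives under "input". Placeholders
# only -- substitute your own endpoint ID and API key before sending.
import json

ENDPOINT_ID = "your-endpoint-id"  # placeholder
API_KEY = "your-api-key"          # placeholder

url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {"input": {"prompt": "a photo of an astronaut riding a horse"}}
body = json.dumps(payload)

print(url)  # https://api.runpod.ai/v2/your-endpoint-id/runsync
```

`/runsync` blocks until the job finishes and returns the result in one response; the asynchronous `/run` route instead returns a job ID you poll for status.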

tutorials/migrations/banana/overview.mdx

Lines changed: 0 additions & 180 deletions
This file was deleted.

0 commit comments
