---
title: "Featured tutorials"
sidebarTitle: "Featured tutorials"
description: "Learn how to build and deploy AI applications on Runpod with step-by-step guides."
---

<Note>
This page includes our most recently tested and updated tutorials. Many of our older guides are out of date and include deprecated instructions; we're actively working on updating them.
</Note>

This section includes tutorials that help you build and deploy specialized AI applications on the Runpod platform, from basic concepts to advanced implementations.

## Get started

If you're new to Runpod, start with these foundational tutorials to understand the platform and deploy your first application:

<CardGroup cols={2}>
  <Card title="Create an image generation endpoint with Serverless" href="/tutorials/serverless/run-your-first" icon="cloud-bolt" iconType="solid">
    Deploy a Stable Diffusion endpoint and generate your first AI image using Serverless.
  </Card>
  <Card title="Run LLM inference on Pods with JupyterLab" href="/tutorials/pods/run-your-first" icon="text-size" iconType="solid">
    Launch JupyterLab on a GPU Pod and run LLM inference using the Python `transformers` library.
  </Card>
</CardGroup>
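Both of these tutorials end with you sending work to a deployed endpoint. As a preview of the Serverless path, requests to a Serverless endpoint are JSON payloads wrapped in an `input` object, POSTed to the endpoint's `/runsync` (synchronous) or `/run` (asynchronous) route. A minimal sketch of building such a request; the endpoint ID and the `prompt` field are placeholders, since the exact input schema depends on the worker you deploy:

```python
import json

def build_payload(prompt: str) -> dict:
    # Serverless workers receive their arguments wrapped in an "input" object.
    # The "prompt" field here is illustrative; your worker defines the schema.
    return {"input": {"prompt": prompt}}

def runsync_url(endpoint_id: str) -> str:
    # /runsync blocks until the job finishes; /run returns a job ID to poll.
    return f"https://api.runpod.ai/v2/{endpoint_id}/runsync"

payload = build_payload("a watercolor lighthouse at dusk")
url = runsync_url("YOUR_ENDPOINT_ID")  # placeholder endpoint ID

print(url)
print(json.dumps(payload))

# To send it for real, attach your Runpod API key as a bearer token, e.g.:
#   requests.post(url, json=payload,
#                 headers={"Authorization": "Bearer YOUR_API_KEY"})
```

The Serverless tutorial walks through deploying the worker that receives this payload and reading the generated image out of the response.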

## Deploy ComfyUI

Learn how to deploy ComfyUI on Serverless or Pods and generate images with text-to-image models.
20 | 29 |
|
21 |
| -## Pods |
22 |
| - |
23 |
| -Discover how to leverage Runpod Pods to run and manage your AI applications. |
24 |
| - |
25 |
| -### GPUs |
26 |
| - |
27 |
| -* [Fine tune an LLM with Axolotl on Runpod](/tutorials/pods/fine-tune-llm-axolotl): Learn how to fine-tune large language models with Axolotl on Runpod, a streamlined workflow for configuring and training AI models with GPU resources, and explore examples for LLaMA2, Gemma, LLaMA3, and Jamba. |
28 |
| -* [Run Fooocus in Jupyter Notebook](/tutorials/pods/run-fooocus): Learn how to run Fooocus, an open-source image generating model, in a Jupyter Notebook and launch the Gradio-based interface in under 5 minutes, with minimal requirements of 4GB Nvidia GPU memory and 8GB system memory. |
29 |
| -* [Build Docker Images on Runpod with Bazel](/tutorials/pods/build-docker-images): Learn how to build Docker images on Runpod using Bazel, a powerful build tool for creating consistent and efficient builds. |
30 |
| -* [Set up Ollama on your GPU Pod](/tutorials/pods/run-ollama): Set up Ollama, a powerful language model, on a GPU Pod using Runpod, and interact with it through HTTP API requests, harnessing the power of GPU acceleration for your AI projects. |
31 |
| -* [Run your first Fast Stable Diffusion with Jupyter Notebook](/tutorials/pods/run-your-first): Deploy a Jupyter Notebook to Runpod and generate your first image with Stable Diffusion in just 20 minutes, requiring Hugging Face user access token, Runpod infrastructure, and basic familiarity with the platform. |
32 |
| - |
33 |
| -## Containers |
34 |
| - |
35 |
| -Understand the use of Docker images and containers within the Runpod ecosystem. |
36 |
| - |
37 |
| -* [Persist data outside of containers](/tutorials/introduction/containers/persist-data): Learn how to persist data outside of containers by creating named volumes, mounting volumes to data directories, and accessing persisted data from multiple container runs and removals in Docker. |
38 |
| -* [Containers overview](/tutorials/introduction/containers/overview): Discover the world of containerization with Docker, a platform for isolated environments that package applications, frameworks, and libraries into self-contained containers for consistent and reliable deployment across diverse computing environments. |
39 |
| -* [Dockerfile](/tutorials/introduction/containers/create-dockerfiles): Learn how to create a Dockerfile to customize a Docker image and use an entrypoint script to run a command when the container starts, making it a reusable and executable unit for deploying and sharing applications. |
40 |
| -* [Docker commands](/tutorials/introduction/containers/docker-commands): Runpod enables BYOC development with Docker, providing a reference sheet for commonly used Docker commands, including login, images, containers, Dockerfile, volumes, network, and execute. |
41 |
| - |
42 |
| -## Integrations |
43 |
| - |
44 |
| -Explore how to integrate Runpod with other tools and platforms like OpenAI, SkyPilot, and Charm's Mods. |
45 |
| - |
46 |
| -### OpenAI |
47 |
| - |
48 |
| -* [Overview](/tutorials/migrations/openai/overview): Use the OpenAI SDK to integrate with your Serverless Endpoints. |
49 |
| - |
50 |
| -### SkyPilot |
51 |
| - |
52 |
| -* [Running Runpod on SkyPilot](/integrations/skypilot): Learn how to deploy Pods from Runpod using SkyPilot. |
53 |
| - |
54 |
| -### Mods |
55 |
| - |
56 |
| -* [Running Runpod on Mods](/integrations/mods): Learn to integrate into Charm's Mods tool chain and use Runpod as the Serverless Endpoint. |
57 |
| - |
58 |
| -## Migration |
59 |
| - |
60 |
| -Learn how to migrate from other tools and technologies to Runpod. |
61 |
| - |
62 |
| -### Cog |
63 |
| - |
64 |
| -* [Cog Migration](/tutorials/migrations/cog/overview): Migrate your Cog model from [Replicate.com](https://www.replicate.com) to Runpod by following this step-by-step guide, covering setup, model identification, Docker image building, and serverless endpoint creation. |
65 |
| - |
66 |
| -### Banana |
67 |
| - |
68 |
| -* [Banana migration](/tutorials/migrations/banana/overview): Quickly migrate from Banana to Runpod with Docker, leveraging a bridge between the two environments for a seamless transition. Utilize a Dockerfile to encapsulate your environment and deploy existing projects to Runpod with minimal adjustments. |

<CardGroup cols={2}>
  <Card title="Generate images with ComfyUI on Serverless" href="/tutorials/serverless/comfyui" icon="brackets-curly" iconType="solid">
    Deploy ComfyUI on Serverless and generate images using JSON workflows.
  </Card>
  <Card title="Generate images with ComfyUI on Pods" href="/tutorials/pods/comfyui" icon="images" iconType="solid">
    Deploy ComfyUI on a GPU Pod and generate images using the ComfyUI web interface.
  </Card>
</CardGroup>
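The Serverless ComfyUI tutorial revolves around ComfyUI's API-format JSON workflows, where each node is keyed by an ID and declares a `class_type` plus its `inputs`; an input that consumes another node's output references it as a `[node_id, output_index]` pair. A stripped-down sketch of that structure is below. The node IDs, checkpoint filename, prompts, and sampler settings are all illustrative placeholders, not a graph you can run as-is:

```python
import json

# A pared-down text-to-image graph in ComfyUI's API JSON format.
# Node keys are arbitrary string IDs; cross-node references are
# [node_id, output_index] pairs.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},  # placeholder filename
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a lighthouse at dusk", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "tutorial"}},
}

# Every node declares a class_type and an inputs mapping.
for node in workflow.values():
    assert "class_type" in node and "inputs" in node

print(json.dumps(workflow, indent=2))
```

On Pods you build this graph visually in the web interface; on Serverless the tutorial shows how to wrap a workflow like this in the endpoint's `input` object and submit it as a request.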