From 61f8dbf72cf77affc5205f7edbf68b7ce0bb0ed9 Mon Sep 17 00:00:00 2001
From: Sayak Paul
Date: Fri, 3 Mar 2023 17:25:16 +0530
Subject: [PATCH 1/2] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 5a0109489e..5d6759fc74 100644
--- a/README.md
+++ b/README.md
@@ -38,6 +38,8 @@ Note that the way we connect layers is computational efficient. The original SD
 
 # Features & News
 
+2023/03/02 - Diffusers now officially supports ControlNet. Check it out [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet) 🧨. Thanks to [takuma104](https://huggingface.co/takuma104), who led this integration.
+
 2023/02/26 - We released a blog - [Ablation Study: Why ControlNets use deep encoder? What if it was lighter? Or even an MLP?](https://github.com/lllyasviel/ControlNet/discussions/188)
 
 2023/02/20 - Implementation for non-prompt mode released. See also [Guess Mode / Non-Prompt Mode](#guess-anchor).

From 00c623d591f1c961bf7fb871c53ce11a2021cf5d Mon Sep 17 00:00:00 2001
From: Sayak Paul
Date: Fri, 3 Mar 2023 18:16:01 +0530
Subject: [PATCH 2/2] Update README.md

---
 README.md | 47 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 46 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5d6759fc74..a23ebf317e 100644
--- a/README.md
+++ b/README.md
@@ -38,7 +38,52 @@ Note that the way we connect layers is computational efficient. The original SD
 
 # Features & News
 
-2023/03/02 - Diffusers now officially supports ControlNet. Check it out [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet) 🧨. Thanks to [takuma104](https://huggingface.co/takuma104), who led this integration.
+2023/03/02 - Diffusers now officially supports ControlNet. Check it out [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet) 🧨. Thanks to [takuma104](https://huggingface.co/takuma104), who led this integration. Below is an example that shows how to use it:
+
+```py
+from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
+from diffusers.utils import load_image
+from PIL import Image
+import numpy as np
+import torch
+import cv2
+
+# Load an image to extract the Canny edge map from.
+image = load_image(
+    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
+)
+
+# Extract the Canny edge map.
+image = np.array(image)
+
+low_threshold = 100
+high_threshold = 200
+
+image = cv2.Canny(image, low_threshold, high_threshold)[:, :, None]
+image = np.concatenate([image, image, image], axis=2)
+canny_image = Image.fromarray(image)
+
+# Load a `ControlNetModel` pre-trained on Canny edge maps, then load the pipeline.
+controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
+pipe = StableDiffusionControlNetPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
+)
+
+# Configure the pipeline to speed things up 🔥
+pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)  # Fast sampler.
+pipe.enable_model_cpu_offload()  # Memory optimization.
+pipe.enable_xformers_memory_efficient_attention()  # Memory optimization.
+
+# Generate!
+prompt = "rihanna, best quality, extremely detailed"
+output = pipe(
+    prompt,
+    canny_image,
+    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
+    num_inference_steps=20,
+)
+```
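+
+The call above returns a `StableDiffusionPipelineOutput` whose `images` attribute is a list of PIL images. As a minimal sketch (the output filename here is just an example), you can save the first result like so:
+
+```py
+# Save the first generated image to disk; the filename is arbitrary.
+output.images[0].save("generated_image.png")
+```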
+
+To learn more about all the supported models and other details of `StableDiffusionControlNetPipeline`, check out [this blog post](https://huggingface.co/blog/controlnet).
+
 2023/02/26 - We released a blog - [Ablation Study: Why ControlNets use deep encoder? What if it was lighter? Or even an MLP?](https://github.com/lllyasviel/ControlNet/discussions/188)