This is a dockerized version of the stable-diffusion-webui project by AUTOMATIC1111. The features of the UI are described here.
This should go without saying, but you need an NVIDIA graphics card with at least 8 GB of VRAM. Rumor has it that you can run it even on 4 GB, but you will be limited in your results.
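If you're not sure how much VRAM your card has, `nvidia-smi` can tell you (assuming the NVIDIA drivers are already installed on the host):

```bash
# Query the GPU name and total VRAM
nvidia-smi --query-gpu=name,memory.total --format=csv
```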
Build as you would any normal docker image. The image name can be whatever you want it to be (I called it achaiah.local in this case). E.g.:

```bash
DOCKER_BUILDKIT=0 docker build -t achaiah.local/ai.inference.stable_diffusion_webui:latest -f Dockerfile .
```
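Once the build finishes, the tagged image should show up in your local image list:

```bash
# Confirm the image was built and tagged
docker image ls achaiah.local/ai.inference.stable_diffusion_webui
```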
- Create local directories for image output `log` and `models/Stable-diffusion`:

  ```bash
  mkdir -p /your/local/output/path/log
  mkdir -p /your/local/output/path/models/Stable-diffusion
  ```
- Get your HuggingFace developer token from HF's website and pass it as an env variable into the container (see below; a token-export sketch follows the run command).
- With docker running, execute:

  ```bash
  docker run \
      --name local_diffusion \
      -it \
      -p 7860:7860 \
      --rm \
      --init \
      --gpus all \
      --ipc=host \
      --ulimit memlock=-1 \
      --ulimit stack=67108864 \
      -e WGET_USER_HEADER="Authorization: Bearer <Your Huggingface Token Here>" \
      -v </your/local/output/path/log>:/content/stable-diffusion-webui/log \
      -v </your/local/output/path/models>:/content/stable-diffusion-webui/models \
      achaiah.local/ai.inference.stable_diffusion_webui:latest
  ```
Note the `-v` arguments. If you want your images to be preserved after docker shuts down, you will want to map a local path to the output produced by webui and to the models that it downloads.
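As a sketch of the token step above: rather than pasting the token directly into the command, you can export it once in your shell and reference it in the `-e` flag (the `HF_TOKEN` variable name here is just an example, not something the image expects):

```bash
# HF_TOKEN is an illustrative name; any shell variable works
export HF_TOKEN="<Your Huggingface Token Here>"
# then, in the docker run command above, pass it as:
#   -e WGET_USER_HEADER="Authorization: Bearer ${HF_TOKEN}"
```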
The first run of this container will take a while because it will download all models (~15 GB) and install additional libraries. Afterwards, since the models are stored on the mounted volumes, they will be preserved across runs. Alternatively, remove the `--rm` flag to avoid deleting the container on shutdown, which preserves the installed libraries as well.
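For example, if you dropped `--rm`, the stopped container can later be restarted with its changes intact:

```bash
# Start the existing container again and attach to it interactively
docker start -ai local_diffusion
```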
For devs: there are many other flags available that you can add to `runme.sh`. For a full list see this file.
Localtunnel is installed to provide public access to anyone you'd like. To enable it, start docker with `-e LT=Y` (the endpoint can also be configured via the `LTENDPOINT` variable). If you're running the docker image with `-it` as above, you will see the endpoint printed in the console. If you're running with `-d` (detached), you can find the info by entering the running container with `docker exec -it local_diffusion /bin/bash` and executing `cat /content/stable-diffusion-webui/lt.log`.
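If you only need the endpoint, a one-off `docker exec` works without opening an interactive shell:

```bash
# Print the localtunnel log from outside the container
docker exec local_diffusion cat /content/stable-diffusion-webui/lt.log
```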