124K Stars!!! A Free And Easy-To-Use Web Interface For Stable Diffusion

Stable Diffusion web UI is designed to provide an easy-to-use graphical interface that allows users to interact with the Stable Diffusion model without the need for command-line operations.
Stars: 124k
License: AGPL-3.0
Languages: Python (87.0%), JavaScript (8.8%)
Link: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Features
- Original txt2img and img2img modes
- One-click install and run script (but you still must install Python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
- Textual Inversion
- Resizing aspect ratio options
- Sampling method selection
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- Read Generation Parameters Button, loads parameters in prompt box to UI
- Settings page
- Running arbitrary Python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- A progress bar and live image generation preview
- Negative prompt, an extra text field that allows you to list what you don’t want to see in the generated image
- Styles, a way to save part of the prompt and easily apply them via dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess a prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenient option to produce high-resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- Custom scripts with many extensions from the community
- Composable-Diffusion
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration creates Danbooru style tags for anime prompts
- xformers, major speed increase for select cards (add `--xformers` to command-line args)
- via extension: History tab: view, direct, and delete images conveniently within the UI
- Generate forever option
- Training tab
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hyper networks or Loras to add to your prompt
- Can select to load a different VAE from the settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated inpainting model by RunwayML
- via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip image embeds (implementation of https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)
- Stable Diffusion 2.0 support — see wiki for instructions
- Alt-Diffusion support — see wiki for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: The generated image’s dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from the settings screen
- Segmind Stable Diffusion support
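The API listed above exposes REST endpoints such as `/sdapi/v1/txt2img` once the UI is launched with the `--api` flag. Below is a minimal sketch of a request payload; the field names follow that endpoint's schema, while the prompt text and parameter values are only illustrative:

```python
import json

# Sketch of a txt2img request body for the webui's REST API
# (start the UI with the --api flag first). The prompt and
# parameter values below are illustrative, not recommendations.
payload = {
    "prompt": "a watercolor landscape, golden hour",
    "negative_prompt": "blurry, low quality",
    "steps": 20,
    "width": 512,
    "height": 512,
    "seed": -1,  # -1 asks the server to pick a random seed
}

print(json.dumps(payload, indent=2))

# To actually send the request against a local instance:
#   import requests, base64
#   r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
#   images = r.json()["images"]  # base64-encoded PNGs
#   open("out.png", "wb").write(base64.b64decode(images[0]))
```

The response returns images as base64-encoded strings rather than raw bytes, so a decode step is needed before saving to disk.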
Installation and Running
Make sure the required dependencies are met and follow the instructions available for:
- NVidia GPUs (recommended)
- AMD GPUs
- Intel CPUs, Intel GPUs (both integrated and discrete) (external wiki page)
Alternatively, use online services such as Google Colab.
Installation on Windows 10/11 with NVidia GPUs using the release package
- Download `sd.webui.zip` from v1.0.0-pre and extract its contents.
- Run `update.bat`.
- Run `run.bat`.
Automatic Installation on Windows
- Install Python 3.10.6 (newer versions of Python do not support torch), checking “Add Python to PATH”.
- Install git.
- Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
- Run `webui-user.bat` from Windows Explorer as a normal, non-administrator, user.
Automatic Installation on Linux
- Install the dependencies:
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3 gperftools-libs libglvnd-glx
# openSUSE-based:
sudo zypper install wget git python3 libtcmalloc4 libglvnd
# Arch-based:
sudo pacman -S wget git python3
- Navigate to the directory you would like the webui to be installed in and execute the following command:
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
- Run `webui.sh`.
- Check `webui-user.sh` for options.
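`webui-user.sh` is where launch options go. As a minimal sketch, a configuration for an NVidia card with limited VRAM might set (both flags are documented command-line options, but the right combination depends on your hardware):

```shell
# webui-user.sh — example launch options (adjust to your hardware)
# --xformers: enable the xformers attention speedup on supported cards
# --medvram:  trade speed for lower VRAM usage
export COMMANDLINE_ARGS="--xformers --medvram"
```

The same `COMMANDLINE_ARGS` variable is also how `--api` or `--allow-code` from the feature list above would be enabled.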