Diffusion sampling/samplers

Last updated 1 year ago

Overview

There are many sampling methods available in AUTOMATIC1111.

  • What are samplers?

  • How do they work?

  • What is the difference between them?

  • Which one should you use?

What is sampling?

Below is a sampling process in action. The sampler gradually produces cleaner and cleaner images.

While the overall framework is the same, there are many ways to carry out this denoising process. It is often a trade-off between speed and accuracy.
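The denoising loop can be sketched as follows. This is a minimal illustration, not a real implementation: `noise_predictor` is a hypothetical stand-in for the model's trained noise predictor, and the update rule is simplified.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_predictor(x, noise_level):
    # Hypothetical stand-in for the model's noise predictor;
    # a real one would be a trained neural network.
    return noise_level * x

def sample(steps=20):
    # Start from a completely random "image" in latent space.
    x = rng.standard_normal((4, 4))
    for i in range(steps):
        noise_level = 1.0 - i / steps          # follows a simple noise schedule
        predicted_noise = noise_predictor(x, noise_level)
        x = x - predicted_noise / steps        # subtract part of the predicted noise
    return x

clean = sample()
print(clean.shape)
```

Each sampler fills in this loop differently: how much noise to remove per step, and whether to add any back.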

Noise schedule

At each step, the sampler's job is to produce an image whose noise level matches the noise schedule.

What is the effect of increasing the number of sampling steps? A smaller noise reduction between each step. This reduces the truncation error of the sampling.

Compare the noise schedules of 15 steps and 30 steps below.
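The effect of the step count can be seen with a small sketch, assuming an illustrative linear schedule (the sigma values here are made up, not those of any particular sampler):

```python
import numpy as np

def noise_schedule(n_steps, sigma_max=14.6, sigma_min=0.03):
    # A simple linear noise schedule from high noise down to near zero.
    return np.linspace(sigma_max, sigma_min, n_steps)

for n in (15, 30):
    sigmas = noise_schedule(n)
    step_sizes = -np.diff(sigmas)  # noise removed per step
    print(n, "steps -> largest noise reduction per step:",
          step_sizes.max().round(2))
```

Doubling the number of steps roughly halves the noise reduction per step, which is why more steps reduce the truncation error.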

Samplers overview

What are the differences between them?

Old-School ODE solvers

These samplers are classical numerical methods for solving ordinary differential equations (ODEs):

  • Euler: the simplest possible solver.
  • Heun: a more accurate but slower version of Euler.
  • LMS (linear multi-step method): same speed as Euler but (supposedly) more accurate.
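Euler and Heun are generic ODE solvers, so the speed/accuracy trade-off can be shown on a toy equation (dx/dt = -x, not a diffusion model):

```python
import numpy as np

# Toy ODE dx/dt = -x, with exact solution x(t) = x0 * exp(-t).
f = lambda x: -x

def euler_step(x, dt):
    # Euler: one evaluation of f per step (fast, less accurate).
    return x + dt * f(x)

def heun_step(x, dt):
    # Heun: a predictor-corrector with two evaluations of f per step
    # (roughly twice the cost, but second-order accurate).
    x_pred = x + dt * f(x)
    return x + dt * 0.5 * (f(x) + f(x_pred))

x_e = x_h = 1.0
dt, n = 0.1, 10
for _ in range(n):
    x_e = euler_step(x_e, dt)
    x_h = heun_step(x_h, dt)

exact = np.exp(-dt * n)
print(abs(x_e - exact), abs(x_h - exact))  # Heun's error is much smaller
```

This is the same trade-off the samplers make: Heun calls the (expensive) noise predictor twice per step, so at equal step counts it is about twice as slow as Euler.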

Ancestral samplers

An ancestral sampler adds noise back to the image at each sampling step. Ancestral samplers are stochastic because the sampling outcome has some randomness in it.

Be aware that many other samplers are also stochastic, even though their names do not have an "a" in them.

The drawback of using an ancestral sampler is that the image does not converge. Compare the images generated using Euler a and Euler below.

Images generated with Euler a do not converge at high sampling steps. In contrast, images from Euler converge well.

For reproducibility, it is desirable to have the image converge. If you want to generate slight variations, you should use a variational seed[#TODO].
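Why ancestral sampling does not converge can be sketched with a toy scalar "image" and a hypothetical denoising update; the ancestral variant re-injects noise after every step:

```python
import numpy as np

rng = np.random.default_rng(42)

def denoise(x):
    # Hypothetical deterministic denoising update on a toy scalar "image".
    return 0.9 * x

x_plain = x_ancestral = 5.0
plain_tail, ancestral_tail = [], []
for step in range(500):
    x_plain = denoise(x_plain)                                        # plain sampler
    x_ancestral = denoise(x_ancestral) + 0.1 * rng.standard_normal()  # add noise back
    if step >= 400:                                                   # look at late steps
        plain_tail.append(x_plain)
        ancestral_tail.append(x_ancestral)

# The plain trajectory settles to a fixed value (it converges);
# the ancestral trajectory keeps fluctuating (it does not).
print(np.std(plain_tail), np.std(ancestral_tail))
```

The re-injected noise keeps the result moving no matter how many steps you run, which is exactly the non-convergence seen with Euler a at high step counts.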

To recap how sampling works: to produce an image, Stable Diffusion first generates a completely random image in the latent space. The noise predictor then estimates the noise of the image, and the predicted noise is subtracted from the image. This process is repeated a dozen times, and in the end we get a clean image.

This process is called sampling because Stable Diffusion generates a new sample image at each step. The method used for sampling is called the sampler or sampling method.

In the pictures above, the noisy image gradually turns into a clear one. The noise schedule controls the noise level at each sampling step: the noise is highest at the first step and gradually reduces to zero at the last step.

Karras noise schedule

The samplers with the label "Karras" use the noise schedule recommended in the Karras 2022 article. As you can see, the noise step sizes are smaller near the end. The authors found that this improves the quality of images.

DDIM and PLMS

DDIM (Denoising Diffusion Implicit Model) and PLMS (Pseudo Linear Multi-Step method) were the samplers shipped with the original Stable Diffusion v1. DDIM is one of the first samplers designed for diffusion models, and PLMS is a newer and faster alternative to DDIM. They are generally seen as outdated and not widely used anymore.

DPM and DPM++

DPM (Diffusion Probabilistic Model solver) and DPM++ are newer samplers designed for diffusion models, released in 2022. They represent a family of solvers with a similar architecture.

DPM and DPM2 are similar, except that DPM2 is second order (more accurate but slower).

DPM++ is an improvement over DPM.

DPM adaptive adjusts the step size adaptively. It can be slow because it does not guarantee finishing within the requested number of sampling steps.

UniPC

UniPC (Unified Predictor-Corrector) is a new sampler released in 2023. Inspired by the predictor-corrector method in ODE solvers, it can achieve high-quality image generation in 5-10 steps.

k-diffusion

k-diffusion refers to Katherine Crowson's k-diffusion GitHub repository and the samplers associated with it. The repository implements the samplers studied in the Karras 2022 article. Basically, all samplers in AUTOMATIC1111 except DDIM, PLMS, and UniPC are borrowed from k-diffusion.

Credit

This page is based on "Stable Diffusion Samplers: A Comprehensive Guide" from Stable Diffusion Art.