ControlNet with models


Using Preview with the ControlNet plugin

The first step is to choose a preprocessor. It is helpful to turn on the preview so we can see what the preprocessor is doing. Once preprocessing is done, the original image is discarded, and only the preprocessed image is used by ControlNet.

Reducing the Control Weight will help with color issues or other artifacts.
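
A preprocessor can also be run on its own to produce exactly this preview. A minimal sketch, assuming the controlnet_aux package and a hypothetical input path (neither comes from this article):

```python
# Preview what a preprocessor extracts: ControlNet will only ever see
# this preprocessed image, never the original.
from controlnet_aux import OpenposeDetector
from PIL import Image

image = Image.open("input.png")  # hypothetical input path

preprocessor = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
preview = preprocessor(image)    # a PIL image of the detected pose
preview.save("preview.png")      # inspect this before generating
```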

Preprocessor and models

Once we choose a preprocessor, we must pick the correct model. All we need to do is select the model whose name starts with the same keyword as the preprocessor.
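
In the diffusers library, the same pairing rule shows up in which ControlNet checkpoint gets loaded. A minimal sketch, assuming the public lllyasviel checkpoints on the Hugging Face Hub and a hypothetical input path:

```python
# The "canny" preprocessor is paired with the model that carries the
# same keyword in its name: control_v11p_sd15_canny.
import torch
from controlnet_aux import CannyDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

image = Image.open("input.png")  # hypothetical input path
control_image = CannyDetector()(image)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("a portrait, best quality", image=control_image).images[0]
```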

OpenPose preprocessors

  • OpenPose: eyes, nose, neck, shoulders, elbows, wrists, knees, and ankles

  • OpenPose_face: OpenPose + facial details

  • OpenPose_hand: OpenPose + hands and fingers

  • OpenPose_faceonly: facial details only

  • OpenPose_full: all of the above
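
In code, these variants correspond to flags on a single detector. A sketch assuming controlnet_aux's OpenposeDetector; mapping its flags to the preprocessor names above is my reading, not official documentation:

```python
from controlnet_aux import OpenposeDetector
from PIL import Image

image = Image.open("input.png")  # hypothetical input path
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

pose = detector(image)                           # OpenPose: body keypoints only
pose_face = detector(image, include_face=True)   # OpenPose_face
pose_hand = detector(image, include_hand=True)   # OpenPose_hand
face_only = detector(image, include_body=False,  # OpenPose_faceonly
                     include_face=True)
pose_full = detector(image, include_face=True,   # OpenPose_full
                     include_hand=True)
```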

Tile resample model

The Tile resample model is used for adding details to an image. It is often used together with an upscaler to enlarge an image at the same time.
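
A minimal sketch of that enlarge-and-refine combination, assuming diffusers' ControlNet img2img pipeline and the control_v11f1e_sd15_tile checkpoint; the plain 2x resize stands in for a dedicated upscaler:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from PIL import Image

image = Image.open("input.png")  # hypothetical input path
upscaled = image.resize((image.width * 2, image.height * 2))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The upscaled image is both the img2img input and the control image,
# so the tile model re-synthesizes detail without drifting from it.
result = pipe(
    "best quality, sharp details",
    image=upscaled,
    control_image=upscaled,
    strength=0.75,
).images[0]
```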

Reference preprocessors

Reference is a new set of preprocessors that let us generate images similar to a reference image. The Stable Diffusion model and the prompt will still influence the images.

Reference preprocessors do not use a control model; we only need to select the preprocessor, not the model. There are three reference preprocessors:

  • Reference adain

    • Style transfer via Adaptive Instance Normalization (AdaIN)

  • Reference only

    • Links the reference image directly to the attention layers

  • Reference adain+attn

    • Combination of the above
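
The adain variants are named after Adaptive Instance Normalization. A toy PyTorch sketch of the statistic-matching idea behind it (not the plugin's actual implementation):

```python
import torch

def adain(content: torch.Tensor, reference: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Both inputs are feature maps of shape (batch, channels, height, width)."""
    # Normalize the content features, then re-scale and re-center them
    # with the reference's channel-wise statistics: the spatial layout
    # survives while the "style" statistics come from the reference.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    r_mean = reference.mean(dim=(2, 3), keepdim=True)
    r_std = reference.std(dim=(2, 3), keepdim=True) + eps
    return (content - c_mean) / c_std * r_std + r_mean
```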

Canny edge detector(preprocessor+model)

The Canny edge detector is a general-purpose, old-school edge detector. It extracts the outlines of an image and is useful for retaining the composition of the original image.
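
The Canny step itself is a single OpenCV call. A minimal sketch; the two thresholds are common defaults, not values from this article:

```python
import cv2
import numpy as np
from PIL import Image

image = np.array(Image.open("input.png"))  # hypothetical input path
edges = cv2.Canny(image, 100, 200)         # low and high hysteresis thresholds

# Stack the single edge channel to RGB so it can serve as a control image.
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
control_image.save("canny.png")
```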

Depth preprocessor

The depth preprocessor guesses the depth information from the reference image.

  • Depth Midas

    • A classic depth estimator

  • Depth Leres

    • More details, but it also tends to render the background

  • Depth Leres++

    • Even more details

  • Depth Zoe

    • The level of detail sits between Midas and Leres
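
The estimators are easy to compare side by side. A sketch assuming the controlnet_aux package and its usual lllyasviel/Annotators weight repo; reading the boost flag as Leres++ is my assumption:

```python
from controlnet_aux import LeresDetector, MidasDetector, ZoeDetector
from PIL import Image

image = Image.open("input.png")  # hypothetical input path

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
depth_midas = midas(image)                 # the classic estimator

leres = LeresDetector.from_pretrained("lllyasviel/Annotators")
depth_leres = leres(image)                 # more detail, renders the background
depth_leres_pp = leres(image, boost=True)  # Leres++ (assumed mapping)

zoe = ZoeDetector.from_pretrained("lllyasviel/Annotators")
depth_zoe = zoe(image)                     # detail between Midas and Leres
```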

Line Art preprocessors

Line Art renders the outline of an image, attempting to convert it into a simple drawing. There are a few Line Art preprocessors:

  • Line art anime: Anime-style lines

  • Line art anime denoise: Anime-style lines with fewer details

  • Line art realistic: Realistic-style lines

  • Line art coarse: Realistic-style lines with heavier weight
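
A sketch of the realistic, coarse, and anime variants via controlnet_aux; treating its coarse flag as the "coarse" preprocessor above is my assumption:

```python
from controlnet_aux import LineartAnimeDetector, LineartDetector
from PIL import Image

image = Image.open("input.png")  # hypothetical input path

realistic = LineartDetector.from_pretrained("lllyasviel/Annotators")
lines_realistic = realistic(image)             # realistic-style lines
lines_coarse = realistic(image, coarse=True)   # heavier, coarser lines

anime = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
lines_anime = anime(image)                     # anime-style lines
```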

ControlNet Inpainting

ControlNet inpainting lets you use a high denoising strength when inpainting to generate large variations without sacrificing consistency with the overall picture.
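
A minimal diffusers sketch, assuming the control_v11p_sd15_inpaint checkpoint; the helper marking masked pixels with -1 follows that checkpoint's model card, and strength=1.0 is the high denoising strength referred to above:

```python
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from PIL import Image

def make_inpaint_condition(image: Image.Image, mask: Image.Image) -> torch.Tensor:
    """Build the control image: original pixels, with masked ones set to -1."""
    img = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    m = np.array(mask.convert("L")).astype(np.float32) / 255.0
    img[m > 0.5] = -1.0  # -1 tells the ControlNet which pixels to repaint
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

init_image = Image.open("input.png")  # hypothetical paths
mask_image = Image.open("mask.png")
control_image = make_inpaint_condition(init_image, mask_image)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    "a man with a beard",
    image=init_image,
    mask_image=mask_image,
    control_image=control_image,
    strength=1.0,  # high denoising strength; ControlNet keeps it consistent
).images[0]
```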

Credit

ControlNet v1.1: A complete guide - Stable Diffusion Art