SVD

Singular Value Decomposition (a mathematical technique)

What is SVD?

Singular value decomposition (SVD) is a matrix factorization technique that can be used to reduce the dimensionality of data, denoise it, and visualize it. It is a fundamental technique in machine learning and data science.

It works by decomposing a matrix into three matrices:

  • A matrix of left singular vectors (U)

  • A diagonal matrix of singular values (Ξ£)

  • A matrix of right singular vectors (V^T)

Formula

Given a matrix A, the SVD factorizes it into the following form:

A = UΞ£V^T

Where:

  • U is an orthogonal matrix whose columns represent the left singular vectors of A.

  • Ξ£ is a diagonal matrix with non-negative values, known as singular values.

  • V^T is the transpose of an orthogonal matrix V, whose columns represent the right singular vectors of A.

The singular values in Ξ£ are arranged in descending order along the diagonal. They indicate the importance or significance of the corresponding singular vectors in U and V^T. The first singular vector pair captures the most significant pattern or structure in the matrix, while subsequent pairs capture less important patterns.
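As a minimal sketch, the factorization can be computed and checked with NumPy; the matrix A below is an arbitrary example:

    import numpy as np

    # An arbitrary 4x3 example matrix.
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 9.0],
                  [10.0, 11.0, 12.0]])

    # full_matrices=False returns the "thin" SVD: U is 4x3, S holds the 3
    # singular values in descending order, and Vt is 3x3.
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    print(S)

    # Multiplying the three factors back together recovers A.
    print(np.allclose(A, U @ np.diag(S) @ Vt))  # True

Note that np.linalg.svd returns V^T directly, matching the formula above.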

How SVD Works

The left singular vectors are the eigenvectors of AA^T, and the right singular vectors are the eigenvectors of A^TA. For mean-centered data, A^TA is proportional to the covariance matrix, which is what connects SVD to principal component analysis (PCA).

Singular values

The singular values are the square roots of the eigenvalues of the covariance matrix (up to a scaling factor that depends on the number of samples). The eigenvalues of the covariance matrix are the variances of the data along each principal component. The singular values are arranged in descending order, with the largest singular value corresponding to the first principal component, the second largest to the second, and so on.

Left Singular Vectors

The left singular vectors are the eigenvectors of AA^T, arranged in the same order as the singular values. When the rows of A are data points, the columns of U give the coordinates of the data along the principal components: the first column corresponds to the first principal component, the second to the second, and so on.

Right Singular Vectors

The right singular vectors are the eigenvectors of A^TA, again arranged in the same order as the singular values. When the rows of A are mean-centered data points, the right singular vectors are the principal directions: the first right singular vector is the direction of the first principal component, the second of the second, and so on.
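These relationships are easy to verify numerically; the following is a sketch using a random matrix (the 6x4 shape is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 4))

    U, S, Vt = np.linalg.svd(A, full_matrices=False)

    # The eigenvalues of A^T A are the squared singular values, and its
    # eigenvectors are the right singular vectors (up to sign).
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)  # eigh returns ascending order
    print(np.allclose(np.sort(eigvals)[::-1], S**2))  # True

    # Likewise for A A^T and the left singular vectors; the eigenvalues
    # beyond rank(A) are zero.
    eigvals_left, _ = np.linalg.eigh(A @ A.T)
    print(np.allclose(np.sort(eigvals_left)[::-1][:len(S)], S**2))  # True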

Reducing the dimensionality

SVD can be used to reduce the dimensionality of a dataset by projecting the data onto a lower-dimensional subspace. When the rows of the matrix are the data points, that subspace is spanned by the right singular vectors corresponding to the largest singular values. The directions belonging to the smaller singular values are discarded, since they account for little of the variance in the data.
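A minimal sketch of rank-k reduction in NumPy (the data, and the choice k = 5, are made-up values for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 20))   # 100 samples, 20 features
    Xc = X - X.mean(axis=0)              # mean-center before SVD/PCA

    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

    # Keep the 5 directions with the largest singular values and project
    # each sample onto them.
    k = 5
    X_reduced = Xc @ Vt[:k].T            # shape (100, 5)
    print(X_reduced.shape)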

Denoising a dataset

SVD can also be used to denoise a dataset. Noise is typically spread across all directions and therefore dominates the components with the smaller singular values. Zeroing out those smaller singular values and reconstructing the matrix from the remaining factors removes much of the noise while preserving the dominant structure.
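A sketch of low-rank denoising (the rank-2 signal and the noise level are made-up values):

    import numpy as np

    rng = np.random.default_rng(2)

    # A rank-2 "signal" matrix plus additive Gaussian noise.
    signal = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 30))
    noisy = signal + 0.1 * rng.standard_normal((50, 30))

    U, S, Vt = np.linalg.svd(noisy, full_matrices=False)

    # Zero out everything past the 2 largest singular values.
    k = 2
    denoised = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]

    # The truncated reconstruction is closer to the clean signal.
    print(np.linalg.norm(noisy - signal))     # noise level of the input
    print(np.linalg.norm(denoised - signal))  # noticeably smaller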

