Hallucination(AI)


While helping triage issues in an AI community on GitHub, I noticed several issues about hallucination, but most new users are not familiar with the term. They believe that hallucination is a bug in the API service. Moreover, according to Wikipedia, by 2023 analysts considered frequent hallucination to be a major problem in LLM technology. I have also run into hallucination many times with Copilot. So I believe we need to know more about it.

What is hallucination?

A hallucination or artificial hallucination (also called confabulation or delusion) is a confident response by an AI that does not seem to be justified by its training data (Wikipedia). You may have seen this with GitHub Copilot: sometimes, when you write a code comment, it generates tons of the same content, repeating it over and over. I think this is a direct example of hallucination.

But there is another kind of example: you write a comment, and Copilot generates code that is not related to the comment at all, even though the code itself looks plausible. This is an indirect example of hallucination.

In my opinion, both of these examples are hallucination, but the first is more obvious than the second. This points to a potential risk: if we trust an AI model with tasks whose correct results we cannot verify ourselves, and the model hallucinates, we will end up trusting a wrong result. That could cause serious problems. So we need to understand hallucination better and learn how to avoid it.
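The degenerate looping output described above can be flagged with a crude, illustrative heuristic (my own sketch, not anything Copilot actually uses): measure what fraction of the word n-grams in a completion are repeats of an earlier n-gram.

```python
from collections import Counter

def repetition_ratio(text: str, n: int = 3) -> float:
    """Fraction of word-level n-grams that repeat an earlier n-gram.

    0.0 means every n-gram is unique; values near 1.0 indicate the
    kind of degenerate looping output described above.
    """
    words = text.split()
    if len(words) < n:
        return 0.0
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(ngrams)

# A looping completion scores much higher than ordinary prose.
looped = "the function returns the value " * 10
normal = "this function parses the config file and returns a dictionary"
assert repetition_ratio(looped) > 0.8
assert repetition_ratio(normal) < 0.2
```

A check like this only catches the obvious, repetitive kind of hallucination; the "plausible but wrong" kind still requires a human, or an external source of truth, to verify.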

What is the difference between human hallucination and AI hallucination?

According to Wikipedia, one key difference between human hallucination and AI hallucination is that a human hallucination is usually associated with false percepts, while an AI hallucination is associated with the category of unjustified responses or beliefs.

How to avoid hallucination?

According to Wikipedia, the hallucination phenomenon is still not completely understood. From my experience maintaining a natural language processing service, I think we need to make sure the prompt is well formatted and includes useful, relevant information.
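As a sketch of what "a well-formatted prompt with useful information" can mean in practice, here is a hypothetical helper that grounds a question in retrieved context and explicitly invites the model to admit uncertainty. The function name, the exact wording, and the retrieval step that would produce `context_passages` are my own assumptions, not something from this article.

```python
def build_grounded_prompt(question: str, context_passages: list[str]) -> str:
    """Assemble a prompt that gives the model explicit context and an
    instruction to admit uncertainty instead of guessing.

    The retrieval step producing `context_passages` is assumed to exist
    elsewhere (e.g. a vector-database lookup).
    """
    context = "\n".join(f"- {p}" for p in context_passages)
    return (
        "Answer the question using only the context below. "
        'If the context is insufficient, say "I don\'t know".\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What license does the project use?",
    ["README: The project is released under the MIT license."],
)
```

Supplying the relevant facts in the prompt, instead of hoping the model remembers them, is one common way to reduce unjustified answers.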

Imagine you are describing a problem to an expert in a specific domain: you have to describe the issue precisely and professionally. This also means we need to know more than before. Even with AI technology to help us, we still need to keep learning.

Reference

Hallucination (artificial intelligence), Wikipedia