# diffusers

## Diffusers

### Overview

Hugging Face Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. It can be used as:

* A simple inference solution
* A toolbox for training your own diffusion models
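As an inference solution, generating an image can take just a few lines. The sketch below is a minimal example, assuming `diffusers` and `torch` are installed; the checkpoint id and prompt are illustrative and can be swapped for any other pipeline on the Hub.

```python
import torch
from diffusers import DiffusionPipeline

# Download a pretrained pipeline from the Hub (checkpoint id is one example).
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipeline.to("cuda")  # move to GPU if one is available

# Run text-to-image inference and save the first generated image.
image = pipeline("An astronaut riding a horse on Mars").images[0]
image.save("astronaut.png")
```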

The library focuses on:

* Usability over performance
* Simple over easy
* Customizability over abstractions

### The main components

It has three main components:

* State-of-the-art `diffusion pipelines` for inference in just a few lines of code
* Interchangeable `noise schedulers` for balancing trade-offs between generation speed and quality
* Pretrained `models` that can be used as building blocks, and combined with schedulers, for creating end-to-end diffusion systems
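To make the scheduler trade-off concrete, the toy sketch below shows what a noise scheduler defines, assuming the linear beta schedule from DDPM. The names here are illustrative, not the actual diffusers API: a scheduler fixes the schedule `beta_t` and the cumulative products `alpha_bar_t`, which give the closed-form forward process `x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise`.

```python
import numpy as np

# Toy illustration (not the diffusers API): a linear beta schedule as in
# DDPM, and the one-step closed form for noising a clean sample to step t.
num_train_timesteps = 1000
betas = np.linspace(1e-4, 0.02, num_train_timesteps)  # linear schedule
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)  # alpha_bar_t

def add_noise(x0, noise, t):
    """Noise a clean sample x0 directly to timestep t."""
    return (np.sqrt(alphas_cumprod[t]) * x0
            + np.sqrt(1.0 - alphas_cumprod[t]) * noise)

x0 = np.ones(4)  # a "clean" sample
eps = np.random.randn(4)
# Early timesteps mix in almost no noise; by the final timestep the
# signal is nearly destroyed (alpha_bar_t is close to zero).
xt = add_noise(x0, eps, num_train_timesteps - 1)
```

Fast schedulers (e.g. fewer inference steps) trade some quality for speed by taking larger jumps along this same schedule, which is why they are interchangeable within a pipeline.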

## Reference

* [Hugging Face Diffusers](https://huggingface.co/docs/diffusers/index)
* [Diffusers intro notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://aisuko.gitbook.io/wiki/ai-techniques/diffusers.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when:

* the answer is not explicitly present in the current page,
* you need clarification or additional context, or
* you want to retrieve related documentation sections.
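A question passed as a query parameter must be URL-encoded. A minimal sketch of building such a request URL with the standard library (the question string is just an example):

```python
from urllib.parse import quote

# Build the ?ask= query URL for a natural-language question.
base = "https://aisuko.gitbook.io/wiki/ai-techniques/diffusers.md"
question = "Which schedulers does Diffusers support?"
url = f"{base}?ask={quote(question)}"
print(url)
```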
