Diffusers documentation

Getting Started: VAE Decode with Hybrid Inference

VAE decode is an essential part of diffusion models: it converts latent representations into images or videos.

Memory

These tables show the VRAM requirements for VAE decode with SD v1.5 and SDXL on different GPUs.

For most of these GPUs, the memory usage means that other models (text encoder, UNet/Transformer) must be offloaded, or that tiled decoding must be used, which increases decode time and can affect quality.

SD v1.5

| GPU | Resolution | Time (seconds) | Memory (%) | Tiled Time (seconds) | Tiled Memory (%) |
| --- | --- | --- | --- | --- | --- |
| NVIDIA GeForce RTX 4090 | 512x512 | 0.031 | 5.60% | 0.031 (0%) | 5.60% |
| NVIDIA GeForce RTX 4090 | 1024x1024 | 0.148 | 20.00% | 0.301 (+103%) | 5.60% |
| NVIDIA GeForce RTX 4080 | 512x512 | 0.05 | 8.40% | 0.050 (0%) | 8.40% |
| NVIDIA GeForce RTX 4080 | 1024x1024 | 0.224 | 30.00% | 0.356 (+59%) | 8.40% |
| NVIDIA GeForce RTX 4070 Ti | 512x512 | 0.066 | 11.30% | 0.066 (0%) | 11.30% |
| NVIDIA GeForce RTX 4070 Ti | 1024x1024 | 0.284 | 40.50% | 0.454 (+60%) | 11.40% |
| NVIDIA GeForce RTX 3090 | 512x512 | 0.062 | 5.20% | 0.062 (0%) | 5.20% |
| NVIDIA GeForce RTX 3090 | 1024x1024 | 0.253 | 18.50% | 0.464 (+83%) | 5.20% |
| NVIDIA GeForce RTX 3080 | 512x512 | 0.07 | 12.80% | 0.070 (0%) | 12.80% |
| NVIDIA GeForce RTX 3080 | 1024x1024 | 0.286 | 45.30% | 0.466 (+63%) | 12.90% |
| NVIDIA GeForce RTX 3070 | 512x512 | 0.102 | 15.90% | 0.102 (0%) | 15.90% |
| NVIDIA GeForce RTX 3070 | 1024x1024 | 0.421 | 56.30% | 0.746 (+77%) | 16.00% |

SDXL

| GPU | Resolution | Time (seconds) | Memory Consumed (%) | Tiled Time (seconds) | Tiled Memory (%) |
| --- | --- | --- | --- | --- | --- |
| NVIDIA GeForce RTX 4090 | 512x512 | 0.057 | 10.00% | 0.057 (0%) | 10.00% |
| NVIDIA GeForce RTX 4090 | 1024x1024 | 0.256 | 35.50% | 0.257 (+0.4%) | 35.50% |
| NVIDIA GeForce RTX 4080 | 512x512 | 0.092 | 15.00% | 0.092 (0%) | 15.00% |
| NVIDIA GeForce RTX 4080 | 1024x1024 | 0.406 | 53.30% | 0.406 (0%) | 53.30% |
| NVIDIA GeForce RTX 4070 Ti | 512x512 | 0.121 | 20.20% | 0.120 (-0.8%) | 20.20% |
| NVIDIA GeForce RTX 4070 Ti | 1024x1024 | 0.519 | 72.00% | 0.519 (0%) | 72.00% |
| NVIDIA GeForce RTX 3090 | 512x512 | 0.107 | 10.50% | 0.107 (0%) | 10.50% |
| NVIDIA GeForce RTX 3090 | 1024x1024 | 0.459 | 38.00% | 0.460 (+0.2%) | 38.00% |
| NVIDIA GeForce RTX 3080 | 512x512 | 0.121 | 25.60% | 0.121 (0%) | 25.60% |
| NVIDIA GeForce RTX 3080 | 1024x1024 | 0.524 | 93.00% | 0.524 (0%) | 93.00% |
| NVIDIA GeForce RTX 3070 | 512x512 | 0.183 | 31.80% | 0.183 (0%) | 31.80% |
| NVIDIA GeForce RTX 3070 | 1024x1024 | 0.794 | 96.40% | 0.794 (0%) | 96.40% |
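
Tiled decoding, mentioned above, can be enabled locally before resorting to a remote VAE; a minimal sketch using the standard diffusers pipeline API (the time/memory trade-off is shown in the tables above):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Decode the latent in tiles: lower peak VRAM at the cost of extra decode time.
pipe.enable_vae_tiling()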

Available VAEs

|  | Endpoint | Model |
| --- | --- | --- |
| Stable Diffusion v1 | https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud | stabilityai/sd-vae-ft-mse |
| Stable Diffusion XL | https://x2dmsqunjd6k9prw.us-east-1.aws.endpoints.huggingface.cloud | madebyollin/sdxl-vae-fp16-fix |
| Flux | https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud | black-forest-labs/FLUX.1-schnell |
| HunyuanVideo | https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud | hunyuanvideo-community/HunyuanVideo |

Support for additional models can be requested here.

Code

Install `diffusers` from `main` to run the code: `pip install git+https://github.com/huggingface/diffusers@main`

A helper method simplifies interacting with Hybrid Inference.

import torch

from diffusers.utils.remote_utils import remote_decode

Basic example

Here, we show how to use the remote VAE on random tensors.

Code
image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4, 64, 64], dtype=torch.float16),
    scaling_factor=0.18215,
)
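
The decoded result comes back as a `PIL.Image` (the generation examples below rely on this), so it can be saved or displayed directly:

image.save("decoded.png")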

Usage for Flux is slightly different. Flux latents are packed, so we also need to send the `height` and `width`.

Code
image = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4096, 64], dtype=torch.float16),
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
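
The sequence length of the packed latent is tied to the requested resolution: Flux's VAE downsamples by 8x into 16 channels, and the latent is then packed into 2x2 patches, so each token covers a 16x16 pixel region and carries 64 channels. A quick sanity check for the shape used above:

height, width = 1024, 1024
seq_len = (height // 16) * (width // 16)  # 64 * 64 = 4096 tokens
channels = 16 * 2 * 2                     # 64 channels per packed token
print(seq_len, channels)                  # 4096 64 -> matches torch.randn([1, 4096, 64])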

Finally, an example for HunyuanVideo.

Code
video = remote_decode(
    endpoint="https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 16, 3, 40, 64], dtype=torch.float16),
    output_type="mp4",
)
with open("video.mp4", "wb") as f:
    f.write(video)
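
The tensor shape follows from the HunyuanVideo VAE's compression factors; assuming the usual 16 latent channels with 8x spatial and 4x (causal) temporal compression, the shape above corresponds to a 9-frame clip at 320x512:

height, width, num_frames = 320, 512, 9
latent_height, latent_width = height // 8, width // 8      # 40, 64
latent_frames = (num_frames - 1) // 4 + 1                   # 3
print([1, 16, latent_frames, latent_height, latent_width])  # [1, 16, 3, 40, 64]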

Generation

But we want to use the VAE with an actual pipeline to get a real image, not random noise. The example below shows how to do this with SD v1.5.

Code
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    vae=None,
).to("cuda")

prompt = "Strawberry ice cream, in a stylish modern glass, coconut, splashing milk cream and honey, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious"

latent = pipe(
    prompt=prompt,
    output_type="latent",
).images
image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    scaling_factor=0.18215,
)
image.save("test.jpg")

Here is another example, this time with Flux.

Code
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
    vae=None,
).to("cuda")

prompt = "Strawberry ice cream, in a stylish modern glass, coconut, splashing milk cream and honey, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious"

latent = pipe(
    prompt=prompt,
    guidance_scale=0.0,
    num_inference_steps=4,
    output_type="latent",
).images
image = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
image.save("test.jpg")

And here is an example with HunyuanVideo.

Code
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, vae=None, torch_dtype=torch.float16
).to("cuda")

latent = pipe(
    prompt="A cat walks on the grass, realistic",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
    output_type="latent",
).frames

video = remote_decode(
    endpoint="https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    output_type="mp4",
)

if isinstance(video, bytes):
    with open("video.mp4", "wb") as f:
        f.write(video)

Queueing

One big advantage of using a remote VAE is that we can queue multiple generation requests. While the current latent is being decoded, we can already queue another prompt. This improves concurrency.

Code
import queue
import threading
from IPython.display import display
from diffusers import StableDiffusionPipeline

def decode_worker(q: queue.Queue):
    while True:
        item = q.get()
        if item is None:
            break
        image = remote_decode(
            endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
            tensor=item,
            scaling_factor=0.18215,
        )
        display(image)
        q.task_done()

q = queue.Queue()
thread = threading.Thread(target=decode_worker, args=(q,), daemon=True)
thread.start()

def decode(latent: torch.Tensor):
    q.put(latent)

prompts = [
    "Blueberry ice cream, in a stylish modern glass , ice cubes, nuts, mint leaves, splashing milk cream, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious",
    "Lemonade in a glass, mint leaves, in an aqua and white background, flowers, ice cubes, halo, fluid motion, dynamic movement, soft lighting, digital painting, rule of thirds composition, Art by Greg rutkowski, Coby whitmore",
    "Comic book art, beautiful, vintage, pastel neon colors, extremely detailed pupils, delicate features, light on face, slight smile, Artgerm, Mary Blair, Edmund Dulac, long dark locks, bangs, glowing, fashionable style, fairytale ambience, hot pink.",
    "Masterpiece, vanilla cone ice cream garnished with chocolate syrup, crushed nuts, choco flakes, in a brown background, gold, cinematic lighting, Art by WLOP",
    "A bowl of milk, falling cornflakes, berries, blueberries, in a white background, soft lighting, intricate details, rule of thirds, octane render, volumetric lighting",
    "Cold Coffee with cream, crushed almonds, in a glass, choco flakes, ice cubes, wet, in a wooden background, cinematic lighting, hyper realistic painting, art by Carne Griffiths, octane render, volumetric lighting, fluid motion, dynamic movement, muted colors,",
]

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",
    torch_dtype=torch.float16,
    vae=None,
).to("cuda")

pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

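# The first call to the torch.compile'd UNet triggers compilation; run it once up front
# so the queued generations below are not delayed by compile time.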
_ = pipe(
    prompt=prompts[0],
    output_type="latent",
)

for prompt in prompts:
    latent = pipe(
        prompt=prompt,
        output_type="latent",
    ).images
    decode(latent)

q.put(None)
thread.join()

Integrations
