Diffusers documentation


HiDreamImageTransformer2DModel

A Transformer model for image-like data from HiDream-I1.

The model can be loaded with the following code snippet.

import torch
from diffusers import HiDreamImageTransformer2DModel

transformer = HiDreamImageTransformer2DModel.from_pretrained(
    "HiDream-ai/HiDream-I1-Full", subfolder="transformer", torch_dtype=torch.bfloat16
)
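The loaded transformer is typically reused when assembling the corresponding pipeline. Below is a minimal sketch, assuming HiDreamImagePipeline is available, that the HiDream-ai/HiDream-I1-Full repository provides the remaining components (in practice the gated Llama-3.1 text encoder may need to be loaded and passed in separately), and with an arbitrary example prompt and output filename:

import torch
from diffusers import HiDreamImagePipeline, HiDreamImageTransformer2DModel

# Load the transformer on its own (for example to apply a custom dtype)
transformer = HiDreamImageTransformer2DModel.from_pretrained(
    "HiDream-ai/HiDream-I1-Full", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Reuse it when assembling the pipeline; the other components come from the same repo
pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # offload idle components to save GPU memory

image = pipe("A cat holding a sign that says hello world").images[0]
image.save("hidream.png")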

Loading GGUF quantized checkpoints for HiDream-I1

GGUF checkpoints for HiDreamImageTransformer2DModel can be loaded using ~FromOriginalModelMixin.from_single_file.

import torch
from diffusers import GGUFQuantizationConfig, HiDreamImageTransformer2DModel

ckpt_path = "https://huggingface.co/city96/HiDream-I1-Dev-gguf/blob/main/hidream-i1-dev-Q2_K.gguf"
transformer = HiDreamImageTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16
)
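The quantized transformer can then be supplied to a pipeline in the usual way. A minimal sketch, assuming the HiDream-ai/HiDream-I1-Dev repository provides the remaining components matching this checkpoint:

import torch
from diffusers import HiDreamImagePipeline

# `transformer` is the GGUF-quantized model loaded above
pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keep peak GPU memory low alongside the quantized transformer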

HiDreamImageTransformer2DModel

class diffusers.HiDreamImageTransformer2DModel

( patch_size: typing.Optional[int] = None
  in_channels: int = 64
  out_channels: typing.Optional[int] = None
  num_layers: int = 16
  num_single_layers: int = 32
  attention_head_dim: int = 128
  num_attention_heads: int = 20
  caption_channels: typing.List[int] = None
  text_emb_dim: int = 2048
  num_routed_experts: int = 4
  num_activated_experts: int = 2
  axes_dims_rope: typing.Tuple[int, int] = (32, 32)
  max_resolution: typing.Tuple[int, int] = (128, 128)
  llama_layers: typing.List[int] = None
  force_inference_output: bool = False )
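The constructor arguments above are registered to the model's config when a checkpoint is loaded. A small sketch for inspecting how a pretrained checkpoint fills them, assuming the config keys match the argument names (load_config comes from ConfigMixin and downloads only the config, not the weights):

from diffusers import HiDreamImageTransformer2DModel

# Read the config alone to see the values used by the pretrained checkpoint
config = HiDreamImageTransformer2DModel.load_config(
    "HiDream-ai/HiDream-I1-Full", subfolder="transformer"
)
print(config["num_layers"], config["num_single_layers"], config["num_attention_heads"])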

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput

( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) when the Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
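For illustration, this output class behaves like other Diffusers model outputs: the prediction is available as the sample attribute, and the object can also be indexed like a tuple. A sketch using a placeholder tensor in place of a real model prediction:

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# Placeholder tensor standing in for a (batch_size, num_channels, height, width) prediction
out = Transformer2DModelOutput(sample=torch.randn(1, 16, 128, 128))
print(out.sample.shape)  # attribute access
print(out[0].shape)      # BaseOutput also supports tuple-style indexing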

