
HunyuanVideoTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced by Tencent in HunyuanVideo: A Systematic Framework For Large Video Generative Models.

The model can be loaded with the following code snippet.

import torch
from diffusers import HunyuanVideoTransformer3DModel

transformer = HunyuanVideoTransformer3DModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16)
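
The loaded transformer can then be plugged into the corresponding pipeline. A minimal sketch, reusing the transformer loaded above; the prompt and frame count are illustrative, and CPU offloading is optional:

from diffusers import HunyuanVideoPipeline

# Pass the preloaded transformer into the HunyuanVideo pipeline.
pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # optional: lowers peak VRAM by offloading idle submodules

video = pipe(prompt="A cat walks on the grass", num_frames=61).frames[0]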

HunyuanVideoTransformer3DModel

diffusers.HunyuanVideoTransformer3DModel


( in_channels: int = 16 out_channels: int = 16 num_attention_heads: int = 24 attention_head_dim: int = 128 num_layers: int = 20 num_single_layers: int = 40 num_refiner_layers: int = 2 mlp_ratio: float = 4.0 patch_size: int = 2 patch_size_t: int = 1 qk_norm: str = 'rms_norm' guidance_embeds: bool = True text_embed_dim: int = 4096 pooled_projection_dim: int = 768 rope_theta: float = 256.0 rope_axes_dim: typing.Tuple[int] = (16, 56, 56) image_condition_type: typing.Optional[str] = None )

Parameters

  • in_channels (int, defaults to 16) — The number of channels in the input.
  • out_channels (int, defaults to 16) — The number of channels in the output.
  • num_attention_heads (int, defaults to 24) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 128) — The number of channels in each head.
  • num_layers (int, defaults to 20) — The number of layers of dual-stream blocks to use.
  • num_single_layers (int, defaults to 40) — The number of layers of single-stream blocks to use.
  • num_refiner_layers (int, defaults to 2) — The number of layers of refiner blocks to use.
  • mlp_ratio (float, defaults to 4.0) — The ratio of the hidden layer size to the input size in the feedforward network.
  • patch_size (int, defaults to 2) — The size of the spatial patches to use in the patch embedding layer.
  • patch_size_t (int, defaults to 1) — The size of the temporal patches to use in the patch embedding layer.
  • qk_norm (str, defaults to rms_norm) — The normalization to use for the query and key projections in the attention layers.
  • guidance_embeds (bool, defaults to True) — Whether to use guidance embeddings in the model.
  • text_embed_dim (int, defaults to 4096) — The input dimension of the text embeddings from the text encoder.
  • pooled_projection_dim (int, defaults to 768) — The dimension of the pooled projection of the text embeddings.
  • rope_theta (float, defaults to 256.0) — The value of theta to use in the RoPE layer.
  • rope_axes_dim (Tuple[int], defaults to (16, 56, 56)) — The dimensions of the axes to use in the RoPE layer.
  • image_condition_type (str, optional, defaults to None) — The type of image conditioning to use. If None, no image conditioning is used. If latent_concat, the image is concatenated to the latent stream. If token_replace, the image is used to replace the first-frame tokens in the latent stream and conditioning is applied.

A Transformer model for video-like data used in HunyuanVideo.
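
For quick experiments, a randomly initialized, scaled-down variant can be instantiated directly instead of downloading the full checkpoint. A minimal sketch using only the constructor arguments documented above; the reduced layer counts and head sizes are arbitrary choices for illustration:

from diffusers import HunyuanVideoTransformer3DModel

# Tiny configuration for smoke tests: same architecture, far fewer parameters.
tiny_transformer = HunyuanVideoTransformer3DModel(
    num_attention_heads=4,
    attention_head_dim=32,
    num_layers=1,
    num_single_layers=2,
    num_refiner_layers=1,
)
print(sum(p.numel() for p in tiny_transformer.parameters()))  # total parameter count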

set_attn_processor


( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
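
A minimal sketch of both call styles, assuming a transformer loaded as above; AttnProcessor2_0 is the standard scaled-dot-product processor shipped with Diffusers:

from diffusers.models.attention_processor import AttnProcessor2_0

# Set a single processor instance for every Attention layer.
transformer.set_attn_processor(AttnProcessor2_0())

# Or set processors per layer: keys are the module paths reported by
# the attn_processors property.
processors = {name: AttnProcessor2_0() for name in transformer.attn_processors}
transformer.set_attn_processor(processors)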

Transformer2DModelOutput

diffusers.models.modeling_outputs.Transformer2DModelOutput


( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
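
Like other Diffusers model outputs, this is a BaseOutput-style dataclass, so the sample can be read by attribute or by index. A minimal sketch; the tensor shape here is arbitrary and purely for illustration:

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

out = Transformer2DModelOutput(sample=torch.randn(1, 16, 60, 104))
print(out.sample.shape)  # attribute access
print(out[0].shape)      # index access, per the BaseOutput convention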
