Diffusers documentation
HunyuanDiT2DModel
A Diffusion Transformer model for 2D data from Hunyuan-DiT.
HunyuanDiT2DModel
class diffusers.HunyuanDiT2DModel
< source >( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: typing.Optional[int] = None patch_size: typing.Optional[int] = None activation_fn: str = 'gelu-approximate' sample_size = 32 hidden_size = 1152 num_layers: int = 28 mlp_ratio: float = 4.0 learn_sigma: bool = True cross_attention_dim: int = 1024 norm_type: str = 'layer_norm' cross_attention_dim_t5: int = 2048 pooled_projection_dim: int = 1024 text_len: int = 77 text_len_t5: int = 256 use_style_cond_and_image_meta_size: bool = True )
Parameters

- num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.
- attention_head_dim (int, optional, defaults to 88) — The number of channels in each attention head.
- in_channels (int, optional) — The number of channels in the input and output (specify if the input is continuous).
- patch_size (int, optional) — The size of the patches the input is divided into.
- activation_fn (str, optional, defaults to 'gelu-approximate') — The activation function to use in the feed-forward network.
- sample_size (int, optional) — The width of the latent images. This is fixed during training since it is used to learn a number of position embeddings.
- dropout (float, optional, defaults to 0.0) — The dropout probability to use.
- cross_attention_dim (int, optional) — The number of dimensions in the CLIP text embedding.
- hidden_size (int, optional) — The size of the hidden layer in the conditioning embedding layers.
- num_layers (int, optional, defaults to 28) — The number of Transformer block layers to use.
- mlp_ratio (float, optional, defaults to 4.0) — The ratio of the hidden layer size to the input size.
- learn_sigma (bool, optional, defaults to True) — Whether to predict the variance.
- cross_attention_dim_t5 (int, optional) — The number of dimensions in the T5 text embedding.
- pooled_projection_dim (int, optional) — The size of the pooled projection.
- text_len (int, optional) — The length of the CLIP text embedding.
- text_len_t5 (int, optional) — The length of the T5 text embedding.
- use_style_cond_and_image_meta_size (bool, optional) — Whether to use style conditioning and image meta size. True for versions <= 1.1, False for versions >= 1.2.
HunYuanDiT: a diffusion model with a Transformer backbone.
Inherits from ModelMixin and ConfigMixin to be compatible with the samplers of diffusers' StableDiffusionPipeline.
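The attention width of the transformer follows the usual heads × head-dim convention, so the config defaults above determine the internal layer sizes. A minimal sketch in plain Python (the derived quantities below are an assumption based on the standard diffusers convention, not values read from the model itself):

```python
# Sketch: layer sizes implied by the default config above.
# Assumption: inner attention width = num_attention_heads * attention_head_dim,
# the usual convention in diffusers transformer models.
num_attention_heads = 16
attention_head_dim = 88
inner_dim = num_attention_heads * attention_head_dim  # width of the attention layers

mlp_ratio = 4.0
ffn_dim = int(inner_dim * mlp_ratio)  # hidden width of each feed-forward block
```

With the defaults this gives an inner dimension of 1408 and a feed-forward width of 5632; hidden_size = 1152 is separate and applies to the conditioning embedding layers.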
enable_forward_chunking
< source >( chunk_size: typing.Optional[int] = None dim: int = 0 )
Sets the attention processor to use the chunked feed-forward layer.
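Forward chunking trades compute for memory by running the feed-forward layer on slices of the sequence instead of all tokens at once; because a feed-forward block acts on each position independently, the chunked result equals the unchunked one. A minimal sketch of the idea in plain Python (a toy per-token function stands in for the real feed-forward module):

```python
def chunked_feed_forward(ff, hidden_states, chunk_size, dim=0):
    """Apply `ff` to `hidden_states` in chunks of `chunk_size` along the
    sequence axis and concatenate the results. Output equals ff(hidden_states)
    whenever `ff` acts independently on each position."""
    # Toy version over a flat list; dim=0 chunks the outer sequence axis.
    chunks = [hidden_states[i:i + chunk_size]
              for i in range(0, len(hidden_states), chunk_size)]
    out = []
    for chunk in chunks:
        out.extend(ff(chunk))  # only one chunk is "in flight" at a time
    return out

# Toy "feed-forward": doubles every value, position by position.
ff = lambda xs: [2 * x for x in xs]

tokens = [1, 2, 3, 4, 5]
assert chunked_feed_forward(ff, tokens, chunk_size=2) == ff(tokens)
```

The memory saving comes from materializing the feed-forward intermediate activations for only chunk_size positions at a time instead of the whole sequence.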
forward
< source >( hidden_states timestep encoder_hidden_states = None text_embedding_mask = None encoder_hidden_states_t5 = None text_embedding_mask_t5 = None image_meta_size = None style = None image_rotary_emb = None controlnet_block_samples = None return_dict = True )
Parameters

- hidden_states (torch.Tensor of shape (batch size, dim, height, width)) — The input tensor.
- timestep (torch.LongTensor, optional) — Used to indicate the denoising step.
- encoder_hidden_states (torch.Tensor of shape (batch size, sequence len, embed dims), optional) — Conditional embeddings for the cross-attention layer. This is the output of BertModel.
- text_embedding_mask (torch.Tensor of shape (batch, key_tokens), optional) — An attention mask applied to encoder_hidden_states. This is the output of BertModel.
- encoder_hidden_states_t5 (torch.Tensor of shape (batch size, sequence len, embed dims), optional) — Conditional embeddings for the cross-attention layer. This is the output of the T5 text encoder.
- text_embedding_mask_t5 (torch.Tensor of shape (batch, key_tokens), optional) — An attention mask applied to encoder_hidden_states_t5. This is the output of the T5 text encoder.
- image_meta_size (torch.Tensor) — Conditional embedding indicating the image size.
- style (torch.Tensor) — Conditional embedding indicating the style.
- image_rotary_emb (torch.Tensor) — The image rotary embeddings applied to the query and key tensors during attention calculation.
- return_dict (bool) — Whether to return a dictionary.
The forward method of the HunyuanDiT2DModel.
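The spatial input is patchified before entering the transformer, so the number of image tokens seen by the attention layers follows from sample_size and patch_size, while the two text embeddings contribute text_len + text_len_t5 cross-attention tokens. A rough shape-bookkeeping sketch in plain Python (patch_size = 2 is an assumption chosen for illustration; the signature above defaults it to None):

```python
def num_image_tokens(height, width, patch_size):
    # Each patch_size x patch_size tile of the latent becomes one token.
    assert height % patch_size == 0 and width % patch_size == 0
    return (height // patch_size) * (width // patch_size)

sample_size = 32   # latent height/width from the config defaults
patch_size = 2     # assumed for illustration, not read from the signature
text_len, text_len_t5 = 77, 256  # CLIP and T5 context lengths from the config

image_tokens = num_image_tokens(sample_size, sample_size, patch_size)
text_tokens = text_len + text_len_t5  # combined cross-attention context
```

Under these assumptions the self-attention operates over 256 image tokens, with a 333-token combined text context for cross-attention.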
fuse_qkv_projections
Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, the key and value projection matrices are fused.
This API is 🧪 experimental.
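Fusing QKV means concatenating the three projection weight matrices into one, so a single matrix multiply produces query, key, and value together and the result is then split. A toy demonstration of the equivalence with 2×2 matrices in plain Python (illustrative only; the real fusion operates on the module's torch weight tensors):

```python
def matvec(m, v):
    # Plain-Python matrix-vector product over nested lists.
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

# Separate 2x2 projection weights for query, key, value.
W_q = [[1, 0], [0, 1]]
W_k = [[2, 0], [0, 2]]
W_v = [[0, 1], [1, 0]]
x = [3, 4]

# Fused weight: stack the rows of W_q, W_k, W_v into one 6x2 matrix.
W_qkv = W_q + W_k + W_v
fused = matvec(W_qkv, x)          # one matmul instead of three
q, k, v = fused[0:2], fused[2:4], fused[4:6]

# The split outputs match the three separate projections exactly.
assert q == matvec(W_q, x)
assert k == matvec(W_k, x)
assert v == matvec(W_v, x)
```

The benefit is fewer, larger kernel launches, which is why fusion pairs well with optimized attention processors.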
set_attn_processor
< source >( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, 
diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, 
diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, 
diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]] )
Sets the attention processor to use to compute attention.
unset_attn_processor
Disables custom attention processors and sets the default attention implementation.
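set_attn_processor accepts either a single processor, applied uniformly to every attention layer, or a dict mapping layer names to processors; unset_attn_processor restores the default implementation. The dispatch pattern can be sketched like this (a plain-Python stand-in; the class, layer names, and string "processors" are illustrative, not the real diffusers internals):

```python
class TinyModel:
    """Illustrative stand-in for the attention-processor dispatch pattern."""

    def __init__(self):
        # Maps each attention layer name to its current processor.
        self.attn_processors = {"blocks.0.attn": "default", "blocks.1.attn": "default"}

    def set_attn_processor(self, processor):
        if isinstance(processor, dict):
            # Per-layer dict: must provide an entry for every attention layer.
            assert processor.keys() == self.attn_processors.keys()
            self.attn_processors.update(processor)
        else:
            # Single processor: applied uniformly to all layers.
            for name in self.attn_processors:
                self.attn_processors[name] = processor

    def unset_attn_processor(self):
        # Restore the default attention implementation everywhere.
        self.set_attn_processor("default")

m = TinyModel()
m.set_attn_processor("fused")
assert set(m.attn_processors.values()) == {"fused"}
m.unset_attn_processor()
assert set(m.attn_processors.values()) == {"default"}
```

The per-layer dict form is what makes mixed setups possible, e.g. a custom processor on some blocks while the rest keep the default.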