Diffusers documentation

FluxTransformer2DModel

A Transformer model for image-like data from Flux.

FluxTransformer2DModel

class diffusers.FluxTransformer2DModel

( patch_size: int = 1 in_channels: int = 64 out_channels: typing.Optional[int] = None num_layers: int = 19 num_single_layers: int = 38 attention_head_dim: int = 128 num_attention_heads: int = 24 joint_attention_dim: int = 4096 pooled_projection_dim: int = 768 guidance_embeds: bool = False axes_dims_rope: typing.Tuple[int, int, int] = (16, 56, 56) )

Parameters

  • patch_size (int, defaults to 1) — Patch size to turn the input data into small patches.
  • in_channels (int, defaults to 64) — The number of channels in the input.
  • out_channels (int, optional, defaults to None) — The number of channels in the output. If not specified, it defaults to in_channels.
  • num_layers (int, defaults to 19) — The number of layers of dual-stream DiT blocks to use.
  • num_single_layers (int, defaults to 38) — The number of layers of single-stream DiT blocks to use.
  • attention_head_dim (int, defaults to 128) — The number of dimensions to use for each attention head.
  • num_attention_heads (int, defaults to 24) — The number of attention heads to use.
  • joint_attention_dim (int, defaults to 4096) — The number of dimensions to use for the joint attention (the embedding/channel dimension of encoder_hidden_states).
  • pooled_projection_dim (int, defaults to 768) — The number of dimensions to use for the pooled projection.
  • guidance_embeds (bool, defaults to False) — Whether to use guidance embeddings for the guidance-distilled variant of the model.
  • axes_dims_rope (Tuple[int], defaults to (16, 56, 56)) — The dimensions to use for the rotary positional embeddings.

The Transformer model introduced in Flux.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/
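
The transformer can be loaded on its own with from_pretrained(). A minimal sketch, assuming the black-forest-labs/FLUX.1-dev checkpoint, which stores the transformer weights under a transformer subfolder:

import torch
from diffusers import FluxTransformer2DModel

# Load only the transformer weights from a Flux checkpoint; bfloat16 keeps
# the memory footprint of the large model manageable.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)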

forward

( hidden_states: Tensor encoder_hidden_states: Tensor = None pooled_projections: Tensor = None timestep: LongTensor = None img_ids: Tensor = None txt_ids: Tensor = None guidance: Tensor = None joint_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None controlnet_block_samples = None controlnet_single_block_samples = None return_dict: bool = True controlnet_blocks_repeat: bool = False )

Parameters

  • hidden_states (torch.Tensor of shape (batch_size, image_sequence_length, in_channels)) — Input hidden_states.
  • encoder_hidden_states (torch.Tensor of shape (batch_size, text_sequence_length, joint_attention_dim)) — Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
  • pooled_projections (torch.Tensor of shape (batch_size, projection_dim)) — Embeddings projected from the embeddings of the input conditions.
  • timestep (torch.LongTensor) — Used to indicate the denoising step.
  • block_controlnet_hidden_states (list of torch.Tensor) — A list of tensors that, if specified, are added to the residuals of the transformer blocks.
  • joint_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

The FluxTransformer2DModel forward method.
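
The shapes in the parameter list above can be exercised with a toy configuration. This is a smoke-test sketch only: the tiny layer counts and dimensions are made up (not a real Flux checkpoint), and it assumes a recent diffusers version where img_ids and txt_ids are 2D tensors of per-token (t, h, w) coordinates.

import torch
from diffusers import FluxTransformer2DModel

# Toy config: inner dim = num_attention_heads * attention_head_dim = 32,
# and sum(axes_dims_rope) must equal attention_head_dim.
model = FluxTransformer2DModel(
    num_layers=1,
    num_single_layers=1,
    attention_head_dim=16,
    num_attention_heads=2,
    joint_attention_dim=32,
    pooled_projection_dim=32,
    axes_dims_rope=(4, 6, 6),
)

batch, img_len, txt_len = 1, 16, 8
out = model(
    hidden_states=torch.randn(batch, img_len, 64),          # packed image latents
    encoder_hidden_states=torch.randn(batch, txt_len, 32),  # text embeddings
    pooled_projections=torch.randn(batch, 32),              # pooled text embedding
    timestep=torch.tensor([1.0]),                           # denoising step
    img_ids=torch.zeros(img_len, 3),                        # (t, h, w) position ids
    txt_ids=torch.zeros(txt_len, 3),
)
print(out.sample.shape)  # torch.Size([1, 16, 64])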

fuse_qkv_projections

( )

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

This API is 🧪 experimental.
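
A short usage sketch, reusing the hypothetical transformer instance from the loading example above (for Flux models this installs a fused attention processor such as FusedFluxAttnProcessor2_0):

# Fuse the QKV projections to reduce the number of matmul launches
# during inference.
transformer.fuse_qkv_projections()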

set_attn_processor

( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, 
diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for **all** Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
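
For example, the stock Flux processor (one of the classes listed in the signature above) can be applied either globally or per layer. A sketch, again using the hypothetical transformer instance from earlier:

from diffusers.models.attention_processor import FluxAttnProcessor2_0

# One instance shared by all Attention layers.
transformer.set_attn_processor(FluxAttnProcessor2_0())

# Or per layer: keys mirror transformer.attn_processors,
# e.g. "transformer_blocks.0.attn.processor".
procs = {name: FluxAttnProcessor2_0() for name in transformer.attn_processors}
transformer.set_attn_processor(procs)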

unfuse_qkv_projections

( )

Disables the fused QKV projection if enabled.

This API is 🧪 experimental.
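
A round-trip sketch pairing it with fuse_qkv_projections() above:

transformer.fuse_qkv_projections()
# ... run fused inference ...
transformer.unfuse_qkv_projections()  # restore the original projections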
