SanaTransformer2DModel

The SanaTransformer2DModel from SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers is a Diffusion Transformer model for 2D data, introduced by Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, Yujun Lin, Zhekai Zhang, Muyang Li, Ligeng Zhu, Yao Lu, and Song Han from NVIDIA and MIT HAN Lab.

The abstract from the paper is:

We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. Core designs include: (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. (3) Decoder-only text encoder: we replaced T5 with a modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. As a result, Sana-0.6B is very competitive with modern giant diffusion models (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. Code and model will be publicly released.

The model can be loaded with the following code snippet.

import torch
from diffusers import SanaTransformer2DModel

transformer = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
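
In practice the transformer is usually driven through SanaPipeline rather than called directly. The following is a minimal sketch of end-to-end use, assuming a CUDA device is available; the prompt is illustrative.

import torch
from diffusers import SanaPipeline

# Load the full pipeline (text encoder, transformer, VAE, scheduler).
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Generate an image and save it to disk.
image = pipe(prompt="a tiny astronaut hatching from an egg on the moon").images[0]
image.save("sana.png")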

SanaTransformer2DModel

class diffusers.SanaTransformer2DModel


( in_channels: int = 32 out_channels: typing.Optional[int] = 32 num_attention_heads: int = 70 attention_head_dim: int = 32 num_layers: int = 20 num_cross_attention_heads: typing.Optional[int] = 20 cross_attention_head_dim: typing.Optional[int] = 112 cross_attention_dim: typing.Optional[int] = 2240 caption_channels: int = 2304 mlp_ratio: float = 2.5 dropout: float = 0.0 attention_bias: bool = False sample_size: int = 32 patch_size: int = 1 norm_elementwise_affine: bool = False norm_eps: float = 1e-06 interpolation_scale: typing.Optional[int] = None guidance_embeds: bool = False guidance_embeds_scale: float = 0.1 qk_norm: typing.Optional[str] = None timestep_scale: float = 1.0 )

Parameters

  • in_channels (int, defaults to 32) — The number of channels in the input.
  • out_channels (int, optional, defaults to 32) — The number of channels in the output.
  • num_attention_heads (int, defaults to 70) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 32) — The number of channels in each attention head.
  • num_layers (int, defaults to 20) — The number of Transformer blocks.
  • num_cross_attention_heads (int, optional, defaults to 20) — The number of heads to use for cross-attention.
  • cross_attention_head_dim (int, optional, defaults to 112) — The number of channels in each cross-attention head.
  • cross_attention_dim (int, optional, defaults to 2240) — The number of channels in the cross-attention output.
  • caption_channels (int, defaults to 2304) — The number of channels in the caption embeddings.
  • mlp_ratio (float, defaults to 2.5) — The expansion ratio to use in the GLUMBConv layer.
  • dropout (float, defaults to 0.0) — The dropout probability.
  • attention_bias (bool, defaults to False) — Whether to use bias in the attention layers.
  • sample_size (int, defaults to 32) — The base size of the input latents.
  • patch_size (int, defaults to 1) — The size of the patches to use in the patch embedding layer.
  • norm_elementwise_affine (bool, defaults to False) — Whether to use elementwise affinity in the normalization layers.
  • norm_eps (float, defaults to 1e-6) — The epsilon value for the normalization layers.
  • qk_norm (str, optional, defaults to None) — The normalization to use for the query and key projections.
  • timestep_scale (float, defaults to 1.0) — The scale to use for the timesteps.

A 2D Transformer model introduced in the Sana family of models.
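
To make the shape relationships concrete, here is a minimal sketch that instantiates a deliberately tiny, hypothetical configuration (not a real checkpoint) and runs one forward pass. The inner dimension num_attention_heads * attention_head_dim should match cross_attention_dim, as it does in the defaults (70 × 32 = 2240).

import torch
from diffusers import SanaTransformer2DModel

# Hypothetical tiny config for illustration only; real checkpoints use the defaults above.
model = SanaTransformer2DModel(
    in_channels=4,
    out_channels=4,
    num_attention_heads=2,
    attention_head_dim=8,        # inner dim = 2 * 8 = 16
    num_layers=1,
    num_cross_attention_heads=2,
    cross_attention_head_dim=8,
    cross_attention_dim=16,      # matches the inner dim
    caption_channels=8,
    sample_size=8,
)

latents = torch.randn(1, 4, 8, 8)    # (batch, in_channels, height, width)
text_embeds = torch.randn(1, 16, 8)  # (batch, sequence_length, caption_channels)
timestep = torch.tensor([500.0])     # one timestep per batch element

out = model(hidden_states=latents, encoder_hidden_states=text_embeds, timestep=timestep)
print(out.sample.shape)              # torch.Size([1, 4, 8, 8])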

set_attn_processor


( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, 
diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class, or a dictionary of processor classes, that will be set as the processor for all Attention layers.

    If processor is a dict, the keys need to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
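
As a hedged sketch of the dict form, assuming the checkpoint from the loading snippet above: the keys of attn_processors are the module paths the dict must use. Passing a single instance instead of a dict applies it to every attention layer. Swapping processor classes changes how attention is computed, so this is illustrative rather than a recommended setting.

import torch
from diffusers import SanaTransformer2DModel
from diffusers.models.attention_processor import SanaLinearAttnProcessor2_0

transformer = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# attn_processors maps module paths (typically of the form
# "transformer_blocks.0.attn1.processor") to processor instances; the dict
# passed to set_attn_processor must use the same keys.
processors = {name: SanaLinearAttnProcessor2_0() for name in transformer.attn_processors}
transformer.set_attn_processor(processors)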

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput


( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
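
The forward pass returns this container by default. A minimal sketch of how it behaves, constructing one directly with a random tensor purely for illustration:

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# BaseOutput subclasses behave like dataclasses: fields are accessible as attributes.
out = Transformer2DModelOutput(sample=torch.randn(1, 32, 32, 32))
print(out.sample.shape)  # torch.Size([1, 32, 32, 32])

When the model is called with return_dict=False, a plain tuple containing sample is returned instead of this class.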
