Diffusers documentation


CogView3PlusTransformer2DModel

A Diffusion Transformer model for 2D data from CogView3Plus, introduced in CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion by Tsinghua University and ZhipuAI.

The model can be loaded with the following code snippet.

import torch
from diffusers import CogView3PlusTransformer2DModel

transformer = CogView3PlusTransformer2DModel.from_pretrained("THUDM/CogView3Plus-3b", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")

CogView3PlusTransformer2DModel

class diffusers.CogView3PlusTransformer2DModel


( patch_size: int = 2 in_channels: int = 16 num_layers: int = 30 attention_head_dim: int = 40 num_attention_heads: int = 64 out_channels: int = 16 text_embed_dim: int = 4096 time_embed_dim: int = 512 condition_dim: int = 256 pos_embed_max_size: int = 128 sample_size: int = 128 )

Parameters

  • patch_size (int, defaults to 2) — The size of the patches to use in the patch embedding layer.
  • in_channels (int, defaults to 16) — The number of channels in the input.
  • num_layers (int, defaults to 30) — The number of layers of Transformer blocks to use.
  • attention_head_dim (int, defaults to 40) — The number of channels in each attention head.
  • num_attention_heads (int, defaults to 64) — The number of heads to use for multi-head attention.
  • out_channels (int, defaults to 16) — The number of channels in the output.
  • text_embed_dim (int, defaults to 4096) — Input dimension of the text embeddings from the text encoder.
  • time_embed_dim (int, defaults to 512) — Output dimension of the timestep embeddings.
  • condition_dim (int, defaults to 256) — The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size, crop_coords).
  • pos_embed_max_size (int, defaults to 128) — The maximum resolution of the positional embeddings, from which slices of shape H x W are taken and added to the input patched latents, where H and W are the latent height and width respectively. A value of 128 means that the maximum supported height and width for image generation is 128 * vae_scale_factor * patch_size => 128 * 8 * 2 => 2048.
  • sample_size (int, defaults to 128) — The base resolution of the input latents. If height/width is not provided during generation, this value is used to determine the resolution as sample_size * vae_scale_factor => 128 * 8 => 1024 (see the sketch below).

The Transformer model introduced in CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion.
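The two resolution-related defaults above translate into concrete pixel sizes once the VAE scale factor is taken into account. The following is only a sketch of that arithmetic; the vae_scale_factor value of 8 is an assumption based on the 8x spatial compression of the VAE typically paired with this transformer.

patch_size = 2            # default documented above
pos_embed_max_size = 128  # default documented above
sample_size = 128         # default documented above
vae_scale_factor = 8      # assumption: 8x spatial compression of the paired VAE

# Resolution used when height/width are not passed at generation time.
base_resolution = sample_size * vae_scale_factor                      # 128 * 8 = 1024

# Largest height/width the positional embedding table can cover.
max_resolution = pos_embed_max_size * vae_scale_factor * patch_size   # 128 * 8 * 2 = 2048

print(base_resolution, max_resolution)  # 1024 2048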

forward


( hidden_states: Tensor encoder_hidden_states: Tensor timestep: LongTensor original_size: Tensor target_size: Tensor crop_coords: Tensor return_dict: bool = True ) → torch.Tensor or ~models.transformer_2d.Transformer2DModelOutput

Parameters

  • hidden_states (torch.Tensor) — Input hidden_states of shape (batch_size, channel, height, width).
  • encoder_hidden_states (torch.Tensor) — Conditional embeddings (embeddings computed from input conditions such as prompts) of shape (batch_size, sequence_len, text_embed_dim).
  • timestep (torch.LongTensor) — Used to indicate the denoising step.
  • original_size (torch.Tensor) — CogView3 uses SDXL-style micro-conditioning for the original image size as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
  • target_size (torch.Tensor) — CogView3 uses SDXL-style micro-conditioning for the target image size as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
  • crop_coords (torch.Tensor) — CogView3 uses SDXL-style micro-conditioning for the crop coordinates as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns

torch.Tensor or ~models.transformer_2d.Transformer2DModelOutput

The denoised latents conditioned on the provided inputs.

The forward method of CogView3PlusTransformer2DModel.
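A minimal sketch of a single forward call with random inputs. The shapes follow the parameter descriptions above; the 128 x 128 latent size assumes a 1024 x 1024 image with an 8x VAE, and the sequence length of 224 and the (batch, 2) packing of the micro-conditions as (height, width) pairs are assumptions for illustration only.

import torch
from diffusers import CogView3PlusTransformer2DModel

transformer = CogView3PlusTransformer2DModel.from_pretrained(
    "THUDM/CogView3Plus-3b", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")

batch_size = 1
latents = torch.randn(batch_size, 16, 128, 128, dtype=torch.bfloat16, device="cuda")     # (batch, in_channels, height, width)
prompt_embeds = torch.randn(batch_size, 224, 4096, dtype=torch.bfloat16, device="cuda")  # (batch, seq_len, text_embed_dim); 224 is an arbitrary length
timestep = torch.tensor([999], device="cuda")                                            # one denoising step index per sample
original_size = torch.tensor([[1024, 1024]], device="cuda")                              # assumed (batch, 2) = (height, width)
target_size = torch.tensor([[1024, 1024]], device="cuda")                                # assumed (batch, 2) = (height, width)
crop_coords = torch.tensor([[0, 0]], device="cuda")                                      # assumed (batch, 2) = (top, left)

with torch.no_grad():
    output = transformer(
        hidden_states=latents,
        encoder_hidden_states=prompt_embeds,
        timestep=timestep,
        original_size=original_size,
        target_size=target_size,
        crop_coords=crop_coords,
        return_dict=True,
    )

print(output.sample.shape)  # (batch, out_channels, height, width)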

set_attn_processor


( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, 
diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]] )

Parameters

  • processor (dict of AttentionProcessor or a single AttentionProcessor) — The instantiated processor class, or a dictionary of processor classes, to be set as the processor for all Attention layers.

    If processor is a dict, the keys need to define the paths to the corresponding cross attention processors. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
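For example, the processors of all Attention layers can be replaced in a single call. A minimal sketch, assuming the stock AttnProcessor2_0 (the PyTorch scaled-dot-product attention processor shipped with Diffusers) as the replacement:

from diffusers import CogView3PlusTransformer2DModel
from diffusers.models.attention_processor import AttnProcessor2_0

transformer = CogView3PlusTransformer2DModel.from_pretrained(
    "THUDM/CogView3Plus-3b", subfolder="transformer"
)

# Pass a single instance to use it for every Attention layer.
transformer.set_attn_processor(AttnProcessor2_0())

# Or pass a dict keyed by the processor paths exposed by `attn_processors`,
# which is the recommended form when the processors are trainable.
transformer.set_attn_processor({name: AttnProcessor2_0() for name in transformer.attn_processors})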

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput


( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
