SD3ControlNetModel
SD3ControlNetModel is the ControlNet implementation for Stable Diffusion 3.
The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.
The abstract from the paper is:
We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.
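The "zero convolution" idea from the abstract is simple to express in code. Below is a minimal, illustrative sketch (not the Diffusers implementation) of a zero-initialized 1x1 convolution, which guarantees the control branch contributes nothing to the frozen backbone at the start of finetuning:
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    # A 1x1 convolution whose weights and bias start at zero, so its output
    # is zero until training gradually grows the parameters.
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv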
Loading from the original format
By default, SD3ControlNetModel should be loaded with from_pretrained().
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel
controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
pipe = StableDiffusion3ControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet)
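Once loaded, the pipeline can be used for controlled generation. The sketch below continues from the snippet above; the prompt, the file names, and the conditioning scale are illustrative placeholders, and the control image must be a preprocessed Canny edge map matching this checkpoint:
import torch
from diffusers.utils import load_image

pipe.to("cuda", torch.float16)

# "canny_edge.png" is a placeholder path to a precomputed Canny edge map.
control_image = load_image("canny_edge.png")
image = pipe(
    "a photo of a futuristic city at dusk",  # placeholder prompt
    control_image=control_image,
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("output.png")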
SD3ControlNetModel
class diffusers.SD3ControlNetModel
< source >( sample_size: int = 128 patch_size: int = 2 in_channels: int = 16 num_layers: int = 18 attention_head_dim: int = 64 num_attention_heads: int = 18 joint_attention_dim: int = 4096 caption_projection_dim: int = 1152 pooled_projection_dim: int = 2048 out_channels: int = 16 pos_embed_max_size: int = 96 extra_conditioning_channels: int = 0 dual_attention_layers: typing.Tuple[int, ...] = () qk_norm: typing.Optional[str] = None pos_embed_type: typing.Optional[str] = 'sincos' use_pos_embed: bool = True force_zeros_for_pooled_projection: bool = True )
Parameters
- sample_size (int, defaults to 128) — The width/height of the latents. This is fixed during training since it is used to learn a number of position embeddings.
- patch_size (int, defaults to 2) — Patch size to turn the input data into small patches.
- in_channels (int, defaults to 16) — The number of latent channels in the input.
- num_layers (int, defaults to 18) — The number of transformer block layers to use.
- attention_head_dim (int, defaults to 64) — The number of channels in each attention head.
- num_attention_heads (int, defaults to 18) — The number of heads to use for multi-head attention.
- joint_attention_dim (int, defaults to 4096) — The embedding dimension used for joint text-image attention.
- caption_projection_dim (int, defaults to 1152) — The embedding dimension of the caption embeddings.
- pooled_projection_dim (int, defaults to 2048) — The embedding dimension of the pooled text projections.
- out_channels (int, defaults to 16) — The number of latent channels in the output.
- pos_embed_max_size (int, defaults to 96) — The maximum latent height/width for positional embeddings.
- extra_conditioning_channels (int, defaults to 0) — The number of extra conditioning channels used for the patch embedding.
- dual_attention_layers (Tuple[int, ...], defaults to ()) — The dual-stream transformer blocks to use.
- qk_norm (str, optional, defaults to None) — The normalization applied to the query and key in the attention layers. If None, no normalization is used.
- pos_embed_type (str, defaults to "sincos") — The type of positional embedding to use. Choose between "sincos" and None.
- use_pos_embed (bool, defaults to True) — Whether to use positional embeddings.
- force_zeros_for_pooled_projection (bool, defaults to True) — Whether to force zeros for the pooled projection embeddings. This is handled in the pipelines by reading the config value of the ControlNet model.
A ControlNet model for Stable Diffusion 3.
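For experimentation, the model can also be instantiated directly from its config arguments. A minimal sketch using the documented defaults, except for a deliberately small num_layers so the randomly initialized model is cheap to build:
from diffusers.models import SD3ControlNetModel

# Randomly initialized, reduced-depth variant for quick shape checks.
controlnet = SD3ControlNetModel(
    sample_size=128,
    patch_size=2,
    in_channels=16,
    num_layers=2,  # reduced from the default 18 to keep this lightweight
    attention_head_dim=64,
    num_attention_heads=18,
)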
forward
< source >( hidden_states: Tensor controlnet_cond: Tensor conditioning_scale: float = 1.0 encoder_hidden_states: Tensor = None pooled_projections: Tensor = None timestep: LongTensor = None joint_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None return_dict: bool = True )
Parameters
- hidden_states (torch.Tensor of shape (batch_size, channel, height, width)) — The input hidden_states.
- controlnet_cond (torch.Tensor) — The conditional input tensor of shape (batch_size, sequence_length, hidden_size).
- conditioning_scale (float, defaults to 1.0) — The scale factor for the ControlNet outputs.
- encoder_hidden_states (torch.Tensor of shape (batch_size, sequence_len, embed_dims)) — Conditional embeddings (embeddings computed from input conditions such as prompts) to use.
- pooled_projections (torch.Tensor of shape (batch_size, projection_dim)) — Embeddings projected from the embeddings of the input conditions.
- timestep (torch.LongTensor) — Used to indicate the denoising step.
- joint_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.
The SD3ControlNetModel forward method.
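Direct calls to forward are normally handled by the pipeline. The sketch below reuses the controlnet instance from the earlier snippet; all shapes, the sequence length, and the timestep are illustrative assumptions (in the SD3 pipeline, controlnet_cond is derived from the VAE-encoded control image):
import torch

latents = torch.randn(1, 16, 128, 128)       # (batch, in_channels, height, width)
control_cond = torch.randn(1, 16, 128, 128)  # control latents (assumption: same layout as hidden_states)
prompt_embeds = torch.randn(1, 154, 4096)    # (batch, sequence_len, joint_attention_dim)
pooled = torch.randn(1, 2048)                # (batch, pooled_projection_dim)

out = controlnet(
    hidden_states=latents,
    controlnet_cond=control_cond,
    conditioning_scale=1.0,
    encoder_hidden_states=prompt_embeds,
    pooled_projections=pooled,
    timestep=torch.tensor([999]),
)
block_residuals = out.controlnet_block_samples  # tuple of per-block residual tensors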
set_attn_processor
< source >( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )
Here AttentionProcessor stands for any of the processor classes defined in diffusers.models.attention_processor (for example AttnProcessor, AttnProcessor2_0, JointAttnProcessor2_0, XFormersAttnProcessor, IPAdapterAttnProcessor2_0, SD3IPAdapterJointAttnProcessor2_0, and the LoRA, PAG, fused, and custom-diffusion variants).
Sets the attention processor to use to compute attention.
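For example, to switch every attention layer to the default scaled-dot-product joint processor (a sketch; a single processor instance applies to all layers, while a dict maps processor names to instances):
from diffusers.models.attention_processor import JointAttnProcessor2_0

# Apply one processor instance to all attention layers at once.
controlnet.set_attn_processor(JointAttnProcessor2_0())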
SD3ControlNetOutput
class diffusers.models.controlnets.SD3ControlNetOutput
< source >( controlnet_block_samples: typing.Tuple[torch.Tensor] )