Consistency Decoder

The consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report.

The original codebase can be found at openai/consistencydecoder.

Inference is only supported for 2 iterations as of now.

The pipeline could not have been contributed without the help of madebyollin and mrsteyk on this issue.

ConsistencyDecoderVAE

class diffusers.ConsistencyDecoderVAE

( scaling_factor: float = 0.18215 latent_channels: int = 4 sample_size: int = 32 encoder_act_fn: str = 'silu' encoder_block_out_channels: typing.Tuple[int, ...] = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: typing.Tuple[str, ...] = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: typing.Tuple[int, ...] = (320, 640, 1024, 1024) decoder_down_block_types: typing.Tuple[str, ...] = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: typing.Tuple[str, ...] = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') )

The consistency decoder used with DALL-E 3.

Examples:

>>> import torch
>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE

>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
... ).to("cuda")

>>> image = pipe("horse", generator=torch.manual_seed(0)).images[0]
>>> image
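The prompt and seed above are illustrative; to write the result to disk, the returned PIL image can be saved (filename hypothetical):

>>> image.save("horse.png")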

wrapper

( *args **kwargs )

disable_slicing

( )

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling

( )

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
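
A minimal sketch, continuing from the pipeline built in the example above, of toggling sliced decoding around a batched call (prompts illustrative):

>>> vae.enable_slicing()  # decode the batch one slice at a time to lower peak memory
>>> images = pipe(["horse", "zebra"], generator=torch.manual_seed(0)).images
>>> vae.disable_slicing()  # restore single-step decoding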

enable_tiling

( use_tiling: bool = True )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding and decoding in several steps. This is useful for saving a large amount of memory and for processing larger images.
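
A minimal sketch, continuing from the pipeline built in the example above, of tiled decoding for a larger image (prompt and resolution illustrative):

>>> vae.enable_tiling()  # process the tensor tile by tile
>>> image = pipe("horse", height=768, width=768, generator=torch.manual_seed(0)).images[0]
>>> vae.disable_tiling()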

forward

( sample: Tensor sample_posterior: bool = False return_dict: bool = True generator: typing.Optional[torch._C.Generator] = None ) → DecoderOutput or tuple

Parameters

  • sample (torch.Tensor) — Input sample.
  • sample_posterior (bool, optional, defaults to False) — Whether to sample from the posterior.
  • return_dict (bool, optional, defaults to True) — Whether to return a DecoderOutput instead of a plain tuple.
  • generator (torch.Generator, optional, defaults to None) — Generator to use for sampling.

Returns

DecoderOutput or tuple

If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned.
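
A minimal sketch of a direct round trip through forward, assuming the fp16 VAE on CUDA from the example above; the input is a dummy image batch, and the names and shapes are illustrative:

>>> sample = torch.randn(1, 3, 256, 256, dtype=torch.float16, device="cuda")  # dummy image batch
>>> with torch.no_grad():
...     out = vae(sample, sample_posterior=True, generator=torch.manual_seed(0))
>>> out.sample.shape  # encoding then decoding preserves the spatial shape
torch.Size([1, 3, 256, 256])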

set_attn_processor

( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, 
diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class, or a dictionary of processor classes, that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
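
A minimal sketch of setting one of the processors listed above on every attention layer:

>>> from diffusers.models.attention_processor import AttnProcessor2_0

>>> vae.set_attn_processor(AttnProcessor2_0())  # PyTorch 2.0 scaled-dot-product attention for all layers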

set_default_attn_processor

( )

Disables custom attention processors and sets the default attention implementation.

tiled_encode

( x: Tensor return_dict: bool = True ) → ConsistencyDecoderVAEOutput or tuple

Parameters

  • x (torch.Tensor) — Input batch of images.
  • return_dict (bool, optional, defaults to True) — Whether to return a ConsistencyDecoderVAEOutput instead of a plain tuple.

Returns

ConsistencyDecoderVAEOutput or tuple

If return_dict is True, a ConsistencyDecoderVAEOutput is returned, otherwise a plain tuple is returned.

Encode a batch of images using a tiled encoder.

When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the output, but they should be much less noticeable.
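
A minimal sketch, assuming the fp16 VAE on CUDA from the example above, of encoding a large dummy image tile by tile; the 8x downsampling factor and 4 latent channels follow the config defaults shown above:

>>> x = torch.randn(1, 3, 1024, 1024, dtype=torch.float16, device="cuda")  # large dummy image batch
>>> posterior = vae.tiled_encode(x).latent_dist  # DiagonalGaussianDistribution over the latents
>>> latents = posterior.sample(generator=torch.manual_seed(0))
>>> latents.shape
torch.Size([1, 4, 128, 128])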
