Flux
Flux is a family of text-to-image generation models based on diffusion transformers.
We recommend using an inf2.24xlarge instance, with the tensor parallel size set to 8, for both model compilation and inference.
Export to Neuron
- Option 1: CLI
optimum-cli export neuron --model black-forest-labs/FLUX.1-dev --tensor_parallel_size 8 --batch_size 1 --height 1024 --width 1024 --num_images_per_prompt 1 --torch_dtype bfloat16 flux_dev_neuron/
- Option 2: Python API
import torch

from optimum.neuron import NeuronFluxPipeline

if __name__ == "__main__":
    compiler_args = {"auto_cast": "none"}
    input_shapes = {"batch_size": 1, "height": 1024, "width": 1024}

    pipe = NeuronFluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        torch_dtype=torch.bfloat16,
        export=True,
        tensor_parallel_size=8,
        **compiler_args,
        **input_shapes,
    )

    # Save the compiled artifacts locally
    pipe.save_pretrained("flux_dev_neuron_1024_tp8/")

    # Upload to the HuggingFace Hub
    pipe.push_to_hub(
        "flux_dev_neuron_1024_tp8/",
        repository_id="Jingya/FLUX.1-dev-neuronx-1024x1024-tp8",  # Replace with your HF Hub repo id
    )
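Once exported, the compiled pipeline can be reloaded later without recompiling, either from the local save directory or from the Hub repository it was pushed to. A minimal sketch, reusing the directory and repo id from the export example above:

from optimum.neuron import NeuronFluxPipeline

# Reload from the local export directory ...
pipe = NeuronFluxPipeline.from_pretrained("flux_dev_neuron_1024_tp8/")

# ... or from the Hub repository it was pushed to
pipe = NeuronFluxPipeline.from_pretrained("Jingya/FLUX.1-dev-neuronx-1024x1024-tp8")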
Guidance-distilled
- The guidance-distilled variant takes about 50 sampling steps to generate good-quality images.
import torch

from optimum.neuron import NeuronFluxPipeline

pipe = NeuronFluxPipeline.from_pretrained("flux_dev_neuron_1024_tp8/")

prompt = "A cat holding a sign that says hello world"
out = pipe(
    prompt,
    guidance_scale=3.5,
    num_inference_steps=50,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
out.save("flux_optimum.png")

Timestep-distilled
- max_sequence_length cannot exceed 256.
- guidance_scale needs to be 0.
- Since this is a timestep-distilled model, it benefits from fewer sampling steps.
optimum-cli export neuron --model black-forest-labs/FLUX.1-schnell --tensor_parallel_size 8 --batch_size 1 --height 1024 --width 1024 --num_images_per_prompt 1 --sequence_length 256 --torch_dtype bfloat16 flux_schnell_neuron_1024_tp8/
import torch
from optimum.neuron import NeuronFluxPipeline

pipe = NeuronFluxPipeline.from_pretrained("flux_schnell_neuron_1024_tp8")
prompt = "A cat holding a sign that says hello world"
out = pipe(prompt, guidance_scale=0.0, max_sequence_length=256, num_inference_steps=4).images[0]
out.save("flux_schnell_optimum.png")

NeuronFluxPipeline
The Flux pipeline for text-to-image generation.
class optimum.neuron.NeuronFluxPipeline
< source >( config: dict[str, typing.Any] configs: dict[str, 'PretrainedConfig'] neuron_configs: dict[str, 'NeuronDefaultConfig'] data_parallel_mode: typing.Literal['none', 'unet', 'transformer', 'all'] scheduler: diffusers.schedulers.scheduling_utils.SchedulerMixin | None vae_decoder: torch.jit._script.ScriptModule | NeuronModelVaeDecoder text_encoder: torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None text_encoder_2: torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None unet: torch.jit._script.ScriptModule | NeuronModelUnet | None = None transformer: torch.jit._script.ScriptModule | NeuronModelTransformer | None = None vae_encoder: torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None image_encoder: torch.jit._script.ScriptModule | None = None safety_checker: torch.jit._script.ScriptModule | None = None tokenizer: transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None tokenizer_2: transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None feature_extractor: transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None controlnet: torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule] | NeuronControlNetModel | NeuronMultiControlNetModel | None = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: bool | None = None model_save_dir: str | pathlib.Path | tempfile.TemporaryDirectory | None = None model_and_config_save_paths: dict[str, tuple[str, pathlib.Path]] | None = None )
NeuronFluxInpaintPipeline
The Flux pipeline for image inpainting.
class optimum.neuron.NeuronFluxInpaintPipeline
< source >( config: dict[str, typing.Any] configs: dict[str, 'PretrainedConfig'] neuron_configs: dict[str, 'NeuronDefaultConfig'] data_parallel_mode: typing.Literal['none', 'unet', 'transformer', 'all'] scheduler: diffusers.schedulers.scheduling_utils.SchedulerMixin | None vae_decoder: torch.jit._script.ScriptModule | NeuronModelVaeDecoder text_encoder: torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None text_encoder_2: torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None unet: torch.jit._script.ScriptModule | NeuronModelUnet | None = None transformer: torch.jit._script.ScriptModule | NeuronModelTransformer | None = None vae_encoder: torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None image_encoder: torch.jit._script.ScriptModule | None = None safety_checker: torch.jit._script.ScriptModule | None = None tokenizer: transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None tokenizer_2: transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None feature_extractor: transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None controlnet: torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule] | NeuronControlNetModel | NeuronMultiControlNetModel | None = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: bool | None = None model_save_dir: str | pathlib.Path | tempfile.TemporaryDirectory | None = None model_and_config_save_paths: dict[str, tuple[str, pathlib.Path]] | None = None )
To use NeuronFluxInpaintPipeline, pass in the original image and a mask of the area you want to replace in it. The masked area is then replaced with the content described in the prompt.
from diffusers.utils import load_image
from optimum.neuron import NeuronFluxInpaintPipeline

pipe = NeuronFluxInpaintPipeline.from_pretrained("Jingya/Flux.1-Schnell-1024x1024-neuronx-tp8")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
source = load_image(img_url)
mask = load_image(mask_url)
image = pipe(prompt=prompt, image=source, mask_image=mask, max_sequence_length=256).images[0]
image.save("flux_inpainting.png")
Are there any other diffusion features you would like us to support in 🤗 Optimum-neuron? Please file an issue in the Optimum-neuron Github repo or discuss with us on the HuggingFace community forum, cheers 🤗!