ExecuTorch

ExecuTorch is a platform for running PyTorch training and inference programs on mobile and edge devices. It is powered by torch.compile and torch.export for performance and deployment.
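
To illustrate what the export step produces, here is a minimal, self-contained torch.export sketch on a toy module (the module and names here are illustrative, not part of the Transformers API):

import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x) * 2

# torch.export traces the module into a standalone ExportedProgram,
# a graph representation that runtimes such as ExecuTorch can lower and run.
example_inputs = (torch.randn(2, 3),)
exported = torch.export.export(TinyModel(), example_inputs)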

You can use ExecuTorch with Transformers and torch.export. The `convert_and_export_with_cache()` method converts a PreTrainedModel into an exportable module. Under the hood, it uses torch.export to export the model, ensuring compatibility with ExecuTorch.

import torch
from transformers import LlamaForCausalLM, AutoTokenizer, GenerationConfig
from transformers.integrations.executorch import (
    TorchExportableModuleWithStaticCache,
    convert_and_export_with_cache
)

# Use a static KV cache with a fixed batch size and max length so the model can be exported
generation_config = GenerationConfig(
    use_cache=True,
    cache_implementation="static",
    cache_config={
        "batch_size": 1,
        "max_cache_len": 20,
    }
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B", pad_token="</s>", padding_side="right")
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B", device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="sdpa", generation_config=generation_config)

exported_program = convert_and_export_with_cache(model)
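
If you want to inspect what was captured before moving on, the returned object is a standard torch.export ExportedProgram (this check is optional and not part of the original example):

# Optional: print the Python-like code of the exported graph.
print(exported_program.graph_module.code)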

The exported PyTorch model is now ready to use with ExecuTorch. Wrap the model with TorchExportableModuleWithStaticCache to generate text.

prompts = ["Simply put, the theory of relativity states that "]
prompt_tokens = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
prompt_token_ids = prompt_tokens["input_ids"]

generated_ids = TorchExportableModuleWithStaticCache.generate(
    exported_program=exported_program, prompt_token_ids=prompt_token_ids, max_new_tokens=20,
)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
['Simply put, the theory of relativity states that 1) the speed of light is the']
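
From here, a typical next step is to lower the exported program to ExecuTorch's on-device format. A minimal sketch, assuming the executorch package is installed (the exact lowering APIs may vary between ExecuTorch releases):

from executorch.exir import to_edge

# Convert the torch.export program to ExecuTorch's edge dialect,
# then serialize it to a .pte file for the on-device runtime.
edge_program = to_edge(exported_program)
et_program = edge_program.to_executorch()

with open("llama.pte", "wb") as f:
    f.write(et_program.buffer)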