Sentence Transformers 🤗

Sentence Transformers 🤗 is a Python framework for state-of-the-art sentence, text, and image embeddings. It can be used to compute embeddings with Sentence Transformer models or similarity scores with Cross-Encoder (a.k.a. reranker) models. This unlocks a wide range of applications, including semantic search, semantic textual similarity, and paraphrase mining. Optimum Neuron provides APIs that ease the use of Sentence Transformers on AWS Neuron devices.
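For context, a typical Sentence Transformers workflow on CPU or GPU looks like the sketch below (the model name and sentences are illustrative); the rest of this page shows how to export and run the same models on Neuron devices.

from sentence_transformers import SentenceTransformer, util

# Standard (non-Neuron) workflow: compute embeddings and score their similarity
model = SentenceTransformer("BAAI/bge-large-en-v1.5")
embeddings = model.encode(
    ["Semantic search retrieves by meaning", "Paraphrase mining finds near-duplicates"],
    convert_to_tensor=True,
)
score = util.cos_sim(embeddings[0], embeddings[1])  # cosine similarity, shape (1, 1)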

Export to Neuron

Option 1: Command Line Interface

  • Example - Text embeddings
optimum-cli export neuron -m BAAI/bge-large-en-v1.5 --sequence_length 384 --batch_size 1 --task feature-extraction bge_emb_neuron/
  • Example - Image search
optimum-cli export neuron -m sentence-transformers/clip-ViT-B-32 --sequence_length 64 --text_batch_size 3 --image_batch_size 1 --num_channels 3 --height 224 --width 224 --task feature-extraction --subfolder 0_CLIPModel clip_emb_neuron/
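Once exported, the compiled model can be loaded back with the Python API documented below. A minimal sketch for the text-embedding export above, assuming the CLI saved the tokenizer into the output directory alongside the compiled model:

from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSentenceTransformers

# Load the compiled model and its tokenizer from the CLI output directory
tokenizer = AutoTokenizer.from_pretrained("bge_emb_neuron/")
model = NeuronModelForSentenceTransformers.from_pretrained("bge_emb_neuron/")

embedding = model(**tokenizer("Hello world", return_tensors="pt")).sentence_embedding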

Option 2: Python API

  • Example - Text embeddings
from optimum.neuron import NeuronModelForSentenceTransformers

# configs for compiling model
input_shapes = {
    "batch_size": 1,
    "sequence_length": 384,
}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

neuron_model = NeuronModelForSentenceTransformers.from_pretrained(
    "BAAI/bge-large-en-v1.5", 
    export=True, 
    **input_shapes,
    **compiler_args,
)

# Save locally
neuron_model.save_pretrained("bge_emb_neuron/")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "bge_emb_neuron/", repository_id="optimum/bge-base-en-v1.5-neuronx"  # Replace with your HF Hub repo id
)
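A note on the compiler arguments used above: auto_cast="matmul" restricts automatic down-casting to matrix-multiplication operations, and auto_cast_type="bf16" performs those operations in bfloat16, trading a small amount of numerical precision for faster inference. These values are a common choice for embedding models, not a requirement.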
  • Example - Image search
from optimum.neuron import NeuronModelForSentenceTransformers

# configs for compiling model
input_shapes = {
    "num_channels": 3,
    "height": 224,
    "width": 224,
    "text_batch_size": 3,
    "image_batch_size": 1,
    "sequence_length": 64,
}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

neuron_model = NeuronModelForSentenceTransformers.from_pretrained(
    "sentence-transformers/clip-ViT-B-32", 
    subfolder="0_CLIPModel", 
    export=True, 
    dynamic_batch_size=False, 
    **input_shapes,
    **compiler_args,
)

# Save locally
neuron_model.save_pretrained("clip_emb_neuron/")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "clip_emb_neuron/", repository_id="optimum/clip_vit_emb_neuronx"  # Replace with your HF Hub repo id
)

NeuronModelForSentenceTransformers

class optimum.neuron.NeuronModelForSentenceTransformers

( model: ScriptModule config: PretrainedConfig model_save_dir: str | pathlib.Path | tempfile.TemporaryDirectory | None = None model_file_name: str | None = None preprocessors: list | None = None neuron_config: NeuronDefaultConfig | None = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronTracedModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript module containing the NEFF (Neuron Executable File Format) compiled by the neuron(x) compiler.

Neuron model for Sentence Transformers.

This model inherits from ~neuron.modeling.NeuronTracedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Sentence Transformers model on Neuron devices.

forward

( input_ids: Tensor attention_mask: Tensor pixel_values: torch.Tensor | None = None token_type_ids: torch.Tensor | None = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (torch.Tensor | None of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values are selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (torch.Tensor | None of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate the first and second portions of the inputs. Indices are selected in [0, 1].

The NeuronModelForSentenceTransformers forward method overrides the __call__ special method. It accepts only the inputs traced during the compilation step; any additional inputs provided at inference time will be ignored. To include extra inputs, recompile the model with them.

Text Example

>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForSentenceTransformers

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bge-base-en-v1.5-neuronx")
>>> model = NeuronModelForSentenceTransformers.from_pretrained("optimum/bge-base-en-v1.5-neuronx")

>>> inputs = tokenizer("In the smouldering promise of the fall of Troy, a mythical world of gods and mortals rises from the ashes.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> token_embeddings = outputs.token_embeddings
>>> sentence_embedding = outputs.sentence_embedding
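As a hypothetical continuation of the example above: assuming the checkpoint was compiled with batch_size=1 as in the export example, two sentences can be compared by embedding each one separately and scoring the pair with cosine similarity.

>>> from sentence_transformers import util

>>> emb_a = model(**tokenizer("A mythical world of gods and mortals.", return_tensors="pt")).sentence_embedding
>>> emb_b = model(**tokenizer("Gods and mortals in a legendary world.", return_tensors="pt")).sentence_embedding
>>> score = util.cos_sim(emb_a, emb_b)  # tensor of shape (1, 1)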

Image Example

>>> from PIL import Image
>>> from transformers import AutoProcessor
>>> from sentence_transformers import util
>>> from optimum.neuron import NeuronModelForSentenceTransformers

>>> processor = AutoProcessor.from_pretrained("optimum/clip_vit_emb_neuronx")
>>> model = NeuronModelForSentenceTransformers.from_pretrained("optimum/clip_vit_emb_neuronx")
>>> util.http_get("https://github.com/UKPLab/sentence-transformers/raw/master/examples/sentence_transformer/applications/image-search/two_dogs_in_snow.jpg", "two_dogs_in_snow.jpg")
>>> inputs = processor(
>>>     text=["Two dogs in the snow", 'A cat on a table', 'A picture of London at night'], images=Image.open("two_dogs_in_snow.jpg"), return_tensors="pt", padding=True
>>> )

>>> outputs = model(**inputs)
>>> cos_scores = util.cos_sim(outputs.image_embeds, outputs.text_embeds)  # Compute cosine similarities
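The caption with the highest cosine score is the model's best match for the image. As a hypothetical continuation:

>>> best = int(cos_scores[0].argmax())  # cos_scores has shape (1, 3): one image against three captions
>>> print(["Two dogs in the snow", "A cat on a table", "A picture of London at night"][best])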
