AWS Trainium & Inferentia documentation

LoRA for Neuron


An implementation of LoRA (Low-Rank Adaptation) optimized for distributed training on AWS Trainium devices. This module provides parameter-efficient fine-tuning with support for tensor parallelism and sequence parallelism.
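
As background for the classes below, LoRA adds a trainable low-rank update, scaled by `alpha / r`, on top of a frozen base weight. The following plain-NumPy sketch (illustrative only, not the Neuron implementation) shows the update rule and why so few parameters are trained:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 1024, 1024, 8, 16
W = rng.standard_normal((d_in, d_out))     # frozen base weight
A = rng.standard_normal((d_in, r)) * 0.01  # trainable LoRA "down" projection
B = np.zeros((r, d_out))                   # trainable LoRA "up" projection, zero-initialized

x = rng.standard_normal((2, d_in))         # a small batch of inputs

# Forward pass: base output plus the scaled low-rank correction.
y = x @ W + (alpha / r) * (x @ A @ B)

# With B initialized to zero, the adapter starts as a no-op.
assert np.allclose(y, x @ W)

# Only r * (d_in + d_out) values are trained instead of d_in * d_out.
lora_params = A.size + B.size
full_params = W.size
print(lora_params, full_params)  # prints: 16384 1048576
```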

PEFT Model Classes

NeuronPeftModel

class optimum.neuron.peft.NeuronPeftModel

( model: PreTrainedModel peft_config: PeftConfig adapter_name: str = 'default' autocast_adapter_dtype: bool = True **kwargs: Any )

NeuronPeftModelForCausalLM

class optimum.neuron.peft.NeuronPeftModelForCausalLM

( model: PreTrainedModel peft_config: PeftConfig adapter_name: str = 'default' autocast_adapter_dtype: bool = True **kwargs: Any )

LoRA Layer Implementations

Base LoRA Layer

class optimum.neuron.peft.tuners.lora.layer.NeuronLoraLayer

( base_layer: Module ephemeral_gpu_offload: bool = False **kwargs )

Parallel Linear LoRA

class optimum.neuron.peft.tuners.lora.layer.ParallelLinear

( base_layer adapter_name: str r: int = 0 lora_alpha: int = 1 lora_dropout: float = 0.0 fan_in_fan_out: bool = False is_target_conv_1d_layer: bool = False init_lora_weights: bool | str = True use_rslora: bool = False use_dora: bool = False lora_bias: bool = False **kwargs )

GQA QKV Column-Parallel LoRA

class optimum.neuron.peft.tuners.lora.layer.GQAQKVColumnParallelLinear

( base_layer adapter_name: str r: int = 0 lora_alpha: int = 1 lora_dropout: float = 0.0 fan_in_fan_out: bool = False is_target_conv_1d_layer: bool = False init_lora_weights: bool | str = True use_rslora: bool = False use_dora: bool = False lora_bias: bool = False **kwargs )

Parallel Embedding LoRA

class optimum.neuron.peft.tuners.lora.layer.ParallelEmbedding

( base_layer: Module adapter_name: str r: int = 0 lora_alpha: int = 1 lora_dropout: float = 0.0 fan_in_fan_out: bool = False init_lora_weights: bool | str = True use_rslora: bool = False use_dora: bool = False lora_bias: bool = False **kwargs )

LoRA Models

NeuronLoraModel

class optimum.neuron.peft.tuners.NeuronLoraModel

( model config adapter_name low_cpu_mem_usage: bool = False )

Utility Functions

get_peft_model

optimum.neuron.peft.get_peft_model

( model: PreTrainedModel peft_config: PeftConfig adapter_name: str = 'default' mixed: bool = False autocast_adapter_dtype: bool = True revision: str | None = None low_cpu_mem_usage: bool = False )
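
A typical fine-tuning setup pairs a standard `peft` `LoraConfig` with the Neuron-aware `get_peft_model` documented above. This is a hedged sketch: the model id, hyperparameter values, and target module names are illustrative assumptions, not defaults, and actually running it requires an AWS Neuron training environment:

```python
from peft import LoraConfig
from transformers import AutoModelForCausalLM

from optimum.neuron.peft import get_peft_model

# Illustrative hyperparameters; tune r / lora_alpha / target_modules
# for your model and task.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

# Model id is a placeholder; any supported causal LM works the same way.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Wrap the base model; LoRA adapters are injected into the parallel layers.
peft_model = get_peft_model(model, lora_config)
```

The wrapped model can then be passed to the regular Optimum Neuron training workflow; only the adapter parameters are trained.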

Architecture Support

The Neuron LoRA implementation supports the following parallel layer types:

  • ColumnParallelLinear: layers whose weights are split along the output dimension
  • RowParallelLinear: layers whose weights are split along the input dimension
  • ParallelEmbedding: embedding layers distributed across ranks
  • GQAQKVColumnParallelLinear: grouped-query attention projections with challenging tensor-parallel configurations

Each layer type has a corresponding LoRA implementation that preserves the parallelization strategy while adding low-rank adaptation.
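
To see why each parallel layer needs its own LoRA variant, consider the column-parallel case: the base weight and the LoRA B matrix are both sharded along the output dimension, while A is replicated, so each rank computes a slice of the full output. A NumPy sketch of this invariant (illustrative only, not the Neuron code):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r, tp = 8, 6, 2, 2  # tp = tensor-parallel degree

W = rng.standard_normal((d_in, d_out))  # base weight
A = rng.standard_normal((d_in, r))      # LoRA A, replicated on every rank
B = rng.standard_normal((r, d_out))     # LoRA B, sharded with W
x = rng.standard_normal((3, d_in))

# Unsharded reference: full LoRA forward pass.
reference = x @ W + x @ A @ B

# Column-parallel: shard W and B along the output dimension.
W_shards = np.split(W, tp, axis=1)
B_shards = np.split(B, tp, axis=1)
per_rank = [x @ W_shards[i] + (x @ A) @ B_shards[i] for i in range(tp)]

# Concatenating the per-rank outputs recovers the unsharded result,
# so the low-rank update composes with the layer's sharding scheme.
sharded = np.concatenate(per_rank, axis=1)
assert np.allclose(reference, sharded)
```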

Key Features

  • Distributed training: full support for tensor parallelism and sequence parallelism
  • Checkpoint consolidation: automatic conversion between sharded and consolidated checkpoints
  • Weight transformation: seamless integration with the model weight transformation specs
  • Compatibility: works with all custom modeling architectures supported in Optimum Neuron
