Optimization
Transformation
class optimum.fx.optimization.Transformation
< source >( )
A torch.fx graph transformation.
It must implement the transform() method, and be used as a callable.
__call__
< source >( graph_module: GraphModule lint_and_recompile: bool = True ) → torch.fx.GraphModule
get_transformed_nodes
< source >( graph_module: GraphModule ) → List[torch.fx.Node]
Returns the list of nodes that were transformed by this transformation.
transform
< source >( graph_module: GraphModule ) → torch.fx.GraphModule
transformed
< source >( node: Node ) → bool
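The contract described above (subclasses implement transform(), and instances are used as callables) can be sketched in plain Python. This is an illustrative mock, not Optimum's actual implementation: the "graph" is just a list of (op, value) pairs standing in for torch.fx nodes, and `DoubleConstants` is a hypothetical transformation invented for the example.

```python
# Schematic sketch of the Transformation interface: __call__ delegates to
# transform(), which subclasses must implement.
class Transformation:
    def __call__(self, graph_module, lint_and_recompile=True):
        graph_module = self.transform(graph_module)
        # The real class would lint and recompile the fx.GraphModule here.
        return graph_module

    def transform(self, graph_module):
        raise NotImplementedError("subclasses must implement transform()")


class DoubleConstants(Transformation):  # hypothetical example transformation
    def transform(self, graph_module):
        # Rewrite every "const" node, leave other nodes untouched.
        return [("const", v * 2) if op == "const" else (op, v)
                for op, v in graph_module]


graph = [("const", 3), ("add", None)]
transformed = DoubleConstants()(graph)  # the instance is used as a callable
print(transformed)  # [('const', 6), ('add', None)]
```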
Reversible transformations
class optimum.fx.optimization.ReversibleTransformation
< source >( )
A reversible torch.fx graph transformation.
It must implement the transform() and reverse() methods, and be used as a callable.
__call__
< source >( graph_module: GraphModule lint_and_recompile: bool = True reverse: bool = False ) → torch.fx.GraphModule
Marks a node as restored back to its original state.
reverse
< source >( graph_module: GraphModule ) → torch.fx.GraphModule
optimum.fx.optimization.compose
< source >( *args: Transformation inplace: bool = True )
Parameters
- args (Transformation) — The transformations to compose.
- inplace (bool, defaults to True) — Whether the resulting transformation should be applied in place, or create a new graph module.
Composes a list of transformations together.
Example
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import ChangeTrueDivToMulByInverse, MergeLinears, compose
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> composition = compose(ChangeTrueDivToMulByInverse(), MergeLinears())
>>> transformed_model = composition(traced)
Transformations
class optimum.fx.optimization.MergeLinears
< source >( )
Transformation that merges linear layers that take the same input into one big linear layer.
Example
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import MergeLinears
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = MergeLinears()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
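The equivalence behind the merge can be checked with plain arithmetic: linear layers that share the same input are equivalent to one layer whose weight rows and bias entries are the concatenation of the originals. A plain-Python sketch of that equivalence (not Optimum's implementation, which operates on torch.nn.Linear modules):

```python
# y_i = W_i @ x + b_i for layers sharing input x is equivalent to one
# linear layer with concatenated weights and biases.
def linear(W, b, x):
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

x = [1.0, 2.0]
W1, b1 = [[1.0, 0.0]], [0.5]                     # first linear:  2 -> 1
W2, b2 = [[0.0, 3.0], [1.0, 1.0]], [0.0, -1.0]   # second linear: 2 -> 2

separate = linear(W1, b1, x) + linear(W2, b2, x)
merged = linear(W1 + W2, b1 + b2, x)             # one big linear: 2 -> 3

assert merged == separate  # same outputs, single matmul
```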
class optimum.fx.optimization.FuseBiasInLinear
< source >( )
Transformation that fuses the bias into the weight in torch.nn.Linear.
Example
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import FuseBiasInLinear
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = FuseBiasInLinear()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
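The underlying trick is that y = W @ x + b can be expressed without a separate bias term by appending b as an extra weight column and a constant 1 to the input. A plain-Python sketch of the algebra (not Optimum's implementation):

```python
# Fold the bias into the weight matrix: append b as a column of W and
# augment the input with a trailing 1.
def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

x = [2.0, -1.0]
W = [[1.0, 4.0], [0.5, 0.0]]
b = [3.0, -2.0]

with_bias = [y + bi for y, bi in zip(matvec(W, x), b)]

W_fused = [row + [bi] for row, bi in zip(W, b)]  # bias as extra column
x_aug = x + [1.0]                                # input augmented with 1
fused = matvec(W_fused, x_aug)

assert fused == with_bias
```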
class optimum.fx.optimization.ChangeTrueDivToMulByInverse
< source >( )
Transformation that changes truediv nodes to multiplication-by-inverse nodes when the denominator is static. This is sometimes the case for the scaling factor in attention layers, for instance.
Example
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import ChangeTrueDivToMulByInverse
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = ChangeTrueDivToMulByInverse()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
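The rewrite is only valid because the denominator is known at trace time: its inverse can be computed once, turning every per-element division into a cheaper multiplication. A plain-Python sketch of the idea (not Optimum's implementation; the attention-style scaling factor is just an illustrative choice):

```python
import math

scale = math.sqrt(64)     # a static denominator, e.g. an attention scale
inv_scale = 1.0 / scale   # computed once, since scale never changes

scores = [8.0, 16.0, 24.0]
divided = [s / scale for s in scores]        # original truediv nodes
multiplied = [s * inv_scale for s in scores]  # rewritten mul nodes

assert all(math.isclose(a, b) for a, b in zip(divided, multiplied))
```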
class optimum.fx.optimization.FuseBatchNorm2dInConv2d
< source >( )
Transformation that fuses a `nn.BatchNorm2d` following a `nn.Conv2d` into a single `nn.Conv2d`. The fusion is only applied when the batch normalization is the convolution's sole consumer; if the convolution output is also used elsewhere, the layers are left untouched.
Example
>>> from transformers.utils.fx import symbolic_trace
>>> from transformers import AutoModelForImageClassification
>>> from optimum.fx.optimization import FuseBatchNorm2dInConv2d
>>> model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
>>> model.eval()
>>> traced_model = symbolic_trace(
... model,
... input_names=["pixel_values"],
... disable_check=True
... )
>>> transformation = FuseBatchNorm2dInConv2d()
>>> transformed_model = transformation(traced_model)
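The fusion rests on a closed-form rewrite: with s = gamma / sqrt(running_var + eps), the normalized output gamma * (conv(x) - mean) / sqrt(var + eps) + beta equals a convolution with scaled weight W' = s * W and shifted bias b' = s * (b - mean) + beta. A scalar sketch of that algebra, using a toy 1x1 convolution (not Optimum's implementation):

```python
import math

W, b = 2.0, 0.5                    # toy 1x1 convolution
gamma, beta = 1.5, -1.0            # batch-norm affine parameters
mean, var, eps = 0.25, 4.0, 1e-5   # batch-norm running statistics

def conv_then_bn(x):
    y = W * x + b
    return gamma * (y - mean) / math.sqrt(var + eps) + beta

# Fold the batch norm into the convolution's weight and bias.
s = gamma / math.sqrt(var + eps)
W_fused, b_fused = s * W, s * (b - mean) + beta

x = 3.0
assert math.isclose(conv_then_bn(x), W_fused * x + b_fused)
```

In eval mode the running statistics are frozen, which is why the example above (and the doctest before it) calls `model.eval()` before tracing.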
class optimum.fx.optimization.FuseBatchNorm1dInLinear
< source >( )
Transformation that fuses a `nn.BatchNorm1d` following or preceding a `nn.Linear` into a single `nn.Linear`. The fusion is only applied when the batch normalization is the linear layer's sole consumer, or the linear layer is the batch normalization's sole consumer; otherwise the layers are left untouched.
Example
>>> from transformers.utils.fx import symbolic_trace
>>> from transformers import AutoModel
>>> from optimum.fx.optimization import FuseBatchNorm1dInLinear
>>> model = AutoModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model.eval()
>>> traced_model = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "pixel_values"],
... disable_check=True
... )
>>> transformation = FuseBatchNorm1dInLinear()
>>> transformed_model = transformation(traced_model)
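The "batch norm preceding the linear" direction also has a closed form: with s = gamma / sqrt(running_var + eps), linear(bn(x)) = W @ (s * (x - mean) + beta) + b equals a linear layer with weight W' = W * s and bias b' = W * (beta - s * mean) + b. A scalar sketch of that algebra (not Optimum's implementation):

```python
import math

gamma, beta = 2.0, 0.5            # batch-norm affine parameters
mean, var, eps = 1.0, 9.0, 1e-5   # batch-norm running statistics
W, b = 3.0, -1.0                  # toy 1-dimensional linear layer

def bn_then_linear(x):
    s = gamma / math.sqrt(var + eps)
    return W * (s * (x - mean) + beta) + b

# Fold the batch norm into the linear layer's weight and bias.
s = gamma / math.sqrt(var + eps)
W_fused = W * s
b_fused = W * (beta - s * mean) + b

x = 4.0
assert math.isclose(bn_then_linear(x), W_fused * x + b_fused)
```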