Add LoHa and LoKr pass
xiaoyu-work committed Jan 30, 2025
1 parent d98186d commit efde6d9
Showing 10 changed files with 269 additions and 81 deletions.
@@ -14,7 +14,7 @@ This pass only supports HfModels. Please refer to [LoRA](lora) for more details
```json
{
    "type": "LoRA",
-   "lora_alpha": 16,
+   "alpha": 16,
    "train_data_config": // ...,
    "training_args": {
        "learning_rate": 0.0002,
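Pulling the visible context together, a minimal LoRA pass config after this rename might look like the sketch below. The field set and values are assembled from the hunks in this commit rather than from the pass's documented defaults, and the data config name `train_data` is a placeholder:

```json
{
    "type": "LoRA",
    // illustrative sketch; fields assembled from this commit's hunks
    "r": 64,
    "alpha": 16,
    "train_data_config": "train_data",
    "training_args": {
        "learning_rate": 0.0002
    }
}
```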
1 change: 1 addition & 0 deletions docs/source/reference/options.md
@@ -410,6 +410,7 @@ Please also find the detailed options from the following table for each pass:
| [SplitModel](../../reference/pass.rst#_split_model) | Split an ONNX model into multiple smaller sub-models based on predefined assignments. |
| [LoRA](../../reference/pass.rst#_lora) | Run LoRA fine-tuning on a Hugging Face PyTorch model. |
| [QLoRA](../../reference/pass.rst#_qlora) | Run QLoRA fine-tuning on a Hugging Face PyTorch model. |
+| [LoHa](../../reference/pass.rst#_loha) | Run LoHa fine-tuning on a Hugging Face PyTorch model. |
| [LoftQ](../../reference/pass.rst#_loftq) | Run LoftQ fine-tuning on a Hugging Face PyTorch model. |
| [QuantizationAwareTraining](../../reference/pass.rst#_onnx_quantization_aware_training) | Run quantization aware training on PyTorch model. |
| [OpenVINOConversion](../../reference/pass.rst#_openvino_conversion) | Converts PyTorch, ONNX or TensorFlow Model to OpenVino Model. |
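The commit title also mentions LoKr, but only the LoHa documentation changes appear in the hunks shown here. Assuming the new pass takes the same config shape as the LoRA and QLoRA passes elsewhere in this diff, a LoHa pass entry could look like the following sketch; every field except `type` is an assumption carried over from those examples:

```json
{
    "type": "LoHa",
    // hypothetical fields, mirrored from the LoRA/QLoRA examples in this commit
    "train_data_config": "tiny_codes_train",
    "eval_data_config": "tiny_codes_eval",
    "r": 64,
    "alpha": 16,
    "training_args": {
        "learning_rate": 0.0002,
        "max_steps": 150
    }
}
```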
6 changes: 6 additions & 0 deletions docs/source/reference/pass.rst
@@ -197,6 +197,12 @@ QLoRA
-----
.. autoconfigclass:: olive.passes.QLoRA

+.. _loha:
+
+LoHa
+-----
+.. autoconfigclass:: olive.passes.LoHa
+
.. _loftq:

LoftQ
2 changes: 1 addition & 1 deletion examples/llama2/llama2_lmeval.json
@@ -42,7 +42,7 @@
"max_steps": 150,
"logging_steps": 50.0
},
"lora_alpha": 16,
"alpha": 16,
"eval_data_config": "eval_data"
}
},
4 changes: 2 additions & 2 deletions examples/llama2/llama2_qlora.json
@@ -33,8 +33,8 @@
"max_steps": 150,
"logging_steps": 50.0
},
"lora_r": 64,
"lora_alpha": 16,
"r": 64,
"alpha": 16,
"eval_data_config": "eval_data"
},
"c": {
4 changes: 2 additions & 2 deletions examples/phi3/phi3_template.json
@@ -65,7 +65,7 @@
"type": "LoRA",
"train_data_config": "tiny_codes_train",
"eval_data_config": "tiny_codes_eval",
"lora_r": 64,
"r": 64,
"training_args": {
"per_device_train_batch_size": 1,
"per_device_eval_batch_size": 1,
@@ -80,7 +80,7 @@
"type": "QLoRA",
"train_data_config": "tiny_codes_train",
"eval_data_config": "tiny_codes_eval",
"lora_r": 64,
"r": 64,
"training_args": {
"per_device_train_batch_size": 1,
"per_device_eval_batch_size": 1,
4 changes: 2 additions & 2 deletions olive/cli/finetune.py
@@ -107,8 +107,8 @@ def _get_run_config(self, tempdir: str) -> Dict:
            ((*finetune_key, "type"), self.args.method),
            ((*finetune_key, "torch_dtype"), self.args.torch_dtype),
            ((*finetune_key, "training_args"), self.parse_training_args()),
-           ((*finetune_key, "lora_r"), self.args.lora_r),
-           ((*finetune_key, "lora_alpha"), self.args.lora_alpha),
+           ((*finetune_key, "r"), self.args.lora_r),
+           ((*finetune_key, "alpha"), self.args.lora_alpha),
            ("output_dir", self.args.output_path),
            ("log_severity_level", self.args.log_level),
        ]
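Note that the CLI argument names are unchanged (`self.args.lora_r`, `self.args.lora_alpha`); only the config keys they are written to become `r` and `alpha`. A finetune run using those arguments would therefore produce a pass entry along these lines (a sketch; the surrounding workflow config is omitted):

```json
{
    "type": "LoRA",
    "r": 64,      // written from args.lora_r
    "alpha": 16   // written from args.lora_alpha
}
```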