I have a customer that requires training/fine-tuning to be done on a Gemma model, but Gemma is not currently one of the supported models in python/llm/example/GPU/LLM-Finetuning/QLoRA/alpaca-qlora. Supported models are:
LLaMA2-7B
LLaMA2-13B
LLaMA2-70B
LLaMA3-8B
ChatGLM3-6B
Qwen-1.5-7B
Baichuan2-7B
Is there any way to fine-tune a Gemma model instead?

We'd like to support Gemma with IPEX-LLM QLoRA. Since the example code depends on the specific model and dataset, we'd like to know more about your requirements.

Which Gemma model and dataset do you need? The official repository does not yet include a QLoRA fine-tuning example for Gemma, so we recommend Gemma-2B with the gemma-2b-mt-German-to-English dataset, as in this blog. Would that work for you?
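For a rough sense of scale, the adapter that QLoRA would train on a Gemma-2B-sized model is tiny compared to the base weights, which is why adding Gemma support is mostly a matter of wiring up the example rather than heavy compute. The sketch below counts LoRA parameters on the attention projections, assuming the published Gemma-2B shapes (hidden size 2048, 18 layers, 8 query heads, 1 KV head, head dim 256) and LoRA rank 8; these shapes and the choice of target modules are assumptions for illustration, not taken from the alpaca-qlora example.

```python
# Back-of-the-envelope count of LoRA adapter parameters on the attention
# projections of a Gemma-2B-sized model. Shapes are assumptions based on
# the published Gemma-2B config; adjust them if your config differs.

def lora_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA adds two low-rank factors per layer: A (r x d_in) and B (d_out x r)
    return r * (d_in + d_out)

hidden, head_dim, n_q, n_kv, layers, r = 2048, 256, 8, 1, 18, 8

per_layer = (
    lora_params(hidden, n_q * head_dim, r)     # q_proj: 2048 -> 2048
    + lora_params(hidden, n_kv * head_dim, r)  # k_proj: 2048 -> 256 (1 KV head)
    + lora_params(hidden, n_kv * head_dim, r)  # v_proj: 2048 -> 256
    + lora_params(n_q * head_dim, hidden, r)   # o_proj: 2048 -> 2048
)
total = per_layer * layers
print(per_layer, total)  # 102400 per layer, 1843200 total (~1.8M trainable)
```

About 1.8M trainable parameters against roughly 2.5B frozen 4-bit base weights, i.e. well under 0.1% of the model, which is the usual QLoRA regime.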