
Alpaca QLoRA training doesn't support Gemma model #12865

Open
vishakh15nair opened this issue Feb 20, 2025 · 1 comment
@vishakh15nair
I have a customer who requires training/fine-tuning of a Gemma model, but Gemma is not currently one of the supported models in `python/llm/example/GPU/LLM-Finetuning/QLoRA/alpaca-qlora`. The supported models are:

LLaMA2-7B
LLaMA2-13B
LLaMA2-70B
LLaMA3-8B
ChatGLM3-6B
Qwen-1.5-7B
Baichuan2-7B

Is there any way to fine-tune a Gemma model instead?

@Uxito-Ada (Contributor) commented Feb 21, 2025

Hi @vishakh15nair ,

We'd like to add Gemma support to IPEX-LLM QLoRA.

As the example code depends on the specific model and dataset, we'd like to know more about your requirements.

Which Gemma model and dataset do you need? Since the official GitHub repo does not yet provide a QLoRA fine-tuning example for Gemma, we recommend Gemma-2B and the gemma-2b-mt-German-to-English dataset, as in this blog. Would that work for you?
