Reaching out to mentors for project 12 "Enable popular Keras Hub GenAI/LLM pipelines for the OpenVINO backend in the Keras 3 workflow and optimize" GSoC 2025 #29228
-
I’ve sent an email requesting a review of my initial proposal to ensure I’m on the right track. I also asked about the possibility of improving our communication, perhaps via Discord or any platform you prefer. Looking forward to your thoughts!
-
@adrianboguszewski, @mlukasze
-
@adrianboguszewski,
-
I hope you're doing well. I’m interested in applying for Project 12, "Enable Popular Keras Hub GenAI/LLM Pipelines for OpenVINO Backend," in GSoC 2025, and I have already drafted a proposal. I would greatly appreciate your guidance and feedback on it. I’ve tried reaching out to the mentors but haven’t connected with them yet; if possible, I’d appreciate your help following up and sharing any insights. I’d be happy to discuss this further via Discord or LinkedIn, whichever is more convenient for you. Looking forward to your response! Best regards,
-
Dear Roman Kazantsev @rkazants, Maxim Vafin @mvafin, Anastasia Popova, and Andrei Kochin,
I'm Mohamed, a third-year Computer Engineering student, and I am highly interested in contributing to the "Enable Popular Keras Hub GenAI/LLM Pipelines for OpenVINO Backend" project.
Here is a summary of my relevant experience so far:
I have experience working with Python, TensorFlow, and Keras, and I have fine-tuned several models to acceptable performance in my previous machine-learning projects.
I have contributed to OpenVINO's Keras 3 backend (#20945, #20934) and OpenVINO's PyTorch Frontend (#29142). Through this work, I have gained insight into OpenVINO’s execution flow and backend integration.
I have a solid understanding of model inference optimization and how backend implementations affect performance. I have also started learning about KV-cache and LoRA adapters and am eager to explore them further.
I have a few questions:
Are there specific GenAI/LLM models from Keras Hub that you recommend focusing on first?
Apart from KV-cache and LoRA, would other optimization techniques like quantization or model distillation be relevant for this project?
What OpenVINO benchmarks or performance targets should we aim for, especially in comparison to TensorFlow, PyTorch, and JAX?
I would love to discuss these ideas further and understand how I can contribute effectively to the project.
@adrianboguszewski, @mlukasze
Could you please help connect me with the mentors for further discussions?
Best regards,
Mohamed