Support Volcano Scheduler in Kubeflow Trainer #2437
Comments
/remove-label lifecycle/needs-triage
@andreyvelich I find that there is no label like
That's an amazing feature!
Nice feature! I would like to know what has changed in Trainer v2 compared to v1?
@JesseStutler Hi, thanks for your interest in Kubeflow Trainer. In v1, we create a separate CRD for every kind of ML framework, like `PyTorchJob`. In v2, we unify these framework-specific CRDs into a single set of APIs.
Also, we provide an SDK for data scientists, so they no longer need to know about Kubernetes while still leveraging the capabilities it provides. You can find more information in this video and the design doc if you are interested. /cc @kubeflow/wg-training-leads @astefanutti Do you have anything to add?
/area gsoc |
Oh, that would be great, just like KServe's ServingRuntime. If there is anything on the Volcano side that needs adaptation, please let us know; we are willing to contribute.
Thanks! This issue will be converted to a GSoC project this year. Our communities can bond with each other this summer!
What would you like to be added?
In Kubeflow Training Operator V1, we support Volcano for gang-scheduling, while Trainer V2 does not support it yet.
Since Volcano is a widely adopted scheduler for AI workloads, integrating it into Trainer would provide more AI-specific scheduling capabilities, benefiting users who want to schedule Pods with Volcano on top of Kubeflow Trainer.
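For context, gang-scheduling with Volcano typically works by grouping a workload's Pods into a `PodGroup` with a minimum member count, so that no worker starts until all of them can be scheduled. A rough sketch of such a resource (the name, member count, and queue below are made up for illustration, not what Trainer would actually generate):

```yaml
# Hypothetical sketch of a Volcano PodGroup for gang-scheduling.
# "trainjob-example" and the values here are illustrative only.
apiVersion: scheduling.volcano.sh/v1beta1
kind: PodGroup
metadata:
  name: trainjob-example
spec:
  minMember: 4      # scheduler waits until all 4 worker Pods can be placed
  queue: default    # Volcano queue the group is submitted to
```

The worker Pods would then opt in to Volcano by setting `schedulerName: volcano` in their Pod spec.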
/cc @kubeflow/wg-training-leads @saileshd1402 @astefanutti @juliusvonkohout @franciscojavierarceo @varodrig @rareddy @thesuperzapper @seanlaii @deepanker13 @helenxie-bit @Doris-xm @truc0 @mahdikhashan
Why is this needed?
In #2182, users requested richer Volcano support in Kubeflow Training Operator V1.
AFAIK, kubeedge/sedna is waiting for Volcano support to enable gang-scheduling in edge-cloud environments: kubeedge/sedna#463. One of the reasons why it was paused:

> The `PyTorchJob` CRD in training-operator assumes that all training workers (Pods) share the same training parameters, while the `FederatedLearningJob` CRD in Sedna allows training workers to have different training parameters. So we assume that all training workers have the same training parameters, which will surely put many restrictions on the applicable scenarios of Sedna Federated Learning V2, but we have no choice.

In Kubeflow Trainer V2, we introduce `jobset` as the low-level runtime for distributed training, which allows users to define different training parameters for different training workers. That makes Kubeflow Trainer V2 a better choice for them than V1. Based on the reasons above, supporting Volcano can bring great value to users.
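To illustrate the point about per-worker parameters, a rough JobSet sketch with two replicated Jobs whose containers receive different training arguments might look like the following (all names, images, and values here are hypothetical, not taken from Trainer or Sedna):

```yaml
# Hypothetical JobSet sketch: two worker groups with different training parameters.
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: federated-example
spec:
  replicatedJobs:
    - name: workers-a
      replicas: 1
      template:
        spec:
          template:
            spec:
              containers:
                - name: trainer
                  image: example.com/trainer:latest
                  args: ["--lr=0.01", "--epochs=5"]
              restartPolicy: Never
    - name: workers-b
      replicas: 1
      template:
        spec:
          template:
            spec:
              containers:
                - name: trainer
                  image: example.com/trainer:latest
                  # different parameters for this worker group
                  args: ["--lr=0.001", "--epochs=10"]
              restartPolicy: Never
```

Each entry in `replicatedJobs` carries its own Job template, which is what allows heterogeneous worker configurations that a single shared Pod template cannot express.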
Love this feature?
Give it a 👍 We prioritize the features with most 👍