
Distributed training with multiple pods, with multi-GPU in each pod #2456

Open
githubthunder opened this issue Feb 28, 2025 · 0 comments

Comments

@githubthunder
Copy link

I want to run distributed training in a Kubernetes environment using the command "kubectl apply -f train.yaml".

Which version of Kubeflow supports the torchrun command for distributed training across multiple pods, with multiple GPUs in each pod?

Please provide a working example, including sample code and YAML files, with a focus on how to write the YAML file.

Thank you very much!
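For reference, this is roughly the kind of train.yaml I have in mind, based on the kubeflow.org/v1 PyTorchJob API of the Training Operator: one master pod plus two worker pods, four GPUs per pod, with every pod launching torchrun and using the MASTER_ADDR, MASTER_PORT, and RANK environment variables that the operator injects. I have not verified this against a specific Kubeflow release, and the image name, GPU counts, and script name are placeholders:

```yaml
# Hypothetical train.yaml (untested): PyTorchJob for the Kubeflow Training Operator
# with 1 master pod + 2 worker pods and 4 GPUs per pod. The image, GPU counts,
# and train.py are placeholders.
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: multi-node-multi-gpu-train
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch                                  # container name expected by the operator
              image: my-registry/my-training-image:latest    # placeholder image
              # MASTER_ADDR, MASTER_PORT, and RANK are injected by the operator;
              # torchrun starts one worker process per GPU inside the pod.
              command: ["/bin/bash", "-c"]
              args:
                - >-
                  torchrun
                  --nnodes=3
                  --nproc_per_node=4
                  --node_rank=${RANK}
                  --master_addr=${MASTER_ADDR}
                  --master_port=${MASTER_PORT}
                  train.py
              resources:
                limits:
                  nvidia.com/gpu: 4
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: my-registry/my-training-image:latest    # placeholder image
              command: ["/bin/bash", "-c"]
              args:
                - >-
                  torchrun
                  --nnodes=3
                  --nproc_per_node=4
                  --node_rank=${RANK}
                  --master_addr=${MASTER_ADDR}
                  --master_port=${MASTER_PORT}
                  train.py
              resources:
                limits:
                  nvidia.com/gpu: 4
```

If a recent Training Operator release can configure the number of processes per node directly in the PyTorchJob spec instead of through the torchrun command line, a pointer to the minimum version that supports this would also be very helpful.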
