
OOM error #65

Open
cyxu2017 opened this issue Feb 11, 2025 · 1 comment

@cyxu2017

What is the GPU memory requirement for running prediction? Is it system dependent? If so, is there a simple way to estimate the memory required?

I was running inference for a complex with N_asym 6, N_token 2372, N_atom 18500, N_msa 4940 on a GPU with 24 GB of memory, and the job was killed by an OOM error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 5.37 GiB. GPU

@zhangyuxuann
Collaborator

@cyxu2017 It needs about 40 GB+ with your settings. You can refer to https://github.com/bytedance/Protenix/blob/main/docs/model_train_inference_cost.md
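
As a quick sanity check before submitting a job, you can compare the GPU's reported memory against the estimate from the cost doc. Below is a minimal sketch (not part of Protenix), assuming PyTorch is installed; the 40 GiB threshold is just the rough figure quoted above for this particular complex size, not an exact requirement:

```python
import torch

# Rough requirement for N_token ~2372 (from the comment above); adjust for
# other inputs using docs/model_train_inference_cost.md.
REQUIRED_GIB = 40


def has_enough_gpu_memory(required_gib: float = REQUIRED_GIB, device: int = 0) -> bool:
    """Return True if the given CUDA device reports at least `required_gib` of total memory."""
    if not torch.cuda.is_available():
        return False
    total_bytes = torch.cuda.get_device_properties(device).total_memory
    total_gib = total_bytes / (1024 ** 3)
    print(f"GPU {device}: {total_gib:.1f} GiB total, ~{required_gib} GiB estimated requirement")
    return total_gib >= required_gib


if __name__ == "__main__":
    if not has_enough_gpu_memory():
        raise SystemExit(
            "GPU likely too small for this input size; "
            "use a larger GPU or reduce the number of tokens."
        )
```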
