
How much RAM does the processing of a YOLOv5 model use in inference? #12684

Closed
1 task done
Pedro-Leitek opened this issue Jan 29, 2024 · 2 comments
Labels
question (Further information is requested) · Stale (Stale and scheduled for closing soon)

Comments

@Pedro-Leitek

Search before asking

Question

Hi,

I have an NVIDIA Jetson NX with 8 GB of RAM, and I use OpenCV to run inference with ONNX models.
Given that a YOLOv5m6 model is about 142 MB in size (ONNX) and I resize a 4K image to 1280x1280, how much RAM would the system use for the inference? Also, do you know how to free the RAM after each inference? I'm using Python.

Thanks

Additional

No response

@Pedro-Leitek Pedro-Leitek added the question (Further information is requested) label Jan 29, 2024
@glenn-jocher
Member

@Pedro-Leitek hello!

The RAM usage during YOLOv5 inference can vary based on several factors, including the model size, input image resolution, and the batch size. For a YOLOv5m6 model and a 1280x1280 input image on a Jetson NX, you might expect the system to use a few hundred megabytes of RAM for the inference process. However, this is a rough estimate and actual usage can be higher depending on the specifics of your implementation and any additional data loaded into memory.
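If you want a concrete number for your own setup, you can measure the process's resident memory around the inference call. Below is a minimal sketch using psutil and OpenCV's DNN module; the model path and the random test image are placeholders for your own YOLOv5m6 export and input:

import cv2
import numpy as np
import psutil

proc = psutil.Process()

def rss_mb():
    # Resident set size (RSS) of this process, in megabytes
    return proc.memory_info().rss / 1e6

# Placeholder path -- substitute your exported YOLOv5m6 ONNX file
net = cv2.dnn.readNetFromONNX("yolov5m6.onnx")

# Dummy 1280x1280 BGR image standing in for your resized 4K frame
image = np.random.randint(0, 255, (1280, 1280, 3), dtype=np.uint8)
blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0, size=(1280, 1280), swapRB=True)

before = rss_mb()
net.setInput(blob)
outputs = net.forward()
after = rss_mb()

print(f"RSS before: {before:.0f} MB, after: {after:.0f} MB, delta: {after - before:.0f} MB")

The first forward pass is usually the most expensive, since the backend allocates its internal buffers then; subsequent inferences typically reuse them.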

To minimize memory usage, ensure you're using the latest version of YOLOv5 and its dependencies, as we continuously optimize for performance. Memory in Python is managed by the garbage collector; to explicitly free it up after each inference, you can delete objects and invoke the collector manually:

import gc

# After inference, drop references to the large objects...
del model
del images
# ...then ask the garbage collector to reclaim them
gc.collect()

Keep in mind that Python's memory management may not immediately release memory back to the operating system, but it will make it available for future inferences.
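If you need that memory returned to the operating system between runs, a common workaround (standard Python, not specific to YOLOv5) is to run each inference in a short-lived worker process, since all of a process's memory is reclaimed by the OS when it exits. Here's a rough sketch with multiprocessing; run_inference is a hypothetical stand-in for your own OpenCV/ONNX pipeline, and the file paths are placeholders:

import multiprocessing as mp

def run_inference(image_path, queue):
    # Everything allocated in this worker (model, blobs, outputs)
    # is released back to the OS when the process exits.
    import cv2
    net = cv2.dnn.readNetFromONNX("yolov5m6.onnx")  # placeholder path
    image = cv2.imread(image_path)
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (1280, 1280), swapRB=True)
    net.setInput(blob)
    queue.put(net.forward())

if __name__ == "__main__":
    queue = mp.Queue()
    worker = mp.Process(target=run_inference, args=("frame.jpg", queue))
    worker.start()
    detections = queue.get()  # read results before join() to avoid blocking
    worker.join()

The trade-off is that the model is reloaded on every run, so this only pays off when releasing RAM matters more than per-inference latency.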

Happy coding! 😊

@github-actions
Contributor

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale (Stale and scheduled for closing soon) label Feb 29, 2024
@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Mar 10, 2024