I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
Hi,
I have an NVIDIA Jetson NX with 8 GB of RAM. I use OpenCV to run inference with ONNX models.
Given that a yolov5m6 model is about 142 MB in ONNX format and I resize a 4K image to 1280x1280, how much RAM would the system use for inference? Also, do you know how to free the RAM after each inference? I'm using Python.
Thanks
Additional
No response
RAM usage during YOLOv5 inference depends on several factors, including the model size, input resolution, and batch size. For a YOLOv5m6 model with a 1280x1280 input on a Jetson NX, the process will hold at least the model weights (~142 MB for the ONNX file) plus the input blob and intermediate activations, so expect a few hundred megabytes in total for the inference process. This is a rough estimate; actual usage can be higher depending on the specifics of your implementation and any additional data loaded into memory.
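A back-of-envelope sketch of that estimate, assuming float32 tensors and a hypothetical `activation_factor` multiplier for intermediate feature maps (the real factor depends on the network and runtime):

```python
def estimate_inference_bytes(width=1280, height=1280, channels=3,
                             dtype_bytes=4, model_bytes=142 * 1024**2,
                             activation_factor=3.0):
    """Rough peak-RAM estimate for one ONNX inference.

    activation_factor is an illustrative assumption covering intermediate
    feature maps; measure on your own hardware for real numbers.
    """
    input_tensor = width * height * channels * dtype_bytes  # ~19.7 MB blob
    activations = input_tensor * activation_factor
    return model_bytes + input_tensor + activations

mb = estimate_inference_bytes() / 1024**2
print(f"~{mb:.0f} MiB")  # ~217 MiB: model weights dominate the total
```

This only accounts for tensors; the ONNX runtime and OpenCV themselves add their own overhead on top.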
To minimize memory usage, ensure you're using the latest version of YOLOv5 and its dependencies, as we continuously optimize for performance. After each inference, memory in Python is managed by the garbage collector. To explicitly free up memory, you can delete objects and call the garbage collector manually:
```python
import gc

# After inference
del model
del images
gc.collect()
```
Keep in mind that Python's memory management may not immediately release memory back to the operating system, but it will make it available for future inferences.
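A minimal, self-contained sketch of that pattern: drop references to the large per-frame buffers between inferences and run the collector. The `run_once` function and its `bytearray` stand-in for a decoded image are illustrative, not part of the YOLOv5 API:

```python
import gc

def run_once():
    # Stand-in for a decoded 1280x1280 BGR frame (one byte per channel).
    frame = bytearray(1280 * 1280 * 3)
    # ... inference would happen here ...
    return len(frame)  # frame is freed when it goes out of scope

for _ in range(3):
    n = run_once()
    gc.collect()  # break any lingering reference cycles promptly
```

Because CPython frees most objects by reference counting, dropping the reference is what matters; `gc.collect()` only helps when reference cycles keep buffers alive.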
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐