
Add Bytes and KB Units for small PyTorch models #315

Open

halsimov opened this issue Jul 16, 2024 · 1 comment

@halsimov
version: torchinfo 1.8.0 pyhd8ed1ab_0

Describe the solution you'd like

Add bytes and KB units alongside MB and GB, and choose the unit automatically according to whichever gives the most readable representation.

In the following toy example, MB as the unit is not helpful:

```
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
LRBasedClassifier                        [10, 5]                   --
├─Linear: 1-1                            [10, 5]                   125
==========================================================================================
Total params: 125
Trainable params: 125
Non-trainable params: 0
Total mult-adds (Units.MEGABYTES): 0.00
==========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.00
Estimated Total Size (MB): 0.00
==========================================================================================
```
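
For context, a minimal sketch that reproduces the summary above. The layer width is inferred from the parameter count (a Linear layer with 24 inputs and 5 outputs has 24 * 5 + 5 = 125 parameters), so the exact sizes are assumptions:

```python
import torch
from torch import nn
from torchinfo import summary

class LRBasedClassifier(nn.Module):
    """Hypothetical reconstruction: nn.Linear(24, 5) gives 24 * 5 + 5 = 125 params."""

    def __init__(self) -> None:
        super().__init__()
        self.linear = nn.Linear(24, 5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

# A batch of 10 samples yields the [10, 5] output shape shown above.
summary(LRBasedClassifier(), input_size=(10, 24))
```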
@TylerYep (Owner)
Good idea, PRs implementing this behavior are welcome! Based on the output value, it should choose the most appropriate unit automatically.
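
For illustration, a minimal sketch of how that selection could work; the helper name `format_size` and the 1024-based unit boundaries are hypothetical, not part of torchinfo's current API:

```python
def format_size(num_bytes: float) -> str:
    """Pick the largest unit that keeps the value at or above 1.

    Hypothetical helper; the name and unit list are illustrative,
    not torchinfo's current API.
    """
    for unit in ("B", "KB", "MB", "GB"):
        if num_bytes < 1024:
            return f"{num_bytes:.2f} ({unit})"
        num_bytes /= 1024
    return f"{num_bytes:.2f} (TB)"

print(format_size(125 * 4))    # 500.00 (B) -- 125 float32 params
print(format_size(3_500_000))  # 3.34 (MB)
```

With a rule like this, the toy example above would report its 500-byte parameter size as `500.00 (B)` instead of `0.00 (MB)`.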
