Training logs the final checkpoint even if checkpoint_every=0
#666
Labels
bug, checkpoint_every=0
Short description
There is currently no way to avoid checkpointing altogether when training. The docs mention that checkpoint_every=0 disables checkpointing altogether (see here), but this is not true, because the final checkpoint is logged anyway.
What is the expected result?
Option to avoid all checkpointing. Useful for
What is the actual result?
The final model/optimizer states are checkpointed even though TrainConfig.checkpoint_every=0.
Steps/Code to reproduce
MWE:
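A minimal sketch of a reproduction, assuming the TrainConfig / Trainer / to_dataloader API from qadence.ml_tools as of 1.10; the toy model, data, root_folder path and loss_fn argument are illustrative and may need adjusting:

```python
# Hypothetical minimal reproduction: train a toy torch model with
# checkpoint_every=0 and then check whether checkpoint files were written.
from pathlib import Path

import torch
from qadence.ml_tools import TrainConfig, Trainer, to_dataloader

# toy data and model
x = torch.rand(100, 1)
y = torch.sin(x)
dataloader = to_dataloader(x, y, batch_size=25, infinite=True)

model = torch.nn.Linear(1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

config = TrainConfig(
    root_folder="./tmp_run",  # illustrative log folder
    max_iter=10,
    checkpoint_every=0,       # per the docs, this should disable checkpointing
)

trainer = Trainer(model=model, optimizer=optimizer, config=config, loss_fn="mse")
trainer.fit(dataloader)

# Expected: no checkpoint files under ./tmp_run.
# Actual: the final model/optimizer checkpoint is still written.
print(list(Path("./tmp_run").rglob("*.pt")))
```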
Tracebacks (optional)
Environment details (optional)
qadence=="1.10.1"
Would you like to work on this issue?
Yes