
Batch Size #246

Asked by DaveWhitmer in Q&A on Dec 26, 2021 · 1 comment · 3 replies
Answered by texasdiaz

The manual is specifically describing ftc-ml, which uses GPU training; for GPU, the batch size for most models is 32. If you're using fmltc, you have the option of using a TPU or a GPU. The TPU batch size for many models is 512 (a TPU has more memory and can train more images per step).
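As a rough illustration of why this matters, batch size times step count gives the total number of images processed, and dividing by the dataset size gives the approximate number of epochs. The sketch below is hypothetical (the dataset size and step count are made up); only the 32 and 512 batch sizes come from this answer.

```python
# Rough sketch: how batch size and step count relate to passes over the
# dataset. Only the 32 (GPU) and 512 (TPU) batch sizes come from the
# answer above; the other numbers are illustrative.

def epochs_trained(num_images: int, batch_size: int, steps: int) -> float:
    """Approximate full passes over the dataset for a training run."""
    return steps * batch_size / num_images

num_images = 1000  # hypothetical dataset size

# Same number of steps, but the TPU sees 16x more images per step.
print(epochs_trained(num_images, batch_size=32, steps=2000))   # GPU: 64.0
print(epochs_trained(num_images, batch_size=512, steps=2000))  # TPU: 1024.0
```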

"More Training" does indeed simply "pick up where it left off" (at least where it left off from that saved checkpoint). If you want to train a model longer, this is the way to do it.

Replies: 1 comment · 3 replies (from @DaveWhitmer and @lizlooney)
Answer selected by DaveWhitmer