In train.py, batch_size only dictates the number of images loaded from the train split, while in rendering it is chunk that is used for batched inference; batch_size does not appear anywhere in the render loop:
for i in range(0, B, self.hparams.chunk):
    rendered_ray_chunks = \
        render_rays(self.models,
                    self.embeddings,
                    rays[i:i+self.hparams.chunk],
                    ts[i:i+self.hparams.chunk],
                    self.hparams.N_samples,
                    self.hparams.use_disp,
                    self.hparams.perturb,
                    self.hparams.noise_std,
                    self.hparams.N_importance,
                    self.hparams.chunk, # chunk size is effective in val mode
                    self.train_dataset.white_back)
I am a bit confused here because it seems that chunk is the actual batch size per training step. Please clarify.
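To make what I mean concrete, here is a minimal sketch of how I currently read the two parameters; render_rays_stub and the concrete numbers below are made up for illustration and are not taken from the repo:

# Hypothetical sketch, not the repo's actual code.
import torch

batch_size = 1024   # samples the DataLoader yields per training step
chunk = 32 * 1024   # max rays pushed through the model in one forward pass

def render_rays_stub(rays):
    # stand-in for render_rays: pretend each ray renders to an RGB value
    return torch.rand(rays.shape[0], 3)

def forward(rays):
    # split a ray batch into chunk-sized pieces and concatenate the results
    out = []
    for i in range(0, rays.shape[0], chunk):
        out.append(render_rays_stub(rays[i:i + chunk]))
    return torch.cat(out, dim=0)

# Training step: all batch_size rays contribute to a single loss / optimizer
# update. With chunk >= batch_size the loop above runs only once, so chunk has
# little effect in training; it mainly matters at validation time, when an
# entire image's rays (far more than batch_size) must be split to fit in memory.
rays = torch.rand(batch_size, 8)   # one batch from the DataLoader
rgb_pred = forward(rays)           # shape (batch_size, 3)

Under this reading, chunk would only bound how many rays go through the model in one forward pass, while batch_size would still set how much data each optimizer step sees. Is that the intended distinction?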