calculate score calculation within callback #21076
The loss and metrics displayed in the progress bar are for each batch (mini-batch), whereas the output on the next line is for the whole epoch.
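A throwaway callback makes the two log streams easy to see. This is a minimal sketch, not from the thread; the data, model, and the `LogPeek` name are made up for illustration:

```python
import numpy as np
import keras

# Made-up data and model, just to expose the two log streams.
x = np.random.rand(1000, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse",
              metrics=["mean_absolute_error"])

class LogPeek(keras.callbacks.Callback):
    def on_train_batch_end(self, batch, logs=None):
        # Values the progress bar renders after each batch.
        print(f"batch {batch}: {logs}")

    def on_epoch_end(self, epoch, logs=None):
        # Values reported for the epoch as a whole.
        print(f"epoch {epoch}: {logs}")

model.fit(x, y, batch_size=256, epochs=1, verbose=0, callbacks=[LogPeek()])
```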
ok. Now, if we do this, then I should be able to get them matched:

```python
x_train = x_train[:256]
y_train = y_train[:256]
x_test = x_test[:256]
y_test = y_test[:256]

model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=256,
    epochs=5,
    verbose=1,
    callbacks=[CustomCallback(x_train, y_train)],
)
```

Output:

```
Epoch 1/5
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 405ms/step - loss: 25.9250 - mean_absolute_error: 4.1543
{'loss': 25.925048828125, 'mean_absolute_error': 16.05439567565918}
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 490ms/step - loss: 25.9250 - mean_absolute_error: 4.1543
```

The loss values match, but why don't the metrics?
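For reference, `get_model` and `CustomCallback` are not defined in this excerpt. Below is a minimal sketch consistent with the printed output, assuming the callback re-scores the training data via `model.compute_metrics` at epoch end; the bodies are guesses for illustration, not the author's actual code:

```python
import keras

def get_model():
    # Hypothetical stand-in; the real model definition is not shown.
    model = keras.Sequential([keras.layers.Dense(1)])
    model.compile(
        optimizer="adam",
        loss="mean_squared_error",
        metrics=[keras.metrics.MeanAbsoluteError()],
    )
    return model

class CustomCallback(keras.callbacks.Callback):
    """Hypothetical: prints a score over the full training set each epoch."""

    def __init__(self, x, y):
        super().__init__()
        self.x = x
        self.y = y

    def on_epoch_end(self, epoch, logs=None):
        # compute_metrics runs update_state on the compiled metrics
        # without resetting them first -- the root of the mismatch
        # explained in the answer below.
        y_pred = self.model.predict(self.x, verbose=0)
        score = self.model.compute_metrics(self.x, self.y, y_pred)
        print({k: float(v) for k, v in score.items()})
```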
Why the Callback Print Appears "Mid-Epoch"

The built-in progress bar is itself a callback. Your callback's print in on_epoch_end runs before the progress bar writes its final line for the epoch, so the dict appears sandwiched between two renderings of the same step line.
Why the Callback's score Metrics Don't Match the fit Log

Keras metrics (like MeanAbsoluteError) are stateful objects (see keras/keras/src/backend/tensorflow/trainer.py, line 366 at 7ed8edb). For each batch, fit calls update_state() on each metric and the progress bar shows result(), i.e. the value accumulated over every batch seen so far in the epoch. The values logged by fit (e.g., loss: 256.4755, mean_absolute_error: 10.3880 on the final 4/4 line for Epoch 1) are the result() after the last batch's update.

At the end of the epoch, your callback computes its score. This compute_metrics call performs another update_state on the same metric objects. This update uses all the 1000 samples, and it happens without resetting the state first. The score you print (e.g., {'loss': 242.52..., 'mean_absolute_error': 6.01...} for Epoch 1) is the result() calculated after this additional bulk update_state.

Since the internal state of the metric objects is different when result() is called in these two scenarios, the numbers don't match. One guess for the possibly large discrepancies you may be seeing in the initial epochs: the first mini-batch may have a large enough MAE that, when averaged across all batches, it skews the metrics being printed.
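To make the statefulness concrete, here is a minimal, self-contained sketch (not from the thread) of how an extra update_state without a reset_state shifts result():

```python
import numpy as np
import keras

mae = keras.metrics.MeanAbsoluteError()

# During fit: one update_state per batch; result() is the running mean.
mae.update_state(np.array([0.0, 0.0]), np.array([4.0, 4.0]))  # early batch, large error
mae.update_state(np.array([0.0, 0.0]), np.array([1.0, 1.0]))  # later batch, smaller error
print(float(mae.result()))  # 2.5 -- the value the progress bar would show

# An extra scoring pass that updates the same metric object without
# resetting it: the old per-batch state is still mixed in.
mae.update_state(np.zeros(4), np.ones(4))  # full pass, per-sample error 1.0
print(float(mae.result()))  # 1.75 -- a blend of both passes, matching neither

# Resetting first yields the clean full-pass value instead.
mae.reset_state()
mae.update_state(np.zeros(4), np.ones(4))
print(float(mae.result()))  # 1.0
```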