Hello, I am using the acc_norm metric, which scores multiple-choice options by their length-normalized log-likelihood. However, it appears that the unnormalized scores are what gets written to the .jsonl file generated by log_samples=True. Would it be possible to log the normalized scores instead, since the prediction is based on those rather than on the unnormalized scores?
Line 1241 in models/huggingface.py: `answer = (float(logits.sum()), bool(max_equal))`
For now I have modified it to `(float(logits.sum()) / logits.shape[-1], bool(max_equal))` and set the metric to acc instead of acc_norm. Since all we need is the sum of the continuation's log-probs divided by the continuation length, we think this workaround is acceptable rather than adding a new feature. Any help would be greatly appreciated!
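For reference, the decision rule being discussed can be sketched as below. `score_options` is a hypothetical helper, not harness code; it normalizes by token count, mirroring the `logits.shape[-1]` modification above (note that for some tasks the harness normalizes by the byte length of the continuation instead):

```python
def score_options(option_logprobs, option_lengths):
    """Score each candidate continuation of a multiple-choice example.

    option_logprobs: per-token log-probabilities for each continuation
    option_lengths:  length of each continuation (token count here;
                     some tasks use byte length instead)

    Returns (raw_scores, normalized_scores, predicted_index), where the
    prediction follows the normalized scores, as acc_norm does.
    """
    raw = [sum(lp) for lp in option_logprobs]            # what log_samples records
    norm = [r / n for r, n in zip(raw, option_lengths)]  # what acc_norm decides on
    pred = max(range(len(norm)), key=norm.__getitem__)
    return raw, norm, pred
```

This also shows why logging only the raw sums can be misleading: for `score_options([[-0.5, -0.5, -0.5], [-1.0]], [3, 1])` the raw scores are `[-1.5, -1.0]` (favoring the short option), while the normalized scores are `[-0.5, -1.0]`, so the prediction is option 0.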