I tested the accuracy of gsm8k-cot on Qwen2-7B-Instruct, whose model card reports an accuracy of 0.82. However, when I evaluated with lm-eval-harness, there was a significant accuracy gap for both gsm8k and gsm8k-cot.
I analyzed the output logs and figured out the reason: the "exact match" parser cannot detect many correct answers. It can only extract answers written in the exact format "The answer is x."
Some error patterns that end up as `"filtered_resps": ["[invalid]"]` under `"filter": "strict-match"` with `"metrics": ["exact_match"]`:
...The answer is \\(366\\).
...Therefore, the answer is 23 jewels.
...Therefore, Brandon's iPhone is 8 years old.
... The answer is: $40.
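For reference, here is a minimal sketch of why these outputs fail strict-match. The regex below is modeled on the harness's strict-match filter for gsm8k-cot; treat the exact pattern as an assumption rather than the upstream config:

```python
import re

# Assumed pattern, modeled on gsm8k-cot's strict-match filter
# (the exact upstream regex may differ slightly).
STRICT_PATTERN = re.compile(r"The answer is (\-?[0-9\.\,]+)")

outputs = [
    "The answer is \\(366\\).",                     # LaTeX delimiters hide the number
    "Therefore, the answer is 23 jewels.",          # lowercase "the answer is"
    "Therefore, Brandon's iPhone is 8 years old.",  # no answer phrase at all
    "The answer is: $40.",                          # colon and "$" break the match
]

for out in outputs:
    m = STRICT_PATTERN.search(out)
    print(m.group(1) if m else "[invalid]")
# Prints "[invalid]" four times, even though every output
# contains the correct answer.
```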
Therefore, I modified the prompt here by simply adding: (Please summarize the result at the end in the format "The answer is xxx", where xxx is the result.)
It works pretty well just by telling the model the output format! The strict-match accuracy rose from 0.57 to 0.80, much closer to the model card :)
Will you add this formatting for more tasks if possible? I believe it can bridge the gap with the model cards on HF :)
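For anyone who wants to reproduce the workaround locally, here is a minimal sketch. It assumes a `doc_to_text`-style Q/A template like the one in the gsm8k-cot task yaml; the helper and template wording below are illustrative, not the upstream config:

```python
# Formatting hint appended to every question so the model's final
# line matches the strict-match pattern "The answer is <number>".
FORMAT_HINT = (
    ' (Please summarize the result at the end in the format '
    '"The answer is xxx", where xxx is the numeric result.)'
)

def doc_to_text(doc: dict) -> str:
    # Hypothetical template mirroring the task's Q/A prompt style;
    # the real template lives in the gsm8k-cot task config.
    return f"Q: {doc['question']}{FORMAT_HINT}\nA:"

print(doc_to_text({"question": "Natalia sold 48 clips in April and "
                               "half as many in May. How many clips "
                               "did she sell altogether?"}))
```

With this hint in place, the strict-match regex fires on nearly every completion, which is what moved the reported accuracy from 0.57 to 0.80.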
@Monstertail I agree. I think you've hit an important bug here. This would also explain why some quantized models score higher than even the native models on some tests: the calibration data used in quantization (GPTQModel, for example) may align the output toward a more structured format such as "The answer is".