Hi developers,
I've encountered an issue while trying to run o3-mini on a benchmark using lm-evaluation-harness. The error states: "Unsupported parameter: 'temperature' is not supported with this model."
The problem occurs because the OpenAIChatCompletion API automatically includes the 'temperature' parameter, which isn't compatible with o3-mini. This can be seen in the code here:
lm-evaluation-harness/lm_eval/models/openai_completions.py
Lines 275 to 291 in a40fe42
To reproduce the issue:
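The original repro steps were not captured here; as a hedged sketch, the error can be triggered through the harness's Python entry point (`lm_eval.simple_evaluate` is the harness's documented API; the task name `gsm8k` is just an illustrative choice, and a valid `OPENAI_API_KEY` is assumed):

```python
# Illustrative repro sketch, not the reporter's exact steps.
# Requires: pip install lm-eval, and OPENAI_API_KEY set in the environment.
import lm_eval

# The chat-completions wrapper injects temperature=0 into the request,
# which the o3-mini endpoint rejects with the error quoted above.
results = lm_eval.simple_evaluate(
    model="openai-chat-completions",
    model_args="model=o3-mini",
    tasks=["gsm8k"],
    limit=1,  # a single sample is enough to hit the error
)
```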
Maybe we need something like:
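The suggested snippet itself was not captured in this extract. One possible shape for the fix (a hypothetical helper, not existing harness code; the model-name prefixes and the set of rejected parameters are assumptions) is to drop sampling parameters before building the request payload:

```python
# Hypothetical sketch of the proposed fix; names and the parameter list
# are assumptions, not part of lm-evaluation-harness's actual API.

# Sampling parameters that OpenAI's reasoning models reject.
UNSUPPORTED_REASONING_PARAMS = {"temperature", "top_p", "logprobs"}

# Model-name prefixes assumed to identify reasoning models.
REASONING_MODEL_PREFIXES = ("o1", "o3")


def filter_gen_kwargs(model: str, gen_kwargs: dict) -> dict:
    """Return a copy of gen_kwargs without parameters the model rejects."""
    if model.startswith(REASONING_MODEL_PREFIXES):
        return {
            k: v
            for k, v in gen_kwargs.items()
            if k not in UNSUPPORTED_REASONING_PARAMS
        }
    return dict(gen_kwargs)
```

The payload-construction code linked above could call such a helper before issuing the request, so `temperature` never reaches models that reject it while remaining untouched for models that support it.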
Other discussion: ai-christianson/RA.Aid#70