Question about humaneval #2648
I tried to evaluate humaneval on meta-llama-3.1-instruct, but got a score close to 0. I printed the output, and I think this may be due to generation_kwargs.until in the task configuration. So what is the correct way to evaluate?
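For example, assuming the task's default until stops include strings like "\ndef" (I am guessing the defaults here; the real list is in the humaneval task YAML), a chat-formatted reply would be cut off before any code appears, which would explain a near-zero score:

```python
# Rough illustration of the suspected problem. The stop strings below are an
# assumption on my part (the real defaults live in the humaneval task YAML),
# and the harness hands them to the backend as stop sequences during
# generation; this snippet only mimics that effect after the fact.
STOPS = ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]  # assumed defaults

def apply_until(text, stops):
    """Truncate text at the earliest occurrence of any stop string."""
    cut = len(text)
    for s in stops:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# With --apply_chat_template an instruct model restates the whole function
# after some prose, so the "\ndef" stop fires before any code is kept:
chat_reply = (
    "Sure! Here is the completed function:\n"
    "\n"
    "def has_close_elements(numbers, threshold):\n"
    "    return any(abs(a - b) < threshold for a in numbers for b in numbers if a is not b)\n"
)
print(repr(apply_until(chat_reply, STOPS)))
# -> 'Sure! Here is the completed function:\n' (everything after the prose is cut)
```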
Comments

Hi! What model are you using?
I use lm_eval --model vllm --model_args="pretrained=${model_name},dtype=auto,tensor_parallel_size=${GPUS_PER_NODE},max_model_len=16384,gpu_memory_utilization=0.9,enable_chunked_prefill=True" --tasks=humaneval --batch_size=auto --output_path results --apply_chat_template --fewshot_as_multiturn --gen_kwargs="stop_token_ids=[128009]"
Looks like a prompting/answer extraction issue. Added the prompt from llama evals (as
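If I understand the reply correctly, the failure is roughly this: with --apply_chat_template the model answers in prose plus a markdown code fence, while the stock HumanEval scoring expects a bare code completion, so the fenced code has to be extracted before scoring. A minimal sketch of that kind of extraction (my own illustration under that assumption, not the harness's actual filter):

```python
import re

# Hypothetical post-processing sketch (not the harness's built-in filter):
# pull the code out of a chat-style reply that wraps its answer in a
# markdown fence, instead of scoring the raw reply as if it were code.
FENCE = "`" * 3

def extract_code(reply):
    """Return the fenced code block if present, otherwise the raw reply."""
    match = re.search(r"`{3}(?:python)?\n(.*?)`{3}", reply, re.DOTALL)
    if match:
        return match.group(1)
    return reply  # fall back to treating the whole reply as code

reply = f"Here is the solution:\n{FENCE}python\ndef add(a, b):\n    return a + b\n{FENCE}"
print(extract_code(reply))
# -> def add(a, b):
#        return a + b
```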