This repository has been archived by the owner on Dec 10, 2024. It is now read-only.
Benchmarking results #27
Longer benchmarking run
Benchmarked deepseek-llm:67b-chat, mistral:latest, mixtral:latest, and llama2:13b on query classification prompts with 5 runs each. All models returned the same, consistent results at temperature 0.0; however, Mistral performed best in terms of processing time.
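The benchmarking loop described above could be sketched as follows. This is a minimal, hypothetical harness: the model names come from the issue, but `classify_query` is a stand-in for whatever client call (e.g. an Ollama API request at temperature 0.0) actually ran the prompts, and the stub here just returns a fixed label so the timing/consistency logic can be shown.

```python
import statistics
import time

# Models compared in the issue; RUNS matches the 5 runs mentioned.
MODELS = ["deepseek-llm:67b-chat", "mistral:latest", "mixtral:latest", "llama2:13b"]
RUNS = 5


def classify_query(model: str, prompt: str) -> str:
    """Placeholder for the real model call (assumed temperature 0.0).

    A real implementation would send `prompt` to `model` via the
    serving backend; here we return a deterministic stub label.
    """
    return "category:order_status"


def benchmark(prompt: str) -> dict[str, float]:
    """Time RUNS calls per model and verify the answers are consistent."""
    mean_times: dict[str, float] = {}
    for model in MODELS:
        results: list[str] = []
        durations: list[float] = []
        for _ in range(RUNS):
            start = time.perf_counter()
            results.append(classify_query(model, prompt))
            durations.append(time.perf_counter() - start)
        # With temperature 0.0 every run should give the same answer.
        assert len(set(results)) == 1, f"{model} gave inconsistent answers"
        mean_times[model] = statistics.mean(durations)
    return mean_times


if __name__ == "__main__":
    times = benchmark("Where is my order?")
    # Report models fastest-first by mean processing time.
    for model, avg in sorted(times.items(), key=lambda kv: kv[1]):
        print(f"{model}: {avg:.4f}s")
```

With a real backend plugged into `classify_query`, sorting the mean times as above is enough to reproduce the "Mistral was fastest" comparison.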