The `meta-llama/Llama-2-13b-chat-hf` LLM has been tested on the topical rails evaluation sets; the results are available [here](../../../../nemoguardrails/eval/README.md).
We have also tested the fact-checking rail with the same model, with good results.
There are examples of how to use the models either with a HF repo id or from a local path, as sketched below.
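
For reference, here is a minimal sketch of loading the model from either source and wrapping it as a LangChain-compatible LLM that a guardrails configuration can use. This is an illustration under assumed wiring, not a copy of the code in this folder; the `model_path` value, generation parameters, and the `langchain` import path (which varies across versions) are assumptions.

```python
# Illustrative sketch only: load Llama-2-13b-chat from the HF Hub or a local
# path and wrap it as a LangChain-compatible LLM for use by the guardrails app.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline  # import path may differ by langchain version

# Either a Hugging Face repo id ...
model_path = "meta-llama/Llama-2-13b-chat-hf"
# ... or a local checkout, e.g.:
# model_path = "/path/to/Llama-2-13b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a text-generation pipeline and wrap it for LangChain.
hf_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
)
llm = HuggingFacePipeline(pipeline=hf_pipeline)
```

The wrapped `llm` can then be registered as a custom LLM provider for the guardrails runtime in the Python config that accompanies `config.yml`.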
In this folder, the guardrails application is very basic, but it can be swapped out for any other, more complex configuration.
**Disclaimer**: The `meta-llama/Llama-2-13b-chat-hf` LLM was only tested on basic usage combined with a toy knowledge base. Further prompt engineering experiments are needed on [fact-checking](config.yml#L133-142) for more complex queries, as this model may not handle them correctly. Thorough testing and optimization are needed before considering a production deployment.