diff --git a/README.md b/README.md
index a6f25ad..6dd412f 100644
--- a/README.md
+++ b/README.md
@@ -37,9 +37,9 @@ echo "Translate into German: thank you" | ./ask-llm.py
 Supported local LLM servers include [llama.cpp](https://github.com/ggerganov/llama.cpp), [Jan](https://jan.ai), [Ollama](https://ollama.com), and [LocalAI](https://localai.io).

-To utilize [llama.cpp](https://github.com/ggerganov/llama.cpp) locally with its inference engine, ensure to load a quantized model such as [Phi-3 Mini](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf), [LLama-3 8B](https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF), or [OpenHermes 2.5](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF). Adjust the environment variable `LLM_API_BASE_URL` accordingly:
+To use [llama.cpp](https://github.com/ggerganov/llama.cpp) locally with its inference engine, load a quantized model such as [Phi-3.5 Mini](https://huggingface.co/bartowski/Phi-3.5-mini-instruct-GGUF) or [Llama-3.1 8B](https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF), then adjust the environment variable `LLM_API_BASE_URL` accordingly:

 ```bash
-/path/to/llama.cpp/server -m Phi-3-mini-4k-instruct-q4.gguf
+/path/to/llama-server -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf
 export LLM_API_BASE_URL=http://127.0.0.1:8080/v1
 ```

@@ -51,9 +51,9 @@ export LLM_CHAT_MODEL='llama3-8b-instruct'
 To use [Ollama](https://ollama.com) locally, load a model and configure the environment variable `LLM_API_BASE_URL`:

 ```bash
-ollama pull phi3
+ollama pull phi3.5
 export LLM_API_BASE_URL=http://127.0.0.1:11434/v1
-export LLM_CHAT_MODEL='phi3'
+export LLM_CHAT_MODEL='phi3.5'
 ```

 For [LocalAI](https://localai.io), initiate its container and adjust the environment variable `LLM_API_BASE_URL`:
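
Supplementary note (not part of the patch above): whichever server is used, the `LLM_API_BASE_URL` setting can be sanity-checked from the shell before running the script. This sketch assumes an OpenAI-compatible server is already listening at the configured URL:

```bash
# List the models the local server exposes; llama.cpp, Jan, Ollama,
# and LocalAI all serve this OpenAI-compatible endpoint.
curl -s "$LLM_API_BASE_URL/models"
```

If the response is a JSON object listing the loaded model, the endpoint is reachable and the environment variable is set correctly.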