This is a straightforward, zero-dependency CLI tool to interact with any LLM service.
It is available in several flavors:
- Python version. Compatible with CPython or PyPy, v3.10 or higher.
- JavaScript version. Compatible with Node.js (>= v18) or Bun (>= v1.0).
- Clojure version. Compatible with Babashka (>= 1.3).
- Go version. Compatible with Go, v1.19 or higher.
Ask LLM works with either a cloud-based (managed) LLM service (such as OpenAI, Groq, or OpenRouter) or a locally hosted LLM server (such as llama.cpp, LocalAI, or Ollama). Continue reading for detailed instructions.
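Every backend listed below is reached through the same OpenAI-style chat completions endpoint, which is why a single set of environment variables is enough. As a rough illustrative sketch (not necessarily the exact request the tool sends), the underlying call looks like this once `LLM_API_BASE_URL`, `LLM_API_KEY`, and `LLM_CHAT_MODEL` are set as described later:

```bash
# Hand-rolled equivalent of the underlying request, for illustration only.
# Assumes the variables below are exported as shown in the sections that follow.
curl -s "$LLM_API_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$LLM_CHAT_MODEL\",
       \"messages\": [{\"role\": \"user\", \"content\": \"Why is the sky blue?\"}]}"
```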
Interact with the LLM with:

```bash
./ask-llm.py      # for Python users
./ask-llm.js      # for Node.js users
./ask-llm.clj     # for Clojure users
go run ask-llm.go # for Go users
```
or pipe the question directly to get an immediate answer:

```bash
echo "Why is the sky blue?" | ./ask-llm.py
```
or request the LLM to perform a certain task:

```bash
echo "Translate into German: thank you" | ./ask-llm.py
```
Supported local LLM servers include llama.cpp, Jan, Ollama, and LocalAI.
To use llama.cpp locally with its inference engine, load a quantized model such as Phi-3.5 Mini or Llama-3.1 8B, then set the environment variable LLM_API_BASE_URL accordingly:

```bash
/path/to/llama-server -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf
export LLM_API_BASE_URL=http://127.0.0.1:8080/v1
```
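Before pointing the tool at the server, it can help to confirm the endpoint responds. This check assumes the server implements the standard OpenAI-compatible /v1/models route:

```bash
# Sanity check: the server should answer with a JSON list of loaded models.
curl -s http://127.0.0.1:8080/v1/models
```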
To use Jan with its local API server, refer to its documentation, load a model such as Phi-3 Mini, Llama-3 8B, or OpenHermes 2.5, and set the environment variables:

```bash
export LLM_API_BASE_URL=http://127.0.0.1:1337/v1
export LLM_CHAT_MODEL='llama3-8b-instruct'
```
To use Ollama locally, pull a model and configure the environment variables:

```bash
ollama pull phi3.5
export LLM_API_BASE_URL=http://127.0.0.1:11434/v1
export LLM_CHAT_MODEL='phi3.5'
```
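To verify that the model is actually available before running the tool, Ollama can list what has been pulled:

```bash
# Show locally available models; phi3.5 should appear in the output.
ollama list
```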
For LocalAI, start its container and adjust the environment variable LLM_API_BASE_URL to match the mapped port:

```bash
docker run -ti -p 8080:8080 localai/localai tinyllama-chat
export LLM_API_BASE_URL=http://localhost:8080/v1
```
Supported LLM services include AI21, Deep Infra, DeepSeek, Fireworks, Groq, Hyperbolic, Lepton, Novita, Octo, OpenAI, OpenRouter, and Together.
For configuration specifics, refer to the relevant service below. The examples use Llama-3.1 8B (or GPT-4o Mini for OpenAI), but any LLM with at least 7B parameters should work just as well, such as Mistral 7B, Qwen-2 7B, or Gemma-2 9B.
AI21:

```bash
export LLM_API_BASE_URL=https://api.ai21.com/studio/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="jamba-1.5-mini"
```

Deep Infra:

```bash
export LLM_API_BASE_URL=https://api.deepinfra.com/v1/openai
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct"
```

DeepSeek:

```bash
export LLM_API_BASE_URL=https://api.deepseek.com/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="deepseek-chat"
```

Fireworks:

```bash
export LLM_API_BASE_URL=https://api.fireworks.ai/inference/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="accounts/fireworks/models/llama-v3p1-8b-instruct"
```

Groq:

```bash
export LLM_API_BASE_URL=https://api.groq.com/openai/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="llama-3.1-8b-instant"
```

Hyperbolic:

```bash
export LLM_API_BASE_URL=https://api.hyperbolic.xyz/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct"
```

Lepton:

```bash
export LLM_API_BASE_URL=https://llama3-1-8b.lepton.run/api/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="llama3-1-8b"
```

Novita:

```bash
export LLM_API_BASE_URL=https://api.novita.ai/v3/openai
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/llama-3.1-8b-instruct"
```

Octo:

```bash
export LLM_API_BASE_URL=https://text.octoai.run/v1/
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama-3.1-8b-instruct"
```

OpenAI:

```bash
export LLM_API_BASE_URL=https://api.openai.com/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="gpt-4o-mini"
```

OpenRouter:

```bash
export LLM_API_BASE_URL=https://openrouter.ai/api/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/llama-3.1-8b-instruct"
```

Together:

```bash
export LLM_API_BASE_URL=https://api.together.xyz/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
```