Commit 427b3fd (parent: dfdf26e)

Update frontend deps, setup uv, add support for custom endpoint, README updates

10 files changed: +6758 −2774 lines

README.md (+11 −13)

````diff
@@ -18,16 +18,21 @@ OpenUI let's you describe UI using your imagination, then see it rendered live.
 
 ## Running Locally
 
-OpenUI supports [OpenAI](https://platform.openai.com/api-keys), [Groq](https://console.groq.com/keys), and any model [LiteLLM](https://docs.litellm.ai/docs/) supports such as [Gemini](https://aistudio.google.com/app/apikey) or [Anthropic (Claude)](https://console.anthropic.com/settings/keys). The following environment variables are optional, but need to be set in your environment for these services to work:
+OpenUI supports [OpenAI](https://platform.openai.com/api-keys), [Groq](https://console.groq.com/keys), and any model [LiteLLM](https://docs.litellm.ai/docs/) supports such as [Gemini](https://aistudio.google.com/app/apikey) or [Anthropic (Claude)](https://console.anthropic.com/settings/keys). The following environment variables are optional, but need to be set in your environment for alternative models to work:
 
 - **OpenAI** `OPENAI_API_KEY`
 - **Groq** `GROQ_API_KEY`
 - **Gemini** `GEMINI_API_KEY`
 - **Anthropic** `ANTHROPIC_API_KEY`
 - **Cohere** `COHERE_API_KEY`
 - **Mistral** `MISTRAL_API_KEY`
+- **OpenAI Compatible** `OPENAI_COMPATIBLE_ENDPOINT` and `OPENAI_COMPATIBLE_API_KEY`
 
-You can also use models available to [Ollama](https://ollama.com). [Install Ollama](https://ollama.com/download) and pull a model like [Llava](https://ollama.com/library/llava). If Ollama is not running on http://127.0.0.1:11434, you can set the `OLLAMA_HOST` environment variable to the host and port of your Ollama instance.
+For example, if you're running a tool like [localai](https://localai.io/) you can set `OPENAI_COMPATIBLE_ENDPOINT` and optionally `OPENAI_COMPATIBLE_API_KEY` to have the models available listed in the UI's model selector under LiteLLM.
+
+### Ollama
+
+You can also use models available to [Ollama](https://ollama.com). [Install Ollama](https://ollama.com/download) and pull a model like [Llava](https://ollama.com/library/llava). If Ollama is not running on http://127.0.0.1:11434, you can set the `OLLAMA_HOST` environment variable to the host and port of your Ollama instance. For example when running in docker you'll need to point to http://host.docker.internal:11434 as shown below.
 
 ### Docker (preferred)
 
@@ -43,23 +48,15 @@ Now you can goto [http://localhost:7878](http://localhost:7878) and generate new
 
 ### From Source / Python
 
-Assuming you have git and python installed:
-
-> **Note:** There's a .python-version file that specifies **openui** as the virtual env name. Assuming you have pyenv and pyenv-virtualenv you can run the following from the root of the repository or just run `pyenv local 3.X` where X is the version of python you have installed.
-> ```bash
-> pyenv virtualenv 3.12.2 openui
-> pyenv local openui
-> ```
+Assuming you have git and [uv](https://github.com/astral-sh/uv) installed:
 
 ```bash
 git clone https://github.com/wandb/openui
 cd openui/backend
-# You probably want to do this from a virtual environment
-pip install .
+uv sync --frozen --extra litellm
+source .venv/bin/activate
 # Set API keys for any LLM's you want to use
 export OPENAI_API_KEY=xxx
-# You may change the base url to use an OpenAI-compatible api by setting the OPENAI_BASE_URL environment variable
-# export OPENAI_BASE_URL=https://api.myopenai.com/v1
 python -m openui
 ```
 
@@ -82,6 +79,7 @@ To use litellm from source you can run:
 ```bash
 pip install .[litellm]
 export ANTHROPIC_API_KEY=xxx
+export OPENAI_COMPATIBLE_ENDPOINT=http://localhost:8080/v1
 python -m openui --litellm
 ```
 
````
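The README's new OpenAI-compatible option boils down to two environment variables. A minimal shell sketch, assuming a local OpenAI-compatible server (such as localai) listening on port 8080; the URL and key below are placeholders, not real credentials:

```shell
# Placeholders: point these at your own OpenAI-compatible server
export OPENAI_COMPATIBLE_ENDPOINT=http://localhost:8080/v1
export OPENAI_COMPATIBLE_API_KEY=sk-placeholder
# Then launch with the LiteLLM proxy enabled so the endpoint's models
# are picked up (commented out here; requires openui to be installed):
# python -m openui --litellm
```

The models the endpoint reports should then appear in the UI's model selector under LiteLLM.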

backend/.python-version (+1 −1)

```diff
@@ -1 +1 @@
-openui
+3.12
```

backend/Dockerfile (+16 −14)

```diff
@@ -1,25 +1,27 @@
 # Build the virtualenv as a separate step: Only re-execute this step when pyproject.toml changes
-FROM python:3.12-bookworm AS build-venv
+FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS builder
 
-WORKDIR /build
-COPY pyproject.toml .
-COPY README.md .
-
-RUN mkdir -p openui/util && \
-    python -m venv /venv && \
-    /venv/bin/pip install --upgrade pip setuptools wheel && \
-    /venv/bin/pip install --disable-pip-version-check .[litellm]
+WORKDIR /app
 
-# Copy the virtualenv into a distroless image
-FROM python:3.12-slim-bookworm
+ENV UV_LINK_MODE=copy UV_COMPILE_BYTECODE=1
 
-ENV PATH="/venv/bin:$PATH"
+RUN --mount=type=cache,target=/root/.cache/uv \
+    --mount=type=bind,source=uv.lock,target=uv.lock \
+    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
+    uv sync --frozen --extra litellm --no-install-project --no-dev
 
-COPY --from=build-venv /venv /venv
 COPY . /app
 
+RUN --mount=type=cache,target=/root/.cache/uv \
+    uv sync --frozen --extra litellm --no-dev
+
+# Copy the virtualenv into a distroless image
+FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
+
 WORKDIR /app
 
-RUN pip install --no-deps -U /app
+COPY --from=builder --chown=app:app /app /app
+
+ENV PATH="/app/.venv/bin:$PATH"
 
 ENTRYPOINT ["python", "-m", "openui", "--litellm"]
```

backend/openui/config.py (+4 −1)

```diff
@@ -39,6 +39,9 @@ class Env(Enum):
 SESSION_KEY = secrets.token_hex(32)
 with env_path.open("w") as f:
     f.write(f"OPENUI_SESSION_KEY={SESSION_KEY}")
+# Set the LITELLM_MASTER_KEY to a random value if it's not already set
+if os.getenv("LITELLM_MASTER_KEY") is None:
+    os.environ["LITELLM_MASTER_KEY"] = f"sk-{SESSION_KEY}"
 # GPT 3.5 is 0.0005 per 1k tokens input and 0.0015 output
 # 700k puts us at a max of $1.00 spent per user over a 48 hour period
 MAX_TOKENS = int(os.getenv("OPENUI_MAX_TOKENS", "700000"))
@@ -61,5 +64,5 @@ class Env(Enum):
 OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "xxx")
 GROQ_BASE_URL = os.getenv("GROQ_BASE_URL", "https://api.groq.com/openai/v1")
 GROQ_API_KEY = os.getenv("GROQ_API_KEY")
-LITELLM_API_KEY = os.getenv("LITELLM_API_KEY", "xxx")
+LITELLM_API_KEY = os.getenv("LITELLM_API_KEY", os.getenv("LITELLM_MASTER_KEY"))
 LITELLM_BASE_URL = os.getenv("LITELLM_BASE_URL", "http://0.0.0.0:4000")
```
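The config change wires `LITELLM_API_KEY` to fall back to the (possibly auto-generated) `LITELLM_MASTER_KEY`. That fallback chain can be sketched in isolation; `resolve_litellm_key` is a hypothetical helper written for illustration, not code from the module:

```python
import os
import secrets


def resolve_litellm_key(session_key: str) -> str:
    # Default the master key to a session-derived value when unset,
    # mirroring the config.py diff above
    if os.getenv("LITELLM_MASTER_KEY") is None:
        os.environ["LITELLM_MASTER_KEY"] = f"sk-{session_key}"
    # LITELLM_API_KEY falls back to the master key
    return os.getenv("LITELLM_API_KEY", os.environ["LITELLM_MASTER_KEY"])


key = resolve_litellm_key(secrets.token_hex(32))
```

With neither variable set, the first call both seeds `LITELLM_MASTER_KEY` and returns it, so the embedded LiteLLM proxy and the client that talks to it end up sharing one key.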

backend/openui/litellm.py (+21)

```diff
@@ -1,6 +1,7 @@
 import yaml
 import os
 import tempfile
+import openai
 
 
 def generate_config():
@@ -95,6 +96,26 @@ def generate_config():
         ]
     )
 
+    if "OPENAI_COMPATIBLE_ENDPOINT" in os.environ:
+        client = openai.OpenAI(
+            api_key=os.getenv("OPENAI_COMPATIBLE_API_KEY"),
+            base_url=os.getenv("OPENAI_COMPATIBLE_ENDPOINT"),
+        )
+        try:
+            for model in client.models.list().data:
+                models.append(
+                    {
+                        "model_name": model.id,
+                        "litellm_params": {
+                            "model": f"openai/{model.id}",
+                            "api_key": os.getenv("OPENAI_COMPATIBLE_API_KEY"),
+                            "base_url": os.getenv("OPENAI_COMPATIBLE_ENDPOINT"),
+                        },
+                    }
+                )
+        except Exception as e:
+            print(f"Error listing models for {os.getenv('OPENAI_COMPATIBLE_ENDPOINT')}: {e}")
+
     yaml_structure = {"model_list": models}
     with tempfile.NamedTemporaryFile(
         delete=False, mode="w", suffix=".yaml"
```
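The new branch maps every model the endpoint reports into a LiteLLM `model_list` entry, routed through the generic `openai/` provider prefix. The mapping can be factored as a pure function for clarity; `entries_for` is a hypothetical helper sketched here, not part of `openui.litellm`:

```python
def entries_for(model_ids, endpoint, api_key):
    # Build one LiteLLM model_list entry per reported model id,
    # using the "openai/" prefix so LiteLLM treats the endpoint
    # as OpenAI-compatible (same shape as the diff above)
    return [
        {
            "model_name": mid,
            "litellm_params": {
                "model": f"openai/{mid}",
                "api_key": api_key,
                "base_url": endpoint,
            },
        }
        for mid in model_ids
    ]


# Example with a placeholder model id and endpoint
entries = entries_for(["llama-3-8b"], "http://localhost:8080/v1", "sk-placeholder")
```

Keeping the network call (`client.models.list()`) separate from this pure mapping would also make the branch easy to unit test without a live endpoint.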

backend/openui/server.py (−7)

```diff
@@ -594,13 +594,6 @@ def check_wandb_auth():
 
 wandb_enabled = check_wandb_auth()
 
-if not wandb_enabled:
-    try:
-        from weave.integrations.openai.openai_sdk import openai_patcher
-        openai_patcher.undo_patch()
-    except Exception:
-        pass
-
 class Server(uvicorn.Server):
     # TODO: this still isn't working for some reason, can't ctrl-c when not in dev mode
     def install_signal_handlers(self):
```
