
Commit 2755e19

2 weeks of work for the big redesign with lots of exciting new features!

1 parent 908b7f7 commit 2755e19

127 files changed, +381,871 −46,102 lines


.env

Whitespace-only changes.

.gitignore

+1

@@ -1,3 +1,4 @@
 .DS_Store
 nohup.out
 .cache/
+.env

README.md

+38 −17
@@ -18,7 +18,32 @@ OpenUI lets you describe UI using your imagination, then see it rendered live.
 
 ## Running Locally
 
-You can also run OpenUI locally and use models available to [Ollama](https://ollama.com). [Install Ollama](https://ollama.com/download) and pull a model like [CodeLlama](https://ollama.com/library/codellama), then assuming you have git and python installed:
+OpenUI supports [OpenAI](https://platform.openai.com/api-keys), [Groq](https://console.groq.com/keys), and any model [LiteLLM](https://docs.litellm.ai/docs/) supports, such as [Gemini](https://aistudio.google.com/app/apikey) or [Anthropic (Claude)](https://console.anthropic.com/settings/keys). The following environment variables are optional, but need to be set in your environment for these services to work:
+
+- **OpenAI** `OPENAI_API_KEY`
+- **Groq** `GROQ_API_KEY`
+- **Gemini** `GEMINI_API_KEY`
+- **Anthropic** `ANTHROPIC_API_KEY`
+- **Cohere** `COHERE_API_KEY`
+- **Mistral** `MISTRAL_API_KEY`
+
+You can also use models available to [Ollama](https://ollama.com). [Install Ollama](https://ollama.com/download) and pull a model like [Llava](https://ollama.com/library/llava). If Ollama is not running on http://127.0.0.1:11434, you can set the `OLLAMA_HOST` environment variable to the host and port of your Ollama instance.
+
+### Docker (preferred)
+
+The following command forwards the API keys from your current shell environment and tells Docker to use the Ollama instance running on your machine:
+
+```bash
+export ANTHROPIC_API_KEY=xxx
+export OPENAI_API_KEY=xxx
+docker run --name openui -p 7878:7878 -e OPENAI_API_KEY -e ANTHROPIC_API_KEY -e OLLAMA_HOST=http://host.docker.internal:11434 ghcr.io/wandb/openui
+```
+
+Now you can go to [http://localhost:7878](http://localhost:7878) and generate new UIs!
+
+### From Source / Python
+
+Assuming you have git and python installed:
 
 > **Note:** There's a .python-version file that specifies **openui** as the virtual env name. Assuming you have pyenv and pyenv-virtualenv, you can run the following from the root of the repository, or just run `pyenv local 3.X` where X is the version of python you have installed.
 > ```bash
@@ -38,25 +63,32 @@ export OPENAI_API_KEY=xxx
 python -m openui
 ```
 
-## Groq
+## LiteLLM
 
-To use the super fast [Groq](https://groq.com) models, set `GROQ_API_KEY` to your Groq api key which you can [find here](https://console.groq.com/keys). To use one of the Groq models, click the settings icon in the sidebar and choose from the list:
+[LiteLLM](https://docs.litellm.ai/docs/) can be used to connect to basically any LLM service available. We generate a config automatically based on your environment variables. You can create your own [proxy config](https://litellm.vercel.app/docs/proxy/configs) to override this behavior. We look for a custom config in the following locations:
 
-<img src="./assets/settings.jpeg" width="500" alt="Select Groq models" />
+1. `litellm-config.yaml` in the current directory
+2. `/app/litellm-config.yaml` when running in a docker container
+3. An arbitrary path specified by the `LITELLM_CONFIG` environment variable
 
-You can also change the default base url used for Groq (if necessary), i.e.
+For example, to use a custom config in docker you can run:
 
 ```bash
-export GROQ_BASE_URL=https://api.groq.com/openai/v1
+docker run --name openui -p 7878:7878 -v $(pwd)/litellm-config.yaml:/app/litellm-config.yaml ghcr.io/wandb/openui
 ```
 
+## Groq
+
+To use the super fast [Groq](https://groq.com) models, set `GROQ_API_KEY` to your Groq API key, which you can [find here](https://console.groq.com/keys). To use one of the Groq models, click the settings icon in the nav bar.
+
 ### Docker Compose
 
 > **DISCLAIMER:** This is likely going to be very slow. If you have a GPU, you may need to change the tag of the `ollama` container to one that supports it. If you're running on a Mac, follow the instructions above and run Ollama natively to take advantage of the M1/M2.
 
 From the root directory you can run:
 
 ```bash
+echo "LITELLM_MASTER_KEY=sk-$(openssl rand -hex 20)" > .env
 docker-compose up -d
 docker exec -it openui-ollama-1 ollama pull llava
 ```
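Editor's note: the LiteLLM section above says a proxy config is generated automatically from your environment variables. As a rough illustration of what such generation could look like — a hypothetical sketch, not the commit's actual `generate_config`; the model entries are made-up examples, while `api_key: os.environ/VAR` is LiteLLM's documented syntax for reading a key from the environment:

```python
import os

# Hypothetical provider entries: (client-facing name, LiteLLM model id, key var).
CANDIDATES = [
    ("gpt-3.5-turbo", "openai/gpt-3.5-turbo", "OPENAI_API_KEY"),
    ("claude-3-haiku", "anthropic/claude-3-haiku-20240307", "ANTHROPIC_API_KEY"),
    ("gemini-pro", "gemini/gemini-pro", "GEMINI_API_KEY"),
]

def generate_config(path: str = "litellm-config.yaml") -> str:
    """Write a minimal LiteLLM proxy config for the providers whose keys are set."""
    lines = ["model_list:"]
    for name, model, env_var in CANDIDATES:
        if os.getenv(env_var):  # only include providers that are configured
            lines += [
                f"  - model_name: {name}",
                "    litellm_params:",
                f"      model: {model}",
                f"      api_key: os.environ/{env_var}",
            ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return path
```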
@@ -65,17 +97,6 @@ If you have your OPENAI_API_KEY set in the environment already, just remove `=xxx`
 
 *If you make changes to the frontend or backend, you'll need to run `docker-compose build` to have them reflected in the service.*
 
-### Docker
-
-You can build and run the docker file manually from the `/backend` directory:
-
-```bash
-docker build . -t wandb/openui --load
-docker run -p 7878:7878 -e OPENAI_API_KEY -e GROQ_API_KEY wandb/openui
-```
-
-Now you can goto [http://localhost:7878](http://localhost:7878)
-
 ## Development
 
 A [dev container](https://github.com/wandb/openui/blob/main/.devcontainer/devcontainer.json) is configured in this repository, which is the quickest way to get started.
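Editor's note: the redesigned Running Locally section keys provider support entirely off environment variables. As a quick, hypothetical illustration (none of these names come from the commit), you can check which providers are configured like so:

```python
import os

# Map each provider from the README's list to its key variable.
PROVIDER_KEYS = {
    "OpenAI": "OPENAI_API_KEY",
    "Groq": "GROQ_API_KEY",
    "Gemini": "GEMINI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Cohere": "COHERE_API_KEY",
    "Mistral": "MISTRAL_API_KEY",
}

configured = [name for name, var in PROVIDER_KEYS.items() if os.getenv(var)]
print("Configured providers:", ", ".join(configured) or "none")
```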

backend/.dockerignore

+2 −1

@@ -10,9 +10,10 @@
 **/*.egg-info
 **/.DS_Store
 **/build
+**/wandb
+**/*.db
 
 # flyctl launch added from openui/eval/.gitignore
-openui/eval/**/wandb
 openui/eval/**/datasets
 openui/eval/**/components
 fly.toml

backend/.github/workflows/docker.yml

+74

@@ -0,0 +1,74 @@
+#
+name: Create and publish a Docker image
+
+# Runs on manual dispatch and on every push.
+on:
+  - workflow_dispatch
+  - push
+
+env:
+  REGISTRY: ghcr.io
+  IMAGE_NAME: ${{ github.repository }}
+
+jobs:
+  build-and-push-image:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      packages: write
+      attestations: write
+      id-token: write
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+      - uses: pnpm/action-setup@v4
+        name: Install pnpm
+        with:
+          version: 9
+          run_install: false
+      - name: Install Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: 20
+          cache: "pnpm"
+      - name: Get pnpm store directory
+        shell: bash
+        run: |
+          echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
+      - uses: actions/cache@v4
+        name: Setup pnpm cache
+        with:
+          path: ${{ env.STORE_PATH }}
+          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
+          restore-keys: |
+            ${{ runner.os }}-pnpm-store-
+      - name: Install dependencies
+        run: pnpm install -C frontend
+      # We use npm here because pnpm wasn't executing post hooks
+      - name: Build frontend
+        run: cd frontend && npm run build
+      - name: Log in to the Container registry
+        uses: docker/login-action@v3
+        with:
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Extract metadata (tags, labels) for Docker
+        id: meta
+        uses: docker/metadata-action@v5
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+      - name: Build and push Docker image
+        id: push
+        uses: docker/build-push-action@v6
+        with:
+          context: backend/.
+          push: true
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+      - name: Generate artifact attestation
+        uses: actions/attest-build-provenance@v1
+        with:
+          subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+          subject-digest: ${{ steps.push.outputs.digest }}
+          push-to-registry: true

backend/Dockerfile

+2 −2

@@ -8,7 +8,7 @@ COPY README.md .
 RUN mkdir -p openui/util && \
     python -m venv /venv && \
     /venv/bin/pip install --upgrade pip setuptools wheel && \
-    /venv/bin/pip install --disable-pip-version-check .
+    /venv/bin/pip install --disable-pip-version-check .[litellm]
 
 # Copy the virtualenv into a slim runtime image
 FROM python:3.12-slim-bookworm
@@ -22,4 +22,4 @@ WORKDIR /app
 
 RUN pip install --no-deps -U /app
 
-ENTRYPOINT ["python", "-m", "openui"]
+ENTRYPOINT ["python", "-m", "openui", "--litellm"]

backend/README.md

+7

@@ -57,3 +57,10 @@ pytest
 ## Evaluation
 
 The [eval](./openui/eval) folder contains scripts for evaluating the performance of a model. It automates generating UI, taking screenshots of the UI, then asking `gpt-4-vision-preview` to rate the elements. More details about the eval pipeline coming soon...
+
+
+## Google Gemini
+
+```
+gcloud auth application-default login --impersonate-service-account [email protected]
+```

backend/openui/__main__.py

+41 −3

@@ -2,21 +2,25 @@
 from .logs import setup_logger
 from . import server
 from . import config
+from .litellm import generate_config
 import os
 import uvicorn
 from uvicorn import Config
 import sys
+import subprocess
+import time
+
 
 def is_running_in_docker():
     # Check for the .dockerenv file
-    if os.path.exists('/.dockerenv'):
+    if os.path.exists("/.dockerenv"):
         return True
 
     # Check for Docker-related entries in /proc/self/cgroup
     try:
-        with open('/proc/self/cgroup', 'r') as file:
+        with open("/proc/self/cgroup", "r") as file:
             for line in file:
-                if 'docker' in line:
+                if "docker" in line:
                     return True
     except Exception as e:
         pass
@@ -26,8 +30,17 @@ def is_running_in_docker():
 
     return False
 
+
 if __name__ == "__main__":
     ui = any([arg == "-i" for arg in sys.argv])
+    litellm = (
+        any([arg == "--litellm" for arg in sys.argv])
+        or "OPENUI_LITELLM_CONFIG" in os.environ
+    )
+    # TODO: only render in interactive mode?
+    print(
+        (Path(__file__).parent / "logo.ascii").read_text(), file=sys.stderr, flush=True
+    )
     logger = setup_logger("/tmp/openui.log" if ui else None)
     logger.info("Starting OpenUI AI Server created by W&B...")
@@ -63,6 +76,31 @@ def is_running_in_docker():
         logger.info("Running Terminal UI App")
         app.run()
     else:
+        if litellm:
+            config_path = "litellm-config.yaml"
+            if "OPENUI_LITELLM_CONFIG" in os.environ:
+                config_path = os.environ["OPENUI_LITELLM_CONFIG"]
+            elif os.path.exists("/app/litellm-config.yaml"):
+                config_path = "/app/litellm-config.yaml"
+            else:
+                config_path = generate_config()
+
+            logger.info(
+                f"Starting LiteLLM in the background with config: {config_path}"
+            )
+            litellm_process = subprocess.Popen(
+                ["litellm", "--config", config_path],
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+                text=True,
+            )
+            # Ensure litellm stays up for 5 seconds
+            for i in range(5):
+                if litellm_process.poll() is not None:
+                    stdout, stderr = litellm_process.communicate()
+                    logger.error(f"LiteLLM failed to start:\n{stderr}")
+                    break
+                time.sleep(1)
         logger.info("Running API Server")
         mkcert_dir = Path.home() / ".vite-plugin-mkcert"
backend/openui/assets/question.svg

+5

backend/openui/config.py

+12 −2

@@ -49,7 +49,17 @@ class Env(Enum):
 AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
 AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")
 BUCKET_NAME = os.getenv("BUCKET_NAME", "openui")
+
+# CORS: if you're hosting the annotator iframe elsewhere, add it here
+CORS_ORIGINS = os.getenv(
+    "OPENUI_CORS_ORIGINS", "https://wandb.github.io,https://localhost:5173"
+).split(",")
+
+# Model providers
+OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://127.0.0.1:11434")
 OPENAI_BASE_URL = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
-OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
+OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "xxx")
 GROQ_BASE_URL = os.getenv("GROQ_BASE_URL", "https://api.groq.com/openai/v1")
-GROQ_API_KEY = os.getenv("GROQ_API_KEY")
+GROQ_API_KEY = os.getenv("GROQ_API_KEY")
+LITELLM_API_KEY = os.getenv("LITELLM_API_KEY", "xxx")
+LITELLM_BASE_URL = os.getenv("LITELLM_BASE_URL", "http://0.0.0.0:4000")
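Editor's note: since LiteLLM exposes an OpenAI-compatible API, the backend can reuse an ordinary OpenAI client against the new settings above. A hedged sketch, assuming the `openai` v1 client is installed and that the model name matches an entry in the generated LiteLLM config:

```python
from openai import OpenAI

from openui import config  # the module patched above

# Point an OpenAI-compatible client at the local LiteLLM proxy.
client = OpenAI(base_url=config.LITELLM_BASE_URL, api_key=config.LITELLM_API_KEY)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed to exist in litellm-config.yaml
    messages=[{"role": "user", "content": "Generate a login form"}],
)
print(response.choices[0].message.content)
```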

backend/openui/db/models.py

+17 −2

@@ -68,6 +68,14 @@ class Component(BaseModel):
     data = JSONField()
 
 
+class Vote(BaseModel):
+    id = BinaryUUIDField(primary_key=True)
+    user = ForeignKeyField(User, backref="votes")
+    component = ForeignKeyField(Component, backref="votes")
+    vote = BooleanField()
+    created_at = DateTimeField()
+
+
 class Usage(BaseModel):
     input_tokens = IntegerField()
     output_tokens = IntegerField()
@@ -105,7 +113,7 @@ def tokens_since(cls, user_id: str, day: datetime.date) -> int:
     )
 
 
-CURRENT_VERSION = "2024-03-12"
+CURRENT_VERSION = "2024-05-14"
 
 
 def alter(schema: SchemaMigration, ops: list[list], version: str) -> bool:
@@ -135,12 +143,19 @@ def perform_migration(schema: SchemaMigration) -> bool:
         )
         if altered:
             perform_migration(schema)
+    if schema.version == "2024-03-12":
+        version = "2024-05-14"
+        database.create_tables([Vote])
+        schema.version = version
+        schema.save()
+    if version != CURRENT_VERSION:
+        perform_migration(schema)
 
 
 def ensure_migrated():
     if not config.DB.exists():
         database.create_tables(
-            [User, Credential, Session, Component, SchemaMigration, Usage]
+            [User, Credential, Session, Component, SchemaMigration, Usage, Vote]
         )
         SchemaMigration.create(version=CURRENT_VERSION)
     else:
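Editor's note: for context, a minimal sketch of recording a vote with the new Peewee model. Field names come from the diff above; `user` and `component` are assumed to be previously fetched rows, and the `True`/`False` semantics are an assumption, not something the commit states:

```python
import datetime
import uuid

from openui.db.models import Vote

# Record a vote on a component (user and component fetched elsewhere).
vote = Vote.create(
    id=uuid.uuid4(),
    user=user,
    component=component,
    vote=True,  # assumed: True = upvote, False = downvote
    created_at=datetime.datetime.now(),
)
```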
