**README.md**
OpenUI lets you describe UI using your imagination, then see it rendered live.
## Running Locally
OpenUI supports [OpenAI](https://platform.openai.com/api-keys), [Groq](https://console.groq.com/keys), and any model [LiteLLM](https://docs.litellm.ai/docs/) supports, such as [Gemini](https://aistudio.google.com/app/apikey) or [Anthropic (Claude)](https://console.anthropic.com/settings/keys). The following environment variables are all optional, but the matching one must be set in your environment for each service you want to use:
- **OpenAI** `OPENAI_API_KEY`
- **Groq** `GROQ_API_KEY`
- **Gemini** `GEMINI_API_KEY`
- **Anthropic** `ANTHROPIC_API_KEY`
- **Cohere** `COHERE_API_KEY`
- **Mistral** `MISTRAL_API_KEY`
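For example, to enable OpenAI and Groq you might export the following (the values are placeholders, not real keys):

```bash
export OPENAI_API_KEY=sk-...   # placeholder, use your own key
export GROQ_API_KEY=gsk_...    # placeholder, use your own key
```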
You can also use models available to [Ollama](https://ollama.com). [Install Ollama](https://ollama.com/download) and pull a model like [Llava](https://ollama.com/library/llava). If Ollama is not running on http://127.0.0.1:11434, you can set the `OLLAMA_HOST` environment variable to the host and port of your Ollama instance.
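A quick sketch of that flow; the model and the remote host below are only examples:

```bash
ollama pull llava                           # pull a model for Ollama to serve
export OLLAMA_HOST=http://10.0.0.5:11434    # only needed when Ollama isn't on the default host/port
```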
### Docker (preferred)
The following command would forward the API keys from your current shell environment and tell Docker to use the Ollama instance running on your machine.
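A minimal sketch of such a command, assuming the published image `ghcr.io/wandb/openui` and Docker Desktop's `host.docker.internal` alias for the host machine:

```bash
# Forward keys from the current shell and point the container at Ollama on the host
docker run --rm --name openui -p 7878:7878 \
  -e OPENAI_API_KEY -e GROQ_API_KEY \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  ghcr.io/wandb/openui
```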
Now you can go to [http://localhost:7878](http://localhost:7878) and generate new UIs!
### From Source / Python
Assuming you have git and python installed:
> **Note:** There's a .python-version file that specifies **openui** as the virtual env name. Assuming you have pyenv and pyenv-virtualenv, you can run the following from the root of the repository, or just run `pyenv local 3.X` where X is the version of python you have installed.
>
> ```bash
> pyenv virtualenv 3.X openui   # substitute the python version you have installed for 3.X
> pyenv local openui
> ```
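The install itself follows the usual python flow before the launch commands below; a minimal sketch, where the `backend/` path and `pip install .` are assumptions rather than anything this README specifies:

```bash
git clone https://github.com/wandb/openui
cd openui/backend
pip install .
```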
```bash
export OPENAI_API_KEY=xxx
python -m openui
```
## LiteLLM
[LiteLLM](https://docs.litellm.ai/docs/) can be used to connect to basically any LLM service available. We generate a config automatically based on your environment variables. You can create your own [proxy config](https://litellm.vercel.app/docs/proxy/configs) to override this behavior. We look for a custom config at `litellm-config.yaml` in the working directory; when running with Docker, mount it into the container at `/app/litellm-config.yaml`:

```bash
docker run --name openui -p 7878:7878 -v $(pwd)/litellm-config.yaml:/app/litellm-config.yaml ghcr.io/wandb/openui
```
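A hypothetical config that routes a single model through the proxy, using LiteLLM's documented `model_list` schema; the model choice is only an example:

```bash
# Write a minimal LiteLLM proxy config next to where you'll run docker
cat > litellm-config.yaml <<'EOF'
model_list:
  - model_name: gemini-flash
    litellm_params:
      model: gemini/gemini-1.5-flash
EOF
```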
## Groq
To use the super fast [Groq](https://groq.com) models, set `GROQ_API_KEY` to your Groq API key, which you can [find here](https://console.groq.com/keys). To use one of the Groq models, click the settings icon in the nav bar.
### Docker Compose
> **DISCLAIMER:** This is likely going to be very slow. If you have a GPU you may need to change the tag of the `ollama` container to one that supports it. If you're running on a Mac, follow the instructions above and run Ollama natively to take advantage of the M1/M2.
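A minimal sketch of bringing the stack up, assuming a compose file at the repository root:

```bash
# Build the images and start the services in the background
OPENAI_API_KEY=xxx docker-compose up -d --build
```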
If you have your OPENAI_API_KEY set in the environment already, just remove `=xxx`.
*If you make changes to the frontend or backend, you'll need to run `docker-compose build` to have them reflected in the service.*
## Development
A [dev container](https://github.com/wandb/openui/blob/main/.devcontainer/devcontainer.json) is configured in this repository, which is the quickest way to get started.
**backend/README.md**
## Evaluation
The [eval](./openui/eval) folder contains scripts for evaluating the performance of a model. It automates generating UI, taking screenshots of the UI, then asking `gpt-4-vision-preview` to rate the elements. More details about the eval pipeline coming soon...
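To illustrate the rating step, a request along these lines asks `gpt-4-vision-preview` to score a screenshot. This is standard OpenAI chat-completions usage; the prompt and image URL are invented for illustration, not the repo's actual eval code:

```bash
# Ask the vision model to rate a hosted UI screenshot
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-vision-preview",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Rate this UI screenshot from 1 to 10 for layout, contrast, and polish."},
        {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}}
      ]
    }],
    "max_tokens": 200
  }'
```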