
Commit a3e7030 ("initial commit")

1 parent 1130936

32 files changed: +2027 −0 lines

.gitignore (+12 lines)

```
__pycache__/
venv/
*.csv
*.log
*.pt
*.pkl
*.zip
*.DS_Store
*.pdf
*.out
*.png
*.rdb
```

README.md (+74 lines)
# project-polybius

## Overview

project-polybius is a text-based game that uses Large Language Models to generate new experiences for the player.
This repo provides the code to deploy it on the cloud and a client to interact with the deployed system.

## Instructions

### Running the services

I have tested this code on Google Cloud Platform's Kubernetes Engine (GKE) because it uses GCP's Vertex AI APIs for text generation. Technically, though, it can run on any cloud or locally, provided that an appropriate `generate_text()` function (imported in `llm-handler.py`) is implemented in its corresponding api_helper file.
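The swappable `generate_text()` contract described above can be sketched roughly as follows. This is a hypothetical stub, not code from this commit: the exact signature and the api_helper module layout are assumptions.

```python
# Hypothetical api_helper for a non-Vertex backend. Only the idea of a
# swappable generate_text() function (imported by llm-handler.py) comes
# from the README; everything else here is illustrative.

def generate_text(prompt: str, max_tokens: int = 256) -> str:
    """Return a continuation of `prompt` from some LLM backend.

    This stub echoes a canned response so the handler could run without
    cloud credentials; a real implementation would call the provider's
    API here (e.g. Vertex AI, Gemini, GPT-4).
    """
    return f"[stub completion for prompt of {len(prompt)} chars]"
```

Any backend that exposes this one function should slot into the handler without other changes.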

After starting a Kubernetes cluster, clone the repo:

```
git clone https://github.com/nikhilbarhate99/project-polybius.git
cd project-polybius
```

Install MinIO on the Kubernetes cluster using Helm (make sure Helm is installed first):

```
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install -f ./minio/minio-config.yaml -n minio-ns --create-namespace minio-proj bitnami/minio
```

Deploy the system on the cluster; this starts all the pods, services, and deployments required for the game:

```
./deploy-cloud.sh
```

Expose the REST service by running:

```
kubectl apply -f expose-rest.yaml
```

After a while, the Kubernetes cluster will assign an external IP to the `expose-rest-svc` service.
You can check this by running:

```
kubectl get all
```

**Important:** Now copy that external IP into the `REST_HOST` variable in the `global_variables.py` file.
This allows the client to find the REST service.
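A minimal sketch of what that step amounts to. Only `REST_HOST` is named in the README; the port value comes from `expose-rest.yaml`, the IP below is a placeholder for whatever `kubectl get all` reports, and the helper function is purely illustrative:

```python
# global_variables.py (sketch; the real file may define more/other names)
REST_HOST = "203.0.113.10"  # placeholder: paste the EXTERNAL-IP of expose-rest-svc
REST_PORT = 5005            # matches the port exposed in expose-rest.yaml

def rest_url(path: str) -> str:
    """Build the base URL the client would use to reach the REST service."""
    return f"http://{REST_HOST}:{REST_PORT}/{path.lstrip('/')}"
```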

### Running the client

Finally, once all the services are running on the cluster, we can install the requirements in a virtual env and use the client to play the game:

```
python3 -m venv ./venv
source venv/bin/activate
pip3 install -r requirements.txt
python3 polybius_client.py
```

As long as the cluster is running, multiple clients can play the game, and their games will be saved in the database.

**Note:** To play the game (on the client side), four files are required, which can be zipped and distributed to users: `polybius_client.py`, `global_variables.py`, `utils.py`, `requirements.txt`
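For orientation, one client-to-REST round trip might look like the sketch below. The `/new_game` endpoint and JSON payload are invented for illustration; the real protocol lives in `polybius_client.py`:

```python
import json
import urllib.request

REST_HOST = "203.0.113.10"  # placeholder external IP from global_variables.py
REST_PORT = 5005            # port exposed by expose-rest-svc

def post_json(path: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request to the REST service (hypothetical endpoint)."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"http://{REST_HOST}:{REST_PORT}{path}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request would hit the cluster, so it is left commented out:
# req = post_json("/new_game", {"player": "alice"})
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```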

## System

The current architecture is simple, as illustrated in the figure:

![](https://github.com/nikhilbarhate99/dcsc-project-polybius/blob/main/media/polybius_fig.png)

## Examples

| ![](https://github.com/nikhilbarhate99/dcsc-project-polybius/blob/main/media/game_pic_1.png) | ![](https://github.com/nikhilbarhate99/dcsc-project-polybius/blob/main/media/game_pic_2.png) |
| :---:|:---: |

## To Do

- [ ] Add authentication
- [ ] Better error handling for REST successes/failures
- [ ] Change to a JSON DB
- [ ] Fix UI issues
- [ ] Use logs service
- [ ] Add support for different settings/genres for game stories
- [ ] Implement the `generate_text()` function for other LLM APIs, e.g. Gemini, GPT-4, etc.
deploy-cloud.sh (+16 lines)

```
#!/bin/sh
kubectl apply -f redis/redis-deployment.yaml
kubectl apply -f redis/redis-service.yaml

kubectl apply -f rest/rest-deployment.yaml
kubectl apply -f rest/rest-service.yaml

# kubectl apply -f logs/logs-deployment.yaml

kubectl apply -f llm/llm-deployment.yaml

kubectl apply -f minio/minio-external-service.yaml

# kubectl apply -f expose-rest.yaml

# kubectl apply -f rest/rest-ingress.yaml
```

deploy-local.sh (+29 lines)

```
#!/bin/sh

kubectl apply -f redis/redis-deployment.yaml
kubectl apply -f redis/redis-service.yaml

kubectl apply -f rest/rest-deployment.yaml
kubectl apply -f rest/rest-service.yaml

# kubectl apply -f logs/logs-deployment.yaml

kubectl apply -f llm/llm-deployment.yaml

kubectl apply -f minio/minio-external-service.yaml

kubectl port-forward --address 0.0.0.0 service/redis 6379:6379 &

kubectl port-forward -n minio-ns --address 0.0.0.0 service/minio-proj 9000:9000 &
kubectl port-forward -n minio-ns --address 0.0.0.0 service/minio-proj 9001:9001 &

# port forward for rest for local dev
# kubectl port-forward service/rest-svc 5005:5005

# kubectl apply -f expose-rest.yaml

# kubectl apply -f rest/rest-ingress.yaml
```
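When developing locally against these port-forwards, a quick way to confirm a tunnel is up is a plain TCP connect. This helper is not part of the repo; the port numbers below are the ones forwarded by deploy-local.sh:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (tunnel is up)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports forwarded by deploy-local.sh: redis 6379, MinIO API 9000, MinIO console 9001
# for p in (6379, 9000, 9001):
#     print(p, port_open("127.0.0.1", p))
```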

deploy-minio-local.sh (+18 lines)

```
#!/bin/sh
#
# You can use this script to launch Redis and minio on Kubernetes
# and forward their connections to your local computer. That means
# you can then work on your worker-server.py and rest-server.py
# on your local computer rather than pushing to Kubernetes with each change.
#
# To kill the port-forward processes, use e.g. "ps augxww | grep port-forward"
# to identify the process ids
#

kubectl apply -f minio/minio-external-service.yaml

# If you're using minio from the kubernetes tutorial this will forward those
kubectl port-forward -n minio-ns --address 0.0.0.0 service/minio-proj 9000:9000 &
kubectl port-forward -n minio-ns --address 0.0.0.0 service/minio-proj 9001:9001 &
```

expose-rest.yaml (+12 lines)

```
apiVersion: v1
kind: Service
metadata:
  name: expose-rest-svc
spec:
  selector:
    app: rest
  ports:
    - protocol: TCP
      port: 5005
      targetPort: 5005
  type: LoadBalancer
```
