- Create a `.env` file in the root directory.
- Add your configuration variables (e.g., API keys, model endpoints).

```bash
API_KEY=api_key_from_openrouter.ai
BASE_URL=https://openrouter.ai/api/v1
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
```

## 🚀 Usage

### Start the Server

```bash
npm start
```

### Access the API

The server will run on `http://localhost:3000` by default.

Use the provided endpoints to interact with the integrated AI models.
### Example Requests

#### Search request

```bash
curl -X POST http://localhost:3000/search \
```

#### Analyze request

```bash
curl -X POST http://localhost:3000/analyze \
  -d '{"model": "change_with_model_example", "text": "Tesla announced new solar roof technology with 25% improved efficiency in Q4 2023."}'
```
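
The same requests can be made from Node using the built-in `fetch` (Node 18+). A minimal client sketch; the request field names and response shape are assumptions, so check them against the server's actual contract:

```javascript
// Hypothetical client for the search endpoint (field names assumed).
async function search(baseUrl, model, query) {
  const res = await fetch(`${baseUrl}/search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, query }),
  });
  if (!res.ok) throw new Error(`Search failed: HTTP ${res.status}`);
  return res.json();
}
```

For example: `search('http://localhost:3000', 'change_with_model_example', 'latest solar news')`.
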

## 🔒 Best Practices Implemented

- **Rate Limiting**: To prevent abuse and ensure fair usage, rate limiting is implemented with **exponential backoff**. If a user exceeds the allowed number of requests, they are temporarily blocked from making more.
- **Caching**: Redis is used to cache frequent queries, reducing the load on the AI models and improving response times.
- **Input Validation**: All incoming requests are validated with **Zod** to ensure the data is structured and safe.
- **Streaming**: For long-running processes, such as AI completions, responses are streamed in real time.
- **Monitoring**: The API tracks usage metrics and errors via **Prometheus**, letting you monitor the health and performance of the system in real time.
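
The rate-limiting idea above can be sketched in-process (the real implementation keeps counts in Redis; the limits and backoff factor below are made-up values):

```javascript
// Sketch of fixed-window rate limiting with exponential backoff.
// In the real service the counters live in Redis, not a Map.
class RateLimiter {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.clients = new Map(); // key -> { count, windowStart, strikes }
  }

  // Returns { allowed, retryAfterMs }; `now` is injectable for testing.
  check(key, now = Date.now()) {
    let c = this.clients.get(key);
    if (!c || now - c.windowStart >= this.windowMs) {
      // New window: reset the count but remember past violations.
      c = { count: 0, windowStart: now, strikes: c ? c.strikes : 0 };
      this.clients.set(key, c);
    }
    c.count += 1;
    if (c.count <= this.maxRequests) return { allowed: true, retryAfterMs: 0 };
    c.strikes += 1;
    // Each repeated violation doubles the wait: 1x, 2x, 4x the window...
    return { allowed: false, retryAfterMs: this.windowMs * 2 ** (c.strikes - 1) };
  }
}
```

A middleware would call `check(clientIp)` per request and answer `429` with a `Retry-After` header when `allowed` is false.
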

## 🤝 Contributing

Contributions are welcome! If you have suggestions, improvements, or bug fixes, please open an issue or submit a pull request.

This project is licensed under the **MIT License**. See the [LICENSE](./LICENSE)

- Special thanks to all contributors and the open-source community.
- Gratitude to the maintainers of the libraries used in this project.

---

### **Additional Notes:**

<!--
model: "anthropic/claude-3.5-sonnet",
model: "google/gemini-flash-1.5",
model: "deepseek/deepseek-r1",
model: "openai/gpt-4o-mini",
model: "meta-llama/llama-3.2-3b-instruct",
model: "mistralai/mistral-small",
https://openrouter.ai/models
-->

- The **Rate Limiting** feature uses Redis to store request counts, ensuring that users cannot flood the system with requests.
- **Caching** stores frequently requested data in Redis, minimizing redundant calls to the AI models and improving efficiency and speed.
- **Zod** ensures that all user inputs are validated before they are processed, making the application more secure and reliable.
- **Streaming** allows for real-time responses, reducing the wait time for users interacting with models that take longer to process.
- **Prometheus** provides valuable insights into the health of the API, making it easier to monitor usage, errors, and response times.
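
The caching behaviour described above can likewise be sketched as an in-process TTL map (Redis plays this role in the actual service; the TTL value is arbitrary):

```javascript
// Sketch of TTL-based query caching; Redis would back this in production.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map(); // key -> { value, expiresAt }
  }

  // `now` is injectable for testing; expired entries are evicted lazily.
  get(key, now = Date.now()) {
    const e = this.store.get(key);
    if (!e || now >= e.expiresAt) {
      this.store.delete(key);
      return undefined;
    }
    return e.value;
  }

  set(key, value, now = Date.now()) {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

A request handler would check the cache with a key derived from the model and query, and only call the AI model on a miss.
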