This project demonstrates how to use the OpenAI API for chat completions with a local server setup. It allows users to input prompts via command-line arguments and receive responses from a specified model.
- Python 3.x
- `mlx-omni-server` installed
- Install `mlx-omni-server`: `pip install mlx-omni-server`
- Start the local server: `mlx-omni-server`
Run the script from the command line, passing a prompt as an argument:

`uv run main.py "What is the capital of Argentina?"`
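A minimal sketch of what `main.py` might look like. It sends the command-line prompt to the server's OpenAI-compatible `/v1/chat/completions` endpoint using only the standard library; the base URL, port, and model name are assumptions — adjust them to match your server configuration and the model you have loaded.

```python
"""Minimal chat-completions client for a local OpenAI-compatible server (sketch)."""
import json
import sys
import urllib.request

# Assumed local endpoint; change the port if your server uses a different one.
BASE_URL = "http://localhost:10240/v1"

# Hypothetical model identifier; substitute the model your server actually serves.
DEFAULT_MODEL = "mlx-community/Llama-3.2-3B-Instruct-4bit"


def build_request(prompt: str, model: str = DEFAULT_MODEL) -> dict:
    """Build an OpenAI-style chat-completions request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> str:
    """POST the prompt to the local server and return the model's reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Standard OpenAI response shape: first choice's message content.
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__" and len(sys.argv) > 1:
    print(chat(sys.argv[1]))
```

The same request could be made with the official `openai` client library by pointing its `base_url` at the local server; the stdlib version is shown here to keep the sketch dependency-free.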