Chat Completion with Local Server

This project demonstrates how to use the OpenAI chat completions API against a locally running server (mlx-omni-server). You pass a prompt as a command-line argument and receive a response from a specified model.

Prerequisites

  • Python 3.x
  • mlx-omni-server installed
  • uv (the usage example below runs the script with uv run)

Installation

  1. Install the mlx-omni-server:

    pip install mlx-omni-server
  2. Start the local server:

    mlx-omni-server
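
Once the server is running, it exposes an OpenAI-compatible API over HTTP. As a quick sanity check, you can list the models the server reports before running the script. The sketch below is not part of this repository; the base URL (including the 10240 port) and the placeholder API key are assumptions, so match them to whatever the server prints on startup:

    from openai import OpenAI

    # Point the client at the local mlx-omni-server instead of api.openai.com.
    # The port (10240) is an assumption; use the one shown in the server's startup output.
    client = OpenAI(base_url="http://localhost:10240/v1", api_key="not-needed")

    # Print the model IDs the local server exposes.
    for model in client.models.list():
        print(model.id)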

Usage

Run the script from the command line, providing a prompt as an argument:

uv run main.py "What is the capital of Argentina?"
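
For reference, a minimal main.py along these lines would read the prompt from the command line and send it to the local server through the OpenAI Python client. This is a sketch of the approach described above, not the repository's actual script; the base URL, port, and model name are assumptions:

    import sys

    from openai import OpenAI

    # Connect to the local mlx-omni-server; the port and model name below are
    # assumptions and should be adjusted to match your local setup.
    client = OpenAI(base_url="http://localhost:10240/v1", api_key="not-needed")

    def main() -> None:
        if len(sys.argv) < 2:
            print('Usage: uv run main.py "<prompt>"')
            sys.exit(1)

        response = client.chat.completions.create(
            model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # example model ID
            messages=[{"role": "user", "content": sys.argv[1]}],
        )
        print(response.choices[0].message.content)

    if __name__ == "__main__":
        main()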

