
[Feature Request or Clarification] - Integration of Ollama or LiteLLM for Local Model Usage #268

Open
ARISTheGod opened this issue Jan 11, 2025 · 5 comments

Comments

@ARISTheGod

Description

I would like to know if it's possible to integrate Ollama or LiteLLM with Midscene.js to enable local model usage. These tools provide excellent support for running large language models (LLMs) locally, ensuring privacy, cost efficiency, and flexibility. However, I am unsure how to configure Midscene.js to work with these tools or if additional steps are required.

Why This Matters

  • Cost Efficiency: Avoids reliance on expensive cloud APIs by leveraging local resources.
  • Flexibility: Tools like LiteLLM and Ollama support a wide range of models, including Llama2, Mistral, and CodeLlama.
  • Privacy: Running models locally ensures that sensitive data remains secure.

Clarification Questions

  1. Does Midscene.js currently support integration with LiteLLM or Ollama as a model provider?
  2. If so, could you provide guidance on how to configure it? For example:
    2.1. What should the yaml configuration look like for using a local LiteLLM proxy server (e.g., http://localhost:11434)?
    2.2. Are there any specific dependencies or setup steps required?
  3. If not supported yet, is there a roadmap or plan to include this functionality in the future?

I have reviewed the connectivity test project but couldn't find specific details about integrating these tools. Any guidance or clarification would be greatly appreciated.

Even though I couldn’t fully test or understand all the features, I want to express my admiration for this project. It’s clear that a lot of effort has gone into making Midscene.js an innovative and robust tool. The idea is truly impressive, and I believe it has the potential to become one of the most widely used tools.
As a free and open-source project, this is even more remarkable. Thank you for creating such an incredible tool!

@yuyutaotao
Collaborator

Hi @ARISTheGod,
Thanks for the feedback!

Midscene.js supports any LLM service that exposes an OpenAI-style interface, so Ollama and LiteLLM should also work properly.

You should configure these environment variables:

export OPENAI_BASE_URL="http://localhost:11434"
export MIDSCENE_MODEL_NAME='your-preferred-name'

You can find more configs here: https://midscenejs.com/model-provider.html
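
For Ollama specifically, a minimal sketch of the setup might look like the one below. The /v1 path, the placeholder API key, and the llama2 model name are assumptions based on Ollama's usual OpenAI-compatible setup, not requirements of Midscene:

# Pull a model locally first (llama2 is just an example)
ollama pull llama2

# Ollama usually serves its OpenAI-compatible API under the /v1 path
# and accepts any placeholder string as the API key (both assumptions).
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"
export MIDSCENE_MODEL_NAME="llama2"

If you go through a LiteLLM proxy instead, point OPENAI_BASE_URL at the address the proxy prints on startup; the default port differs between LiteLLM versions, so check its docs.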

@yuyutaotao
Collaborator

BTW, we currently find that open-source models do not work well with Midscene.

If you find some self-hosted models that could be suitable for Midscene, please let us know.

@afsarali-pg

afsarali-pg commented Jan 15, 2025

I am also looking for this Ollama local model integration.

@yuyutaotao
Collaborator

Stay tuned! We will provide the Ollama solution in the next few weeks!

I'll ping this issue when it is released.

@ARISTheGod
Author

Stay tuned! We will provide the Ollama solution in the next few weeks!

I'll ping this issue when it is released.

That's fantastic news! Thanks so much for the update.
