
[Bug] After deploying LobeChat, how do I make the configured LLM take effect? #5835

Open
lizhao-8202 opened this issue Feb 7, 2025 · 7 comments
Labels
🐛 Bug Something isn't working | 缺陷 ollama Relative to Ollama Provider and ollama models

Comments

@lizhao-8202

📦 Deployment environment

Docker

📌 Software version

LobeChat 1.51.3

💻 System environment

Other Linux

🌐 Browser

Chrome

🐛 Problem description

I deployed LobeChat locally. After logging in and changing the LLM for the system assistant and the language model, opening a new chat window and logging in again requires configuring everything all over again.

📷 Reproduction steps

1. In LobeChat's application settings, change the system assistant and the language model to the deepseek model served by Ollama.
2. Open LobeChat in a new browser or from another machine.
3. Start a chat: the model is still gpt-4o-mini rather than the configured deepseek.

🚦 Expected result

After setting the LLM model once, newly opened chat windows should use that model instead of falling back to gpt-4o-mini every time.

📝 Additional information

No response

@lizhao-8202 lizhao-8202 added the 🐛 Bug Something isn't working | 缺陷 label Feb 7, 2025
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Feb 7, 2025
@lobehubbot
Member

👀 @lizhao-8202

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


@dosubot dosubot bot added the ollama Relative to Ollama Provider and ollama models label Feb 7, 2025

dosubot bot commented Feb 7, 2025

@lizhao-8202 Hello! I'm Dosu, a bot that can help you solve problems, answer questions, and become a contributor. While you wait for a human maintainer, I'll do my best to help you.

I found a similar issue that might be relevant to your problem. In a previous discussion, it was noted that setting the DEFAULT_AGENT_CONFIG and OLLAMA_MODEL_LIST environment variables did not correctly display the default model. The solution was to ensure that provider=ollama is included in the configuration [1].

Additionally, there was a closed issue where the default assistant model setting was not effective and always reverted to GPT-3.5 Turbo. This was resolved by a pull request that fixed the telemetry preference modal and default agent config error [2].

You might want to check if your environment variables are correctly set and if there are any updates or patches available that address this issue in your version of Lobe Chat.
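
For illustration, a minimal sketch of how those variables might be combined in a Docker invocation (the Ollama address is a placeholder, and the exact syntax should be verified against the LobeChat docs for your version):

# Sketch only: <ollama-host> is a placeholder for your own Ollama
# endpoint; note the lowercase provider id, per the suggestion above.
docker run -d -p 3210:3210 \
  -e ENABLED_OLLAMA=1 \
  -e OLLAMA_PROXY_URL=http://<ollama-host>:11434 \
  -e OLLAMA_MODEL_LIST=deepseek-r1:14b \
  -e DEFAULT_AGENT_CONFIG='provider=ollama;model=deepseek-r1:14b' \
  lobehub/lobe-chat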


@lizhao-8202
Author

On startup I added DEFAULT_AGENT_CONFIG='provider=ollama;model=deepseek-r1:14b', ENABLED_OLLAMA=1, and OLLAMA_PROXY_URL=http://<ollama service IP>:<port>. Two problems remain:
1. When chatting, I still have to enter an API key.
2. In the session management view, the Ollama service address shows as http://127.0.0.1:11434, not the value I set via OLLAMA_PROXY_URL.

Also, to avoid confusion: by default I would like the model picker on the chat page to show only the model I configured. Is there a parameter for that as well?
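
On that last point, one possible approach (a sketch only, not verified against this LobeChat version: it assumes the model-list syntax accepts -all to hide the built-in models and +<model> to add one back, and that ENABLED_OPENAI=0 disables the OpenAI provider entirely):

# Hypothetical: hide all default models and expose only the configured
# deepseek model; verify the -all/+model syntax in the LobeChat
# environment-variable docs before relying on it.
docker run -d -p 3210:3210 \
  -e ENABLED_OPENAI=0 \
  -e ENABLED_OLLAMA=1 \
  -e OLLAMA_PROXY_URL=http://<ollama-host>:<port> \
  -e OLLAMA_MODEL_LIST='-all,+deepseek-r1:14b' \
  -e DEFAULT_AGENT_CONFIG='provider=ollama;model=deepseek-r1:14b' \
  lobehub/lobe-chat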


@ZhengJiaCheng1

docker run -d -p 3210:3210 -e OLLAMA_HOST=http://10.168.1.100:11434 -e ENABLED_OPENAI=0 -e OLLAMA_MODEL_LIST=deepseek-r1:32b -e DEFAULT_AGENT_CONFIG='model=deepseek-r1:32b;provider=Ollama' lobehub/lobe-chat

With the command above, the assistant's default model cannot be set; it still defaults to openai.
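
An untested variant that applies the earlier suggestion: the working value quoted above uses a lowercase provider=ollama, and the variable mentioned earlier in the thread for a remote Ollama endpoint is OLLAMA_PROXY_URL rather than OLLAMA_HOST, so the command might need to read:

# Untested: lowercase the provider id and swap OLLAMA_HOST for
# OLLAMA_PROXY_URL, the variable used earlier in this thread.
docker run -d -p 3210:3210 \
  -e ENABLED_OPENAI=0 \
  -e ENABLED_OLLAMA=1 \
  -e OLLAMA_PROXY_URL=http://10.168.1.100:11434 \
  -e OLLAMA_MODEL_LIST=deepseek-r1:32b \
  -e DEFAULT_AGENT_CONFIG='model=deepseek-r1:32b;provider=ollama' \
  lobehub/lobe-chat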

