[Bug] After deploying LobeChat, how do I make the configured LLM take effect? #5835
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
📦 Deployment environment
Docker

📌 Software version
LobeChat 1.51.3

💻 System environment
Other Linux

🌐 Browser
Chrome

🐛 Question description
LobeChat is deployed locally. After logging in and changing the LLM for the system assistant and the language model, opening a new session window and logging in again requires setting everything up once more.

📷 Reproduction steps
1. In LobeChat's application settings, set the system assistant and language model to the DeepSeek model served by Ollama.
2. Open a new browser, or access LobeChat from another machine.
3. Start a session: the model is still gpt-4o-mini rather than the configured DeepSeek model.

🚦 Expected results
After setting the LLM model once, newly opened session windows should use the previously configured model instead of reverting to gpt-4o-mini every time.

📝 Supplementary information
No response
Hello @lizhao-8202! I'm Dosu, a bot that can help you solve problems, answer your questions, and guide you toward becoming a contributor. While you wait for a human maintainer, I'll do my best to help.

I found a similar issue that might be relevant to your problem. In a previous discussion, it was noted that setting the `DEFAULT_AGENT_CONFIG` environment variable is what controls the default assistant model [1].

Additionally, there was a closed issue where the default assistant model setting was not effective and always reverted to GPT-3.5 Turbo. This was resolved by a pull request that fixed the telemetry preference modal and a default agent config error [2].

You might want to check whether your environment variables are set correctly, and whether any updates or patches for your version of Lobe Chat address this issue.

To continue talking to Dosu, mention @dosu.
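For reference, `DEFAULT_AGENT_CONFIG` takes semicolon-separated `key=value` pairs (nested fields use dot notation, as I understand the LobeChat environment-variable docs). A minimal sketch of a deployment that defaults the assistant to an Ollama model; the host, port, and model name here are placeholders, not values confirmed by this issue:

```bash
# Hedged sketch: server-side defaults for the assistant.
# Keys inside DEFAULT_AGENT_CONFIG are semicolon-separated.
docker run -d -p 3210:3210 \
  -e ENABLED_OLLAMA=1 \
  -e OLLAMA_PROXY_URL=http://<ollama-host>:11434 \
  -e DEFAULT_AGENT_CONFIG='model=deepseek-r1:14b;provider=ollama' \
  lobehub/lobe-chat
```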
On startup I added `DEFAULT_AGENT_CONFIG='provider=ollama;model=deepseek-r1:14b'`, `ENABLED_OLLAMA=1`, and `OLLAMA_PROXY_URL=http://<ollama-service-ip>:<port>`. Two problems remain.

Also, to avoid confusion, I'd like the model picker on the session page to show only the models set by my parameters by default. Is there a parameter for that as well?
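If it helps: as I understand LobeChat's model-list syntax, each provider has a `*_MODEL_LIST` variable where `-all` clears the built-in list and `+<id>` re-adds specific models, so something like the following should leave only the configured model in the picker (a sketch, not verified against 1.51.3):

```bash
# Hedged sketch: restrict the session-page model picker.
# "-all" removes the provider's default models; "+<id>" adds one back.
docker run -d -p 3210:3210 \
  -e ENABLED_OPENAI=0 \
  -e ENABLED_OLLAMA=1 \
  -e OLLAMA_PROXY_URL=http://<ollama-service-ip>:<port> \
  -e OLLAMA_MODEL_LIST='-all,+deepseek-r1:14b' \
  lobehub/lobe-chat
```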
`docker run -d -p 3210:3210 -e OLLAMA_HOST=http://10.168.1.100:11434 -e ENABLED_OPENAI=0 -e OLLAMA_MODEL_LIST=deepseek-r1:32b -e DEFAULT_AGENT_CONFIG='model=deepseek-r1:32b;provider=Ollama' lobehub/lobe-chat`

With the above command, the assistant's default model cannot be set; it still defaults to OpenAI.
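Two things in that command look suspect to me, though I haven't verified them against 1.51.3: LobeChat reads the Ollama endpoint from `OLLAMA_PROXY_URL` (`OLLAMA_HOST` is the Ollama server's own variable), and provider ids in `DEFAULT_AGENT_CONFIG` are lowercase, so `provider=Ollama` may not match. A sketch with both changed:

```bash
# Hedged sketch: lowercase provider id, and OLLAMA_PROXY_URL in place of OLLAMA_HOST.
docker run -d -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://10.168.1.100:11434 \
  -e ENABLED_OPENAI=0 \
  -e OLLAMA_MODEL_LIST=deepseek-r1:32b \
  -e DEFAULT_AGENT_CONFIG='model=deepseek-r1:32b;provider=ollama' \
  lobehub/lobe-chat
```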