
AttributeError: 'Body' object has no attribute 'llm' #52

Open
4 tasks done
Dzz2004 opened this issue Oct 7, 2024 · 0 comments

Comments


Dzz2004 commented Oct 7, 2024

Checked other resources

  • I searched the Codefuse documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in Codefuse-Repos rather than my code.
  • I added a very descriptive title to this issue.

System Info

Windows

Code Version

Latest Release

Description

When trying to connect a local chatglm2-6b model, entering any question produces no output and the terminal shows the error below. The project's default port is 8888; since I am running fastchat, I changed it to 8000, and I also changed the model port to 21002 (the model worker port in fastchat).

Is this problem caused by the local model not being connected?
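To rule out a connection problem first, a quick probe of the API port may help. A minimal sketch, assuming the ports described above (8000 for the API server, as reconfigured) and that what listens on 8000 is FastChat's OpenAI-compatible server, which exposes `/v1/models`:

```python
import json
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 3.0):
    """Return the decoded response body, or None if the endpoint is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, OSError):
        return None

# FastChat's OpenAI-compatible server (assumed to be on port 8000) lists its
# registered models at /v1/models; an empty "data" list means the server is up
# but no model worker has registered with the controller.
body = probe("http://127.0.0.1:8000/v1/models")
if body is None:
    print("port 8000: nothing is listening -- the API server is not up")
else:
    print("port 8000 models:", json.loads(body).get("data"))
```

If nothing is listening, the model is indeed not connected; if models are listed but the chatbot still fails, the problem is inside the chatbot's config handling rather than the connection.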

Example Code

The problem seems to occur around here:

```python
def _fastapi_stream2generator(self, response: StreamingResponse, as_json: bool = False):
    '''Convert the StreamingResponse returned by the view functions in api.py into a synchronous generator.'''
    try:
        loop = asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()

    try:
        for chunk in iter_over_async(response.body_iterator, loop):
            if as_json and chunk:
                yield json.loads(chunk)
            elif chunk.strip():
                yield chunk
    except Exception:
        logger.error(traceback.format_exc())
```
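For what it's worth, the `'Body' object has no attribute 'llm'` message typically appears when a FastAPI view function is invoked directly as a Python call, so a parameter declared as `llm_config: LLMConfig = Body(...)` keeps the `Body` sentinel as its default value instead of receiving a real config. A minimal stdlib-only sketch of that failure mode (the `Body` and `LLMConfig` classes here are stand-ins, not the real fastapi/muagent types):

```python
class Body:
    """Stand-in for fastapi.params.Body: a sentinel default, not real data."""
    def __init__(self, default=None):
        self.default = default

class LLMConfig:
    """Stand-in for the real config object the chat code expects."""
    def __init__(self, llm):
        self.llm = llm

def chat(llm_config: LLMConfig = Body(None)):
    # Over HTTP, FastAPI replaces the Body sentinel with parsed request
    # data -- but not when chat() is called directly from Python code.
    return llm_config.llm

print(chat(LLMConfig(llm="chatglm2-6b")))  # direct call with a real config: fine
try:
    chat()  # direct call without arguments: the Body sentinel leaks through
except AttributeError as e:
    print(e)  # 'Body' object has no attribute 'llm'
```

If that diagnosis holds here, the caller reaching `chat_iterator` is passing the unfilled default along instead of a constructed `LLMConfig`.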

Error Message and Stack Trace (if applicable)

```
2024-10-07 22:00:10,718 - _client.py[line:1038] - INFO: HTTP Request: GET http://127.0.0.1:7862/sdfiles/list "HTTP/1.1 200 OK"
2024-10-07 22:00:11,154 - _client.py[line:1038] - INFO: HTTP Request: GET http://127.0.0.1:7862/sdfiles/download?filename=&save_filename= "HTTP/1.1 200 OK"
2024-10-07 22:00:11.191 | DEBUG | webui.dialogue:dialogue_page:294 - prompt: 你好
2024-10-07 22:00:11.674 | ERROR | webui.utils:_fastapi_stream2generator:252 - Traceback (most recent call last):
  File "C:\Users\Lenovo\Desktop\codefuse-chatbot\examples\webui\utils.py", line 246, in _fastapi_stream2generator
    for chunk in iter_over_async(response.body_iterator, loop):
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\muagent\utils\server_utils.py", line 120, in iter_over_async
    done, obj = loop.run_until_complete(get_next())
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\asyncio\base_events.py", line 647, in run_until_complete
    return future.result()
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\muagent\utils\server_utils.py", line 115, in get_next
    obj = await ait.__anext__()
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\starlette\concurrency.py", line 63, in iterate_in_threadpool
    yield await anyio.to_thread.run_sync(_next, iterator)
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\anyio\_backends\_asyncio.py", line 2405, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\anyio\_backends\_asyncio.py", line 914, in run
    result = context.run(func, *args)
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\starlette\concurrency.py", line 53, in _next
    return next(iterator)
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\muagent\chat\base_chat.py", line 80, in chat_iterator
    model = getChatModelFromConfig(llm_config)
  File "C:\Users\Lenovo\.conda\envs\devopsgpt\lib\site-packages\muagent\llm_models\openai_model.py", line 117, in getChatModelFromConfig
    if llm_config and llm_config.llm and isinstance(llm_config.llm, LLM):
AttributeError: 'Body' object has no attribute 'llm'
```
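As a possible workaround until the root cause is fixed, the check at `openai_model.py` line 117 could use `getattr` with a default, so an object without an `.llm` attribute (like a leaked `Body` sentinel) fails the check cleanly instead of raising `AttributeError`. A hedged sketch, not the actual muagent code (`LLM`, `get_llm`, and the config classes here are placeholders):

```python
class LLM:
    """Placeholder for muagent's LLM type."""

def get_llm(llm_config):
    # getattr with a default tolerates objects (like a leaked FastAPI Body
    # sentinel) that have no .llm attribute at all.
    llm = getattr(llm_config, "llm", None)
    if isinstance(llm, LLM):
        return llm
    raise ValueError("llm_config does not carry a valid LLM instance")

class GoodConfig:
    llm = LLM()

class BadBody:  # mimics the leaked Body object: no .llm attribute
    pass

print(type(get_llm(GoodConfig())).__name__)  # LLM
try:
    get_llm(BadBody())
except ValueError as e:
    print(e)
```

This only turns the crash into a clearer error; the real fix is to make sure the caller constructs and passes an actual `LLMConfig` rather than relying on the view function's `Body` default.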
