
[Bug]: astream chat with tools returns empty response #17978

Open
alireza-sheikh opened this issue Mar 2, 2025 · 3 comments
Labels
bug Something isn't working triage Issue needs to be triaged/prioritized

Comments

@alireza-sheikh

Bug Description

When I run this code I expect to get the response chunk by chunk, but the response is empty. The non-streaming achat_with_tools method works properly with the same setup. (I tried gpt-4o and gpt-4o-mini.)

self.oracle: FunctionCallingLLM

response = await self.oracle.astream_chat_with_tools(
    tools=list(self.toolset.values()),
    chat_history=chat_history,
    tool_choice="required",
)
async for chunk in response:
    print(chunk.delta)
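One possibility worth ruling out (not confirmed in this thread): with tool_choice="required" the model answers with a tool call rather than text, and on tool-call turns the per-chunk text delta is typically an empty string, so printing chunk.delta shows nothing even though the stream is working. A self-contained sketch of that pattern, using stub types rather than the real llama_index classes:

```python
# Hypothetical sketch, NOT the llama_index API: Chunk and
# fake_tool_call_stream are stand-ins to illustrate the pattern.
import asyncio
from dataclasses import dataclass, field

@dataclass
class Chunk:
    delta: str = ""  # text delta; empty on tool-call turns
    tool_call_deltas: list = field(default_factory=list)  # partial tool-call args

async def fake_tool_call_stream():
    # Simulates a tool-call response: no text, only tool-call fragments.
    for part in ('{"city": ', '"Paris"}'):
        yield Chunk(delta="", tool_call_deltas=[part])

async def consume():
    text, tool_parts = "", []
    async for chunk in fake_tool_call_stream():
        text += chunk.delta
        tool_parts.extend(chunk.tool_call_deltas)
    return text, "".join(tool_parts)

text, args = asyncio.run(consume())
print(repr(text))  # '' -- the text stream is empty
print(args)        # {"city": "Paris"}
```

If this is what is happening, the fix is to read the tool-call data off the final response instead of (or in addition to) the text deltas.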

Version

0.12.10

Steps to Reproduce

I've made my workflow agent with the below guideline.
https://docs.llamaindex.ai/en/stable/examples/workflow/function_calling_agent/

Relevant Logs/Tracebacks

@alireza-sheikh alireza-sheikh added bug Something isn't working triage Issue needs to be triaged/prioritized labels Mar 2, 2025

dosubot bot commented Mar 2, 2025

Hey @alireza-sheikh! I'm here to help you with any issues you're facing. I can assist with bug fixes, answer questions, and guide you on how to contribute. Let's tackle this together!

The issue of astream_chat_with_tools returning an empty response might be related to the configuration of the MockLLM instance. If the MockLLM is initialized with an empty list of responses, it will result in an empty response from the method. This behavior is demonstrated in the empty_calculator_agent fixture, where a MockLLM with no responses leads to an empty output [1].

To address this, ensure that the MockLLM or the LLM you are using is properly configured with responses. Additionally, verify that the tools and chat_history are correctly set up and that the tool_choice parameter is appropriately configured. If the issue persists, consider checking the specific implementation details of the FunctionCallingLLM and the tools being used.
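Dosu's point about a mock with no configured responses can be sketched with a self-contained stub (StubMockLLM below is hypothetical, not the real llama_index MockLLM): an LLM whose response list is empty streams zero chunks, which looks identical to an "empty response" from the caller's side.

```python
# Stub illustrating the empty-responses failure mode; not the llama_index API.
import asyncio

class StubMockLLM:
    def __init__(self, responses):
        self.responses = responses

    async def astream_chat(self, chat_history):
        # Yields one chunk per configured response; nothing if the list is empty.
        for text in self.responses:
            yield text

async def collect(llm):
    return [chunk async for chunk in llm.astream_chat([])]

empty = asyncio.run(collect(StubMockLLM([])))
full = asyncio.run(collect(StubMockLLM(["hi", "!"])))
print(empty)  # []
print(full)   # ['hi', '!']
```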

To continue talking to Dosu, mention @dosu.


@alireza-sheikh
Author

alireza-sheikh commented Mar 2, 2025

@dosu
This is how I create the oracle variable. It uses the OpenAI object directly, so I don't think the problem is there.

from typing import Any

from llama_index.core.llms.function_calling import FunctionCallingLLM
from llama_index.llms.openai import OpenAI

class LlamaLLMFactory:
    """Factory class to create and configure LLM instances for different tasks."""

    def __init__(self):
        self._fn_cache = {}
        self._cache = {}

    def get_function_calling_llm(
        self, name: str, **kwargs: dict[str, Any]
    ) -> FunctionCallingLLM:
        """Return the requested function calling LLM instance.

        Args:
            name (str): The name of the LLM.
            **kwargs (dict[str, Any]): Additional keyword arguments.

        Returns:
            FunctionCallingLLM: The requested LLM instance.

        Raises:
            ValueError: If the requested LLM is not supported.

        """
        if name in self._fn_cache:
            return self._fn_cache[name]

        if name.startswith(("gpt", "o1", "o3")):
            # `settings` is a project-local config object holding the API key.
            llm_instance = OpenAI(
                api_key=settings.GPT_API_KEY, model=name, additional_kwargs=kwargs
            )
            self._fn_cache[name] = llm_instance
            return llm_instance

        raise ValueError(f"Unsupported LLM: {name}")

llm_factory = LlamaLLMFactory()
oracle = llm_factory.get_function_calling_llm("gpt-3.5-turbo", **llm_args)

@logan-markewich
Collaborator

Can you reproduce this in a Colab or similar? astream_chat_with_tools works fine for me.

@run-llama run-llama deleted a comment from dosubot bot Mar 2, 2025