reflect_on_tool_use=True KeyError content type is not string #5083

Closed
shenkers opened this issue Jan 16, 2025 · 2 comments

shenkers commented Jan 16, 2025

What happened?

I am using 0.4.2 and am trying to get some of the examples from the docs working. I was curious to add `reflect_on_tool_use=True` to the example described in the agents tutorial here: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html#using-tools, but I'm getting an error about message.content not being a string.

I get this error in the litellm server logs:

mambaforge/envs/litellm/lib/python3.12/site-packages/litellm/litellm_core_utils/prompt_templates/factory.py", line 3057, in custom_prompt
    if isinstance(message["content"], str):
                  ~~~~~~~^^^^^^^^^^^
KeyError: 'content'

which is also reflected/repeated in the autogen output:

 if isinstance(message["content"], str):\n                  
~~~~~~~^^^^^^^^^^^\nKeyError: \'content\'\n', 'type': None, 'param': None, 'code': '500'}}

If I remove `reflect_on_tool_use=True`, the tool output is printed directly and there is no error.

What did you expect to happen?

I was expecting an additional chat message to be added based on the output of the tool.

How can we reproduce it (as minimally and precisely as possible)?

I am running an Ollama model locally through LiteLLM; as I understand it, the server is OpenAI-compatible for the purposes of reproducing this.

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient

def get_model_client() -> OpenAIChatCompletionClient:  # type: ignore
    "Mimic OpenAI API using Local LLM Server."
    return OpenAIChatCompletionClient(
        model="ollama/llama3.2:1b-instruct-q4_K_M",
        api_key="NotRequiredSinceWeAreLocal",
        base_url="http://0.0.0.0:4000",
        model_capabilities={
            "json_output": False,
            "vision": False,
            "function_calling": True,
        },
    )


# Define a tool that searches the web for information.
async def web_search(query: str) -> str:
    """Find information on the web"""
    return "AutoGen is a programming framework for building multi-agent applications."


# Create an agent that uses the local model client.
model_client = get_model_client()

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[web_search],
    system_message="Use tools to solve tasks.",
    reflect_on_tool_use=True
)


async def assistant_run() -> None:
    response = await agent.on_messages(
        [TextMessage(content="Find information on AutoGen", source="user")],
        cancellation_token=CancellationToken(),
    )
    # print(response.inner_messages)
    print(response.chat_message)
    # Inspect the agent's inner state (assuming AssistantAgent.save_state()).
    print(await agent.save_state())

asyncio.run(assistant_run())

AutoGen version

0.4.2

Which package was this bug in

Core

Model used

ollama/llama3.2:1b-instruct-q4_K_M

Python version

3.12

Operating system

mac

Any additional info you think would be helpful for fixing this bug

I peeked into the code for AssistantAgent and it looks like it passes `_model_context.get_messages()` back to the LLM. From inspecting the AssistantAgent's inner state after the tool completes, I see that the context includes an AssistantMessage whose content is a list containing the tool call (not the expected string type), followed by a separate message with the tool response; see the sketch below.
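
For illustration, here is roughly the shape of that model context after the tool call completes, written out with the autogen_core message types (a hedged sketch: the id and argument strings are made up, and the exact field set may differ across versions):

from autogen_core import FunctionCall
from autogen_core.models import (
    AssistantMessage,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    UserMessage,
)

# Approximate contents of _model_context.get_messages() after the tool runs:
context_messages = [
    UserMessage(content="Find information on AutoGen", source="user"),
    # The assistant turn: content is a list of FunctionCall objects, not a str.
    AssistantMessage(
        content=[
            FunctionCall(
                id="call_1",  # illustrative id
                name="web_search",
                arguments='{"query": "AutoGen"}',
            )
        ],
        source="assistant",
    ),
    # The tool result that gets sent back on the reflection round-trip.
    FunctionExecutionResultMessage(
        content=[
            FunctionExecutionResult(
                content="AutoGen is a programming framework for building multi-agent applications.",
                call_id="call_1",
            )
        ]
    ),
]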

ekzhu (Collaborator) commented Jan 16, 2025

@shenkers It looks like there is some issue with LiteLLM. AutoGen's OpenAIChatCompletionClient sends out messages without a `content` key when the message is an assistant message with tool calls, since it uses the openai client under the hood.

You can see from the openai library that the `content` field is optional when `function_call` or `tool_calls` are specified:

https://github.com/openai/openai-python/blob/d9c966dea77fa3493114865a7f785f3134f1cc1e/src/openai/types/chat/chat_completion_assistant_message_param.py#L46-L50
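
Concretely, the assistant turn that goes over the wire can legitimately look like this (a sketch with illustrative values), and it is the missing "content" key that LiteLLM's unconditional message["content"] lookup trips over:

# Valid per the OpenAI chat schema: no "content" key when tool_calls is present.
assistant_turn = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "web_search",
                "arguments": '{"query": "AutoGen"}',
            },
        }
    ],
}

# LiteLLM's custom_prompt() then does `isinstance(message["content"], str)`
# without a .get() guard, which raises KeyError: 'content'.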

I suggest you file an issue with LiteLLM.

ekzhu (Collaborator) commented Jan 16, 2025

You can use Ollama without LiteLLM, for example through Ollama's built-in OpenAI-compatible endpoint (a sketch follows below). We are also adding an Ollama client (#3817), so you can use that once it is done.
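
For example, you can point OpenAIChatCompletionClient at Ollama's OpenAI-compatible endpoint directly (a minimal sketch, assuming a local Ollama server on its default port; the model tag is the one from the repro minus the "ollama/" prefix):

from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="llama3.2:1b-instruct-q4_K_M",  # bare Ollama model tag
    api_key="ollama",  # placeholder; Ollama does not validate the key
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    model_capabilities={
        "json_output": False,
        "vision": False,
        "function_calling": True,
    },
)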

ekzhu closed this as completed Jan 16, 2025