What happened?
I am using 0.4.2 and am trying to get some of the examples from the docs working. I was curious to add reflect_on_tool_use to the example described in the agents tutorial here: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html#using-tools , but I'm getting an error about message.content not being a string.
I get this error in the litellm server logs:
mambaforge/envs/litellm/lib/python3.12/site-packages/litellm/litellm_core_utils/prompt_templates/factory.py", line 3057, in custom_prompt
    if isinstance(message["content"], str):
                  ~~~~~~~^^^^^^^^^^^
KeyError: 'content'
which is also reflected/repeated in the autogen output.
If I remove reflect_on_tool_use=True, I see the tool output printed directly and there is no error.
What did you expect to happen?
I was expecting an additional chat message to be added based on the output of the tool.
How can we reproduce it (as minimally and precisely as possible)?
I am running an ollama model locally through litellm, but from what I understand it is interoperable with OpenAI for the purpose of reproducing.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


def get_model_client() -> OpenAIChatCompletionClient:  # type: ignore
    """Mimic the OpenAI API using a local LLM server."""
    return OpenAIChatCompletionClient(
        model="ollama/llama3.2:1b-instruct-q4_K_M",
        api_key="NotRequiredSinceWeAreLocal",
        base_url="http://0.0.0.0:4000",
        model_capabilities={
            "json_output": False,
            "vision": False,
            "function_calling": True,
        },
    )


# Define a tool that searches the web for information.
async def web_search(query: str) -> str:
    """Find information on the web"""
    return "AutoGen is a programming framework for building multi-agent applications."


# Create an agent that uses the local model client defined above.
model_client = get_model_client()
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[web_search],
    system_message="Use tools to solve tasks.",
    reflect_on_tool_use=True,
)


async def assistant_run() -> None:
    response = await agent.on_messages(
        [TextMessage(content="Find information on AutoGen", source="user")],
        cancellation_token=CancellationToken(),
    )
    # print(response.inner_messages)
    print(response.chat_message)
    # print(await agent.pit())


asyncio.run(assistant_run())
AutoGen version
0.4.2
Which package was this bug in
Core
Model used
ollama/llama3.2:1b-instruct-q4_K_M
Python version
3.12
Operating system
mac
Any additional info you think would be helpful for fixing this bug
I peeked into the code for AssistantAgent and it looks like it's passing _model_context.get_messages() back to the llm, and from inspecting the AssistantAgent's inner state after the tool completes I see that the context includes an AssistantMessage has a content that is a list of the tool-call and tool-response (not the expected string type).
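A rough inspection sketch of that state is below; it relies on the private _model_context attribute mentioned above and assumes get_messages() is a coroutine, so treat it as a debugging aid rather than a supported API:

# Debugging sketch only: print the type of each message (and of its content)
# held in the agent's model context after the tool call completes.
async def dump_context(agent: AssistantAgent) -> None:
    for msg in await agent._model_context.get_messages():
        # After a tool call, the AssistantMessage content is a list of
        # tool-call objects rather than the plain str most messages carry.
        print(type(msg).__name__, "->", type(msg.content).__name__)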
@shenkers It looks like there is some issue with LiteLLM. AutoGen's OpenAIChatCompletionClient sends out messages without a content key when the message is an assistant message with tool calls, as it is using the openai client under the hood.
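To make that shape concrete: in the OpenAI chat format, an assistant turn that only carries tool calls can omit "content" entirely. A hand-written example of the kind of payload involved (the id and arguments below are made up for illustration):

# Illustrative payload only; the id and arguments are invented.
assistant_turn = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {"name": "web_search", "arguments": '{"query": "AutoGen"}'},
        }
    ],
    # No "content" key at all, so message["content"] raises KeyError,
    # while message.get("content") would return None.
}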