
Invalid finish_reason error in tool calls with anthropic endpoint #5075

Closed
ravishqureshi opened this issue Jan 16, 2025 · 6 comments · Fixed by #5085

Comments

@ravishqureshi

What happened?

Models such as Anthropic's support only sequential function calling. Thus, when we pass multiple tools while using an Anthropic model, the function calls for subsequent tools are not handled.

If we try to handle it ourselves, it fails again on finish_reason:
ValidationError: 1 validation error for CreateResult
finish_reason
Input should be 'stop', 'length', 'function_calls', 'content_filter' or 'unknown' [type=literal_error, input_value='end_turn', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/literal_error

What did you expect to happen?

Either AutoGen allows the developer to handle the sequential calls, or AutoGen handles sequential calls itself the way it handles parallel tool calls for OpenAI.

How can we reproduce it (as minimally and precisely as possible)?

# Imports assumed for AutoGen 0.4 (adjust module paths to your installed version).
from autogen_core import CancellationToken
from autogen_agentchat.messages import ToolCallRequestEvent, ToolCallSummaryMessage


async def run_task(agent, messages):
    """
    Sends messages to the agent and handles tool calls.

    Args:
        agent (AssistantAgent): The initialized AssistantAgent.
        messages (list): The conversation history.

    Returns:
        str: The final response after tool execution.
    """
    # Send messages to the agent
    cancellation_token = CancellationToken()

    response = await agent.on_messages(messages=messages, cancellation_token=cancellation_token)
    print("here 0")
    print(response)
    if response.inner_messages:
        print("here 0.1")
        for message in response.inner_messages:
            print(response.chat_message.content)
            print(type(response.chat_message.content))
            print("*********")
            if isinstance(message, ToolCallRequestEvent):
                # Feed the tool-call result back to the agent as a summary message.
                tool_result_message = ToolCallSummaryMessage(
                    content=str(response.chat_message.content),
                    source="tool",
                )
                response = await agent.on_messages(
                    messages=[tool_result_message], cancellation_token=cancellation_token
                )
                break
    print("here 4")
    return response.chat_message.content

The above code works for OpenAI. For Anthropic, I will need to remove the break and handle the subsequent function calls myself, which results in the failure above.

AutoGen version

0.4.2

Which package was this bug in

AgentChat

Model used

sonnet3.5

Python version

3.12

Operating system

No response

Any additional info you think would be helpful for fixing this bug

No response

@rysweet
Collaborator

rysweet commented Jan 16, 2025

Thanks for the report @ravishqureshi - we will have a look and think about the right approach. Feel free to also consider submitting a PR.

@ekzhu ekzhu changed the title Sequential Function calling is not supported Error in tool calls with anthropic endpoint Jan 16, 2025
@ekzhu ekzhu changed the title Error in tool calls with anthropic endpoint Invalid finish_reason error in tool calls with anthropic endpoint Jan 16, 2025
@ekzhu
Collaborator

ekzhu commented Jan 16, 2025

@ravishqureshi the AssistantAgent is designed to keep each turn limited to 1 model client call, and 2 if reflect_on_tool_use=True. Consider the following if you want to execute multiple tools:

  1. Call the same agent again if the response.chat_message is a ToolCallSummaryMessage (a minimal sketch of this follows below). This could be an orchestration decision by the team, e.g., using a custom selector function in SelectorGroupChat, or just use a Swarm team.
  2. Break the work up into multiple agents and spread the tools across them.
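
For illustration, here is a minimal sketch of option 1 as a plain loop rather than a team. The imports assume AutoGen 0.4 module paths, and the max_turns guard is my own addition for safety, not part of the API:

from autogen_core import CancellationToken
from autogen_agentchat.messages import ToolCallSummaryMessage


async def run_until_done(agent, messages, max_turns=5):
    # Call the agent, then keep calling it again for as long as the returned
    # chat_message is a ToolCallSummaryMessage (i.e., a tool was just executed).
    cancellation_token = CancellationToken()
    response = await agent.on_messages(messages=messages, cancellation_token=cancellation_token)
    for _ in range(max_turns):
        if not isinstance(response.chat_message, ToolCallSummaryMessage):
            break
        # Hand the tool-call summary back to the agent so it can take the next step.
        response = await agent.on_messages(
            messages=[response.chat_message], cancellation_token=cancellation_token
        )
    return response.chat_message.content

In a team setting, the same pattern can be expressed as a custom selector function in SelectorGroupChat that re-selects the agent whenever the last message is a ToolCallSummaryMessage, or by using a Swarm team as suggested above.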

@ekzhu
Collaborator

ekzhu commented Jan 16, 2025

PR #5085 fixes the finish_reason bug.

@ekzhu
Collaborator

ekzhu commented Jan 17, 2025

@ravishqureshi if you want to disable parallel tool calls altogether, set parallel_tool_calls=False in the OpenAIChatCompletionClient.
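
For reference, a minimal sketch of that setting, assuming the autogen_ext.models.openai import path; the model name, base_url, and api_key are placeholders for an OpenAI-compatible Anthropic endpoint and are not from this thread:

from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="claude-3-5-sonnet",            # placeholder model name
    base_url="https://your-endpoint/v1",  # placeholder OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",               # placeholder key
    parallel_tool_calls=False,            # disable parallel tool calls, per the suggestion above
)

Depending on the client version, a non-OpenAI model name may also require passing model capability information to the client.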

@ravishqureshi
Author

Thanks for the quick turnaround on the fix! I really love how quickly issues are fixed by MS. Keep up the good work.

Also, I actually wanted to allow "parallel_tool_calls" for models like Claude that only do sequential tool calls for now. But the fix above will allow me to handle these cases easily.

@ekzhu
Collaborator

ekzhu commented Jan 20, 2025

I actually wanted to allow "parallel_tool_calls" for models like Claude that only do sequential tool calls for now

Claude can support parallel tools: https://docs.anthropic.com/en/docs/build-with-claude/tool-use#multiple-tool-example

I am not familiar with it, but it seems to me that you may be able to get "consistent" agent behavior -- note that the underlying models are different -- from both Claude and OpenAI models.

Keep up the good work.

Thank you!
