Invalid finish_reason error in tool calls with anthropic endpoint #5075
Comments
Thanks for the report @ravishqureshi - we will have a look and think about the right approach. Feel free to also consider submitting a PR.
@ravishqureshi the AssistantAgent is designed to keep each turn limited to 1 model client call, and 2 if …
PR #5085 to fix the finish_reason bug
@ravishqureshi if you want to disable parallel tool calls altogether, set …
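The setting name is cut off above; a minimal sketch of what disabling parallel tool calls might look like, assuming the model client accepts an OpenAI-style parallel_tool_calls flag (the flag name and client shown here are assumptions, not confirmed by the truncated comment):

    from autogen_ext.models.openai import OpenAIChatCompletionClient

    # Assumed flag name, mirroring the OpenAI API's parallel_tool_calls parameter.
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        parallel_tool_calls=False,  # request at most one tool call per model turn
    )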
Thanks for the quick turnaround on the fix! I really appreciate how quickly issues are fixed by MS. Keep up the good work. I also wanted to allow "parallel_tool_calls" for models like Claude that, for now, only do sequential tool calls, and the fix above will let me handle these cases easily.
Claude can support parallel tool use: https://docs.anthropic.com/en/docs/build-with-claude/tool-use#multiple-tool-example. I am not familiar with it, but it seems you may be able to get "consistent" agent behavior -- note that the models are different -- from both Claude and OpenAI models.
Thank you!
What happened?
Models such as Anthropic's support only sequential function calling. Thus, when we pass multiple tools while using an Anthropic model, the function calling of the subsequent tools is not handled, and if we try to handle it ourselves, it fails again at finish_reason:
ValidationError: 1 validation error for CreateResult
finish_reason
Input should be 'stop', 'length', 'function_calls', 'content_filter' or 'unknown' [type=literal_error, input_value='end_turn', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/literal_error
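For context, CreateResult only accepts the literal values listed in the error, so a client that passes Anthropic's stop_reason (such as end_turn) through unchanged trips this validation. A minimal sketch of the kind of normalization a client wrapper could apply; the mapping below is an assumption for illustration, not the actual change in PR #5085:

    # Hypothetical mapping from Anthropic-style stop reasons to the finish_reason
    # literals accepted by autogen_core.models.CreateResult.
    _FINISH_REASON_MAP = {
        "end_turn": "stop",
        "stop_sequence": "stop",
        "max_tokens": "length",
        "tool_use": "function_calls",
    }

    def normalize_finish_reason(raw: str) -> str:
        # Fall back to "unknown", which CreateResult also accepts.
        return _FINISH_REASON_MAP.get(raw, "unknown")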
What did you expect to happen?
Either AutoGen allows the developer to handle the sequential calls themselves, or AutoGen handles sequential calls itself the way it handles parallel tool calls for OpenAI.
How can we reproduce it (as minimally and precisely as possible)?
async def run_task(agent, messages):
    """
    Sends messages to the agent and handles tool calls.
    """
    ...

The above code works for OpenAI. For Anthropic, I will need to remove the break and handle the subsequent function calling myself, which then fails with the error above.
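A rough sketch of the kind of loop this refers to, written against my understanding of the autogen_core 0.4 model-client API; the execute_tool coroutine, the tools list, and driving the model client directly instead of the agent are assumptions for illustration, not the actual fix:

    import json

    from autogen_core.models import (
        AssistantMessage,
        FunctionExecutionResult,
        FunctionExecutionResultMessage,
    )

    async def run_tool_loop(model_client, messages, tools, execute_tool):
        # Repeatedly call the model, executing any requested tools and feeding
        # the results back, until the model stops asking for tool calls.
        while True:
            result = await model_client.create(messages, tools=tools)
            if result.finish_reason != "function_calls":
                # "stop" (or, before the fix, the unmapped "end_turn") ends the loop.
                return result
            # result.content is a list of FunctionCall objects here.
            messages.append(AssistantMessage(content=result.content, source="assistant"))
            outputs = []
            for call in result.content:
                # execute_tool is a hypothetical coroutine: (name, args dict) -> str.
                output = await execute_tool(call.name, json.loads(call.arguments))
                outputs.append(FunctionExecutionResult(content=output, call_id=call.id))
            messages.append(FunctionExecutionResultMessage(content=outputs))

Nothing in this loop is Anthropic-specific once finish_reason is normalized to the documented literals, which is why the finish_reason fix is enough to make the sequential case handleable.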
AutoGen version
0.4.2
Which package was this bug in
AgentChat
Model used
sonnet3.5
Python version
3.12
Operating system
No response
Any additional info you think would be helpful for fixing this bug
No response