✅ Initial Checks
• I confirm that I’m using the latest version of Pydantic AI
📝 Description
Hello, Pydantic team 👋
I encountered an issue with my agent setup. Below is a minimal example of the error using a weather agent.
Expected Behavior
I want the LLM to print its plan to the user and then perform the tool calls within the same response. This is possible with the vanilla provider APIs. The easiest way to reproduce the issue is to use Claude with the example below.
I expect this to work with a single streaming response.
Expected Output:
User: Asks about the weather and the plan
<agent_run>
Agent: Prints plan
Agent: Executes tool call
Agent: Prints response based on tool calls
</agent_run>
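For reference, with the raw Anthropic Messages API a single assistant turn can contain both `text` and `tool_use` content blocks, so a client can print the plan and then dispatch the tools. A minimal sketch of that dispatch loop over a fabricated response payload (the block shapes follow Anthropic's documented content-block format; the tool implementation and values are dummies):

```python
# Fabricated assistant response: one text block (the plan) followed by
# two tool_use blocks, mirroring Anthropic's content-block format.
response_content = [
    {"type": "text", "text": "Plan: geocode London and Wiltshire, then fetch weather.\n"},
    {"type": "tool_use", "id": "toolu_1", "name": "get_lat_lng",
     "input": {"location_description": "London"}},
    {"type": "tool_use", "id": "toolu_2", "name": "get_lat_lng",
     "input": {"location_description": "Wiltshire"}},
]


def get_lat_lng(location_description: str) -> dict:
    """Dummy geocoder standing in for the real tool."""
    return {"lat": 51.1, "lng": -0.1}


TOOLS = {"get_lat_lng": get_lat_lng}

plan_chunks = []
tool_results = []
for block in response_content:
    if block["type"] == "text":
        plan_chunks.append(block["text"])  # stream the plan text to the user
    elif block["type"] == "tool_use":
        # Execute the requested tool and keep the result to feed back to the model
        result = TOOLS[block["name"]](**block["input"])
        tool_results.append({"tool_use_id": block["id"], "content": result})

print("".join(plan_chunks))
print(tool_results)
```

This is the behavior I expect the agent to perform on my behalf: text first, then the tool calls from the same turn.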
Actual Behavior
User: Asks about the weather and the plan
<agent_run>
Agent: Prints plan
END
</agent_run>
I can see in the logs that the LLM returned tool calls, but the agent never executed them.
❓ Question
How can I achieve this behavior with pydantic-ai? What am I doing wrong?
Thank you for your support! ❤️
📌 Example Code
from __future__ import annotations as _annotations

import asyncio
import os
from dataclasses import dataclass
from typing import Any

import logfire
from devtools import debug
from httpx import AsyncClient

from pydantic_ai import Agent, ModelRetry, RunContext
from pydantic_ai.models.anthropic import AnthropicModel

logfire.configure(send_to_logfire='if-token-present')


@dataclass
class Deps:
    """Dependencies required for the weather application."""

    client: AsyncClient
    weather_api_key: str | None
    geo_api_key: str | None


model = AnthropicModel('claude-3-5-sonnet-20240620')


async def get_lat_lng(ctx: RunContext[Deps], location_description: str) -> dict[str, float]:
    """Get the latitude and longitude of a location."""
    if ctx.deps.geo_api_key is None:
        return {'lat': 51.1, 'lng': -0.1}  # Dummy response (London)

    params = {'q': location_description, 'api_key': ctx.deps.geo_api_key}
    with logfire.span('calling geocode API', params=params) as span:
        r = await ctx.deps.client.get('https://geocode.maps.co/search', params=params)
        r.raise_for_status()
        data = r.json()
        span.set_attribute('response', data)

    if data:
        return {'lat': float(data[0]['lat']), 'lng': float(data[0]['lon'])}
    else:
        raise ModelRetry('Could not find the location')


async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str, Any]:
    """Get the weather at a location."""
    if ctx.deps.weather_api_key is None:
        return {'temperature': '21°C', 'description': 'Sunny'}  # Dummy response

    params = {
        'apikey': ctx.deps.weather_api_key,
        'location': f'{lat},{lng}',
        'units': 'metric',
    }
    with logfire.span('calling weather API', params=params) as span:
        r = await ctx.deps.client.get('https://api.tomorrow.io/v4/weather/realtime', params=params)
        r.raise_for_status()
        data = r.json()
        span.set_attribute('response', data)

    values = data['data']['values']
    code_lookup = {
        1000: 'Clear, Sunny', 1100: 'Mostly Clear', 1101: 'Partly Cloudy',
        1102: 'Mostly Cloudy', 1001: 'Cloudy', 2000: 'Fog', 2100: 'Light Fog',
        4000: 'Drizzle', 4001: 'Rain', 4200: 'Light Rain', 4201: 'Heavy Rain',
        5000: 'Snow', 5001: 'Flurries', 5100: 'Light Snow', 5101: 'Heavy Snow',
        6000: 'Freezing Drizzle', 6001: 'Freezing Rain', 6200: 'Light Freezing Rain',
        6201: 'Heavy Freezing Rain', 7000: 'Ice Pellets', 7101: 'Heavy Ice Pellets',
        7102: 'Light Ice Pellets', 8000: 'Thunderstorm',
    }
    return {
        'temperature': f'{values["temperatureApparent"]:0.0f}°C',
        'description': code_lookup.get(values['weatherCode'], 'Unknown'),
    }


weather_agent = Agent(
    model,
    system_prompt=(
        'Be concise, reply with one sentence. '
        'Use the `get_lat_lng` tool to get the latitude and longitude of the locations, '
        'then use the `get_weather` tool to get the weather.'
    ),
    tools=[get_lat_lng, get_weather],
    deps_type=Deps,
    retries=3,
)


async def main() -> None:
    """Main entry point for the weather application."""
    async with AsyncClient() as client:
        weather_api_key = os.getenv('WEATHER_API_KEY')
        geo_api_key = os.getenv('GEO_API_KEY')
        deps = Deps(client=client, weather_api_key=weather_api_key, geo_api_key=geo_api_key)
        async with weather_agent.run_stream(
            'What is the weather like in London and in Wiltshire? Please respond with plan and then start to execute it',
            deps=deps,
        ) as result:
            async for chunk in result.stream():
                print(chunk, end='', flush=True)
            print()  # New line after response
            debug(result)


if __name__ == '__main__':
    asyncio.run(main())
🔢 Python, Pydantic AI & LLM Client Version
0.0.26
❌ Error Output
pd-3.12➜ pydanticai_experimental git:(main) ✗ python weather.py
10:16:59.368 weather_agent run prompt=What is the weather like in London and in Wiltshire? Please respond with plan and then start to execute it
10:16:59.387 preparing model request params run_step=1
10:16:59.388 model request
Logfire project URL: https://logfire.pydantic.dev/worldinnovationsdepartment/pd-experimental
10:17:00.337 response stream structured
Certainly. Here's the plan and execution:

Plan:
1. Get latitude and longitude for London
2. Get latitude and longitude for Wiltshire
3. Use the coordinates to fetch weather information for both locations

Execution:
10:17:01.962 running tools=['get_lat_lng', 'get_lat_lng']
Problem:
• The agent prints the plan but does not execute the tool calls.
• The LLM returned tools to call, but they were not executed.
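Note that the log does show `running tools=['get_lat_lng', 'get_lat_lng']` after the text, so the tool calls were present in the model's response; the streaming loop simply ends once the text part finishes. A toy illustration of the difference between the two consumer behaviors, over a hypothetical mixed event stream (this is not pydantic-ai's actual internal event format, just a sketch of the symptom):

```python
# Hypothetical event stream: text deltas first, then tool-call events,
# mimicking a Claude turn that plans in text and then calls tools.
events = [
    ("text", "Here's the plan ..."),
    ("tool_call", "get_lat_lng"),
    ("tool_call", "get_lat_lng"),
]


def consume_text_only(stream):
    """Stops at the first non-text event -- the behavior I observe above."""
    out = []
    for kind, payload in stream:
        if kind != "text":
            break
        out.append(payload)
    return out, []  # tool calls are dropped


def consume_and_dispatch(stream):
    """Also collects the tool calls so they can be executed -- what I expect."""
    text, calls = [], []
    for kind, payload in stream:
        if kind == "text":
            text.append(payload)
        elif kind == "tool_call":
            calls.append(payload)
    return text, calls


text_only, dropped = consume_text_only(events)
text, calls = consume_and_dispatch(events)
```

With the first consumer the two `get_lat_lng` calls never reach the tools, which matches the truncated run above.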
Thanks in advance! ❤️