Plan and Execute is not working inside one Agent #985

Open
WorldInnovationsDepartment opened this issue Feb 25, 2025 · 5 comments

WorldInnovationsDepartment commented Feb 25, 2025

✅ Initial Checks
• I confirm that I’m using the latest version of Pydantic AI

📝 Description
Hello, Pydantic team 👋

I encountered an issue with my agent setup. Below is a minimal example of the error using a weather agent.


Expected Behavior

I want the LLM to stream its plan to the user before execution and then perform the tool calls within the same agent run. This is possible with the vanilla provider APIs. The easiest way to reproduce the issue is to use Claude with the example below.
I expect all of this within a single streamed response.

Expected Output:

User: Asks about the weather and the plan
<agent_run>
Agent: Prints plan
Agent: Executes tool call
Agent: Prints response based on tool calls
</agent_run>

Actual Behavior

User: Asks about the weather and the plan
<agent_run>
Agent: Prints plan
END
</agent_run>

I can see that the LLM returned tool calls, but the agent did not execute them for some reason.
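To make the expected behavior concrete, here is a minimal, library-free sketch of the loop the report describes (the model, tools, and message format are hypothetical stand-ins, not pydantic-ai internals): stream the model's text, execute any tool calls it returned, feed the results back, and repeat until no tools are requested.

```python
from dataclasses import dataclass, field

@dataclass
class ModelTurn:
    """One model response: streamed text plus zero or more tool calls."""
    text: str
    tool_calls: list[tuple[str, dict]] = field(default_factory=list)

def fake_model(history: list[tuple[str, str]]) -> ModelTurn:
    """Stand-in for the LLM: first turn plans and requests a tool, second answers."""
    if not any(role == 'tool' for role, _ in history):
        return ModelTurn(
            'Plan: 1) look up coordinates 2) fetch weather\n',
            tool_calls=[('get_lat_lng', {'location_description': 'London'})],
        )
    return ModelTurn('The weather in London is 21°C and sunny.')

TOOLS = {
    'get_lat_lng': lambda location_description: {'lat': 51.1, 'lng': -0.1},
}

def run_agent(prompt: str) -> str:
    history: list[tuple[str, str]] = [('user', prompt)]
    output = []
    while True:
        turn = fake_model(history)
        output.append(turn.text)          # stream the text part to the user
        history.append(('assistant', turn.text))
        if not turn.tool_calls:           # no tools requested -> run is done
            break
        for name, args in turn.tool_calls:
            result = TOOLS[name](**args)  # execute each requested tool
            history.append(('tool', f'{name} -> {result}'))
    return ''.join(output)

print(run_agent('What is the weather like in London? Respond with plan first.'))
```

In the reported behavior, the run ends after the first `turn.text` (the plan) even though `turn.tool_calls` is non-empty; the loop above never reaches the tool-execution step.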

❓ Question

How can I achieve this behavior with pydantic-ai? What am I doing wrong?
Thank you for your support! ❤️


📌 Example Code

from __future__ import annotations as _annotations

import asyncio
import os
from dataclasses import dataclass
from typing import Any

import logfire
from devtools import debug
from httpx import AsyncClient

from pydantic_ai import Agent, ModelRetry, RunContext
from pydantic_ai.models.anthropic import AnthropicModel

logfire.configure(send_to_logfire='if-token-present')

@dataclass
class Deps:
    """Dependencies required for the weather application."""
    client: AsyncClient
    weather_api_key: str | None
    geo_api_key: str | None

model = AnthropicModel('claude-3-5-sonnet-20240620')

async def get_lat_lng(ctx: RunContext[Deps], location_description: str) -> dict[str, float]:
    """Get the latitude and longitude of a location."""
    if ctx.deps.geo_api_key is None:
        return {'lat': 51.1, 'lng': -0.1}  # Dummy response (London)

    params = {'q': location_description, 'api_key': ctx.deps.geo_api_key}
    with logfire.span('calling geocode API', params=params) as span:
        r = await ctx.deps.client.get('https://geocode.maps.co/search', params=params)
        r.raise_for_status()
        data = r.json()
        span.set_attribute('response', data)

    if data:
        return {'lat': float(data[0]['lat']), 'lng': float(data[0]['lon'])}
    else:
        raise ModelRetry('Could not find the location')

async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str, Any]:
    """Get the weather at a location."""
    if ctx.deps.weather_api_key is None:
        return {'temperature': '21°C', 'description': 'Sunny'}  # Dummy response

    params = {
        'apikey': ctx.deps.weather_api_key,
        'location': f'{lat},{lng}',
        'units': 'metric',
    }
    with logfire.span('calling weather API', params=params) as span:
        r = await ctx.deps.client.get('https://api.tomorrow.io/v4/weather/realtime', params=params)
        r.raise_for_status()
        data = r.json()
        span.set_attribute('response', data)

    values = data['data']['values']
    code_lookup = {
        1000: 'Clear, Sunny', 1100: 'Mostly Clear', 1101: 'Partly Cloudy',
        1102: 'Mostly Cloudy', 1001: 'Cloudy', 2000: 'Fog', 2100: 'Light Fog',
        4000: 'Drizzle', 4001: 'Rain', 4200: 'Light Rain', 4201: 'Heavy Rain',
        5000: 'Snow', 5001: 'Flurries', 5100: 'Light Snow', 5101: 'Heavy Snow',
        6000: 'Freezing Drizzle', 6001: 'Freezing Rain', 6200: 'Light Freezing Rain',
        6201: 'Heavy Freezing Rain', 7000: 'Ice Pellets', 7101: 'Heavy Ice Pellets',
        7102: 'Light Ice Pellets', 8000: 'Thunderstorm',
    }
    return {
        'temperature': f'{values["temperatureApparent"]:0.0f}°C',
        'description': code_lookup.get(values['weatherCode'], 'Unknown'),
    }

weather_agent = Agent(
    model,
    system_prompt=(
        'Be concise, reply with one sentence. '
        'Use the `get_lat_lng` tool to get the latitude and longitude of the locations, '
        'then use the `get_weather` tool to get the weather.'
    ),
    tools=[get_lat_lng, get_weather],
    deps_type=Deps,
    retries=3,
)

async def main() -> None:
    """Main entry point for the weather application."""
    async with AsyncClient() as client:
        weather_api_key = os.getenv('WEATHER_API_KEY')
        geo_api_key = os.getenv('GEO_API_KEY')
        deps = Deps(client=client, weather_api_key=weather_api_key, geo_api_key=geo_api_key)

        async with weather_agent.run_stream(
            'What is the weather like in London and in Wiltshire? Please respond with plan and then start to execute it',
            deps=deps,
        ) as result:
            async for chunk in result.stream():
                print(chunk, end='', flush=True)
            print()  # New line after response
        debug(result)

if __name__ == '__main__':
    asyncio.run(main())

🔢 Python, Pydantic AI & LLM Client Version

0.0.26

❌ Error Output

pd-3.12➜  pydanticai_experimental git:(main) ✗ python weather.py
10:16:59.368 weather_agent run prompt=What is the weather like in London and in Wiltshire? Please respond with plan and then start to execute it
10:16:59.387   preparing model request params run_step=1
10:16:59.388   model request
Logfire project URL: https://logfire.pydantic.dev/worldinnovationsdepartment/pd-experimental
10:17:00.337     response stream structured
Certainly. Here's the plan and execution:

Plan:
1. Get latitude and longitude for London
2. Get latitude and longitude for Wiltshire
3. Use the coordinates to fetch weather information for both locations
Execution:
10:17:01.962       running tools=['get_lat_lng', 'get_lat_lng']

Problem:
• The agent prints the plan but does not execute the tool calls.
• The LLM returned tools to call, but they were not executed.

Thanks in advance! ❤️
@Kludex
Member

Kludex commented Feb 25, 2025

Please format your description.

If necessary, use the following to collapse content, and make it easier to read:

<details>
<summary>Details</summary>

</details>

@Kludex Kludex self-assigned this Feb 25, 2025
@WorldInnovationsDepartment
Author

@Kludex done

@Kludex
Member

Kludex commented Feb 25, 2025

@WorldInnovationsDepartment I talked to @dmontagu , and we are trying to solve that issue in #951. 🙏

@Kludex Kludex assigned dmontagu and unassigned Kludex Feb 25, 2025
@WorldInnovationsDepartment
Author

Thank you so much, everyone! Your framework is truly the best! 🚀

@Kludex
Member

Kludex commented Mar 7, 2025

`Agent.iter()` has been merged.

Is that enough here?
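For readers landing here: `Agent.iter()` lets the caller walk the run node by node, observing both streamed text and tool-execution steps. A rough, library-free sketch of that iteration pattern (the node classes below are hypothetical stubs, not the real pydantic_ai node types):

```python
class ModelRequestNode:   # model produced text (and possibly tool calls)
    def __init__(self, text): self.text = text

class CallToolsNode:      # agent is about to execute the requested tools
    def __init__(self, tool_names): self.tool_names = tool_names

class EndNode:            # run finished with a final result
    def __init__(self, output): self.output = output

def fake_agent_run():
    """Stand-in for iterating a real agent run node by node."""
    yield ModelRequestNode('Plan: get coordinates, then get weather.\n')
    yield CallToolsNode(['get_lat_lng', 'get_weather'])
    yield ModelRequestNode('It is 21°C and sunny in London.\n')
    yield EndNode('It is 21°C and sunny in London.')

events = []
for node in fake_agent_run():
    if isinstance(node, ModelRequestNode):
        events.append(('text', node.text))         # stream text to the user
    elif isinstance(node, CallToolsNode):
        events.append(('tools', node.tool_names))  # tools actually run here
    elif isinstance(node, EndNode):
        events.append(('end', node.output))
```

With this shape, the plan text, the tool executions, and the final answer are all visible in one run, which is the behavior the original report asked for.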
