Add LiteLLM support in line with AsyncOpenAI and with reproducible examples #318

Open
wants to merge 9 commits into main

Conversation

vmayoral

This PR is a follow-up to #125, which did not succeed in delivering LiteLLM support. See the conversation there for further context.

Given the recent architectural updates, no substantial code changes are needed to support LiteLLM. Accordingly, this PR delivers a working example under examples/model_providers to ease adoption for others. In particular, the example leverages the proposed AsyncOpenAI extensions, as well as a default generic configuration demonstrating basic use with SOTA models.
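For reference, the shape of such an example is sketched below: an AsyncOpenAI client pointed at a LiteLLM proxy, wrapped in OpenAIChatCompletionsModel. This is a sketch, not the PR's actual example; the proxy URL, API key, and model name are placeholder assumptions (port 4000 is LiteLLM's default), and running it requires a live LiteLLM proxy.

```python
from openai import AsyncOpenAI

from agents import Agent, OpenAIChatCompletionsModel, Runner

# Assumption: a LiteLLM proxy is running locally on its default port (4000).
# The API key below is a placeholder for whatever the proxy is configured with.
client = AsyncOpenAI(base_url="http://localhost:4000", api_key="litellm-key")

agent = Agent(
    name="Haiku agent",
    instructions="You write haikus about programming concepts.",
    # Route completions through the OpenAI-compatible LiteLLM endpoint;
    # the model name is whatever the proxy exposes (gpt-4o here as an example).
    model=OpenAIChatCompletionsModel(model="gpt-4o", openai_client=client),
)

result = Runner.run_sync(agent, "Write a haiku about recursion.")
print(result.final_output)
```

Because LiteLLM exposes an OpenAI-compatible endpoint, swapping the model string (e.g., to a Claude or Qwen model configured in the proxy) is the only change needed to reproduce the runs shown below.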

Empirical testing of proposed changes:

(env) alias@brain:~/Alias/research/openai-agents-python (main*) $ python3 examples/agent_patterns/litellm.py  # with gpt-4o
Code calls itself now,  
Infinite loops echoing,  
Depth finds its own end.

(env) alias@brain:~/Alias/research/openai-agents-python (main*) $ python3 examples/agent_patterns/litellm.py   # with claude-3-7
# Recursion Haiku

Function calls itself,
Diving deeper, then returns—
Loops within themselves.

(env) alias@brain:~/Alias/research/openai-agents-python (main*) $ python3 examples/agent_patterns/litellm.py    # with qwen2.5:14b
Function calls itself,
Depth of thought in endless loop,
Circle spins to start.

Signed-off-by: Víctor Mayoral Vilches <[email protected]>
In particular, with openai 1.66.5 the example fails with:
Traceback (most recent call last):
  File "/Users/alias/Alias/research/openai-agents-python/examples/agent_patterns/litellm.py", line 4, in <module>
    from agents import OpenAIChatCompletionsModel, Agent, Runner
  File "/Users/alias/Alias/research/openai-agents-python/src/agents/__init__.py", line 44, in <module>
    from .models.openai_chatcompletions import OpenAIChatCompletionsModel
  File "/Users/alias/Alias/research/openai-agents-python/src/agents/models/openai_chatcompletions.py", line 57, in <module>
    from openai.types.responses.response_usage import InputTokensDetails, OutputTokensDetails
ImportError: cannot import name 'InputTokensDetails' from 'openai.types.responses.response_usage' (/Users/alias/Alias/research/openai-agents-python/env/lib/python3.12/site-packages/openai/types/responses/response_usage.py). Did you mean: 'OutputTokensDetails'?

Signed-off-by: Víctor Mayoral Vilches <[email protected]>
@@ -7,7 +7,7 @@ requires-python = ">=3.9"
 license = "MIT"
 authors = [{ name = "OpenAI", email = "[email protected]" }]
 dependencies = [
-    "openai>=1.66.5",
+    "openai>=1.68.2",
Contributor

I think this should not be part of the PR, as it has nothing to do with the added example.

Author


In my local testing, it was necessary; otherwise the example wouldn't work with 1.66.5.

Happy to change it though if that gets this through.
