Add LiteLLM support in line with AsyncOpenAI and with reproducible examples #318
base: main
Conversation
In particular:

```
Traceback (most recent call last):
  File "/Users/alias/Alias/research/openai-agents-python/examples/agent_patterns/litellm.py", line 4, in <module>
    from agents import OpenAIChatCompletionsModel,Agent,Runner
  File "/Users/alias/Alias/research/openai-agents-python/src/agents/__init__.py", line 44, in <module>
    from .models.openai_chatcompletions import OpenAIChatCompletionsModel
  File "/Users/alias/Alias/research/openai-agents-python/src/agents/models/openai_chatcompletions.py", line 57, in <module>
    from openai.types.responses.response_usage import InputTokensDetails, OutputTokensDetails
ImportError: cannot import name 'InputTokensDetails' from 'openai.types.responses.response_usage' (/Users/alias/Alias/research/openai-agents-python/env/lib/python3.12/site-packages/openai/types/responses/response_usage.py). Did you mean: 'OutputTokensDetails'?
```

Signed-off-by: Víctor Mayoral Vilches <[email protected]>
```diff
@@ -7,7 +7,7 @@ requires-python = ">=3.9"
 license = "MIT"
 authors = [{ name = "OpenAI", email = "[email protected]" }]
 dependencies = [
-    "openai>=1.66.5",
+    "openai>=1.68.2",
```
I think this should not be part of the PR, as it has nothing to do with the added example.
In my local testing it was necessary; otherwise it wouldn't work with 1.66.5. Happy to change it, though, if that gets this through.
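For anyone who wants to verify this locally, here is a quick check along the lines of the traceback in the commit message above (the `>=1.68.2` threshold comes from this diff and my local testing; treat it as an assumption, not a documented minimum):

```python
# Quick check for the ImportError from the commit message: this symbol is
# missing from openai 1.66.5 but importable after the bump to >=1.68.2.
import openai

print(f"openai version: {openai.__version__}")

try:
    from openai.types.responses.response_usage import InputTokensDetails  # noqa: F401
    print("InputTokensDetails imports fine; the SDK's internal imports should work.")
except ImportError as exc:
    print(f"Reproduced the failure: {exc}")
```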
This PR is a follow-up to #125, which unsuccessfully attempted to deliver LiteLLM support. See the conversation there for further context.

Given the recent architectural updates, no relevant code changes are needed to support LiteLLM. Accordingly, this PR delivers a working example under `examples/model_providers` to ease adoption for others. In particular, the example leverages the proposed `AsyncOpenAI` extensions, as well as a default generic configuration demonstrating basic use with SOTA models. A minimal sketch of the pattern follows below.

Empirical testing of proposed changes:
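For orientation, here is a minimal sketch of the pattern the example relies on. The base URL, API key, and model name below are placeholders; it assumes a LiteLLM proxy serving an OpenAI-compatible endpoint on localhost:

```python
import asyncio

from openai import AsyncOpenAI
from agents import Agent, OpenAIChatCompletionsModel, Runner, set_tracing_disabled

# Assumption: a LiteLLM proxy is running locally and exposing an
# OpenAI-compatible endpoint; adjust base_url, api_key, and model to your setup.
client = AsyncOpenAI(base_url="http://localhost:4000", api_key="dummy-key")

# Tracing uploads expect an OpenAI API key, so disable them when pointing
# the SDK at a third-party endpoint.
set_tracing_disabled(True)

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    # Route chat completions through the LiteLLM proxy via the AsyncOpenAI client.
    model=OpenAIChatCompletionsModel(model="gpt-4o", openai_client=client),
)

async def main() -> None:
    result = await Runner.run(agent, "Say hello from behind a LiteLLM proxy.")
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

Because the proxy speaks the OpenAI wire protocol, `OpenAIChatCompletionsModel` works unchanged here, which is why the example requires no SDK code changes.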