Chunk tolerance annotation in streaming completion docs #5190
base: main
Conversation
@@ -1008,6 +1008,8 @@ class OpenAIChatCompletionClient(BaseOpenAIChatCompletionClient, Component[OpenA
client = ChatCompletionClient.load_component(config)
Note: When usage information is requested (see the `documentation <https://platform.openai.com/docs/api-reference/chat/streaming#chat/streaming-choices>`_) with the `create_stream` method, `max_consecutive_empty_chunk_tolerance` should be increased to permit the trailing empty chunk carrying the usage information. E.g. `completion_client.create_stream(..., max_consecutive_empty_chunk_tolerance=2, extra_create_args={"stream_options": {"include_usage": True}})`.
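For readers of the diff, here is a minimal sketch of the call the note describes. It is a sketch under assumptions, not an official example: the model name and prompt are placeholders, the imports assume autogen's 0.4-era package layout (`autogen_ext` / `autogen_core`), and `max_consecutive_empty_chunk_tolerance` is the parameter this PR documents.

```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Placeholder model; assumes OPENAI_API_KEY is set in the environment.
    client = OpenAIChatCompletionClient(model="gpt-4o")

    # A tolerance of 2 lets the trailing usage-only chunk (which has no
    # choices) pass the consecutive-empty-chunk check.
    stream = client.create_stream(
        messages=[UserMessage(content="Say hello.", source="user")],
        max_consecutive_empty_chunk_tolerance=2,
        extra_create_args={"stream_options": {"include_usage": True}},
    )
    async for chunk in stream:
        print(chunk, flush=True)


asyncio.run(main())
```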
Also, apologies for the confusion in my comment in a different issue. I think this should go in the API doc of the `create_stream` method in `BaseOpenAIChatCompletionClient`.
Why are these changes needed?
Minor documentation changes requested by @ekzhu in #5078.
The user guide recommends that users of the `create_stream` method (in the chat completion client classes) set the `include_usage` option to ensure the server sends final token usage when the completion streaming finishes. However, the final chunk sent by the server carrying the usage information is empty, and it trips a check for empty chunks in `_openai_client.py`. To retrieve usage without triggering this error, users can raise the chunk tolerance (`max_consecutive_empty_chunk_tolerance`) when invoking the streaming completion (see the sketch below). This PR adjusts the API docs and includes a suggested revision for the user guide as well.

Related issue number
#5078
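For completeness, a sketch of how the usage then reaches the caller, assuming (as in the autogen 0.4 API) that `create_stream` yields string deltas followed by a final `CreateResult` whose `usage` field carries the server-reported counts; the helper name is illustrative.

```python
from typing import AsyncIterator, Union

from autogen_core.models import CreateResult


async def print_stream_with_usage(
    stream: AsyncIterator[Union[str, CreateResult]],
) -> None:
    """Print deltas as they arrive, then the token usage from the final result."""
    async for item in stream:
        if isinstance(item, CreateResult):
            # With include_usage requested (and the tolerance raised so the
            # trailing empty chunk survives), usage holds the token counts.
            print(
                f"\nprompt_tokens={item.usage.prompt_tokens}, "
                f"completion_tokens={item.usage.completion_tokens}"
            )
        else:
            print(item, end="", flush=True)
```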
Checks