release: 1.69.0 #2260

Merged 7 commits on Mar 27, 2025
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "1.68.2"
+  ".": "1.69.0"
 }
4 changes: 3 additions & 1 deletion .stats.yml
@@ -1,2 +1,4 @@
 configured_endpoints: 82
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-5ad6884898c07591750dde560118baf7074a59aecd1f367f930c5e42b04e848a.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-6663c59193eb95b201e492de17dcbd5e126ba03d18ce66287a3e2c632ca56fe7.yml
+openapi_spec_hash: 7996d2c34cc44fe2ce9ffe93c0ab774e
+config_hash: 9351ea829c2b41da3b48a38c934c92ee
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,25 @@
 # Changelog
 
+## 1.69.0 (2025-03-27)
+
+Full Changelog: [v1.68.2...v1.69.0](https://github.com/openai/openai-python/compare/v1.68.2...v1.69.0)
+
+### Features
+
+* **api:** add `get /chat/completions` endpoint ([e6b8a42](https://github.com/openai/openai-python/commit/e6b8a42fc4286656cc86c2acd83692b170e77b68))
+
+
+### Bug Fixes
+
+* **audio:** correctly parse transcription stream events ([16a3a19](https://github.com/openai/openai-python/commit/16a3a195ff31f099fbe46043a12d2380c2c01f83))
+
+
+### Chores
+
+* add hash of OpenAPI spec/config inputs to .stats.yml ([515e1cd](https://github.com/openai/openai-python/commit/515e1cdd4a3109e5b29618df813656e17f22b52a))
+* **api:** updates to supported Voice IDs ([#2261](https://github.com/openai/openai-python/issues/2261)) ([64956f9](https://github.com/openai/openai-python/commit/64956f9d9889b04380c7f5eb926509d1efd523e6))
+* fix typos ([#2259](https://github.com/openai/openai-python/issues/2259)) ([6160de3](https://github.com/openai/openai-python/commit/6160de3e099f09c2d6ee5eeee4cbcc55b67a8f87))
+
 ## 1.68.2 (2025-03-21)
 
 Full Changelog: [v1.68.1...v1.68.2](https://github.com/openai/openai-python/compare/v1.68.1...v1.68.2)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.68.2"
+version = "1.69.0"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"
2 changes: 1 addition & 1 deletion src/openai/_models.py
@@ -721,7 +721,7 @@ def add_request_id(obj: BaseModel, request_id: str | None) -> None:
     cast(Any, obj).__exclude_fields__ = {*(exclude_fields or {}), "_request_id", "__exclude_fields__"}
 
 
-# our use of subclasssing here causes weirdness for type checkers,
+# our use of subclassing here causes weirdness for type checkers,
 # so we just pretend that we don't subclass
 if TYPE_CHECKING:
     GenericModel = BaseModel
4 changes: 2 additions & 2 deletions src/openai/_streaming.py
@@ -59,7 +59,7 @@ def __stream__(self) -> Iterator[_T]:
             if sse.data.startswith("[DONE]"):
                 break
 
-            if sse.event is None or sse.event.startswith("response."):
+            if sse.event is None or sse.event.startswith("response.") or sse.event.startswith('transcript.'):
                 data = sse.json()
                 if is_mapping(data) and data.get("error"):
                     message = None
@@ -161,7 +161,7 @@ async def __stream__(self) -> AsyncIterator[_T]:
             if sse.data.startswith("[DONE]"):
                 break
 
-            if sse.event is None or sse.event.startswith("response."):
+            if sse.event is None or sse.event.startswith("response.") or sse.event.startswith('transcript.'):
                 data = sse.json()
                 if is_mapping(data) and data.get("error"):
                     message = None
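The `_streaming.py` fix above widens the SSE event filter so that `transcript.`-prefixed events, emitted by transcription streaming, are JSON-decoded the same way `response.`-prefixed events already were. A minimal sketch of the predicate as a standalone helper (`should_parse_as_json` is an illustrative name, not SDK API):

```python
from typing import Optional


def should_parse_as_json(event: Optional[str]) -> bool:
    """Return True for SSE events whose `data` payload should be JSON-decoded.

    Mirrors the fixed condition: unnamed events, `response.*` events, and
    (after this release) `transcript.*` events all carry JSON payloads.
    """
    return (
        event is None
        or event.startswith("response.")
        or event.startswith("transcript.")
    )
```

Per the changelog entry, `transcript.*` events previously fell outside this branch, so transcription stream events were not parsed correctly.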
2 changes: 1 addition & 1 deletion src/openai/_utils/_transform.py
@@ -126,7 +126,7 @@ def _get_annotated_type(type_: type) -> type | None:
 def _maybe_transform_key(key: str, type_: type) -> str:
     """Transform the given `data` based on the annotations provided in `type_`.
 
-    Note: this function only looks at `Annotated` types that contain `PropertInfo` metadata.
+    Note: this function only looks at `Annotated` types that contain `PropertyInfo` metadata.
     """
     annotated_type = _get_annotated_type(type_)
     if annotated_type is None:
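For context on the corrected docstring: `PropertyInfo` metadata rides inside `typing.Annotated` so a Python-side parameter name can be mapped to its wire-format key. A toy reimplementation of the idea (a simplified stand-in on Python 3.9+; the SDK's real `PropertyInfo` carries more fields):

```python
from dataclasses import dataclass
from typing import Annotated, Optional, get_args, get_origin


@dataclass(frozen=True)
class PropertyInfo:
    """Simplified metadata carrier mapping a Python name to an API alias."""

    alias: Optional[str] = None


def maybe_transform_key(key: str, type_) -> str:
    """Return the API alias when `type_` is Annotated with a PropertyInfo alias,
    otherwise return `key` unchanged."""
    if get_origin(type_) is Annotated:
        # get_args(Annotated[T, m1, m2]) -> (T, m1, m2); metadata starts at index 1.
        for meta in get_args(type_)[1:]:
            if isinstance(meta, PropertyInfo) and meta.alias is not None:
                return meta.alias
    return key
```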
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 __title__ = "openai"
-__version__ = "1.68.2" # x-release-please-version
+__version__ = "1.69.0" # x-release-please-version
16 changes: 10 additions & 6 deletions src/openai/resources/audio/speech.py
@@ -53,7 +53,9 @@ def create(
         *,
         input: str,
         model: Union[str, SpeechModel],
-        voice: Literal["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"],
+        voice: Union[
+            str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
+        ],
         instructions: str | NotGiven = NOT_GIVEN,
         response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
         speed: float | NotGiven = NOT_GIVEN,
@@ -75,8 +77,8 @@ def create(
           `tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.
 
           voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
-              `coral`, `echo`, `fable`, `onyx`, `nova`, `sage` and `shimmer`. Previews of the
-              voices are available in the
+              `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer`, and
+              `verse`. Previews of the voices are available in the
               [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech#voice-options).
 
           instructions: Control the voice of your generated audio with additional instructions. Does not
@@ -142,7 +144,9 @@ async def create(
         *,
         input: str,
         model: Union[str, SpeechModel],
-        voice: Literal["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"],
+        voice: Union[
+            str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
+        ],
         instructions: str | NotGiven = NOT_GIVEN,
         response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
         speed: float | NotGiven = NOT_GIVEN,
@@ -164,8 +168,8 @@ async def create(
           `tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.
 
           voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
-              `coral`, `echo`, `fable`, `onyx`, `nova`, `sage` and `shimmer`. Previews of the
-              voices are available in the
+              `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer`, and
+              `verse`. Previews of the voices are available in the
               [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech#voice-options).
 
           instructions: Control the voice of your generated audio with additional instructions. Does not
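The signature change above relaxes `voice` from a closed `Literal` to `Union[str, Literal[...]]`: the documented IDs (now including `ballad` and `verse`) keep autocomplete and type-checker support, while arbitrary strings also pass so new server-side voices work without an SDK update. A small sketch of the documented set (the helper name is illustrative, not SDK API):

```python
# Voice IDs documented for audio.speech.create as of this release.
KNOWN_SPEECH_VOICES = frozenset({
    "alloy", "ash", "ballad", "coral", "echo", "fable",
    "onyx", "nova", "sage", "shimmer", "verse",
})


def is_documented_voice(voice: str) -> bool:
    """True when `voice` is in the documented set. Unknown strings are still
    sent to the API, which performs the authoritative validation."""
    return voice in KNOWN_SPEECH_VOICES
```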
16 changes: 12 additions & 4 deletions src/openai/resources/beta/realtime/sessions.py
@@ -65,7 +65,10 @@
         tool_choice: str | NotGiven = NOT_GIVEN,
         tools: Iterable[session_create_params.Tool] | NotGiven = NOT_GIVEN,
         turn_detection: session_create_params.TurnDetection | NotGiven = NOT_GIVEN,
-        voice: Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"] | NotGiven = NOT_GIVEN,
+        voice: Union[
+            str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
+        ]
+        | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -147,7 +150,8 @@
 
           voice: The voice the model uses to respond. Voice cannot be changed during the session
               once the model has responded with audio at least once. Current voice options are
-              `alloy`, `ash`, `ballad`, `coral`, `echo` `sage`, `shimmer` and `verse`.
+              `alloy`, `ash`, `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`,
+              `shimmer`, and `verse`.
 
           extra_headers: Send extra headers
 
@@ -227,7 +231,10 @@
         tool_choice: str | NotGiven = NOT_GIVEN,
         tools: Iterable[session_create_params.Tool] | NotGiven = NOT_GIVEN,
         turn_detection: session_create_params.TurnDetection | NotGiven = NOT_GIVEN,
-        voice: Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"] | NotGiven = NOT_GIVEN,
+        voice: Union[
+            str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
+        ]
+        | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -309,7 +316,8 @@
 
           voice: The voice the model uses to respond. Voice cannot be changed during the session
               once the model has responded with audio at least once. Current voice options are
-              `alloy`, `ash`, `ballad`, `coral`, `echo` `sage`, `shimmer` and `verse`.
+              `alloy`, `ash`, `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`,
+              `shimmer`, and `verse`.
 
           extra_headers: Send extra headers
 
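The signatures above all use the SDK's `NOT_GIVEN` sentinel (`voice: ... | NotGiven = NOT_GIVEN`), which distinguishes "caller omitted the argument" from "caller explicitly passed `None`". A minimal sketch of the pattern, using a simplified sentinel rather than the SDK's actual implementation:

```python
class NotGiven:
    """Sentinel meaning 'argument was omitted' (distinct from an explicit None)."""

    def __bool__(self) -> bool:
        return False

    def __repr__(self) -> str:
        return "NOT_GIVEN"


# Real code exposes a single shared instance so identity checks work.
NOT_GIVEN = NotGiven()


def build_body(**kwargs):
    """Drop NOT_GIVEN values so omitted parameters never reach the request body."""
    return {k: v for k, v in kwargs.items() if not isinstance(v, NotGiven)}
```

With this scheme `build_body(voice="verse", temperature=NOT_GIVEN)` serializes only `voice`, while an explicit `voice=None` is preserved and sent to the server.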
13 changes: 12 additions & 1 deletion src/openai/resources/responses/input_items.py
@@ -2,7 +2,7 @@
 
 from __future__ import annotations
 
-from typing import Any, cast
+from typing import Any, List, cast
 from typing_extensions import Literal
 
 import httpx
@@ -17,6 +17,7 @@
 from ..._base_client import AsyncPaginator, make_request_options
 from ...types.responses import input_item_list_params
 from ...types.responses.response_item import ResponseItem
+from ...types.responses.response_includable import ResponseIncludable
 
 __all__ = ["InputItems", "AsyncInputItems"]
 
@@ -47,6 +48,7 @@ def list(
         *,
         after: str | NotGiven = NOT_GIVEN,
         before: str | NotGiven = NOT_GIVEN,
+        include: List[ResponseIncludable] | NotGiven = NOT_GIVEN,
         limit: int | NotGiven = NOT_GIVEN,
         order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -64,6 +66,9 @@ def list(
 
           before: An item ID to list items before, used in pagination.
 
+          include: Additional fields to include in the response. See the `include` parameter for
+              Response creation above for more information.
+
           limit: A limit on the number of objects to be returned. Limit can range between 1 and
               100, and the default is 20.
 
@@ -94,6 +99,7 @@ def list(
                 {
                     "after": after,
                     "before": before,
+                    "include": include,
                     "limit": limit,
                     "order": order,
                 },
@@ -130,6 +136,7 @@ def list(
         *,
         after: str | NotGiven = NOT_GIVEN,
         before: str | NotGiven = NOT_GIVEN,
+        include: List[ResponseIncludable] | NotGiven = NOT_GIVEN,
         limit: int | NotGiven = NOT_GIVEN,
         order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -147,6 +154,9 @@ def list(
 
           before: An item ID to list items before, used in pagination.
 
+          include: Additional fields to include in the response. See the `include` parameter for
+              Response creation above for more information.
+
           limit: A limit on the number of objects to be returned. Limit can range between 1 and
               100, and the default is 20.
 
@@ -177,6 +187,7 @@ def list(
                 {
                     "after": after,
                     "before": before,
+                    "include": include,
                     "limit": limit,
                     "order": order,
                 },
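The new `include` argument joins the existing pagination parameters in the query dict built inside `list()`. A sketch of that query assembly, using plain `None` in place of the SDK's `NOT_GIVEN` sentinel (simplified; the real code routes the dict through `maybe_transform` with `input_item_list_params`):

```python
from typing import Any, Dict, List, Optional


def build_input_items_query(
    after: Optional[str] = None,
    before: Optional[str] = None,
    include: Optional[List[str]] = None,
    limit: Optional[int] = None,
    order: Optional[str] = None,
) -> Dict[str, Any]:
    """Assemble query parameters for the input-items list endpoint,
    omitting anything the caller left unset."""
    raw: Dict[str, Any] = {
        "after": after,
        "before": before,
        "include": include,
        "limit": limit,
        "order": order,
    }
    return {k: v for k, v in raw.items() if v is not None}
```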
24 changes: 12 additions & 12 deletions src/openai/resources/responses/responses.py
@@ -149,8 +149,8 @@ def create(
           context.
 
           When using along with `previous_response_id`, the instructions from a previous
-          response will be not be carried over to the next response. This makes it simple
-          to swap out system (or developer) messages in new responses.
+          response will not be carried over to the next response. This makes it simple to
+          swap out system (or developer) messages in new responses.
 
           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -321,8 +321,8 @@ def create(
           context.
 
           When using along with `previous_response_id`, the instructions from a previous
-          response will be not be carried over to the next response. This makes it simple
-          to swap out system (or developer) messages in new responses.
+          response will not be carried over to the next response. This makes it simple to
+          swap out system (or developer) messages in new responses.
 
           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -486,8 +486,8 @@ def create(
           context.
 
           When using along with `previous_response_id`, the instructions from a previous
-          response will be not be carried over to the next response. This makes it simple
-          to swap out system (or developer) messages in new responses.
+          response will not be carried over to the next response. This makes it simple to
+          swap out system (or developer) messages in new responses.
 
           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -961,8 +961,8 @@ async def create(
           context.
 
           When using along with `previous_response_id`, the instructions from a previous
-          response will be not be carried over to the next response. This makes it simple
-          to swap out system (or developer) messages in new responses.
+          response will not be carried over to the next response. This makes it simple to
+          swap out system (or developer) messages in new responses.
 
           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -1133,8 +1133,8 @@ async def create(
           context.
 
           When using along with `previous_response_id`, the instructions from a previous
-          response will be not be carried over to the next response. This makes it simple
-          to swap out system (or developer) messages in new responses.
+          response will not be carried over to the next response. This makes it simple to
+          swap out system (or developer) messages in new responses.
 
           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -1298,8 +1298,8 @@ async def create(
           context.
 
           When using along with `previous_response_id`, the instructions from a previous
-          response will be not be carried over to the next response. This makes it simple
-          to swap out system (or developer) messages in new responses.
+          response will not be carried over to the next response. This makes it simple to
+          swap out system (or developer) messages in new responses.
 
           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and