This was a defining week for OpenAI’s platform strategy. Codex and OpenAI models are now available on Amazon Bedrock, the Microsoft partnership was restructured for multi-cloud flexibility, and OpenAI open-sourced Symphony — a complete agent orchestration spec built on Codex. GPT-5.5 rolled out to all tiers, the Agents SDK shipped two updates, and Codex CLI crossed a major feature milestone with persisted goal workflows. Meanwhile, Google committed up to $40 billion to Anthropic, reshaping the competitive landscape.
If you’re planning enterprise AI strategy for the next 12 months, this week’s news changes the math on vendor lock-in, deployment flexibility, and multi-agent architectures.
Codex on AWS: The Multi-Cloud Bet Pays Off
The week’s biggest story started on April 27 when OpenAI and Microsoft announced an amended partnership. The key change: Microsoft remains OpenAI’s primary cloud partner, but OpenAI can now serve products across any cloud provider on a non-exclusive basis. Microsoft’s IP license extends through 2032, its revenue-share payments to OpenAI are now capped, and it will no longer receive a revenue share from OpenAI. The agreement gives OpenAI the operational flexibility it needs for a multi-cloud future — and they used it the very next day.
On April 28, OpenAI and AWS announced an expanded strategic partnership. Three things launched in limited preview:
- OpenAI models on Amazon Bedrock — including GPT-5.5 and GPT-5.4, served within AWS infrastructure
- Codex on AWS — Codex CLI, desktop app, and VS Code extension powered by OpenAI models served from Bedrock
- Amazon Bedrock Managed Agents, powered by OpenAI — production-ready agent deployment with AWS security, governance, and compliance controls baked in
Why this matters for enterprise teams: If your organization runs on AWS, you can now use Codex within your existing procurement, billing, and security frameworks. AWS customers can apply Codex usage toward their cloud commitments. All customer data stays within Bedrock’s boundary — no data leaves AWS. For enterprises that have been blocked by multi-cloud compliance or vendor lock-in concerns, this removes a major barrier.
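In practice, calling an OpenAI model through Bedrock looks like any other Bedrock Converse request — only the model identifier changes. The sketch below builds such a request payload; the model ID shown is an assumption (the real identifier will appear in your Bedrock console once the preview reaches your account), and the payload shape follows Bedrock’s standard Converse schema.

```python
import json

# Hypothetical model ID -- the actual Bedrock identifier for OpenAI-served
# models will be listed in the Bedrock console for preview accounts.
MODEL_ID = "openai.gpt-5.5-v1"

def build_converse_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a Bedrock Converse API request body.

    Only the modelId is specific to the OpenAI integration; messages and
    inferenceConfig follow Bedrock's standard Converse schema.
    """
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

request = build_converse_request("Summarize our open pull requests.")
print(json.dumps(request, indent=2))
```

With boto3, this dict unpacks directly into `bedrock_runtime.converse(**request)`, which keeps all traffic inside the Bedrock boundary described above.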
The announcement also revealed that more than 4 million people now use Codex every week — the first public user metric for the platform.
Symphony: Open-Source Agent Orchestration
On April 27, OpenAI open-sourced Symphony, an agent orchestrator specification that turns issue trackers like Linear into a control plane for coding agents.
Symphony is primarily a SPEC.md file — language-agnostic, reference-implemented in Elixir — that defines how agents autonomously pull work from boards, implement features, create PRs, shepherd them through CI, and attach proof-of-work artifacts. It uses the Codex App Server (a JSON-RPC API) under the hood for headless, programmatic agent execution.
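Because the Codex App Server speaks JSON-RPC, an orchestrator implementing the spec is mostly a loop that frames requests like the one below. This is a minimal sketch: the method name and parameter keys are illustrative assumptions — consult SPEC.md and the App Server documentation for the actual RPC surface.

```python
import itertools
import json

# Monotonically increasing request IDs, as JSON-RPC 2.0 expects.
_ids = itertools.count(1)

def jsonrpc_request(method: str, params: dict) -> str:
    """Frame a JSON-RPC 2.0 request for a headless agent server.

    The envelope (jsonrpc/id/method/params) is standard JSON-RPC 2.0;
    the method names an orchestrator would call are defined by the
    Codex App Server, not invented here.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

# An orchestrator following the Symphony flow might frame a call like this
# (method and params are hypothetical placeholders):
msg = jsonrpc_request("thread/start", {
    "issue": "LIN-1234",   # ticket pulled from the board
    "goal": "Implement the feature, open a PR, attach proof-of-work artifacts",
})
print(msg)
```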
The results OpenAI shared are striking: some internal teams saw a 500% increase in landed PRs in the first three weeks of adoption.
Why this matters for consulting and enterprise teams: Symphony is not a product — it’s a spec. That means any team can implement it against their own toolchain. For an organization considering agentic development pipelines, Symphony provides a battle-tested reference architecture. The use of the Codex App Server as the execution backbone also hints at where OpenAI sees agent orchestration going: headless, API-driven, and toolchain-agnostic.
GitHub repo: github.com/openai/symphony
GPT-5.5 Is Live Across All Channels
OpenAI fully rolled out GPT-5.5 — its “smartest model yet” — to ChatGPT (Plus, Pro, Business, Enterprise) and to the Chat Completions and Responses APIs on April 24.
Key specs:
- 1M token context window
- Image input, structured outputs, function calling
- Tool search, built-in computer use, hosted shell, Skills, MCP
- Reasoning effort defaults to medium
- Extended prompt caching only (no in-memory caching)
Benchmark highlights:
- Terminal-Bench 2.0: 82.7% (vs. GPT-5.4’s 75.1%)
- Expert-SWE: 73.1% (vs. GPT-5.4’s 68.5%)
- OSWorld-Verified: 78.7% (vs. Claude Opus 4.7’s 78.0%)
- FrontierMath Tier 1-3: 51.7% (vs. GPT-5.4’s 47.6%)
- CyberGym: 81.8% (vs. GPT-5.4’s 79.0%)
GPT-5.5 matches GPT-5.4’s per-token latency while using fewer tokens to complete Codex tasks — meaning better quality at comparable speed. The new model also powers all Codex-relevant tools: computer use, hosted shell, apply patch, Skills, MCP, and tool search.
OpenAI also published the GPT-5.5 System Card alongside the model and launched a Bio Bug Bounty program.
Codex CLI v0.128.0 — Persisted Goals, Bedrock, and Plugin Marketplace
The Codex CLI repository saw exceptionally high velocity this week, shipping version 0.128.0 with an alpha 2 hotfix on April 30.
Major new features:
- Persisted /goal workflows — A 5-PR series adding foundation, app-server API, model tools, core runtime, and TUI controls for creating, pausing, resuming, and clearing long-running agent tasks
- codex update command — Self-update mechanism built into the CLI
- Configurable TUI keymaps — Custom terminal UI keyboard bindings
- Plugin marketplace — Marketplace installation, remote bundle caching, remote uninstall, plugin-bundled hooks
- Expanded permission profiles — Built-in default profiles, sandbox CLI profile selection, cwd controls
- MultiAgentV2 configuration — Thread caps, wait-time controls, root/subagent hints
Notable fixes:
- Windows sandbox/PTY fixes — pseudoconsole startup, elevated runner process, core shell environment inheritance
- Bedrock model support — apply_patch tool fixed for Bedrock models, GPT-5.4 reasoning levels, updated endpoint metadata
- MCP/plugin fixes — stdio server cleanup and approval persistence
- TUI reliability — terminal resize reflow, markdown list spacing, shell-mode escape
Deprecations: The --full-auto flag is deprecated in favor of explicit permission profiles and trust flows.
Source: github.com/openai/codex/releases
Agents SDK: v0.15.0 and v0.14.6
The OpenAI Agents SDK Python package shipped two releases this week.
v0.15.0 (May 1) — Model refusals are now surfaced explicitly as ModelRefusalError instead of being treated as empty text output or causing MaxTurnsExceeded errors in structured-output agents. Developers can provide a model_refusal error handler to Runner.run_sync() for graceful handling.
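The new control flow is easiest to see in a stand-in sketch. The stub below models the behavior the release notes describe — a refusal raises an explicit error unless a handler is supplied — but the class and the handler parameter name here are local stand-ins, not the SDK’s actual implementation.

```python
# Stand-in types modeling the v0.15.0 behavior described in the release
# notes; the real ModelRefusalError and handler hook live in the SDK.
class ModelRefusalError(Exception):
    def __init__(self, refusal: str):
        super().__init__(refusal)
        self.refusal = refusal

def run_sync(task: str, on_model_refusal=None):
    """Sketch of the new flow: a model refusal either invokes the
    supplied handler or raises ModelRefusalError -- it is no longer
    returned as empty text or surfaced as MaxTurnsExceeded."""
    refusal = "I can't help with that request."  # pretend the model refused
    if on_model_refusal is not None:
        return on_model_refusal(refusal)
    raise ModelRefusalError(refusal)

# Graceful handling: log the refusal and return a safe fallback result.
result = run_sync(
    "risky task",
    on_model_refusal=lambda r: {"status": "refused", "detail": r},
)
print(result["status"])
```

Without a handler, the same call would raise ModelRefusalError, which structured-output agents can now catch explicitly instead of failing opaquely.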
v0.14.6 (Apr 25) — Updated examples and defaults to GPT-5.5. Added BoxMount support for Box storage mounts in sandbox agents. Sandbox documentation now comprehensively covers manifests, mounts, secret handling, and provider integration (S3, GCS, R2, Azure Blob, Box).
Source: github.com/openai/openai-agents-python/releases
Advanced Account Security
On April 30, OpenAI introduced Advanced Account Security — an opt-in setting for ChatGPT and Codex accounts targeting high-risk users (journalists, elected officials, researchers).
Features include:
- Passkey or physical security key only — password-based sign-in is disabled
- Secure account recovery — no email/SMS recovery; requires backup passkeys or recovery keys
- Shorter sessions with login alerts and active session management
- Automatic training exclusion — conversations won’t be used for model training
- Yubico partnership for discounted YubiKey bundles
Starting June 1, 2026, members of the Trusted Access for Cyber program will be required to enable this level of protection. OpenAI says this work will extend to enterprise environments.
Competitive Landscape: Google Bets $40B on Anthropic
Alphabet announced plans to invest up to $40 billion in Anthropic — $10 billion immediate commitment with $30 billion tied to performance targets. The deal includes compute capacity via Google Cloud and signals Google’s determination to keep Claude as a primary competitive alternative to GPT models.
DeepSeek also rolled out V4 of its model, with early benchmarks showing it closing the gap with frontier Western models on cost-efficient inference. The Council on Foreign Relations noted this marks “a new phase in the U.S.-China AI rivalry.”
OpenAI’s own IPO preparations continue, with the WSJ reporting that the company has missed some key revenue and user targets. The Stargate infrastructure project surpassed 10GW (with 3GW added in the last 90 days alone), underscoring the massive capex required to maintain frontier model leadership.
What to Watch
- Symphony adoption: Watch for community implementations of the spec. If Elixir isn’t your stack, the spec is designed to be language-agnostic — expect ports to Python and TypeScript.
- Codex on Bedrock GA timeline: The limited preview will expand. Enterprises running AWS should start testing feasibility now.
- Advanced Account Security for enterprises: The extension of passkey-only auth to enterprise environments will change how organizations manage Codex and ChatGPT access.
- GPT-5.5 pricing economics: The higher per-token rate but lower total token consumption per task changes cost modeling for agentic pipelines. Run the numbers before committing to quarterly budgets.
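Running those numbers can be a one-liner. The figures below are invented placeholders — check the current pricing page and your own telemetry for real per-token rates and per-task token counts — but the shape of the calculation holds: a higher rate can still yield a lower per-task cost if token consumption drops enough.

```python
# Back-of-envelope per-task cost model. Prices and token counts are
# hypothetical placeholders, not published figures.
def task_cost(price_per_mtok: float, tokens_per_task: int) -> float:
    """Cost of one agent task at a given price per million tokens."""
    return price_per_mtok * tokens_per_task / 1_000_000

old = task_cost(price_per_mtok=10.0, tokens_per_task=120_000)  # prior model
new = task_cost(price_per_mtok=14.0, tokens_per_task=70_000)   # newer model

print(f"per-task: ${old:.2f} vs ${new:.2f} -> {1 - new / old:.0%} cheaper")
```

At these illustrative numbers, a 40% higher rate still comes out about 18% cheaper per task — which is why per-task cost, not per-token price, should drive the budget model.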
That’s the Codex ecosystem this week. Between multi-cloud expansion, open-source orchestration specs, and a new frontier model, there’s more strategic signal here than in most months. Check back next Friday for the next edition.
— Kevin Kaminski, Principal Architect at Big Hat Group. We help enterprises design and deploy AI agent systems on Azure, AWS, and Windows 365.