The week ending March 22 was dominated by three moves: OpenAI’s planned acquisition of Astral (the company behind Python’s fastest-growing toolchain), the release of GPT-5.4 mini and nano models purpose-built for coding agents and subagent delegation, and confirmation of a $110 billion funding round that values the company at $840 billion. Two Codex platform releases also landed, and the ChatGPT model picker got a significant simplification.


OpenAI Acquires Astral: uv, Ruff, and ty Join the Codex Ecosystem

OpenAI announced its intent to acquire Astral, the company behind uv (Python’s fastest package manager), Ruff (the Rust-based linter that replaced Flake8 and isort for many teams), and ty (their emerging type checker). The deal is pending regulatory approval.

The strategic logic is straightforward: Codex agents need reliable, fast developer tooling in-sandbox, and Astral’s tools are already the default choice for performance-sensitive Python workflows. Embedding them natively into Codex’s execution environment eliminates a class of setup friction and unlocks tighter integration: think agent-driven dependency resolution, linting-as-validation-step, and type-checked code generation without external tool configuration.
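To make the linting-as-validation-step idea concrete, here is a minimal sketch of how an agent pipeline could gate generated code on a Ruff check. This is an illustration of the pattern, not Codex’s actual integration; the function name is hypothetical, and the sketch falls back to a plain syntax check if Ruff isn’t installed.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path


def validate_generated_code(code: str) -> tuple[bool, str]:
    """Gate agent-generated code on a Ruff lint pass before accepting it.

    Illustrative sketch only. Returns (passed, diagnostics). When Ruff is
    not on PATH, falls back to a bare syntax check so the gate still works.
    """
    if shutil.which("ruff") is None:
        try:
            compile(code, "<candidate>", "exec")
            return True, ""
        except SyntaxError as exc:
            return False, str(exc)

    # Write the candidate to a throwaway file and run `ruff check` on it.
    with tempfile.TemporaryDirectory() as tmp:
        target = Path(tmp) / "candidate.py"
        target.write_text(code)
        result = subprocess.run(
            ["ruff", "check", str(target)],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0, result.stdout + result.stderr


passed, diagnostics = validate_generated_code("x = 1\n")
```

An agent loop would call this after each generation step and feed the diagnostics back into the next prompt instead of accepting broken output.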

The acquisition announcement also disclosed that Codex now has over 2 million weekly active users, with 3x user growth and 5x usage growth since the start of the year. Those numbers contextualize OpenAI’s willingness to invest in developer tooling infrastructure rather than just model capabilities.

For teams using uv or Ruff today: Astral has stated the open-source projects will continue independently. The question is whether the integration roadmap prioritizes Codex-specific features over general-purpose improvements. Worth monitoring.


GPT-5.4 Mini and Nano: Purpose-Built for Agent Workloads

GPT-5.4 mini and nano shipped March 17, positioned as the smallest and fastest models in the GPT-5.4 family, optimized specifically for coding and subagent delegation.

GPT-5.4 mini is the headline model:

  • 400k context window with support for text, image inputs, tool use, function calling, web search, file search, and skills
  • Over 2x faster than GPT-5 mini with performance approaching full GPT-5.4 across coding, reasoning, and multimodal benchmarks
  • Priced at $0.75 / $4.50 per million input/output tokens, a significant cost reduction for high-volume agent pipelines
  • Uses only 30% of the GPT-5.4 quota in Codex, enabling cost-effective subagent delegation for simpler tasks
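To put the pricing in concrete terms, here is the arithmetic for a hypothetical high-volume pipeline. The per-million-token rates come from the announcement; the monthly token volumes are made up for illustration.

```python
# GPT-5.4 mini pricing from the announcement, in dollars per million tokens.
MINI_INPUT_PER_M = 0.75
MINI_OUTPUT_PER_M = 4.50


def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a month's GPT-5.4 mini traffic at list price."""
    return (
        (input_tokens / 1_000_000) * MINI_INPUT_PER_M
        + (output_tokens / 1_000_000) * MINI_OUTPUT_PER_M
    )


# Hypothetical pipeline: 2B input tokens and 200M output tokens per month.
# 2,000 * $0.75 = $1,500 input; 200 * $4.50 = $900 output; $2,400 total.
cost = monthly_cost(2_000_000_000, 200_000_000)
```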

GPT-5.4 nano targets the lowest-latency tier for trivial operations where speed matters more than reasoning depth.

The practical impact: teams running multi-agent architectures can now route simpler subtasks to mini/nano without burning full GPT-5.4 quota. For Codex CLI users, GPT-5.4 mini is available directly for tasks that don’t require heavy reasoning.
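A routing layer for this pattern might look like the following sketch. The model identifiers and the complexity heuristic are assumptions for illustration, not an official API; consult the actual model list and your own task taxonomy before adopting anything like this.

```python
from dataclasses import dataclass

# Hypothetical model identifiers; check the provider's model list for
# the real names before wiring this up.
MODELS = {
    "nano": "gpt-5.4-nano",  # lowest latency, trivial operations
    "mini": "gpt-5.4-mini",  # routine coding subtasks at reduced quota
    "full": "gpt-5.4",       # heavy reasoning
}


@dataclass
class Subtask:
    description: str
    needs_reasoning: bool
    latency_sensitive: bool


def route(task: Subtask) -> str:
    """Pick a model tier for a subtask. Purely illustrative heuristic."""
    if task.needs_reasoning:
        return MODELS["full"]
    if task.latency_sensitive:
        return MODELS["nano"]
    return MODELS["mini"]


route(Subtask("rename a variable", needs_reasoning=False, latency_sensitive=True))
# -> "gpt-5.4-nano"
```

The design choice worth noting: routing on task metadata (rather than on model output) keeps the decision cheap, which matters when the whole point is avoiding full-quota calls for simple work.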


$110 Billion Funding Round Closes

OpenAI secured $110 billion in new funding, led by Amazon ($50B), SoftBank ($30B), and Nvidia ($30B). The round includes a $100 billion expansion of OpenAI’s compute capacity agreement with AWS.

Post-investment valuation: $840 billion. The capital is earmarked for enterprise expansion and infrastructure scale-out, consistent with leadership’s stated priority of making coding tools and business products the company’s core focus.


Codex Platform: Two Releases in One Week

V0.116.0 (March 19) brought ChatGPT device-code sign-in for the app-server TUI, smoother plugin setup with suggestion allowlists, a new user prompt hook, and improved realtime session behavior. Bug fixes addressed startup stalls, history restoration, Linux sandbox reliability, and agent job finalization. The Python SDK public API and examples were refreshed.

V0.115.0 (March 16) introduced full-resolution image inspection, enhanced JS REPL and realtime WebSocket support, new v2 app-server filesystem RPCs with a Python SDK, smarter approval routing, and better app integrations. Reliability improvements landed across subagents, TUI, MCP, and proxy handling.

A detailed review from Zack Proser highlighted significant advancements in error handling, multi-turn conversations, code quality, and contextual awareness, plus a new “preview iteration system” that offers multiple implementation approaches before committing to one.


ChatGPT: Simplified Model Picker, Deep Research Sunset

The ChatGPT model picker was streamlined on March 17, replacing the full model list with plan-based options: Instant, Thinking, and Pro. A new Configure menu provides access to automatic model switching and legacy model access for power users. The “Nerdy” base style was retired.

GPT-5.4 mini rolled out to ChatGPT on March 18: Free and Go users can access it via the Thinking feature, and it serves as a rate-limit fallback for other tiers. Enterprise customers can maintain Auto routing to the mini model. GPT-5 Thinking mini will be retired in 30 days.

Legacy deep research mode is being removed March 26. The current deep research experience and historical conversations remain accessible. This applies to both consumer and Enterprise/EDU plans.


Enterprise & EDU Updates

  • Impact surveys expanded to ChatGPT Edu (March 20), now including an Edu-specific question set with exportable results. Workspace owners can launch admin-created surveys on demand; OpenAI-created surveys begin on or after March 31.
  • Streamlined model picker for Enterprise/EDU mirrors the consumer changes, with plan-aware options and an “Auto-switch to thinking” setting via Configure.

Strategic Refocus: Coding Tools and Business Users

The Wall Street Journal reported that OpenAI executives are finalizing a strategy to reduce side projects and concentrate on coding tools and business customers. CEO Fidji Simo emphasized the need to “nail” core product offerings before expanding further.

Combined with the Astral acquisition, the GPT-5.4 mini/nano launch, and the $110B infrastructure investment, the message is clear: OpenAI is betting its near-term future on becoming the default platform for AI-assisted software development at enterprise scale.


What to Watch

  • Astral integration roadmap. How quickly uv, Ruff, and ty become first-class citizens in the Codex sandbox, and whether the open-source projects maintain independent momentum post-acquisition.
  • GPT-5 Thinking mini retirement. 30-day sunset clock started March 18. Teams using it in production should validate their migration path to GPT-5.4 mini now.
  • Legacy deep research removal (March 26). If your workflows depend on the legacy mode, transition to the current deep research experience before the cutoff.
  • Enterprise survey rollout. OpenAI-created impact surveys launch March 31; workspace admins should review the survey configuration options before they go live.

Check back next week for the latest from the OpenAI and Codex ecosystem. If you’re evaluating how these changes affect your enterprise AI strategy, Big Hat Group can help.