Anthropic just passed OpenAI in enterprise revenue. That sentence alone would have been unthinkable twelve months ago, but here we are: $30 billion in annualized revenue, over 1,000 enterprise customers each spending north of $1 million per year, and eight of the Fortune 10 on the platform. In the same week, Anthropic shipped a new flagship model, a managed agent runtime, and an enterprise collaboration tool with real governance controls. The question for IT leaders is no longer which AI vendor to watch; it is whether your organization’s adoption strategy can keep pace with what is now available.
This briefing covers the five developments from the past week that matter most for enterprise IT decision-makers, with specific guidance on what to do about each one.
Anthropic Hits $30B and the Enterprise Tipping Point
The headline number is dramatic: Anthropic crossed $30 billion in annualized revenue in early April 2026, up from $9 billion at the end of 2025. But the composition of that revenue is what matters for enterprise planning. Eighty percent comes from business customers. The company now counts more than 1,000 enterprises spending over $1 million annually, and its Fortune 10 penetration (eight of ten) exceeds any other AI platform vendor.
This is not consumer-app growth fueled by free tiers and viral loops. This is procurement-driven, contract-based enterprise adoption at a scale and velocity that the industry has not seen since the early days of cloud. Anthropic’s valuation has followed: investors are now offering terms at roughly $800 billion, more than doubling the $350 billion valuation from just two months ago.
For organizations evaluating enterprise AI consulting partnerships, the signal is clear. Anthropic has moved from “interesting alternative” to “default enterprise platform” in the span of a single fiscal year. If your AI strategy is still anchored exclusively to OpenAI, it is time to re-evaluate.
Enterprise Takeaway: Anthropic’s enterprise revenue trajectory changes the competitive landscape. Multi-vendor AI strategies that include Claude are no longer a hedge; they are a best practice. Ensure your Azure consulting partner can deploy and manage Claude through Bedrock and Vertex in addition to Azure OpenAI.
Opus 4.7 GA: The Model Upgrade That Actually Matters
Claude Opus 4.7 went generally available on April 16, 2026, and it is the most capable publicly available model for agentic software engineering tasks. It scored 64.3% on SWE-bench Pro, retaking the top position among GA models for agentic coding. Visual capabilities improved by over 3x, and a new “xhigh” effort level gives developers finer control over the reasoning-latency tradeoff on difficult problems.
But the feature that deserves attention from IT operations teams is the /ultrareview command in Claude Code. Where standard code review catches syntax errors, /ultrareview simulates a senior human reviewer, flagging subtle design flaws, logic gaps, and architectural concerns. For organizations already using Claude Code in their development workflows, this is a meaningful step toward AI-assisted code quality at the architectural level, not just the line level.
A related note on model lifecycle: Anthropic is deprecating Sonnet 4 and Opus 4 (the non-4.6 generations), with a migration deadline of June 15, 2026. If you have production integrations pinned to those model IDs, the clock is ticking.
Enterprise Takeaway: Test Opus 4.7 in your development pipelines now. The /ultrareview capability is particularly relevant for teams doing regulated software development where code review quality is an audit concern. And schedule your Sonnet 4 / Opus 4 migration immediately: June 15 is eight weeks away.
Managed Agents and Cowork GA: The Governance Story
Two releases from the same week tell one story: Anthropic is building the governance layer that enterprises have been waiting for.
Claude Managed Agents entered public beta on April 8. This is a fully managed runtime for long-running autonomous AI workflows: file operations, command execution, web browsing, and code execution inside secure sandboxes. Sessions persist through disconnections and can run autonomously for hours. The pricing model is straightforward: standard API token rates plus $0.08 per session-hour, with no flat licensing fee and no per-agent charge.
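The pricing is simple enough to budget in a few lines. A hedged sketch of the arithmetic follows; the $0.08 session-hour rate comes from the announcement, while the per-million-token prices are placeholders you should replace with your actual negotiated rates.

```python
# Back-of-envelope cost model for a Managed Agents session:
# standard API token rates plus $0.08 per session-hour (per the announcement).
SESSION_HOUR_RATE = 0.08  # USD per session-hour, from the announcement

def session_cost(hours, input_tokens, output_tokens,
                 input_price_per_mtok=5.00,    # placeholder rate, not published
                 output_price_per_mtok=25.00): # placeholder rate, not published
    """Estimate the total USD cost of one agent session."""
    token_cost = (input_tokens / 1_000_000) * input_price_per_mtok \
               + (output_tokens / 1_000_000) * output_price_per_mtok
    return hours * SESSION_HOUR_RATE + token_cost

# Example: a 6-hour session consuming 2M input and 0.5M output tokens
print(round(session_cost(6, 2_000_000, 500_000), 2))
```

The takeaway from the structure, regardless of the exact token rates, is that long-running sessions are dominated by token consumption, not runtime: the session-hour component stays in the cents even for multi-hour workflows.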
For IT operations teams, the significance is in what you do not have to build. Managed Agents handles the agent loop, tool execution, sandboxing, and state management. The barrier to deploying agentic AI for IT operations just dropped from “custom infrastructure project” to “API integration.”
Claude Cowork reached general availability on April 9 with a set of enterprise management features that read like an IT administrator’s wish list: role-based access control through SCIM integration with existing identity providers, group-level spend limits, usage analytics, and, critically, native OpenTelemetry support. That last item means Cowork activity logs export in standard OTel format directly to Splunk, Datadog, Elastic, or whatever SIEM your security team already runs.
This is the first desktop AI agent with native OpenTelemetry support. That distinction matters because it means AI agent activity can be monitored, audited, and alerted on using the same observability infrastructure you use for everything else. No separate dashboard, no vendor-specific log format, no gaps in your security posture. IT teams managing large Cloud PC fleets will want to evaluate how Cowork’s RBAC integrates with their existing identity and endpoint governance.
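If you route that telemetry through an OpenTelemetry Collector rather than pointing the export directly at your SIEM, the pipeline is a small config change. The fragment below is a sketch only: it assumes Cowork emits standard OTLP, and it uses the `splunk_hec` exporter from the Collector contrib distribution, with the endpoint and token as placeholders for your environment.

```yaml
# OpenTelemetry Collector sketch: receive OTLP logs and forward to Splunk HEC.
# Assumes OTLP export from the agent; endpoint and token are placeholders.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"   # placeholder: your HEC token
    endpoint: "https://splunk.example.com:8088/services/collector"
    source: "claude-cowork"        # tag agent activity for SIEM queries

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]
```

The same receiver can fan out to Datadog or Elastic by swapping the exporter; the point is that agent activity lands in the observability pipeline you already operate, with no vendor-specific plumbing.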
Enterprise Takeaway: If your organization has been waiting for “enterprise-ready” AI agents, the wait is over. Cowork’s RBAC and OTel support specifically address the governance concerns that have blocked adoption in regulated industries. Start a proof of concept with your security and compliance teams; the integration points they need are now available out of the box.
Project Glasswing and the 23-Year-Old Linux Bug: Security Implications
Two security stories from the past week deserve to be read together.
First, Anthropic researcher Nicholas Carlini used Claude Code to discover a remotely exploitable heap buffer overflow in the Linux kernel’s NFS driver, a critical vulnerability that had been hiding in plain sight since March 2003. The bug allowed a denied lock request to force the server to write a 1,056-byte response into a 112-byte buffer. Carlini has since identified five Linux kernel vulnerabilities total, with hundreds of potential crashes still awaiting human validation. As he put it: “We now have a number of remotely exploitable heap buffer overflows in the Linux kernel. I have never found one of these in my life before.”
Second, Anthropic launched Project Glasswing, a cybersecurity initiative built on their unreleased Claude Mythos model. Mythos has found thousands of high-severity zero-day vulnerabilities across every major operating system and web browser. The model is so capable, and the misuse risk so high, that Anthropic is restricting access to approximately 50 partner organizations, including AWS, Apple, Cisco, Google, and Microsoft. They have stated they have no plans to release this particular model publicly.
For enterprise security teams, the implication is binary: AI-driven vulnerability discovery is no longer theoretical. It is finding real bugs that human researchers missed for decades. Your vulnerability management program needs to account for both sides of this: the dramatically increased rate of disclosed vulnerabilities you will need to patch, and the potential for adversaries to use similar capabilities offensively.
Enterprise Takeaway: Expect a significant increase in disclosed vulnerabilities across your software stack over the coming months. Accelerate your patch management cycles and ensure your Windows 365 and endpoint management strategies include rapid response capabilities. The era of 23-year-old unpatched bugs surviving in production is ending, but only if you can keep up with the disclosure rate.
What to Do This Week
Here are five concrete actions for IT leaders based on this week’s developments.
Audit your model dependencies. Sonnet 4 and Opus 4 are deprecated with a June 15 migration deadline. Identify every production integration, automation, and API call that references these model IDs. Test against the 4.6 generation in staging.
Evaluate Claude Managed Agents for one IT operations workflow. Pick a well-scoped, repeatable task, such as log analysis, configuration auditing, or documentation generation, and run a proof of concept. The pay-as-you-go pricing makes experimentation low-risk.
Brief your security team on Project Glasswing. The vulnerability disclosure rate from AI-assisted research is about to inflect upward. Your vulnerability management and patch cadence need to be ready.
Test Cowork’s OpenTelemetry integration. If your organization uses Splunk, Datadog, or Elastic, connect Cowork’s OTel export to your existing SIEM and validate that AI agent activity appears in your dashboards. This is the fastest path to AI governance without new tooling.
Revisit your AI vendor strategy. With Anthropic now the revenue leader in enterprise AI and the governance tooling maturing rapidly, a single-vendor OpenAI strategy carries increasing concentration risk. Ensure your architecture supports multi-model deployment across Azure OpenAI, Bedrock, and Vertex.
Need help planning your migration before June 15? Contact us; we can help.
If your organization needs support navigating these changes โ from model migration planning to deploying managed agents in production to integrating AI governance into your existing security infrastructure โ Big Hat Group specializes in enterprise AI consulting that bridges the gap between what the platforms ship and what your organization actually needs to operate safely and effectively.
Kevin Kaminski is Principal Architect at Big Hat Group, where he helps enterprises deploy AI, Azure, and Windows 365 solutions that work in the real world. Connect with Big Hat Group for Azure consulting, Windows 365 consulting, and enterprise AI consulting.