Claude crossed the enterprise Rubicon this week. Not with a single headline, but with five independent signals that converged simultaneously: 1M token context at standard pricing, a $100M partner network with Accenture and Deloitte, an AI agent that found 22 real CVEs in Firefox, four Claude Code releases in seven days, and a new research institute dedicated to studying AI's societal impact. Individually, these are announcements. Together, they're a phase transition.
The executive summary:
- 1M token context window is GA for Opus 4.6 and Sonnet 4.6 at standard pricing: no premium tier, no waitlist
- Claude Code shipped v2.1.76 through v2.1.79 with MCP elicitation hooks, 128K output token limits, remote-control bridging, and an 18MB memory reduction
- $100M Claude Partner Network launched with Accenture, Deloitte, Cognizant, and Infosys as anchor partners
- Anthropic Institute established under co-founder Jack Clark to study AI’s impact on jobs, governance, and the economy
- 22 CVEs discovered in Firefox by Claude during a two-week security audit, including one rated CVSS 9.8
What Shipped This Week in Claude Code (v2.1.76–v2.1.79)
Anthropic shipped four Claude Code releases in seven days. That's not a release cycle; that's a velocity statement. They're iterating on developer tooling faster than most enterprises ship internal features. Here's what changed across v2.1.76 through v2.1.79:
| Version | Key Changes |
|---|---|
| v2.1.76 | MCP elicitation support, /effort command, session naming (-n flag), sparse worktree checkout |
| v2.1.77 | Default output tokens raised to 64K (Opus 4.6), upper bounds to 128K for Opus 4.6 and Sonnet 4.6 |
| v2.1.78 | StopFailure hook for API errors, persistent plugin data (${CLAUDE_PLUGIN_DATA}), agent frontmatter support |
| v2.1.79 | VSCode /remote-control command, AI-generated session titles, ~18MB startup memory reduction |
If you’ve been following how Claude Code fits into enterprise coding harnesses, these releases represent meaningful maturation of the plugin and agent ecosystem.
MCP Elicitation: Smarter Tool Interactions
v2.1.76 introduced Elicitation and ElicitationResult hooks, giving plugins the ability to intercept and customize MCP server elicitation flows. This is significant for teams building custom toolchains: your plugins can now override how Claude Code responds to MCP server prompts, enabling richer tool interactions without manual intervention.
Combined with the persistent plugin data (${CLAUDE_PLUGIN_DATA}) added in v2.1.78 and agent frontmatter support for effort, maxTurns, and disallowedTools, the plugin ecosystem is getting the kind of lifecycle management that enterprise deployments need. If your team has been configuring agent behavior with AGENTS.md, these hooks extend that control surface into the MCP layer.
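To make the hook model concrete, here is a minimal sketch of an elicitation handler. It assumes the new Elicitation hook follows the same contract as existing Claude Code hooks (an event object in, a decision object out); the field names `server`, `decision`, `response`, and `passthrough` are illustrative placeholders, not the documented schema.

```python
# Hypothetical allow-list: MCP servers whose elicitation prompts a team
# might trust enough to auto-confirm without interrupting the developer.
AUTO_APPROVE_SERVERS = {"internal-docs", "ticket-tracker"}

def handle_elicitation(event: dict) -> dict:
    """Decide how to answer an MCP server's elicitation prompt.

    Auto-confirms prompts from allow-listed servers; defers everything
    else back to the normal interactive flow.
    """
    if event.get("server") in AUTO_APPROVE_SERVERS:
        return {"decision": "respond", "response": {"confirmed": True}}
    return {"decision": "passthrough"}  # let Claude Code prompt the user
```

A production hook script would parse the incoming event as JSON and print the decision back, in line with how other Claude Code hooks communicate; the logic above is the part worth customizing per team.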
/remote-control Command for Headless Workflows
The new /remote-control command in v2.1.79 lets you bridge a local Claude Code session to claude.ai/code, continuing your work from a browser or phone. Session tabs now get AI-generated titles based on your first message, making it easier to manage multiple concurrent sessions.
The practical implication: you can kick off a complex refactoring task in your terminal, then monitor and steer it from your phone while grabbing coffee. For CI/CD integration, this opens the door to headless workflows where Claude Code runs in a pipeline and exposes a browser-accessible control surface.
64K and 128K Output Token Limits Explained
Default max output tokens for Opus 4.6 jumped to 64K, with upper bounds for both Opus 4.6 and Sonnet 4.6 increased to 128K tokens. This matters more than the numbers suggest. Previously, complex code generation tasks (large module refactors, comprehensive test-suite generation, multi-file scaffolding) would hit output limits and produce truncated results.
At 128K output tokens, Claude Code can generate thousands of lines of well-commented code in a single turn without truncation, since a line of source typically costs on the order of ten tokens. For teams working with complex project structures, this removes a significant practical constraint.
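The back-of-envelope arithmetic is worth making explicit. Assuming roughly ten tokens per line of source (an assumption; the real ratio varies by language and comment density):

```python
def max_lines(output_token_limit: int, tokens_per_line: int = 10) -> int:
    """Rough ceiling on generated lines of code for a given output budget.

    tokens_per_line defaults to 10, a common back-of-envelope figure for
    source code; adjust it for your language and commenting style.
    """
    return output_token_limit // tokens_per_line
```

Under that assumption, a 64K default budget covers about 6,400 lines and the 128K upper bound about 12,800, which is why whole-module generation stops truncating.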
18MB Memory Reduction: Why It Matters at Scale
v2.1.79 cut startup memory by approximately 18MB. On a single developer's machine, this is negligible. Across a fleet of developer workstations or CI/CD runners spinning up Claude Code instances in parallel, the aggregate reduction is meaningful. It also reflects the kind of performance engineering discipline (dozens of stability fixes across CLI, voice mode, streaming, sandboxing, git worktrees, permissions, WSL2, and VSCode integration) that signals a tool maturing toward production-grade reliability.
1 Million Token Context Window Now GA
The full 1M context window is generally available for Claude Opus 4.6 and Sonnet 4.6 at standard pricing with no long-context premium. Opus 4.6 runs $5/$25 per million input/output tokens and Sonnet 4.6 runs $3/$15, the same rate whether you send 9K or 900K tokens. Media limits expanded to 600 images or PDF pages per request, up from 100. No beta header required.
We’ve been running Claude Code in our consulting workflows for months. When the 1M token context went GA this week, we immediately tested it against a client’s legacy codebase. The difference between 200K and 1M tokens isn’t 5x more context: it’s the difference between seeing a function and understanding a system.
Standard Pricing: What Changed from Preview
Previously, long-context usage above 200K tokens carried a premium multiplier. That’s gone. The pricing table is now flat:
| Model | Input (per 1M tokens) | Output (per 1M tokens) | Max Context |
|---|---|---|---|
| Opus 4.6 | $5 | $25 | 1M tokens |
| Sonnet 4.6 | $3 | $15 | 1M tokens |
For enterprise budgets, the math just changed. Processing a 40,000-line codebase (roughly 300K–500K tokens depending on language) costs the same per token as processing a single file. That eliminates the economic penalty for whole-codebase analysis, comprehensive document review, and multi-system reasoning tasks.
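A simple cost estimator makes the flat-pricing point tangible. The rates come from the table above; the model keys are shorthand for this sketch, not official SDK identifiers.

```python
# USD per 1M tokens, per the pricing table in this post.
PRICING = {
    "opus-4.6":   {"input": 5.00,  "output": 25.00},
    "sonnet-4.6": {"input": 3.00,  "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request under flat 1M-context pricing."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000
```

A 400K-token whole-codebase pass on Opus 4.6 with a 20K-token response works out to $2.00 input plus $0.50 output, or $2.50 total, with no long-context surcharge on top.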
The Models API also got a capabilities endpoint (March 18): GET /v1/models/{model_id} now returns max_input_tokens, max_tokens, and a capabilities object. Developers can programmatically discover what each model supports, which is essential for applications that need to gracefully handle different model tiers.
On the benchmark side, Opus 4.6 scored 78.3% on MRCR v2 (8-needle retrieval) at 1M tokens, significantly outperforming GPT-5.4 at 36.6% and Gemini 3.1 Pro at 25.9% at the same context length. Raw scores aren’t everything, but a 2x gap over the nearest competitor at maximum context is hard to dismiss.
Cursor’s Pricing Update for 1M Context Support
Cursor IDE quickly removed its 2x pricing multiplier for inputs exceeding 200K tokens following Anthropic’s announcement, with extended context available through Max Mode. This is a good signal for the broader ecosystem: as we covered in our analysis of the AI coding CLI landscape, downstream tool pricing tends to follow upstream model pricing with a lag. That lag just collapsed.
$100M Claude Partner Network Launches
Anthropic committed $100 million for 2026 to a new partner program, and the anchor partners tell the real story: Accenture, Deloitte, Cognizant, and Infosys. When consulting and systems-integration firms of that size build dedicated Claude practices, it means one thing: their enterprise clients are asking for it.
The network offers training via Anthropic Academy, dedicated technical support, joint go-to-market investment, a Partner Portal, and the first technical certification: Claude Certified Architect, Foundations. Additional certifications for sellers, architects, and developers are planned for later in 2026.
Accenture, Deloitte, Cognizant, Infosys: Who’s In
The numbers are telling:
- Accenture plans to train 30,000 professionals on Claude
- Deloitte has opened Claude access to approximately 350,000 associates globally
- A Code Modernization starter kit targets legacy codebase migration, directly relevant to the enterprise COBOL-to-modern-stack pipelines that consulting firms live on
The network is free to join for organizations bringing Claude to market. That’s a deliberate growth play: Anthropic is subsidizing the partner ecosystem to accelerate enterprise adoption, banking on volume over margin.
What This Means for Enterprise AI Adoption
If your consulting partners are building dedicated Claude practices and you haven’t started evaluating, you’re not behind the curve; you’re behind the people staffing for it. The partner network is a procurement signal, not a technology signal. It means Claude has passed the enterprise vendor risk assessment at Accenture and Deloitte. For organizations that treat their major consulting partners’ recommendations as buying signals (and most large enterprises do), this materially de-risks Claude adoption.
The Code Modernization starter kit deserves attention. Legacy codebase migration is one of the highest-value use cases for AI-assisted development, and having a structured methodology backed by SI partners makes it easier to justify the investment internally. We’ve been building with Claude Code for exactly this kind of work, and the combination of 1M token context + structured migration tooling is a significant step forward.
Anthropic Institute: AI Safety Gets Its Own Entity
Anthropic launched the Anthropic Institute, a new research unit consolidating its Frontier Red Team, Societal Impacts, and Economic Research groups under co-founder Jack Clark. Clark takes on the role of head of public benefit, a title that signals institutional weight rather than a product marketing exercise.
Research Focus and Industry Implications
The Institute will study how powerful AI systems affect jobs, the economy, governance, and the legal system. The founding team includes Matt Botvinick (formerly Google DeepMind and Yale Law), Anton Korinek (University of Virginia economics), and Zoë Hitzig (formerly OpenAI).
The first major output is already published: a survey of 80,508 Claude users across 159 countries and 70 languages, the largest multilingual qualitative AI study ever conducted. Key findings:
- 67% view AI positively globally
- Top hopes: professional excellence (18.8%), personal transformation (13.7%), life management (13.5%)
- Top concerns: unreliability (26.7%), jobs/economy (22.3%), autonomy/agency (21.9%)
For enterprise leaders, the practical takeaway is that Anthropic is investing in the governance and safety infrastructure that risk-averse organizations require before making platform commitments. The Anthropic Institute gives compliance teams and boards something concrete to reference when approving AI vendor relationships.
Developer & API Updates
Models API Capabilities Endpoint
The Models API now returns structured capability metadata via GET /v1/models and GET /v1/models/{model_id}. The response includes max_input_tokens, max_tokens, and a capabilities object. This is a small change with significant implications for multi-model applications: you can now programmatically route tasks to the right model based on its actual capabilities rather than maintaining a hardcoded lookup table.
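Here is a minimal routing sketch built on that idea. The dictionaries mimic entries returned by the models listing; the model ids and token limits in the usage note are placeholders, and the caller is assumed to order the catalog cheapest-first so "first fit" is also "cheapest fit".

```python
def pick_model(models: list[dict], needed_input_tokens: int) -> str:
    """Return the id of the first listed model whose advertised
    max_input_tokens covers the request.

    `models` mimics entries from the capabilities endpoint; only `id`
    and `max_input_tokens` are consulted. Cheapest-first ordering of
    the list is the caller's responsibility.
    """
    for model in models:
        if model["max_input_tokens"] >= needed_input_tokens:
            return model["id"]
    raise ValueError("no listed model can fit this request")
```

With a hypothetical catalog of a 200K-context model followed by a 1M-context one, a 150K-token task routes to the smaller tier and a 500K-token task falls through to the larger one, with no hardcoded table to keep in sync.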
Extended Thinking Display Field
A new thinking.display: "omitted" option lets developers omit thinking content from responses for faster streaming while preserving the signature for multi-turn continuity. Billing is unchanged โ you still pay for the thinking tokens. The use case is latency-sensitive applications (chat interfaces, real-time coding assistants) where you want the quality benefits of extended thinking without streaming the intermediate reasoning to the user.
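A request-builder sketch shows how this might look in practice. The `thinking.display` field comes from the description above; the model id, token budget, and overall payload shape here are illustrative assumptions, not a verified request against the live API.

```python
def build_request(user_message: str, *, stream_thinking: bool = False) -> dict:
    """Assemble a Messages API request body that keeps extended thinking
    enabled but omits the reasoning stream when latency matters.

    Model id and budget_tokens are placeholder values for illustration.
    """
    body = {
        "model": "claude-sonnet-4-6",
        "max_tokens": 8192,
        "thinking": {"type": "enabled", "budget_tokens": 4096},
        "messages": [{"role": "user", "content": user_message}],
    }
    if not stream_thinking:
        # Thinking tokens are still billed; only the streamed display changes.
        body["thinking"]["display"] = "omitted"
    return body
```

The design point is that the toggle lives in one place: a chat frontend can default to omitting the reasoning stream, while a debugging harness passes `stream_thinking=True` to inspect it.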
Security & Research Highlights
Firefox Security Audit: 22 CVEs Found
An AI agent found 22 confirmed security vulnerabilities in one of the most-audited codebases on earth. Claude Opus 4.6 discovered the 22 new CVEs during a two-week audit of Firefox in January 2026, with 14 classified as high-severity, nearly a fifth of all high-severity Firefox vulnerabilities patched in 2025.
The most critical was CVE-2026-2796 (CVSS 9.8), a JIT miscompilation in the JavaScript/WebAssembly engine. Most issues were fixed in Firefox 148. Anthropic spent approximately $4,000 in API credits. The sobering detail: while Claude could identify vulnerabilities cheaply, creating working exploits proved far harder, succeeding in only 2 of several hundred attempts.
This isn’t a future capability; it happened this week. If you’re not evaluating AI-assisted security auditing, your AppSec strategy has a gap. The cost-to-value ratio ($4K for 22 CVEs including a CVSS 9.8) makes AI-assisted security auditing viable even for organizations that can’t afford dedicated red teams. Claude Code Security, the tool used for this work, is now in limited research preview.
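The unit economics are easy to reproduce from the figures in this post:

```python
def cost_per_finding(api_spend_usd: float, findings: int) -> float:
    """Average API spend per confirmed vulnerability in an AI audit."""
    return api_spend_usd / findings
```

Plugging in the Firefox audit numbers, $4,000 across 22 CVEs comes out to roughly $182 per confirmed vulnerability, which is the comparison point to hold against the day rate of a human security consultant.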
81,000-Person AI Survey: Key Takeaways
Beyond the headline numbers, the 81,000-person survey offers useful data for anyone building an internal case for AI adoption. The finding that unreliability (26.7%) tops concerns over job displacement (22.3%) aligns with what we see in enterprise conversations: leaders aren’t worried AI will replace their teams; they’re worried it will hallucinate in production. That’s a solvable problem with proper guardrails, evaluation frameworks, and human-in-the-loop workflows.
Other Updates Worth Noting
Persistent Cowork thread on mobile and desktop: Pro and Max plan users can now access a persistent agent thread via Claude Desktop or Claude for iOS/Android. Currently rolling out to Max plans first, with Pro following over the next two days.
Interactive charts and visualizations: Claude can now create custom charts, diagrams, and other visualizations inline in responses, expanding beyond text-only output.
Improved Excel and PowerPoint add-ins: The Claude for Excel and PowerPoint add-ins now share full conversation context across applications, support skills, and can connect via LLM gateway for Amazon Bedrock, Vertex AI, or Microsoft Foundry users.
Sydney office announced: Anthropic’s fourth Asia-Pacific office, joining Tokyo, Bengaluru, and Seoul. The office will support enterprise, startup, and research customers across Australia and New Zealand, with partners including Canva, Quantium, and Commonwealth Bank of Australia.
What This Means for Your Enterprise AI Strategy
The last technical objection for enterprise AI adoption just disappeared. Scale constraints (1M tokens at flat pricing), institutional backing (major consulting partners), proven security capability (22 real CVEs in production code), developer tooling velocity (four releases per week), and governance infrastructure (Anthropic Institute): these aren’t items on a roadmap. They shipped this week.
The question for enterprise IT leaders isn’t whether to adopt AI-assisted development and analysis tooling. It’s whether you’re positioned to move fast enough. Your competitors’ consulting partners are training 30,000 people and opening Claude access to 350,000 associates. The window for gaining an advantage through early adoption is closing.
If your team is evaluating how Claude Code, 1M token context, and the broader Anthropic ecosystem fit into your engineering workflows, the practical next step is a structured pilot: not another POC, but a scoped production workload that generates measurable results. We’ve been running exactly these kinds of engagements, and the combination of capabilities that shipped this week makes the case significantly stronger than it was seven days ago.
Sources
- Anthropic – 1M Context GA Blog Post
- Anthropic – Claude Partner Network
- Anthropic – The Anthropic Institute
- Anthropic – 81,000 Person AI Survey
- Claude Code Changelog
- Claude.ai Release Notes
- Claude Developer Platform Changelog
- Firefox Security Audit – The Digital Fortress
- Firefox Security Audit – TechBuzz
- Cursor Forum – 1M Context Pricing
- Anthropic News