Anthropic delivered its most consequential product week of 2026. Computer Use arrived in research preview, letting Claude control your Mac to complete tasks autonomously. Claude Code Auto Mode introduced a safer middle ground for agent permissions using an AI classifier. And in a federal courtroom, a judge called the Pentagon’s “supply chain risk” designation against Anthropic “an attempt to cripple” the company. Here is everything that matters for enterprise teams this week in Claude Weekly.
Computer Use: Claude Can Now Control Your Desktop
Anthropic launched Computer Use in research preview for Claude Pro and Max subscribers on macOS. Claude can now click, scroll, navigate apps, and browse the web, handling tasks where no direct integration exists. It uses a permission-first approach, requesting access before touching new applications.
The capability becomes significantly more powerful when paired with Dispatch, the persistent mobile-to-desktop conversation feature that launched the prior week. The combination lets users assign Claude a task from their phone and have it work on their computer while they are away, a pattern that moves Claude from "assistant you talk to" toward "agent that works for you."
For enterprise teams, this is early but directional. Computer Use is macOS-only, explicitly labeled as a research preview, and its accuracy still has room to improve. But the architectural pattern (persistent threads, cross-device handoffs, autonomous desktop control) signals where Anthropic's agent platform is heading. Teams evaluating agent workflows should start thinking about which tasks they would delegate if desktop automation were reliable.
Claude Code Auto Mode: Smarter Permission Handling
Auto Mode shipped as a research preview for Claude Code, available now for Teams customers, with Enterprise and API rollout coming soon. It sits between the conservative default (approve every tool call) and the risky --dangerously-skip-permissions flag.
The mechanism: an AI classifier reviews each tool call before execution, checking for destructive actions, data exfiltration, or malicious code. Safe actions proceed automatically. Risky ones get blocked. Enable it with claude --enable-auto-mode and cycle modes with Shift+Tab. It works with both Sonnet 4.6 and Opus 4.6.
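The gate in front of each tool call can be sketched as a simple allow/block decision. This is an illustrative Python sketch only, not Anthropic's implementation: the real classifier is itself an AI model rather than a keyword filter, and the ToolCall shape, pattern list, and function names below are all assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"

@dataclass
class ToolCall:
    tool: str      # e.g. "bash", "write_file"
    command: str   # the action the agent wants to take

# Hypothetical stand-in for the AI classifier: a rule list flagging
# obviously destructive or exfiltrating patterns. Auto Mode's actual
# classifier is a model evaluating intent, not string matching.
RISKY_PATTERNS = ("rm -rf", "curl ", "DROP TABLE", "chmod 777")

def classify(call: ToolCall) -> Verdict:
    """Return ALLOW for safe-looking calls, BLOCK for risky ones."""
    if any(p in call.command for p in RISKY_PATTERNS):
        return Verdict.BLOCK
    return Verdict.ALLOW

def execute_with_auto_mode(call: ToolCall) -> str:
    """Safe actions proceed automatically; risky ones are blocked."""
    if classify(call) is Verdict.BLOCK:
        return f"blocked: {call.tool}"
    return f"executed: {call.tool}"
```

The point of the pattern is that every call passes through the classifier before execution, so the developer only sees prompts for the blocked minority.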
This matters because the permission friction in Claude Code was a real adoption barrier. Approving every file write or shell command is tedious for experienced developers, but skipping permissions entirely is a non-starter for enterprise environments. Auto Mode threads the needle; TechCrunch described it as "more control, but on a leash."
Enterprise admins retain full control: Auto Mode can be disabled organization-wide via managed settings ("disableAutoMode": "disable"), and it is off by default on Desktop. The admin controls are what make this enterprise-viable rather than just developer-convenient.
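As a sketch, the org-wide kill switch might live in a managed settings file like the following. Only the "disableAutoMode": "disable" key and value come from the release notes; the file's location and any surrounding structure are assumptions.

```json
{
  "disableAutoMode": "disable"
}
```

With this deployed via managed settings, individual developers cannot re-enable Auto Mode from the CLI.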
Pentagon Lawsuit: Judge Signals Skepticism
In the most closely watched AI-government case of the year, U.S. District Judge Rita Lin heard arguments on March 25 on Anthropic's motion to block the Pentagon's "supply chain risk" designation. Lin called the action "troubling" and said it "looks like an attempt to cripple Anthropic," noting the label is normally reserved for "adversaries of the US government."
The backstory: Anthropic CEO Dario Amodei refused the Pentagon’s demand for unfettered AI access, citing concerns about mass surveillance and autonomous weapons. Sworn declarations revealed the Pentagon told Anthropic the two sides were “very close” on contested issues one day after finalizing the designation. Microsoft filed an amicus brief in support of Anthropic.
A decision is expected in the coming days. A ruling in Anthropic's favor would set a significant precedent for AI companies' right to maintain safety commitments in government contracts. A ruling against could reshape how every frontier AI company approaches defense work. Either way, enterprise buyers should be tracking this: the outcome will influence Anthropic's risk profile as a vendor.
Code Review Continues Rollout
Claude Code Review, which launched March 9, continued its rollout this week. The multi-agent system dispatches parallel review agents on pull requests, cross-checks findings, filters false positives, and ranks bugs by severity. It integrates with GitHub and focuses on logic errors over style nits.
The internal numbers are notable: Anthropic reports substantive review comments jumped from 16% to 54% of PRs after enabling the tool. Average review time is approximately 20 minutes at $15 to $25 per review. It is available for Teams and Enterprise customers, with admin controls for spend caps, repo selection, and analytics dashboards.
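The dispatch-then-cross-check pipeline described above can be sketched in a few lines. This is a hypothetical Python illustration of the pattern, not Anthropic's system: the reviewer stubs, the corroboration rule, and the severity scale are all assumptions standing in for model-backed agents.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    severity: int  # higher = worse
    message: str

# Hypothetical reviewer stubs; the real system dispatches model-backed
# agents with different focuses (logic, security, concurrency, ...).
def logic_reviewer(diff: str) -> list[Finding]:
    return [Finding("app.py", 42, 3, "off-by-one in pagination loop")]

def security_reviewer(diff: str) -> list[Finding]:
    return [Finding("app.py", 42, 3, "off-by-one in pagination loop"),
            Finding("db.py", 7, 5, "SQL built by string concatenation")]

def review_pull_request(diff: str) -> list[Finding]:
    """Run reviewers in parallel, cross-check findings, rank by severity."""
    reviewers = (logic_reviewer, security_reviewer)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda r: r(diff), reviewers))
    # Count how many agents independently reported each finding.
    counts: dict[Finding, int] = {}
    for findings in results:
        for f in findings:
            counts[f] = counts.get(f, 0) + 1
    # False-positive filter (assumed rule): keep findings corroborated
    # by more than one agent, or severe enough to surface regardless.
    kept = [f for f, n in counts.items() if n > 1 or f.severity >= 4]
    return sorted(kept, key=lambda f: f.severity, reverse=True)
```

The cross-check step is the interesting design choice: a finding reported by only one agent needs high severity to survive, which is one plausible way to trade recall for a lower false-positive rate.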
Interactive Visualizations and Mobile Apps
Two smaller product updates round out the week. Interactive charts and visualizations now render inline in Claude responses: users can explore data visually, toggle parameters, and download results as HTML. Available on free and paid plans.
Separately, the Claude iOS and Android apps now support fully interactive apps, including live charts, sketch diagrams, and shareable assets rendered within conversations. These updates make Claude more useful for data exploration and presentation workflows without leaving the chat interface.
Enterprise Add-Ins and Office Integration
The Claude for Excel and PowerPoint add-ins received updates that share full conversation context across applications. Actions in Excel now inform PowerPoint and vice versa. The add-ins also added skills support and LLM gateway connectivity for organizations using Amazon Bedrock, Google Vertex AI, or Microsoft Foundry as their inference layer. For enterprises that route AI traffic through approved gateways, this removes a procurement blocker.
IPO Watch and Legal Overhang
Anthropic continues positioning for a potential IPO in H2 2026. The company's annual recurring revenue has reached an estimated $19 billion, up from $1 billion in early 2025. The $380 billion valuation from the February Series G round will face public-market scrutiny. Wilson Sonsini has been retained as IPO counsel.
On the legal side, BMG Rights Management filed a new copyright infringement lawsuit this week, alleging unauthorized use of hundreds of protected musical compositions for model training. This adds to the ongoing $1.5 billion copyright class action currently seeking final court approval. Copyright litigation remains a persistent overhang for any IPO timeline.
What to Watch
- Pentagon ruling imminent. Judge Lin’s decision on the preliminary injunction could arrive any day. The outcome will set precedent for AI safety commitments in government contracts and influence Anthropic’s enterprise risk profile.
- Auto Mode expansion. Currently Teams-only; Enterprise and API rollout is expected in the coming days. Watch for how the AI classifier performs at scale and whether Anthropic publishes transparency details on its safety criteria.
- Computer Use maturation. macOS-only and explicitly early. Windows and Linux support, plus accuracy improvements, are the natural next steps. The research preview framing suggests Anthropic is gathering feedback before a broader launch.
That wraps this week’s Claude Weekly. Between Computer Use, Auto Mode, and a potential landmark court ruling, this is one of the most consequential weeks in the Claude ecosystem this year. Check back next week for the latest.