Big Hat is a hard fork of OpenClaw, the leading open-source AI agent platform. We’ve stripped the consumer integrations, replaced the data layer, added enterprise authentication, and wired every byte to your Azure tenant.
Same agent capabilities. Completely different security posture.
The Problem
OpenClaw is the most capable AI agent platform available: browser control, MCP tool integration, sub-agents, cron scheduling, persistent memory, and a skill system that makes agents genuinely useful.
But it was built for individuals, not organizations.
- 12+ consumer messaging channels (WhatsApp, Telegram, Discord, Signal, iMessage, IRC, Line…), each one a potential data leakage vector
- No authentication: single-user daemon with no identity layer
- SQLite on local disk: no enterprise persistence, no audit trail
- Public API keys for LLM inference: data routes through third-party endpoints
- Prompt-driven approval: the agent reads workspace files to decide whether to ask before acting. A compromised skill file can suppress approvals entirely.
- No cost controls: no budgets, no token tracking, no spend visibility
For a personal productivity tool, this is fine. For an enterprise deployment, it’s a non-starter.
What Big Hat Changes
Big Hat keeps everything that makes OpenClaw powerful (the agent runtime, tool dispatch, MCP integration, skills engine, sub-agents, cron, browser automation) and replaces everything that makes it unsuitable for enterprise.
This isn’t a config layer or a wrapper. It’s a hard fork with 14,000+ lines of consumer code removed and enterprise infrastructure built in.
Consumer Channels: Deleted, Not Disabled
OpenClaw ships with integrations for WhatsApp, Telegram, Discord, Slack, Signal, iMessage, IRC, Line, Google Chat, and more. Big Hat removes all of them from the codebase. The channel registry is reduced to exactly two:
- Microsoft Teams (via Azure Bot Service)
- Web Chat (built-in)
This isn’t a feature toggle. The code is gone. Your enterprise data physically cannot reach a consumer messaging API.
Code-Enforced Security, Not Prompt-Driven
OpenClaw’s approval system has a structural weakness: it’s partially behavioral. The agent reads workspace files (AGENTS.md, SOUL.md, skill Markdown files) as trusted instructions. A compromised skill file can instruct the agent to execute all commands without asking.
Big Hat makes approval enforcement structural:
- Approval mode (`always` / `dangerous-only` / `full-auto`) lives in the application config; the agent process checks it before every tool invocation. The agent cannot modify its own security policy.
- Dangerous tools are classified in code: `exec`, `spawn`, `shell`, `sessions_spawn`, `sessions_send`, `gateway`, `fs_write`, and `fs_delete` all require explicit human approval in `dangerous-only` mode.
- Protected files (`AGENTS.md`, `SOUL.md`, `BOOTSTRAP.md`) have integrity monitoring; modifications are logged or blocked depending on policy.
- Auto-exec timeout: commands that run without human approval are killed after a configurable timeout (default: 30 seconds).
```ts
// Security policy is code-enforced, not prompt-driven
import { z } from 'zod';

const SecurityPolicySchema = z.object({
  approvalMode: z.enum(['always', 'dangerous-only', 'full-auto']),
  dangerousTools: z.array(z.string()),
  protectedFiles: z.array(z.string()),
  fileIntegrityMode: z.enum(['off', 'log', 'block']),
  maxAutoExecTimeoutMs: z.number(),
});
```
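To illustrate how a code-enforced gate differs from a prompt-driven one, here is a minimal sketch of a pre-invocation check. The `requiresApproval` function and the trimmed policy shape are illustrative assumptions, not Big Hat's actual API:

```typescript
// Illustrative sketch: decide whether a tool call needs human approval.
// `requiresApproval` and the policy shape are assumptions, not Big Hat's API.
type ApprovalMode = 'always' | 'dangerous-only' | 'full-auto';

interface SecurityPolicy {
  approvalMode: ApprovalMode;
  dangerousTools: string[];
}

function requiresApproval(policy: SecurityPolicy, tool: string): boolean {
  switch (policy.approvalMode) {
    case 'always':
      return true; // every tool call pauses for a human
    case 'dangerous-only':
      return policy.dangerousTools.includes(tool); // only classified tools pause
    case 'full-auto':
      return false; // no pauses (still subject to the auto-exec timeout)
  }
}

const policy: SecurityPolicy = {
  approvalMode: 'dangerous-only',
  dangerousTools: ['exec', 'spawn', 'shell', 'fs_write', 'fs_delete'],
};

console.log(requiresApproval(policy, 'exec'));    // true
console.log(requiresApproval(policy, 'fs_read')); // false
```

Because this check runs in the agent process before dispatch, no workspace file the model reads can bypass it.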
INSERT-Only Audit Trail
Every tool invocation, approval decision, config change, and workspace file modification is logged to a `security_audit_log` table in PostgreSQL.
The agent’s database role cannot UPDATE or DELETE audit records. Row-level security enforces this at the database level β even if the application is compromised, the audit trail is immutable.
| What’s Logged | Details |
|---|---|
| Tool invocations | Tool name, arguments, result, approval status, timestamp |
| Approval decisions | Who approved, approval mode at time of invocation |
| Config changes | Before/after values, who changed it, timestamp |
| File integrity events | File path, hash before/after, action taken |
| Model interactions | Model used, token counts, estimated cost, session ID |
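The INSERT-only guarantee can be expressed with ordinary PostgreSQL grants plus row-level security. The following sketch shows what such a migration might look like; the column layout, role name `agent_role`, and helper function are illustrative assumptions, not Big Hat's actual schema:

```typescript
// Hypothetical DDL for an INSERT-only audit table; names are illustrative.
// The agent's role can INSERT and SELECT but never UPDATE or DELETE, and
// RLS backs the grants up even if the application layer is compromised.
const auditTableDdl = `
  CREATE TABLE security_audit_log (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    event_type  text        NOT NULL,
    actor       text        NOT NULL,
    details     jsonb       NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
  );

  REVOKE ALL ON security_audit_log FROM agent_role;
  GRANT INSERT, SELECT ON security_audit_log TO agent_role;

  ALTER TABLE security_audit_log ENABLE ROW LEVEL SECURITY;
  -- No UPDATE or DELETE policy exists, so with RLS enabled those
  -- statements match zero rows for agent_role.
  CREATE POLICY audit_insert ON security_audit_log
    FOR INSERT TO agent_role WITH CHECK (true);
  CREATE POLICY audit_select ON security_audit_log
    FOR SELECT TO agent_role USING (true);
`;

// Parameterized insert a logging layer might issue (sketch).
function auditInsert(
  eventType: string,
  actor: string,
  details: Record<string, unknown>,
): { text: string; values: string[] } {
  return {
    text: 'INSERT INTO security_audit_log (event_type, actor, details) VALUES ($1, $2, $3)',
    values: [eventType, actor, JSON.stringify(details)],
  };
}

console.log(auditInsert('tool_invocation', 'jane@contoso.example', { tool: 'exec' }).values.length); // 3
```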
Ed25519-Signed Skills
The #1 attack vector against AI agents is skill injection: modifying or replacing a skill file to change the agent's behavior. Big Hat implements cryptographic skill verification:
- Every built-in skill ships with an Ed25519 detached signature
- The agent verifies signatures at load time
- Trust levels control what happens with unsigned skills:
| Trust Level | Behavior |
|---|---|
| `strict` | Unsigned skills are blocked. Only signed skills execute. |
| `warn` | Unsigned skills load but generate an audit entry and alert. |
| `permissive` | All skills load. Development mode only. |
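Node.js supports Ed25519 natively, so load-time verification needs no extra dependencies. This sketch shows the shape of such a check under the trust levels above; `shouldLoadSkill` and the in-line audit comment are illustrative, not Big Hat's actual code:

```typescript
// Sketch of load-time skill verification with Node's built-in Ed25519
// support. Function name and trust-level handling are illustrative.
import { generateKeyPairSync, sign, verify, KeyObject } from 'node:crypto';

type TrustLevel = 'strict' | 'warn' | 'permissive';

function shouldLoadSkill(
  skillBytes: Buffer,
  signature: Buffer | null,
  publicKey: KeyObject,
  trust: TrustLevel,
): boolean {
  const signed =
    signature !== null && verify(null, skillBytes, publicKey, signature);
  if (signed) return true;              // valid signature always loads
  if (trust === 'strict') return false; // unsigned or tampered: blocked
  if (trust === 'warn') {
    // A real system would write an audit entry and raise an alert here.
    return true;
  }
  return true;                          // permissive: development only
}

// Demo with a throwaway key pair (production keys would live in Key Vault).
const { publicKey, privateKey } = generateKeyPairSync('ed25519');
const skill = Buffer.from('# My Skill\nDo useful things.');
const sig = sign(null, skill, privateKey);

console.log(shouldLoadSkill(skill, sig, publicKey, 'strict'));  // true
console.log(shouldLoadSkill(skill, null, publicKey, 'strict')); // false
```

Note that a tampered skill fails exactly like an unsigned one: the detached signature no longer matches the file bytes.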
Every Byte in Your Azure Tenant
Big Hat replaces OpenClaw’s local-first data architecture with Azure-native services. Nothing leaves your tenant.
| Data Type | OpenClaw | Big Hat |
|---|---|---|
| LLM inference | Anthropic / OpenAI public API | Azure AI Foundry (your subscription, your region) |
| Embeddings | Local or third-party | Azure OpenAI (your subscription) |
| Vector memory | SQLite + sqlite-vec (local disk) | Azure AI Search (your tenant) |
| Telemetry & tracing | Local log files | Azure Application Insights + AI Foundry tracing |
| Messaging | WhatsApp / Telegram / Signal | Microsoft Teams + Exchange Online (Graph API) |
| Secrets | `.env` files or local config | Azure Key Vault (RBAC, rotation, audit) |
| Structured data | SQLite (local) | PostgreSQL (local or Azure Database) |
| Agent runtime | Developer’s laptop | Windows 365 Cloud PC (Intune-managed) |
Entra ID SSO with On-Behalf-Of Flow
OpenClaw has no authentication. Big Hat uses Entra ID as the single identity provider:
- User signs in once via Teams (already Entra-authenticated) or web UI (MSAL redirect)
- OBO (On-Behalf-Of) flow exchanges the user's token for scoped downstream tokens; the agent operates with the user's permissions, not with over-privileged service accounts
- Unattended scenarios (heartbeats, cron jobs) use an app registration with a client certificate
- Each customer gets their own Entra app registration, with tokens scoped to their directory
- Credentials are stored in Key Vault; no `.env` files in production
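Under the hood, the OBO exchange is a single call to the tenant's token endpoint. This sketch builds the request body per the Microsoft identity platform's OBO grant; the placeholder values are hypothetical, and production would use a certificate assertion rather than a client secret:

```typescript
// Builds the token-endpoint body for an On-Behalf-Of exchange
// (Microsoft identity platform OBO grant). Values are placeholders.
function buildOboRequest(
  userAssertion: string,
  clientId: string,
  clientSecret: string,
): URLSearchParams {
  return new URLSearchParams({
    grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
    client_id: clientId,
    client_secret: clientSecret,  // a certificate assertion in production
    assertion: userAssertion,     // the token the user presented to the agent
    scope: 'https://graph.microsoft.com/.default',
    requested_token_use: 'on_behalf_of',
  });
}

const body = buildOboRequest('<incoming-user-jwt>', '<app-client-id>', '<secret>');
console.log(body.get('requested_token_use')); // "on_behalf_of"
```

In practice a library such as MSAL handles this exchange and its token caching; the sketch only shows why the resulting token carries the user's permissions, not the app's.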
Cost Control Engine
OpenClaw has no cost visibility. Big Hat tracks and caps every token:
| Control | What It Does | Default |
|---|---|---|
| Pre-flight estimation | Estimates input token count before each LLM call. Warns if over threshold. | Warn at 50K tokens |
| Daily user budget | Blocks requests when cumulative daily spend is exceeded. | 500K tokens/day |
| Heartbeat budget | Separate ceiling for automated background tasks. | 50K tokens/day |
| Cron input guard | Rejects cron tasks with oversized context. | Reject at 100K tokens |
| Historical tracking | Every LLM call logs model, tokens (in/out), estimated USD cost, timestamp, session ID to PostgreSQL. | Always enabled |
| Status command | `bighat status` shows daily/weekly/monthly usage and spend. | Always available |
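The budget gate reduces to a simple pre-flight check. This sketch mirrors the defaults in the table above; the `Budget` shape and function name are illustrative assumptions:

```typescript
// Illustrative daily-budget gate; numbers mirror the defaults above,
// field and function names are assumptions.
interface Budget {
  dailyLimitTokens: number;
  usedTodayTokens: number;
}

function checkBudget(
  budget: Budget,
  estimatedTokens: number,
  warnAt = 50_000, // pre-flight warning threshold
): { allowed: boolean; warning: boolean } {
  const allowed =
    budget.usedTodayTokens + estimatedTokens <= budget.dailyLimitTokens;
  return { allowed, warning: estimatedTokens > warnAt };
}

const user: Budget = { dailyLimitTokens: 500_000, usedTodayTokens: 480_000 };
console.log(checkBudget(user, 10_000)); // { allowed: true, warning: false }
console.log(checkBudget(user, 60_000)); // { allowed: false, warning: true }
```

The same check with a lower ceiling covers heartbeats and cron tasks.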
Two-Tier Model Routing
Big Hat routes to models through Azure AI Foundry exclusively, with automatic tier selection:
| Tier | Model | When It’s Used |
|---|---|---|
| Workhorse | Claude Sonnet 4 | Orchestration, reasoning, code generation, compliance analysis |
| Lightweight | Claude Haiku 3.5 | Heartbeat checks, log summaries, status formatting, simple queries |
| Manual Escalation | Claude Opus 4 | Complex multi-step reasoning β never auto-selected |
High-stakes operations (GPO changes, production deployments, compliance analysis) automatically enable extended thinking. No manual configuration needed.
No silent failover. If Azure AI Foundry is unavailable, Big Hat reports the outage clearly; it does not fall back to a different provider. Silent model switching could violate data residency requirements.
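Automatic tier selection can be as simple as classifying the task before dispatch. The task categories and names below are illustrative assumptions, not Big Hat's actual routing table:

```typescript
// Illustrative tier selection; task categories are assumptions.
type Tier = 'workhorse' | 'lightweight';

const LIGHTWEIGHT_TASKS = new Set([
  'heartbeat',
  'log-summary',
  'status-format',
  'simple-query',
]);

function selectTier(task: string): Tier {
  // Opus escalation is deliberately excluded: it is manual-only,
  // so the router can never auto-select it.
  return LIGHTWEIGHT_TASKS.has(task) ? 'lightweight' : 'workhorse';
}

console.log(selectTier('heartbeat'));       // "lightweight"
console.log(selectTier('code-generation')); // "workhorse"
```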
Production Reliability
Big Hat runs as a Windows service (NSSM-managed) with enterprise health monitoring:
- `/healthz` endpoint, polled by the service wrapper for liveness
- Watchdog timer: self-terminates if the event loop hangs for 30 seconds, triggering automatic restart
- Resource guards: exits cleanly if memory or CPU thresholds are exceeded
- MCP server supervision: crashed MCP servers auto-restart with exponential backoff (1 s initial, 60 s max, 10 retries)
- Azure Monitor alerts: no-heartbeat, excessive restarts, and MCP failures trigger alerts to your ops team
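The backoff schedule for MCP restarts is a standard doubling curve with a cap. A minimal sketch using the parameters cited above (1 s initial, 60 s cap, 10 retries):

```typescript
// Exponential backoff for MCP restarts: 1 s doubling to a 60 s cap,
// giving up after 10 attempts. A sketch of the cited parameters.
function restartDelayMs(attempt: number): number | null {
  if (attempt >= 10) return null;                // give up after 10 retries
  return Math.min(1_000 * 2 ** attempt, 60_000); // 1s, 2s, 4s, ... capped at 60s
}

console.log([0, 1, 2, 6, 9].map(restartDelayMs)); // [1000, 2000, 4000, 60000, 60000]
console.log(restartDelayMs(10));                  // null
```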
Guardrails Against Model Drift
AI agents generating code from stale training data is a real risk. Big Hat implements Fetch-Before-Flight:
- The agent cannot generate domain-specific code until it retrieves current documentation from the relevant MCP server
- Retrieved docs are injected immediately before the generation step, anchoring the model to current APIs
- A self-correction loop (generate → analyze → fix, max 3 iterations) catches deprecated patterns post-generation
This turns the agent from a probabilistic code generator into one grounded in current documentation, which is critical when a syntax error could affect hundreds of machines.
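The self-correction loop above can be sketched as a bounded retry over stubbed generate and analyze steps. The function names and the toy analyzer are illustrative, standing in for LLM calls:

```typescript
// Sketch of the generate -> analyze -> fix loop with a 3-iteration cap.
// The generator and analyzer are stubs standing in for LLM calls.
function generateWithSelfCorrection(
  generate: (feedback: string | null) => string,
  analyze: (code: string) => string | null, // null = no issues found
  maxIterations = 3,
): string {
  let feedback: string | null = null;
  let code = '';
  for (let i = 0; i < maxIterations; i++) {
    code = generate(feedback);
    feedback = analyze(code);
    if (feedback === null) break; // clean analysis: stop early
  }
  return code; // best effort once the cap is reached
}

// Toy demo: the first draft uses a "deprecated" call, the second fixes it.
const drafts = ['Old-Api', 'New-Api'];
const result = generateWithSelfCorrection(
  () => drafts.shift() ?? 'New-Api',
  (code) => (code.includes('Old-Api') ? 'Old-Api is deprecated' : null),
);
console.log(result); // "New-Api"
```

The cap matters: without it, a model that keeps reintroducing the same deprecated pattern would loop forever.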
MCP Servers
Big Hat ships with purpose-built MCP servers for enterprise IT, not generic integrations:
Microsoft First-Party (Integrated)
| Server | Transport | What It Does |
|---|---|---|
| Azure MCP | stdio | Azure resource management, CLI commands, monitoring |
| GitHub MCP | stdio | Repo management, PR automation, Actions workflows |
| Enterprise MCP | HTTP/SSE | Entra ID / Graph read-only queries (users, groups, devices) |
| Microsoft Learn MCP | HTTP | Real-time grounding against official Microsoft documentation |
Built by Big Hat Group
| Server | Domain | What It Does |
|---|---|---|
| Packager-MCP | Application packaging | PSADT v4.x script generation with model drift guardrails |
| Defender-MCP | Endpoint security | Device and file scanning |
| MSIX-MCP | MSIX packaging | Package creation, troubleshooting, signing workflows |
| JIRA-MCP | Issue tracking | JQL search, issue creation, transitions, bulk operations |
Planned (Phase 2+)
Intune management (full read/write), CIS benchmark compliance, Azure DevOps, ServiceNow ITSM.
Skills
Big Hat ships with a library of purpose-built skills for enterprise IT operations β application packaging, endpoint security, compliance analysis, identity auditing, and cost optimization. These skills are Ed25519-signed and verified at load time.
Beyond Big Hat's own skill library, the platform supports third-party and community skills. Any OpenClaw-compatible skill can be signed for use with Big Hat; once signed, it's treated with the same trust level as a built-in skill. This lets customers bring in specialized skills from partners, vendors, or their own teams without weakening the security model. From legal review to coding, skills are a pathway to getting the most out of your agents.
Customers can also create custom skills as Markdown files with YAML frontmatter β Big Hat automatically discovers and loads them.
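A custom skill might look like the following. The frontmatter fields shown are illustrative, not a documented Big Hat schema:

```markdown
---
name: patch-compliance-report
description: Summarize missing patches across Intune-managed devices
version: 1.0.0
---

# Patch Compliance Report

When asked for patch status, query the Enterprise MCP server for managed
devices, group results by update ring, and produce a Markdown summary table.
```

Dropped into the skills directory, a file like this is discovered on the next load; in `strict` trust mode it would also need to be signed before it executes.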
Deployment
Big Hat is designed to run on a Windows 365 Cloud PC β a persistent, Intune-managed endpoint with its own Entra ID identity. The agent gets the same desktop, tools, and governance as any other managed device.
| Component | Detail |
|---|---|
| Runtime | Node.js 22.x LTS on Windows 11 |
| Service | NSSM Windows service with health checks and auto-restart |
| Installer | WiX v5 MSI (per-user) or Intune .intunewin package |
| Setup | Guided PowerShell scripts: Setup-GraphMail.ps1, Setup-AppInsights.ps1, Setup-MemoryInfra.ps1, Setup-Teams.ps1 |
| Infrastructure | Terraform modules for Azure dependencies (Entra app, Key Vault, Log Analytics, AI Foundry) |
Compliance Roadmap
Big Hat is architected for certification from day one:
| Framework | Target Timeline | Status |
|---|---|---|
| SOC 2 Type I | Month 6β9 | Architecture complete. Formal policies in progress. |
| SOC 2 Type II | Month 12β15 | Observation period begins after Type I. |
| ISO 27001 | Month 15β21 | ISMS documentation planned. |
Already implemented: Encryption at rest (Postgres TDE, Key Vault), encryption in transit (TLS 1.2+), INSERT-only audit logging with RLS, code-enforced approval workflows, Ed25519 signed update manifests, Azure Monitor alerting, change management via OpenSpec proposals.
Feature Comparison: OpenClaw vs. Big Hat
| Feature | OpenClaw | Big Hat |
|---|---|---|
| Agent runtime + MCP tools | ✅ | ✅ |
| Browser control (Playwright) | ✅ | ✅ |
| Skills engine + sub-agents | ✅ | ✅ |
| Cron, webhooks, heartbeats | ✅ | ✅ |
| WhatsApp / Telegram / Discord / Signal / iMessage | ✅ | ❌ Removed |
| Microsoft Teams | ✅ (extension) | ✅ (primary channel) |
| Authentication | ❌ None | ✅ Entra ID SSO + OBO |
| Database | SQLite (local) | ✅ PostgreSQL |
| Audit trail | ❌ | ✅ INSERT-only with RLS |
| Skill signing | ❌ | ✅ Ed25519 |
| Cost controls | ❌ | ✅ Per-user budgets |
| Model routing | Any provider | ✅ Azure AI Foundry only |
| Telemetry | Local logs | ✅ App Insights + Foundry tracing |
| Memory backend | sqlite-vec | ✅ Azure AI Search |
| Secret storage | `.env` / local config | ✅ Azure Key Vault |
| MSI installer | ❌ | ✅ WiX v5 |
| Intune deployment | ❌ | ✅ `.intunewin` |
| Windows Service mode | ❌ | ✅ NSSM + Event Log |
| Health endpoints | ❌ | ✅ `/healthz`, `/readyz`, `/livez` |
| Data residency | Local disk | ✅ Your Azure tenant |
FAQ
What is Big Hat?
Big Hat is an enterprise AI agent platform: a hard fork of OpenClaw with consumer integrations removed, Entra ID authentication added, and all data routed through your Azure tenant. It's a software product built and maintained by Big Hat Group Inc.
How is it different from just configuring OpenClaw?
Configuration can't remove code. OpenClaw's consumer channel integrations, SQLite data layer, and prompt-driven approval system are architectural; they can't be fixed with settings. Big Hat is a hard fork: 14,000+ lines of consumer code deleted, PostgreSQL replacing SQLite, Entra ID authentication built in, Ed25519 skill signing implemented. These are code changes, not config toggles.
How is it more secure than OpenClaw?
Six concrete ways: (1) Consumer messaging channels deleted from the codebase. (2) Approval enforcement is code-level, not prompt-driven; the agent can't modify its own security policy. (3) INSERT-only audit trail in PostgreSQL with row-level security. (4) Ed25519-signed skills prevent skill injection. (5) All LLM inference routes through your Azure AI Foundry. (6) Protected workspace files with integrity monitoring.
Where does data go?
Your Azure tenant. LLM inference through Azure AI Foundry. Embeddings through Azure OpenAI. Vector memory in Azure AI Search. Telemetry in Application Insights. Secrets in Key Vault. No data leaves your Azure perimeter.
What models does it use?
Claude Sonnet 4 (workhorse), Claude Haiku 3.5 (lightweight), and Claude Opus 4 (manual escalation), all through Azure AI Foundry. Model selection is automatic. No public API keys.
Can I add my own integrations?
Yes. Custom MCP servers (stdio or HTTP) and custom skills (Markdown with YAML frontmatter) are automatically discovered. No core code changes needed.
What’s the compliance status?
SOC 2 Type I targeted at month 6β9. The architecture is compliance-first: encryption, audit logging, access control, and incident detection are already implemented. The remaining work is formalization and third-party attestation.
How is Big Hat different from Big Hat Group?
Big Hat is the product β the enterprise AI agent platform. Big Hat Group Inc. is the company that builds and maintains it.
Get Started
We're onboarding early enterprise adopters. Start with a One-Week Jumpstart: a working AI agent pilot in your environment in five business days.
Big Hat is built and maintained by Big Hat Group Inc. Built on OpenClaw. Hardened for the enterprise.