
Claude Code dangerously-skip-permissions: Why It's Tempting, Why It's Dangerous

Look, I'll be upfront: I've used Claude Code's --dangerously-skip-permissions flag more than I probably should have. If you're a developer working with Claude Code daily, you probably have too — or you've been tempted. The flag turns Claude Code from a cautious assistant that asks "may I?" before every mkdir into a fully autonomous agent that just... does things. It's intoxicating. It's also how people lose their home directories.

This is the honest version of that conversation. Not the sanitised Anthropic docs version, not the "just use Docker lol" Reddit comment version. The version where I tell you exactly what this flag does, why the permission system it bypasses is genuinely broken, and what happened to real developers who got burned.

What --dangerously-skip-permissions Actually Does

In normal operation, Claude Code asks permission for everything. Every bash command, every file edit, every network request, every MCP tool interaction. The --dangerously-skip-permissions flag auto-approves all of them. No confirmation dialogs. No pause. No chance to catch a bad command before it fires.

It's technically equivalent to --permission-mode bypassPermissions — same behaviour, different flag name:

claude --dangerously-skip-permissions "Fix all lint errors"
claude --permission-mode bypassPermissions "Fix all lint errors"

Here's the detail most people miss: subagent inheritance. When you enable bypass mode, all subagents inherit full autonomous access. You can't override this. The official SDK documentation spells it out clearly — subagents may have different system prompts and less constrained behaviour than your main agent, and they all get full, unsupervised system access.

The flag bypasses the entire safety stack: the command blocklist (which normally blocks curl, wget, and other web-fetching commands), write access restrictions (normally limited to the current working directory), the permission prompt system, and MCP server trust verification. Everything. Gone.

Enterprise admins can disable it organisation-wide, and there's a guardrail preventing use with root privileges. But if you're running it on your personal machine as your regular user — which, statistically, you probably are — those guardrails don't help you.
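For reference, the org-wide kill switch lives in the managed settings file that individual users can't override. A minimal sketch, assuming the disableBypassPermissionsMode key described in the Claude Code settings docs (the file path varies by OS; on Linux it's typically /etc/claude-code/managed-settings.json):

```json
{
  "permissions": {
    "disableBypassPermissionsMode": "disable"
  }
}
```

With this in place, the flag and --permission-mode bypassPermissions both refuse to activate for every user on the machine.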

The Permission Fatigue Problem Is Real

The interrupt loop that kills deep work

Before I get into the horror stories, I want to acknowledge something: the default permission system is genuinely frustrating. This isn't developers being lazy. It's a real workflow problem.

You type a prompt. Claude starts working. You switch to Slack, check something, maybe grab a coffee. Five minutes later you come back and Claude is just... sitting there. Waiting for you to approve a file edit. The whole task is frozen at step two because it needed your blessing to run mkdir.

This isn't a theoretical complaint. Kyle Redelinghuys, who wrote one of the better posts on this flag, nailed it: you set Claude off on a task, walk away, and come back to find it stopped at step two because it needed permission to create a directory. He also documented a successful nine-hour autonomous session where Claude built an entire financial data analysis system from scratch. That kind of extended workflow is simply impossible when you're approving prompts every ninety seconds.

There's a deeper problem too. A commenter on LessWrong called "avturchin" articulated something I've felt but couldn't quite put into words: Claude makes roughly 100 permission requests per hour, and it's impossible to evaluate whether any given one is dangerous without spending real time reading the details. So you end up rubber-stamping approvals without looking at them. That's "permission noise" — and it creates a false sense of security that might actually be worse than no permissions at all. At least with YOLO mode, you know you're flying without a net.

When YOLO mode actually makes sense

Even Simon Willison — the person who literally coined the term "prompt injection" and understands the risks better than almost anyone — acknowledges that Claude Code in YOLO mode feels like a completely different product. He's said publicly that he suspects many people who dismiss coding agents have never experienced YOLO mode in all its glory.

And Anthropic's own engineers use it. Their February 2026 blog post about building a C compiler with parallel Claudes shows their autonomous agent loop running claude --dangerously-skip-permissions in a bash while-loop. The parenthetical that follows is telling: (Run this in a container, not your actual machine.) Even the people who built the thing won't run it on bare metal.
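The loop itself is nothing exotic. Here's a runnable sketch of the pattern (a hypothetical reconstruction, not their exact script, with the claude invocation stubbed out so nothing autonomous happens when you run it):

```shell
# Sketch of the autonomous agent loop pattern. "done_check" stands in for
# whatever completion test the real loop uses, e.g. "the test suite passes".
sentinel=$(mktemp -u)   # path of a file that does not exist yet

done_check() { [ -f "$sentinel" ]; }

run_agent() {
  # The real version, which belongs inside a container, is roughly:
  #   claude --dangerously-skip-permissions "Keep working until the tests pass."
  # Stubbed here so the sketch is safe to execute:
  touch "$sentinel"     # pretend the task finished after one pass
}

until done_check; do
  run_agent
done
echo "agent loop finished"
```

The point is how little scaffolding autonomy needs: a completion check, a loop, and the flag. Everything that makes it safe lives outside the loop, in the container boundary.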

Real Incidents — This Isn't Theoretical

Most cautionary articles about this flag are frustratingly vague. "Bad things could happen." "You might lose data." That's not useful. Here are the specifics.

The home directory wipes

The Wolak incident (October 2025) is the one that should keep you up at night. Developer Mike Wolak was working on a firmware project in a nested directory on Ubuntu/WSL2 when Claude Code executed an rm -rf starting from root (/). His GitHub bug report (#10077) documents it in forensic detail: error logs showed thousands of "Permission denied" messages for system paths like /bin, /boot, and /etc — the command literally tried to delete everything on the machine, and only stopped where Linux file permissions wouldn't let it. Every user-owned file was gone. Worse, the conversation log captured the command's output but not the actual command itself, making it impossible to determine exactly what went wrong. Anthropic tagged it area:security and bug.

The Reddit incident (December 2025) became the flag's most public disaster. A user on r/ClaudeAI asked Claude to clean up packages in an old repository. Claude generated rm -rf tests/ patches/ plan/ ~/ — and that trailing ~/ expanded to the user's entire home directory. Desktop files, Keychain passwords, application data, everything. Simon Willison amplified it on X as a reminder of the risk. It hit 197 points on Hacker News with over 156 comments and was covered by outlets in Japan and the US. It became the cautionary tale.

The tilde directory trick (November 2025) is the most insidious one. Developer JeffreyUrban filed GitHub Issue #12637 after discovering that Claude, in a previous session, had accidentally created a directory literally named ~. Just a tilde. When a later cleanup tried to remove that directory without quoting it, the bare ~ in the command expanded to his home directory instead of the local folder. A two-step failure spread across separate sessions. His comment says it all: "Loving claude, but this was and is continuing to be super frustrating to recover from."
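Both failure modes come down to tilde expansion, and you can demonstrate them harmlessly with echo (nothing below deletes anything; the working directory is a throwaway created by mktemp):

```shell
# Tilde-expansion hazards, demonstrated with echo so nothing is deleted.
workdir=$(mktemp -d)
cd "$workdir"
mkdir tests patches plan '~'     # yes, a directory literally named ~

# Hazard 1: a trailing unquoted ~/ silently expands to your home directory.
echo rm -rf tests/ patches/ plan/ ~/

# Hazard 2: once a literal ~ directory exists, only a quoted or ./-prefixed
# path refers to it; a bare ~ still expands to $HOME.
echo rm -rf ~        # expands to your home directory
echo rm -rf './~'    # the local directory you actually meant
```

Run it and watch your actual home path appear in the output. That's the whole incident class in four lines of shell.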

Subtler but more common damage

The dramatic wipes get the headlines, but the everyday damage is more insidious. Kyle Redelinghuys documented Claude overwriting an existing config file with blank values — no backup, no warning. It also tried to modify system-related JSON files that had nothing to do with the project. This kind of quiet corruption is harder to notice and harder to recover from than a blown-away home directory.

In January 2026, developer James McAulay was benchmarking Claude Cowork's folder organisation capabilities with explicit instructions to retain user data. Cowork executed rm -rf, deleting approximately 11GB of files, and its task list cheerfully marked "Delete user data folder: Completed." He posted the video on X. Live on camera.

And then there's prompt injection. PromptArmor demonstrated in January 2026 that hidden text inside a .docx file — 1-point font, white text on white background — could manipulate Claude into uploading sensitive files to an attacker's Anthropic account via the allowlisted API. No special permissions needed. No suspicious-looking commands. Just a document that looked perfectly normal to human eyes. This isn't a theoretical attack vector. It's been demonstrated, recorded, and published.

The Container Consensus — How to Actually Use It Safely

The community has converged on a clear answer: never run --dangerously-skip-permissions on your host machine. Containers. VMs. Sandboxed environments. That's it.

Docker is the answer

A typical safe setup mounts only the project directory and runs with network isolation:

docker run -it --rm \
  -v "$(pwd)":/workspace -w /workspace \
  --network none \
  claude-code:latest --dangerously-skip-permissions "Implement feature"

Anthropic provides an official reference devcontainer with firewall rules that restrict outbound connections to whitelisted domains — npm registry, GitHub, the Claude API — and a default-deny network policy. The devcontainer docs explicitly state that the container's enhanced security measures allow you to safely run --dangerously-skip-permissions for unattended operation.

This is the mental model shift that matters. The question isn't "should I be more careful with the flag?" — it's "should I be running AI agents directly on my machine at all?" The answer, increasingly, is no. Your host machine has your SSH keys, your .env files, your browser cookies, your Keychain. An AI agent with full system access is one bad prompt away from touching all of it. A container has whatever you give it and nothing more.

Layer your safety practices

Containers aren't the whole story. Experienced developers stack multiple precautions:

Git checkpoints before every session. git add -A && git commit -m "checkpoint pre-claude" means recovery is always one git reset --hard HEAD away. This is the single cheapest insurance you can buy.

Tight task scoping. There's a world of difference between "Build me a financial analysis system" and a prompt that specifies exact files, expected flows, and validation criteria. The more specific your prompt, the less room Claude has to improvise destructively.

Budget limits. --max-budget-usd 5.00 prevents runaway API spending. You'd be surprised how fast costs accumulate during autonomous sessions.

Explicitly block dangerous tools. --disallowedTools "Bash(rm:*)" blocks rm even in bypass mode. This works even when --allowedTools doesn't — a quirk that's worth knowing about.

Request changelogs. Ask Claude to document changes as it works. Makes post-session review actually manageable instead of a forensic excavation.
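Stacked together, the whole ritual fits in a tiny wrapper. A hypothetical sketch, shown as a dry run that echoes the final command so you can inspect it first (the flags are the ones discussed above; drop the echoes once you trust it):

```shell
# Hypothetical pre-session wrapper (dry run: commands are echoed, not run).
checkpoint() {
  echo git add -A '&&' git commit -m '"checkpoint pre-claude"'
}

run_session() {
  echo claude --dangerously-skip-permissions \
    --max-budget-usd 5.00 \
    --disallowedTools '"Bash(rm:*)"' \
    "\"$1\""
}

checkpoint
run_session "Fix all lint errors; keep a CHANGELOG.md entry for every file you touch"
```

Note the prompt bakes in both the tight scoping and the changelog request, so the layers travel together instead of depending on you remembering them each session.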

Safer Alternatives Most Developers Don't Know Exist

The flag creates a false binary — fully supervised or fully autonomous. There are middle grounds.

acceptEdits mode auto-approves file modifications but still prompts for shell commands. If your workflow is mostly refactoring and you trust file edits but not arbitrary bash, this is the sweet spot.
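You can also make acceptEdits the default rather than a per-invocation choice. A sketch via project settings, assuming the defaultMode key under permissions as described in the settings docs:

```json
{
  "permissions": {
    "defaultMode": "acceptEdits"
  }
}
```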

allowedTools configuration lets you whitelist specific safe operations without blanket bypass:

{
  "permissions": {
    "allow": [
      "Read(*)",
      "Grep(*)",
      "Glob(*)",
      "Bash(npm run lint:*)",
      "Bash(git commit *)"
    ]
  }
}

This is principle of least privilege applied to AI agents, and it's far more surgical than the bypass flag.

plan mode creates a read-only plan for human approval before any execution. Great for high-stakes changes where you want to see the full picture before anything runs.

PreToolUse hooks intercept tool calls before they execute. Trail of Bits published an excellent config repo showing how to set up hooks that block rm -rf patterns and direct pushes to main. They're guardrails, not walls, but they catch the obvious disasters.
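The blocking logic itself is simple. Here's a hypothetical sketch of a hook's core check as a shell function; in practice it's a script wired into settings.json under hooks.PreToolUse, which receives the tool-call JSON on stdin and blocks the call by exiting with status 2 (per the hooks documentation):

```shell
# Hypothetical PreToolUse hook logic: inspect the tool-call payload and
# refuse anything containing "rm -rf". Status 2 means "block this call".
block_rm() {
  payload=$1
  case "$payload" in
    *"rm -rf"*)
      echo "blocked: rm -rf is not allowed in this project" >&2
      return 2
      ;;
  esac
  return 0
}

block_rm '{"tool_input":{"command":"ls -la"}}'                    # allowed
block_rm '{"tool_input":{"command":"rm -rf ~/"}}' || echo "call blocked"
```

A string match like this is trivially evadable (rm -r -f, a variable holding the path), which is exactly why hooks are guardrails rather than walls.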

One gotcha worth flagging: there's a documented bug (#17544) where combining --dangerously-skip-permissions with --permission-mode plan causes the bypass flag to silently override plan mode entirely. You think you're in plan mode. You're not. You're in full bypass.

The Honest Verdict

The --dangerously-skip-permissions flag exists because the alternative — approving a hundred prompts an hour, rubber-stamping most of them without reading — creates its own failure mode. Both defaults are bad. The flag just makes the failure mode more spectacular.

The community consensus is clear and, at this point, pretty much universal: containers or don't bother. Layer git checkpoints, tool restrictions, and network isolation on top. Explore acceptEdits and allowedTools before reaching for the nuclear option. And recognise that the fundamental issue isn't just this flag — it's that LLMs can generate catastrophically destructive commands like rm -rf ~/ regardless of what permission system wraps them. The flag merely determines whether a human gets a chance to catch the mistake before it executes.

I've shifted my own practice toward containers. Not because I think I'll be the one who loses their home directory — everyone thinks that — but because the calculus changes once you realise the downside is unbounded and the container setup takes twenty minutes. That's a trade I'll take every time.

The vibe coding culture around AI tools is, frankly, too careless about this stuff. .env files with production credentials sitting in scope. SSH keys accessible to agents. Open database connections. That's a bigger conversation — and probably a future post — but --dangerously-skip-permissions is the symptom, not the disease. The disease is treating AI agents like they're just faster versions of us, when they're really more like very capable interns with root access and no sense of consequence.

Treat them accordingly.

Thomas Wiegold

AI Solutions Developer & Full-Stack Engineer with 14+ years of experience building custom AI systems, chatbots, and modern web applications. Based in Sydney, Australia.
