
I Switched From Claude Code to OpenCode — Here's Why

I've been using Claude Code since its early preview days in February 2025. It was my daily driver — the tool I reached for without thinking. I'd tried Codex CLI, kicked the tyres on Copilot, even dabbled with Aider. But I never seriously considered the open-source CLI alternatives. They felt like hobby projects chasing a moving target.

Then I started testing MiniMax M2.5 with OpenCode for a review article, and something clicked. That was the first time I ran OpenCode with intent rather than idle curiosity. And the question I kept coming back to was simple: in a Claude Code vs OpenCode comparison, is the open-source alternative actually good enough for daily use — or is it riding hype and 100K GitHub stars to nowhere?

Turns out, the answer surprised me.

What OpenCode and Claude Code Have in Common

Before getting into where they diverge, it's worth acknowledging how much these two tools overlap. A year ago this wouldn't have been a fair comparison. Today it absolutely is.

Both support natural-language coding in the terminal, multi-file edits, shell command execution, MCP integration, subagents, and custom agents defined via markdown files. Both have LSP integration, GitHub Actions support, plugin systems, and slash commands. The core feature set is nearly identical.

Here's a quick comparison of the key capabilities:

| Capability | Claude Code | OpenCode |
| --- | --- | --- |
| AI providers | Claude only (Opus, Sonnet, Haiku) | 75+ via Models.dev, including local via Ollama |
| MCP support | First-class (stdio, HTTP, OAuth) | Full (stdio, SSE, OAuth) |
| Subagents | Up to 10 parallel (Explore, Task, Plan) | Build, Plan, General, Scout |
| Checkpoints/rollback | Automatic workspace snapshots | Git-based /undo and /redo |
| IDE extensions | VS Code, Zed | VS Code, Cursor, Zed, Windsurf |
| Multi-session | Named sessions, forking | Native multi-session |

The gap has narrowed dramatically. Both projects ship updates almost daily. The competition is clearly driving both forward.

Where They Diverge — And Why It Matters

Model Lock-In vs Provider Freedom

This is the big one. Claude Code runs Claude models. That's it. It's clever about it, automatically routing cheap search tasks to Haiku and complex reasoning to Opus, but you're locked into Anthropic's ecosystem.

OpenCode supports 75+ providers through Models.dev. Claude, GPT, Gemini, Deepseek, local models via Ollama — whatever you want. You can swap models per task, which sounds like a theoretical advantage until you actually do it. Then it becomes hard to go back.
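
To make that concrete, here's a rough sketch of what per-task model selection can look like once everything is addressed as a provider/model pair. The type, function, and model IDs below are placeholders I've made up for illustration, not OpenCode's actual API:

```typescript
// Hypothetical sketch of per-task model routing across providers.
// TaskKind, pickModel, and the model IDs are placeholders for illustration,
// not OpenCode's (or Claude Code's) actual API.

type TaskKind = "search" | "edit" | "reasoning";

// Models addressed as "provider/model" strings, Models.dev-style.
const ROUTES: Record<TaskKind, string> = {
  search: "anthropic/claude-haiku",   // cheap, fast lookups
  edit: "openai/gpt-codex",           // placeholder mid-tier model
  reasoning: "anthropic/claude-opus", // expensive, deep reasoning
};

function pickModel(task: TaskKind): string {
  return ROUTES[task];
}

console.log(pickModel("search"));    // -> anthropic/claude-haiku
console.log(pickModel("reasoning")); // -> anthropic/claude-opus
```

The point isn't the code. It's that swapping a model becomes a one-line change rather than a tool migration.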

I'll be honest: the local LLM angle is mostly aspirational right now. My M1 Mac Mini and M5 MacBook Air don't have the RAM for serious local coding models. But the architecture is ready for when hardware catches up, and that matters.

Terminal UX — REPL vs Proper TUI

Claude Code prints to stdout. It's a REPL that streams tokens with a spinner. Simple, composable, familiar to anyone who lives in the terminal. But resize your window mid-response and the rendering can break. Scroll back far enough and things get messy.

OpenCode takes a fundamentally different approach. It's built on OpenTUI, a custom framework with a TypeScript API layer and a native Zig backend for rendering. The result is a proper TUI application with its own buffer system — you can scroll freely, resize without breaking layout, and get syntax-highlighted diffs rendered inline.
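
To illustrate the difference in approach, here's a deliberately tiny sketch of what owning the screen buffer means. It's nothing like OpenTUI internally, just the general idea: keep your own line buffer, draw to the terminal's alternate screen, and treat a resize as a redraw rather than a rendering hazard.

```typescript
// Minimal sketch of a TUI that owns its buffer, in the spirit of (but far
// simpler than) OpenTUI. Not OpenCode's actual rendering code.

const lines: string[] = []; // the app's own scrollback buffer

function render(): void {
  const rows = process.stdout.rows ?? 24;
  // Clear the alternate screen and redraw only what fits in the window.
  process.stdout.write("\x1b[2J\x1b[H");
  const visible = lines.slice(-(rows - 1));
  process.stdout.write(visible.join("\n") + "\n");
  process.stdout.write(`-- ${lines.length} lines buffered, resize freely --`);
}

// Enter the alternate screen so we never pollute the shell's scrollback.
process.stdout.write("\x1b[?1049h");

// Because the buffer lives in the app, a resize is just a redraw.
process.stdout.on("resize", render);

// Simulate streamed output arriving over time.
const timer = setInterval(() => {
  lines.push(`token batch ${lines.length + 1}`);
  render();
}, 300);

// Restore the normal screen on exit.
process.on("SIGINT", () => {
  clearInterval(timer);
  process.stdout.write("\x1b[?1049l");
  process.exit(0);
});
```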

Theme customisation sounds like a minor thing. It isn't. When you spend hours a day staring at a tool, having it look and feel like your tool makes a genuine difference. OpenCode feels like a proper application. Claude Code feels like a really good script.

Rollback — Different Approaches, Same Goal

Claude Code's automatic workspace snapshots are one of its best features. Every AI-made change gets snapshotted silently, and you can roll back with /rewind or a quick Esc×2. No thinking required — it just works.

OpenCode handles this differently. Its /undo command reverts the last message along with any file changes the AI made, and /redo restores them if you change your mind. Under the hood it uses Git, so your project needs to be a Git repository — which, let's be real, it should be anyway. It's not as granular as Claude Code's snapshot system, but it covers the main use case: "that last change was wrong, take it back." For my workflow, it's enough.
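
The mechanic is easy to picture if you've ever scripted Git. Here's a rough sketch of snapshot, undo, and redo built on plain Git commands; the helper names are mine, not OpenCode's:

```typescript
// Rough sketch of Git-backed undo/redo for AI edits. The helper names are
// hypothetical; this shows the general mechanic, not OpenCode's implementation.
import { execSync } from "node:child_process";

const git = (args: string) =>
  execSync(`git ${args}`, { encoding: "utf8" }).trim();

// Snapshot the working tree after an AI turn.
function snapshot(label: string): string {
  git("add -A");
  git(`commit --allow-empty -m "ai-snapshot: ${label}"`);
  return git("rev-parse HEAD");
}

// "/undo": put the files back the way they were before the last snapshot.
// (Assumes at least one earlier commit exists.)
function undo(): void {
  git("reset --hard HEAD~1");
}

// "/redo": jump forward again, as long as we kept the commit hash around.
function redo(commit: string): void {
  git(`reset --hard ${commit}`);
}

// Usage: record the snapshot, undo it, then redo if you change your mind.
const afterEdit = snapshot("apply model changes");
undo();          // files revert to the pre-edit state
redo(afterEdit); // files come back
```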

The Real Differences in Daily Use

Speed and Architecture

Claude Code benefits from tight Anthropic integration. Its built-in ripgrep provides fast file search, and LSP navigation clocks in at roughly 50ms versus 45 seconds for traditional text search on large codebases. The automatic model switching keeps costs down without you thinking about it.

OpenCode runs on Bun with the Zig rendering backend. Its persistent server mode eliminates MCP cold boot times on subsequent connections — meaningful if you're running multiple MCP servers. Builder.io's testing found that seven active MCP servers consumed 25% of a 200K-token context window before any user input. Both tools face this problem, but OpenCode's persistent server mitigates the startup penalty.
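
Those figures are worth unpacking. Here's the back-of-the-envelope maths, assuming (my assumption, not Builder.io's) that the overhead scales roughly linearly with the number of servers:

```typescript
// Back-of-the-envelope context budgeting, using Builder.io's reported figures.
const CONTEXT_WINDOW = 200_000; // tokens
const OVERHEAD_SHARE = 0.25;    // 7 MCP servers reportedly ate 25% of it
const SERVERS_MEASURED = 7;

const overheadTokens = CONTEXT_WINDOW * OVERHEAD_SHARE;          // 50,000
const perServer = Math.round(overheadTokens / SERVERS_MEASURED); // ~7,143

// Rough estimate of what's left for actual code and conversation
// if you scale the per-server cost up or down.
function remainingContext(servers: number): number {
  return CONTEXT_WINDOW - servers * perServer;
}

console.log(perServer);           // ≈ 7,143 tokens of tool schemas per server
console.log(remainingContext(3)); // ≈ 178,571 tokens left
console.log(remainingContext(10)); // ≈ 128,570 tokens left
```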

In practice? The bottleneck is almost always the LLM, not the CLI. Both are fast enough.

Onboarding and Configuration

Claude Code requires Anthropic authentication. You need either an API key or a Claude subscription login. Configuration uses a hierarchical settings system with CLAUDE.md files for project-level instructions.

OpenCode works immediately with any API key you already have. GitHub Copilot tokens, ChatGPT Plus subscriptions, free models through OpenCode Zen — drop in a key and you're coding. No sign-up required for the tool itself.

That zero-friction start is underrated. I've watched colleagues try Claude Code and bounce off the auth setup. OpenCode? They're writing code in under a minute.

Pricing Reality

Claude Code Pro runs $20/month and Max sits at $100–$200/month, which works out to significant savings over raw API costs if you're a heavy Claude user. The automatic model routing makes your tokens go further without manual intervention.

OpenCode is free and MIT-licensed. Bring your own API keys. OpenCode Zen offers a curated model gateway at pass-through pricing. OpenCode Black at $200/month provides enterprise-tier access for teams that want it.

The economics shifted on January 9, 2026, when Anthropic blocked third-party tools from using Claude subscription OAuth tokens. OpenCode users who had been routing their Claude Max subscriptions through the tool were immediately affected: they now pay API rates for Anthropic models instead. That changes the maths for some people.

What I Don't Miss From Claude Code

This is the section I expected to be longer. It isn't.

OpenCode covers everything I need for my daily workflow. It's fast, the provider flexibility is genuinely useful rather than theoretical, and the TUI is better for extended sessions. The ecosystem is healthy — 700+ contributors, 9,200+ commits, multiple releases per day. When I sit down to work in the morning, I reach for OpenCode without hesitation.

That said, I'm not going to pretend it's all smooth sailing. Stability has been bumpy recently. The maintainers acknowledged in a February 2026 GitHub issue that recent releases had been more turbulent than usual. Moving fast has trade-offs, and OpenCode is moving very fast.

There's also the security angle. An unauthenticated remote code execution vulnerability (CVE-2026-22812) was disclosed in January 2026, scoring a CVSS 8.8. Previous versions started an HTTP server that let any website execute arbitrary shell commands on your machine. The fix shipped in v1.1.10, and the server is now disabled by default — but it's a sobering reminder that open source means more eyeballs and more attack surface. Worth knowing about, especially if you're running older versions.

The Bigger Picture — We're Still Figuring This Out

Claude Code and OpenCode are two answers to the same question: how should developers write code with AI? The CLI coding agent space has exploded — Aider, Cline, Gemini CLI, Codex CLI, and more are all competing for the same terminal real estate.

Open source competing at this level is a net positive for everyone. The pressure between these tools is producing rapid innovation on both sides. Claude Code shipped plugins, LSP support, and agent teams in the past three months. OpenCode shipped a complete TypeScript rewrite, desktop apps, and IDE extensions in roughly the same period.

My take? We're at the frontier of AI-assisted development, and the "right" tool will keep changing. Install both. Try others. Stay flexible. The fact that an open-source project with 104K GitHub stars can genuinely compete with Anthropic's flagship dev tool — a company valued at roughly $380 billion — says something important about where this space is heading.

The tools will keep getting better. The question isn't which one wins. It's whether you're paying attention while the ground shifts under all of us.

Thomas Wiegold

AI Solutions Developer & Full-Stack Engineer with 14+ years of experience building custom AI systems, chatbots, and modern web applications. Based in Sydney, Australia.
