Superpowers is an open-source agentic skills framework that transforms AI coding agents into disciplined software engineers by enforcing non-negotiable workflows: design before code, tests before features, and structured review between every task. Created by Jesse Vincent in October 2025 and accepted into the Anthropic marketplace in January 2026, it accumulated over 27,000 GitHub stars in its first three months—roughly 9,000 per month.1

What Is Superpowers?

Most developers using AI coding agents have hit the same wall: the model starts strong, drifts after a few turns, loses context across files, and ships code that compiles but misses the spec. Superpowers is Jesse Vincent’s answer to that drift—30 years of software development methodology distilled into composable “skills” that AI agents activate automatically based on context.

A skill in this framework is a markdown file. Each file describes a specific workflow: when to trigger it, what steps to follow, what outcomes to verify. The framework ships with skills for brainstorming, planning, test-driven development, systematic debugging, Git worktree management, code review, and subagent-driven development. Agents don’t pick and choose which processes to follow—the framework makes them mandatory.2
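Concretely, a skill file is small enough to read at a glance. The sketch below is hypothetical — the field names and section headings are assumptions based on the description above, not the framework's exact schema:

```markdown
---
name: systematic-debugging
description: Trigger when a test fails twice in a row or a bug lacks a reproduction
---

## Steps
1. Reproduce the failure and capture the exact error output.
2. Form a single hypothesis; change one thing at a time.
3. Re-run the failing test after every change.

## Verify
- The originally failing test now passes.
- No previously passing tests broke.
```

Because skills are plain markdown, forking one for a team-specific workflow is a text edit, not a code change.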

Jesse Vincent describes the ambition plainly on his blog: “An implementation plan that’s clear enough for an enthusiastic junior engineer with poor taste, no judgement, no project context, and an aversion to testing to follow.”3 That’s the target audience for every plan the system produces—not the AI, which Vincent clearly trusts less than that hypothetical junior hire.

Simon Willison, writing about the framework at launch, called Vincent “one of the most creative users of coding agents” and highlighted the system’s token efficiency: despite its comprehensiveness, it remains “token light,” pulling minimal documentation into the main context while using subagents to handle implementation details. A complete project reportedly used roughly 100,000 tokens total.4

How Does It Work?

The full Superpowers workflow runs through seven phases:

  1. Socratic Brainstorming — Before any code is touched, the agent asks clarifying questions about requirements, edge cases, and technology choices. The session produces a design document you approve in chunks.
  2. Isolated Git Worktrees — The agent creates a safe development branch, protecting main from mid-feature chaos.
  3. Detailed Planning — Tasks are broken into 2–5 minute units with exact file paths, code snippets, and acceptance criteria specific enough for unsupervised execution.
  4. Subagent-Driven Development — Specialized parallel subagents handle infrastructure, UI logic, and testing simultaneously, each starting with a fresh context to prevent accumulated drift.
  5. Test-Driven Development — RED-GREEN-REFACTOR is enforced, not suggested. The framework “actually deletes code written before tests exist,” according to its documentation.2 Tests precede implementation, period.
  6. Systematic Code Review — Dedicated reviewer agents check specification compliance first, then code quality—a two-stage gate before any task closes.
  7. Branch Completion — The agent handles integration, comprehensive testing, and documentation before signaling done.

The practical effect, as practitioner Richard Joseph Porter reports: features spanning 15+ files now “execute consistently without losing earlier decisions.” He estimates timeline predictability improves significantly when work is decomposed into discrete tasks with unambiguous criteria.5

Installation

Getting Superpowers running on Claude Code takes under a minute:

/plugin marketplace add obra/superpowers-marketplace
/plugin install superpowers@superpowers-marketplace
/exit

For OpenCode or Codex, setup requires repository cloning, symlinks, and manual configuration—meaningfully more friction. Claude Code remains the primary target platform.

Core Commands

| Command | Function |
| --- | --- |
| /using-superpowers | Activates Superpowers context |
| /superpowers:brainstorm | Initiates requirements dialogue |
| /superpowers:write-plan | Generates detailed task plan |
| /superpowers:execute-plan | Launches parallel subagent execution |

Why Does It Matter? The Evidence

The question behind any new dev methodology is whether the structure pays for itself. The data here is mixed but directionally useful.

Where It Helps

When TDD enforcement is active, test coverage typically reaches 85–95%, according to usage reports—enterprise-level coverage achieved without code review cycles or team pressure.6 Parallel subagents, when properly coordinated, reportedly produce 3–4x acceleration compared to sequential single-agent approaches on multi-file features.6

The broader agentic context supports the case for structure. Anthropic’s 2026 Agentic Coding Trends Report documents that Claude Code completed a task in a 12.5-million-line codebase in seven hours of autonomous work, achieving 99.9% numerical accuracy.7 TELUS teams using agentic coding workflows shipped engineering code 30% faster while accumulating 500,000 hours in total time savings.7 These results emerged from teams that had established clear workflows and oversight patterns—not from unconstrained agent autonomy.

The Productivity Paradox

Superpowers exists partly in response to a counterintuitive finding that unsupervised agentic development has surfaced repeatedly: AI tools don’t automatically make experienced developers faster. A July 2025 METR randomized controlled trial found that experienced open-source developers working on their own repositories were 19% slower when using AI tools.8 Developers predicted AI would save them 24% of time—the actual result was the opposite.

A separate Anthropic study found developers scored 17% lower on comprehension tests when learning new coding libraries with AI assistance, raising concerns about skill formation alongside raw productivity metrics.9

Superpowers directly addresses both failure modes. The mandatory brainstorming phase forces developers to articulate requirements before delegating—preventing the cognitive offloading that degrades understanding. The structured review gates prevent AI-generated code from bypassing the learning and verification that keep developers competent.

The Framework Landscape

How does Superpowers fit within the broader field of agentic development tools?

| Framework | Primary Use | Methodology | Best For |
| --- | --- | --- | --- |
| Superpowers | Claude Code/Codex agent discipline | Skills-based enforcement | Complex multi-file features, TDD mandates |
| LangChain | LLM application chaining | Pipeline orchestration | Multi-step LLM workflows |
| LangGraph | Stateful agent graphs | Graph-based state machines | Complex agent coordination |
| CrewAI | Multi-agent teams | Role-based collaboration | Research, analysis tasks |
| AutoGen | Conversational agents | Multi-agent dialogue | Code generation, debugging |
| Semantic Kernel | Enterprise integration | Plugin-based skills | Microsoft ecosystem |

The distinction is purpose: LangChain, LangGraph, and CrewAI are infrastructure for building agentic systems. Superpowers is a methodology for using an existing agentic coding agent more effectively. They operate at different layers of the stack and aren’t direct competitors.

What’s Proven vs. What’s Promised

The community skepticism is worth taking seriously. One Hacker News commenter raised a pointed question: if an AI model has already ingested a hundred books on test-driven development, what does feeding it a short skill file about TDD actually add?4 The honest answer—that the value may lie in enforcement rather than knowledge transfer—is consistent with how the framework markets itself, but it’s a hypothesis, not a measurement.

What is measurable: the growth trajectory. Twenty-seven thousand GitHub stars in three months, three consecutive days of 1,500+ star growth that was described as unprecedented, and official acceptance into the Anthropic plugin marketplace on January 15, 2026 all point to practitioners finding the system valuable enough to adopt and evangelize.1 6

What remains unmeasured: whether it beats a carefully designed custom prompt, whether the improvement holds across languages and codebases, and whether the cognitive overhead of managing structured workflows compounds fatigue over longer engagements.

How Practitioners Are Using It

Richard Joseph Porter’s workflow5 offers a practical heuristic: if a feature touches three or more files, requires an architectural decision, or has meaningful uncertainty in approach, Superpowers is worth the overhead. If a change is localized and clearly scoped, native Claude Code without the framework is faster.

This matches the framework’s own documentation. Superpowers lists its best use cases as: complex multi-file features, production code requiring high quality and test coverage, and teams frustrated with inconsistent AI agent behavior. It explicitly flags quick bug fixes and exploratory prototyping as poor fits.

The workflow structure also addresses the specific failure mode that practitioners most consistently report with unstructured agentic coding: context window exhaustion on long features. Because each subagent starts fresh with a specific, scoped task, the main session context remains clean. A complete large feature reportedly uses roughly 100,000 tokens total—significantly less than a naive single-session approach to the same scope.4

Frequently Asked Questions

Q: Does Superpowers work with coding agents other than Claude Code? A: Yes, but with more setup friction. OpenCode and Codex require manual repository cloning, symlinks, and configuration. Claude Code is the primary target with two-command installation from the Anthropic marketplace.

Q: Does enforcing TDD and brainstorming make Superpowers too slow for fast-moving projects? A: The overhead is real and intentional. The framework is explicitly not for prototypes or quick fixes. Practitioners recommend using it only for features touching 3+ files or requiring architectural decisions—contexts where upfront planning recovers its cost.

Q: How does Superpowers handle the token cost of parallel subagents? A: Each subagent starts with a focused, scoped context rather than the full project history. This keeps individual context windows small. Practitioner reports suggest 100,000 tokens for a complete large feature—competitive with the drift-prone single-session alternative, which burns context accumulating failed attempts.

Q: Is Superpowers maintained as Claude Code evolves? A: As of February 2026, the framework is actively maintained and is included in the official Anthropic plugin marketplace, which provides some continuity assurance. The skills-as-markdown-files architecture is intentionally hackable—practitioners routinely fork and extend skills for their specific workflows.

Q: What’s the biggest practical limitation the community has identified? A: Cognitive overhead. Managing structured workflows across complex features with multiple subagents creates its own mental load. Several Hacker News respondents noted that tool proliferation in agentic development has made the cognitive burden a meaningful bottleneck—separate from whether the code quality improves.4


Footnotes

  1. ByteIota. “Superpowers Agentic Framework: 27K GitHub Stars.” byteiota.com, 2026. https://byteiota.com/superpowers-agentic-framework-27k-github-stars/

  2. GitHub. “obra/superpowers: An agentic skills framework & software development methodology that works.” github.com/obra/superpowers, 2025–2026.

  3. Vincent, Jesse. “Superpowers: How I’m using coding agents in October 2025.” blog.fsck.com, October 9, 2025. https://blog.fsck.com/2025/10/09/superpowers/

  4. Willison, Simon. “Superpowers: How I’m using coding agents in October 2025.” simonwillison.net, October 10, 2025. https://simonwillison.net/2025/Oct/10/superpowers/

  5. Porter, Richard Joseph. “Superpowers Plugin for Claude Code: How I Ship Big Features with Confidence.” richardporter.dev, 2026. https://richardporter.dev/blog/superpowers-plugin-claude-code-big-features

  6. Pillitteri, Pasquale. “Superpowers for Claude Code: Complete Guide 2026.” pasqualepillitteri.it, 2026. https://pasqualepillitteri.it/en/news/215/superpowers-claude-code-complete-guide

  7. Anthropic. “2026 Agentic Coding Trends Report.” resources.anthropic.com, 2026. https://resources.anthropic.com/2026-agentic-coding-trends-report

  8. METR. “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” metr.org, July 10, 2025. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

  9. Tessl. “Anthropic: 8 agentic coding trends shaping software engineering in 2026.” tessl.io, 2026. https://tessl.io/blog/8-trends-shaping-software-engineering-in-2026-according-to-anthropics-agentic-coding-report/
