In January 2026, fast.ai co-founder Rachel Thomas published a sobering critique of “vibe coding”—the practice of generating large quantities of complex AI code without human review. Her analysis, which gained significant traction on Hacker News (350+ points), draws a provocative parallel between AI coding tools and gambling addiction, warning that developers often feel productive while actually slowing down. The research-backed critique challenges the assumption that more AI assistance always means better outcomes, revealing a complex landscape where AI’s benefits depend heavily on how developers engage with these tools.

What Is Vibe Coding?

Vibe coding is the creation of large quantities of highly complex AI-generated code, often with the intention that the code will never be read by humans. The term, which emerged from developer culture in late 2024 and early 2025, describes a workflow where developers prompt AI coding assistants to generate hundreds of lines of code at once, accepting the output with minimal review.1

The practice has become widespread enough that executives have pushed for AI-driven layoffs, managers pressure developers to meet AI-generated code quotas, and junior engineers worry about obsolescence. College students question whether studying computer science remains worthwhile. As Thomas notes, “People of all career stages hesitate to invest in their own career development. Won’t AI be able to do their jobs for them anyway a year from now? What is the point?”1

The term gained mainstream recognition through Andrej Karpathy’s social media posts in early 2025, where he described simply “vibing” with AI to build applications. However, fast.ai’s critique distinguishes between productive human-AI collaboration and the problematic patterns of unchecked code generation.

How Does Vibe Coding Work?

Vibe coding typically follows a predictable pattern: a developer provides a high-level prompt to an AI coding assistant like GitHub Copilot, Cursor, or Claude, receives substantial code blocks in response, and iterates through prompts without fully understanding the underlying implementation. The dopamine hit comes from seeing immediate results—functioning applications built in hours rather than days.2

This workflow contrasts sharply with what fast.ai terms “Dialog Engineering”—a structured approach where humans and AI collaborate in small, iterative steps. In Dialog Engineering, developers write a line or two of code, receive AI suggestions, and maintain deep understanding of the codebase. Thomas advocates for this alternative, which she describes as creating “a powerful feedback loop where each step makes both you and the AI smarter.”3

The core difference lies in cognitive engagement. Vibe coding outsources understanding; Dialog Engineering augments it. The former produces code that works until it doesn’t; the latter produces code that can be maintained, debugged, and extended over time.
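The contrast between the two workflows can be sketched in a few lines of Python. This is a purely illustrative toy (every name here is hypothetical, and the "assistant" is a stand-in function), not an API from any real tool:

```python
# Illustrative sketch of the two workflows (all names hypothetical).

def vibe_coding(generate):
    # One big prompt, one big block, accepted without human review.
    return generate("build the whole feature")

def dialog_engineering(steps, generate, human_review):
    # Small, iterative steps: a line or two at a time, each one reviewed,
    # so the developer's understanding of the codebase is maintained.
    accepted = []
    for step in steps:
        suggestion = generate(step)
        if human_review(suggestion):
            accepted.append(suggestion)
        # A rejected suggestion would prompt a revised, smaller request.
    return accepted

# Toy stand-ins for the AI assistant and the human reviewer:
fake_generate = lambda prompt: f"code for: {prompt}"
always_ok = lambda code: True

result = dialog_engineering(["parse input", "validate", "save"],
                            fake_generate, always_ok)
```

The structural point is that review happens per step, not per feature: the loop cannot advance past a suggestion the human has not examined.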

Why Does Vibe Coding Matter?

The significance of this critique extends beyond individual productivity to fundamental questions about software quality, developer skill development, and the long-term sustainability of AI-augmented development workflows.

The Productivity Paradox

A randomized controlled trial from METR published in July 2025 provides striking evidence for Thomas’s concerns. The study measured how early-2025 AI tools affected experienced open-source developers working on real repository issues. The results contradicted both developer expectations and expert predictions: when AI tools were allowed, developers took 19% longer to complete tasks compared to working without AI assistance.4

This finding aligns with what Thomas describes as a fundamental problem with metrics in AI systems. When a measure becomes a target, it ceases to be a good measure—Goodhart’s Law in action. Developers optimize for code volume and apparent progress while actual quality and velocity suffer.5
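Goodhart's Law in this setting can be made concrete with a toy example (the metric and solutions below are invented for illustration): once "lines of code generated" becomes the target, verbose output scores higher than a concise solution with identical behavior.

```python
# Toy illustration of Goodhart's Law applied to code-volume metrics.
# Both solutions double every item in a list; only their length differs.

verbose = [
    "result = []",
    "for x in items:",
    "    result.append(x * 2)",
]
concise = ["result = [x * 2 for x in items]"]

def progress_metric(solution_lines):
    # The proxy being optimized: sheer volume of generated code.
    return len(solution_lines)

# The metric rewards the longer solution despite identical behavior,
# so optimizing it pushes output toward volume rather than quality.
assert progress_metric(verbose) > progress_metric(concise)
```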

The Dark Flow Phenomenon

Thomas’s most compelling contribution is the application of psychological research on “dark flow” to AI-assisted coding. Psychologist Mihaly Csikszentmihalyi first formalized the concept of flow—a state of full absorption and energized focus where skills match challenges appropriately. However, researchers studying gambling addiction identified a sinister variation called “dark flow.”1

In multiline slot machines, players experience “losses disguised as wins” (LDWs)—outcomes that celebrate minor credits despite net losses. Research shows players physiologically respond to LDWs as if they were actual wins, entering a highly absorbed state disconnected from reality.6

Thomas draws explicit parallels between slot machines and vibe coding:

| Gambling Feature | Vibe Coding Equivalent | Risk |
| --- | --- | --- |
| Losses disguised as wins | Code that “works” but contains hidden bugs | False sense of progress |
| Celebratory sounds/animations | Lines of code generated, tests passing | Dopamine without value |
| Random outcomes | Unpredictable AI behavior | Unreliable results |
| Illusion of control | Prompt “engineering” without understanding | Misattributed agency |

Both activities create what Csikszentmihalyi called “junk flow”—addictive superficial experiences that feel productive but don’t foster growth. “The problem is that it’s much easier to find pleasure or enjoyment in things that are not growth-producing but are attractive and seductive,” he noted in a 2014 interview.1
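A "loss disguised as a win" in code looks something like the following hypothetical example: a function that passes its happy-path check (the celebratory signal) while hiding an edge-case bug that review would have caught.

```python
# Hypothetical illustration of a "loss disguised as a win" in code.

def average(values):
    # Hidden bug: raises ZeroDivisionError on an empty list, a case the
    # reassuring "tests passing" signal never exercised.
    return sum(values) / len(values)

assert average([2, 4, 6]) == 4  # happy path passes: feels like progress

# Reading the code (rather than vibing past it) surfaces the edge case
# and turns it into an explicit, documented failure mode:
def average_checked(values):
    if not values:
        raise ValueError("average of an empty sequence is undefined")
    return sum(values) / len(values)
```

The green checkmark on the happy-path assertion is the dopamine hit; the empty-list crash is the net loss it disguises.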

The Unreliable Narrator Problem

Thomas identifies another critical issue: developers using AI tools become unreliable narrators of their own productivity. The METR study found that developers consistently overestimated AI’s benefits despite evidence to the contrary. Thomas describes her own experience with an AI researcher whose AI-generated blog posts read as noticeably different from their earlier work—yet the author believed the quality was unchanged.1

This phenomenon extends to code quality assessments. Developers report building “tools I felt really great about, just to realize that I did not actually use them or they did not end up working as I thought they would.”2 The immediate gratification of generated code masks long-term maintainability problems that only emerge weeks or months later.

Learning Implications

Perhaps the most concerning aspect of vibe coding involves skill development. Research on learning with AI assistants suggests that excessive reliance on automated help can impede genuine understanding. A 2024 survey on LLMs for code generation noted that while these tools have “garnered remarkable advancements across diverse code-related tasks,” the gap between academic benchmarks and practical development remains significant.7

When developers skip the cognitive work of understanding code—tracing execution paths, considering edge cases, internalizing patterns—they miss the learning opportunities that build expertise. Thomas, who holds a PhD in mathematics and has taught thousands of developers through fast.ai courses, emphasizes that understanding requires struggle. The friction that AI removes is often precisely what drives learning.

When AI Helps vs. Hinders

The fast.ai critique is not anti-AI. Thomas works at Answer.AI and notes that “we use AI every day. AI is useful!” The question is not whether to use AI, but how.

Conditions Where AI Excels

AI coding assistants demonstrate clear value in specific contexts:

  • Boilerplate generation: Creating repetitive structures that require no architectural decisions
  • Documentation assistance: Drafting comments and explanations that humans then review
  • Exploring unfamiliar APIs: Suggesting possible approaches when learning new libraries
  • Refactoring with human oversight: Restructuring code under close supervision

Conditions Where AI Hinders

Conversely, AI assistance becomes problematic when:

  • Building complex systems: Architectural decisions require understanding trade-offs AI cannot evaluate
  • Debugging unfamiliar code: Without understanding the codebase, AI-generated fixes often miss root causes
  • Learning foundational concepts: The struggle of problem-solving is essential for skill development
  • Maintaining long-term projects: Code that can’t be understood can’t be maintained

The Code Quality Problem

Emerging research on AI-generated code quality presents a mixed picture. While benchmarks like HumanEval and MBPP show impressive AI performance on isolated coding tasks, real-world studies reveal complications.

The METR study focused on PRs from large, high-quality open-source codebases—precisely the kind of production code that matters for software engineering. Unlike algorithmic benchmarks, these tasks required human judgment for completion, including style standards, testing requirements, and documentation. AI tools slowed development rather than accelerating it.4

Early GPT-4 experiments demonstrated the model could “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more,” with researchers noting performance “strikingly close to human-level.”8 However, subsequent real-world evaluations have tempered these findings, particularly for complex, multi-file changes in established codebases.

Practical Recommendations

Based on the fast.ai critique and supporting research, developers can adopt several practices to harness AI benefits while avoiding vibe coding pitfalls:

  1. Iterate in small steps: Work line-by-line or function-by-function rather than generating large code blocks
  2. Maintain understanding: Never accept code you cannot explain or modify independently
  3. Review generated code: Treat AI output as suggestions requiring critical evaluation, not final solutions
  4. Preserve learning opportunities: Use AI for augmentation when you understand the domain, but engage directly with problems when learning
  5. Measure real outcomes: Track actual delivery metrics rather than perceived productivity
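Recommendation 5 can be made concrete with a small sketch. The record fields and numbers below are invented for illustration; the point is the shape of the measurement, comparing delivered outcomes rather than perceived speed, as the METR study did:

```python
# Minimal sketch (hypothetical field names and invented numbers) of
# tracking real delivery outcomes instead of perceived productivity.
from dataclasses import dataclass

@dataclass
class TaskRecord:
    task_id: str
    hours_to_merge: float   # wall-clock time until the change merged
    later_defects: int      # bugs later traced back to this change

def mean_hours(records):
    return sum(r.hours_to_merge for r in records) / len(records)

# Illustrative data only; the METR study reports a 19% slowdown with
# AI tools on real tasks, contrary to developers' own perception.
with_ai = [TaskRecord("a1", 5.0, 2), TaskRecord("a2", 7.0, 1)]
without_ai = [TaskRecord("b1", 4.0, 0), TaskRecord("b2", 6.0, 1)]

assert mean_hours(with_ai) > mean_hours(without_ai)
```

Whatever the actual numbers turn out to be on a given team, recording merge time and escaped defects per task gives a measure that cannot be inflated by generated code volume.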

Conclusion

The fast.ai critique of vibe coding represents a necessary corrective to uncritical AI enthusiasm. By applying rigorous psychological research and empirical studies, Rachel Thomas reveals that the feeling of productivity AI tools create often diverges from actual results. The “dark flow” state—absorbing, addictive, and ultimately empty—poses genuine risks to both code quality and developer skill development.

However, this critique points toward a more sustainable path. Dialog Engineering and similar approaches demonstrate that human-AI collaboration can work when structured around human understanding rather than AI delegation. The goal isn’t avoiding AI tools—it’s using them in ways that preserve the cognitive engagement necessary for both learning and quality.

For an industry facing pressure to adopt AI without question, Thomas’s warning serves as essential reading: “It is worth experimenting with AI coding agents to see what they can do, but don’t abandon the development of your own skills.” The 350+ Hacker News upvotes suggest this message resonates with developers who have experienced the gap between AI promise and practice.


Frequently Asked Questions

Q: What exactly is “vibe coding”? A: Vibe coding refers to generating large quantities of complex AI code without human review, often accepting hundreds of lines of output with minimal understanding. The term emerged in late 2024 and gained prominence through critiques from fast.ai and developer communities.

Q: Does research support the claim that AI coding tools slow developers down? A: Yes. A July 2025 randomized controlled trial from METR found that experienced developers using AI tools took 19% longer to complete real-world tasks compared to working without AI, despite believing they were working 20% faster.4

Q: What is “dark flow” and how does it relate to AI coding? A: Dark flow is a psychological state identified in gambling research where players enter an absorbed, flow-like state while actually losing money. Fast.ai’s Rachel Thomas argues vibe coding creates similar conditions—developers feel productive while producing unmaintainable code.1

Q: Is fast.ai against using AI for coding? A: No. Fast.ai advocates for “Dialog Engineering”—a collaborative approach where humans and AI work in small iterative steps. The critique targets unchecked code generation, not thoughtful human-AI collaboration.3

Q: How can developers avoid the pitfalls of vibe coding? A: Work in small steps, maintain understanding of all code you deploy, review AI suggestions critically, preserve learning opportunities by engaging directly with problems, and measure real outcomes rather than perceived productivity.


Footnotes

  1. Thomas, R. (2026, January 28). Breaking the Spell of Vibe Coding. fast.ai. https://www.fast.ai/posts/2026-01-28-dark-flow/

  2. Ronacher, A. (2026, January 18). Agent Psychosis: Are We Going Insane? lucumr.pocoo.org. https://lucumr.pocoo.org/2026/1/18/agent-psychosis/

  3. Howard, J. (2024, November 7). A New Chapter for fast.ai: How To Solve It With Code. fast.ai. https://www.fast.ai/posts/2024-11-07-solveit.html

  4. METR. (2025, July 10). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. metr.org. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

  5. Thomas, R. (2019, September 24). The Problem with Metrics is a Big Problem for AI. fast.ai. https://www.fast.ai/posts/2019-09-24-metrics.html

  6. Dixon, M. J., et al. (2017). Dark Flow, Depression and Multiline Slot Machine Play. Journal of Gambling Studies, 33(1), 73-91. https://pmc.ncbi.nlm.nih.gov/articles/PMC5846824/

  7. Jiang, J., et al. (2024). A Survey on Large Language Models for Code Generation. arXiv:2406.00515. https://arxiv.org/abs/2406.00515

  8. Bubeck, S., et al. (2023). Sparks of Artificial General Intelligence: Early Experiments with GPT-4. arXiv:2303.12712. https://arxiv.org/abs/2303.12712
