What Is Vibe Coding?
Vibe coding is a style of software development in which a human describes the desired behavior in natural language and an LLM produces the actual code, with the human reviewing the result mostly by feel rather than by reading every line. The term was coined by Andrej Karpathy in a February 2025 post on X — “There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” Within a year it had become mainstream enough to be named Collins English Dictionary’s Word of the Year for 2025.
The mental model is “ordering a custom dish through a translator who knows the chef.” You explain what you want; the model translates that into code; you taste-test the result. Day-to-day, vibe coding is most common at the prototyping end of the spectrum — internal tools, hackathon MVPs, UI mockups, scripts to glue services together — but the underlying pattern is now reaching into production work as well, especially through tools like Cursor, Windsurf, Claude Code, and Replit Agent.
How to Pronounce Vibe Coding
vibe coding (/vaɪb ˈkoʊdɪŋ/)
How Vibe Coding Works
The defining loop is short: speak or type an intent, let the model generate or modify the code, run it, and react to whether the result feels right. Karpathy’s original example used Cursor Composer with Claude Sonnet, often via SuperWhisper for voice input — a setup where the human speaks and Cursor produces, runs, and iterates the code. The cultural twist is that this style accepts not reading the generated code line by line, a real departure from traditional code review culture, where reading every diff before merging is the default.
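The loop above can be sketched in code. This is a toy illustration, not a real tool integration: `llm_generate` is a stand-in for the model call (Cursor, Claude, etc.), and the “react” step is reduced to inspecting captured output.

```python
import contextlib
import io

def llm_generate(intent: str) -> str:
    """Stand-in for the model call: returns Python source for the intent."""
    # a real setup would call an actual LLM; here we fake a trivial one
    return f'print("built: {intent}")'

def run(code: str) -> str:
    """Run the generated code and capture what it prints."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

def vibe_loop(intents: list[str]) -> list[str]:
    """Speak intent -> generate -> run -> react, once per refinement."""
    outputs = []
    for intent in intents:
        code = llm_generate(intent)   # the model writes the code
        outputs.append(run(code))     # the human checks whether it feels right
    return outputs

print(vibe_loop(["add a button", "make the button blue"]))
```

Each pass through the loop refines the previous result; the human never edits the code directly, only the intent.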
The Common Toolchain
By 2026 the toolchain has crystallized around a few archetypes: AI-native IDEs (Cursor, Windsurf, Claude Code) for software professionals, instant-preview app builders (v0, Bolt, Lovable, Replit Agent) for non-engineers, and CLI-style coding agents (Aider, Devin) for autonomous work. Each provides the same basic experience: natural-language input, generated code, immediate feedback. Karpathy said in December 2025 that 80% of his code was AI-generated, though he qualified this in 2026, suggesting that “vibe coding” itself is evolving into “agentic engineering” for professional use.
Historical Background
The path to vibe coding ran through several waves of AI-assisted development. ChatGPT (late 2022) and GitHub Copilot (general availability 2022) made code completion mainstream. Cursor’s Composer, Aider, and Devin pushed beyond completion into “feature-level” generation through 2024. By early 2025 the experience had become fluid enough that Karpathy’s tweet captured a recognizable pattern, giving it a name. The term’s rapid spread — from a Twitter post to a dictionary’s word of the year inside ten months — reflects how quickly developer practice is changing.
Adoption Statistics in 2026
The shift is large enough that it shows up in survey data. By early 2026, an estimated 92% of US developers reported using some form of AI-assisted coding regularly, the AI-coding tool market had grown to roughly $8.5 billion globally, and approximately 60% of new code written in 2026 was reported to be AI-generated. These numbers should be read with caution — they aggregate everything from autocomplete to fully autonomous agents — but the trend line is unmistakable. Vibe coding moved from a Twitter moment in early 2025 to a default working style in eighteen months.
Not all of this volume is “pure” vibe coding in Karpathy’s original sense. Most of it sits somewhere on a spectrum from light AI assistance (Copilot-style completion) through pair-coding (Cursor Composer) to fully autonomous agents (Devin, Replit Agent). The cultural label “vibe coding” became a shorthand for the whole spectrum, even though practitioners distinguish carefully when discussing tooling and quality concerns.
Vibe Coding Usage and Examples
Quick Start
Using Cursor as the canonical example, the minimum vibe coding loop looks like this:
# 1. Open the project
$ cursor my-app
# 2. Open Composer (Cmd+I) and describe what you want
"Add a text input and a button. When the button is pressed, show the text in a toast."
# 3. Review the proposed diff. Accept.
# 4. Run the preview, see if it feels right.
# 5. Iterate: "Make the button blue. Center it."
Common Implementation Patterns
Pattern A: Throwaway Prototyping
"Make a carousel that auto-advances every 3 seconds."
→ Mock appears almost instantly.
→ User decides yes/no in seconds.
Best for: hackathons, design demos, idea validation. The code is meant to be discarded, so speed beats polish.
Avoid when: the prototype is going to production. The “don’t read the code” habit becomes a quality liability.
Pattern B: Pair Programming
"Replace the auth flow with Clerk. Keep the existing tests passing."
→ AI proposes a diff.
→ Human reviews, requests changes.
→ Tests pass → commit.
Best for: small-to-medium changes in an existing codebase. The AI drafts, the human enforces quality.
Avoid when: the change touches security-sensitive code or domain-specific logic the AI does not understand well.
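Pattern B’s accept-only-if-green discipline can be sketched as a tiny gate. Everything here is illustrative: `run_tests` stands in for a real test suite such as pytest, and the “codebase” is just a dict of functions.

```python
def run_tests(codebase: dict) -> bool:
    """Stand-in for the existing test suite: encodes the contract to preserve."""
    return codebase["login"]("alice") == "session:alice"

def apply_if_green(codebase: dict, patch: dict) -> dict:
    """Apply the AI-proposed patch only if the tests still pass."""
    candidate = {**codebase, **patch}
    return candidate if run_tests(candidate) else codebase

base = {"login": lambda user: f"session:{user}"}
bad_patch = {"login": lambda user: None}                # AI broke the contract
good_patch = {"login": lambda user: "session:" + user}  # behavior preserved

print(apply_if_green(base, bad_patch) is base)    # rejected -> True
print(apply_if_green(base, good_patch) is base)   # accepted -> False
```

The point of the sketch: the human does not have to read every generated line, but the tests must stay green before anything is committed.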
Pattern C: Spec-Driven Development
1. Write the spec in Markdown.
2. "Implement the spec in FastAPI, including unit tests."
3. AI produces code + tests.
4. Human edits the spec; AI re-syncs the code.
Best for: CRUD APIs, defined business logic, anything that fits a clear spec.
Avoid when: the requirements are exploratory and change while you write them.
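The spec-to-code shape can be shown with a toy, framework-free example (a FastAPI version would follow the same structure). The discount rule and every name here are invented for illustration.

```python
# The spec (normally a Markdown file) plus the AI-produced implementation
# and the unit tests kept in sync with it.

SPEC = """
Discount rule:
- Orders of $100 or more get 10% off.
- Orders under $100 get no discount.
- Negative totals are invalid and raise ValueError.
"""

def apply_discount(total: float) -> float:
    """AI-produced implementation of the spec above."""
    if total < 0:
        raise ValueError("total must be non-negative")
    return round(total * 0.9, 2) if total >= 100 else total

# Unit tests generated alongside the code, one per spec bullet; when the
# spec changes, both the function and these tests are regenerated.
assert apply_discount(100) == 90.0
assert apply_discount(50) == 50
try:
    apply_discount(-1)
except ValueError:
    print("spec satisfied")
```

Because the spec is the source of truth, editing a bullet and asking the AI to re-sync regenerates both code and tests in one step.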
Practitioners also commonly describe a meta-pattern that runs across all three sub-patterns above: the “iterate-on-feel” loop. After each generation, the developer runs the code, observes whether the output matches their mental model, and refines the next prompt to close the gap. The skill is less about writing perfect prompts up front and more about reading outputs critically and recognizing when something is subtly wrong. This is why the saying “the AI does the typing, the human does the noticing” has become a useful summary of effective vibe coding.
Anti-Pattern: Vibe-Coding Everything
# Don't do this
- Run AI-generated DB migrations on production without review.
- Ship AI-generated auth logic untested.
- Deploy without a security audit.
Vibe coding is well suited to throwaway code; it is dangerous when the output reaches production unchecked. AI-generated code is prone to plausible-looking but nonexistent function calls, subtly wrong defaults, and outdated APIs — all caught easily by review and tests, and easily missed by vibes alone.
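A concrete illustration of why running the code beats vibes: a hallucinated call that looks fine on a skim fails the moment it executes. `json.fast_loads` below is an invented example of a plausible-looking but nonexistent function.

```python
import json

# Plausible-looking generated line: json has loads(), but no fast_loads().
generated_call = 'json.fast_loads(\'{"a": 1}\')'

try:
    eval(generated_call)
    print("ran fine")
except AttributeError as err:
    print("caught hallucination:", err)
```

A skim of the diff would likely pass this line; a single execution, or any test that exercises it, rejects it immediately.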
Advantages and Disadvantages
Advantages
- Raw speed: an idea can become a running prototype in under an hour.
- Lower bar to entry: people who cannot code at all can ship working software.
- Language fluency: switching between TypeScript, Python, and Go feels effortless.
- Boilerplate elimination: API clients, schemas, and CRUD scaffolds become free.
- Implicit learning: reading the AI’s output exposes you to idioms and best practices you did not know.
Several of these advantages compound over time. The faster you can iterate, the more iterations you can run, the better your eventual product. The lower the entry bar, the more people can contribute small improvements. The implicit-learning effect means even casual vibe coders pick up patterns they would not have studied otherwise. This compounding is part of why teams that adopt vibe-style workflows tend to keep adopting them rather than roll back.
Disadvantages
- Quality drift: code that runs but is hard to read, untested, and brittle.
- Security risks: subtle authentication or library-choice mistakes slip through.
- Hallucinations: APIs, package names, and config flags that look plausible but do not exist.
- Skill atrophy: less hand-coding can shallow your understanding of fundamentals over time.
- API costs: heavy vibe coding can run up real bills, especially with reasoning-heavy models.
Vibe Coding vs Traditional Coding vs Agentic Engineering
By 2026 the discourse has expanded beyond a single term. Karpathy himself distinguishes “vibe coding” (the casual, code-blind style) from “agentic engineering” (AI does most of the work but professional review and infrastructure stay in place). Traditional coding remains, especially in regulated environments. The three styles coexist but emphasize different trade-offs.
| Aspect | Vibe Coding | Traditional Coding | Agentic Engineering |
|---|---|---|---|
| Primary author | AI (human briefs only) | Human | AI, supervised by human |
| Human review | Minimal or none | Mandatory | Structured, mandatory |
| Tests | Hit or miss | Human-written | AI-written, human-approved |
| Production-ready | Discouraged | Default | Encouraged |
| Best for | Prototypes, hobby projects | Regulated work, hot paths | Most production work |
The simple framing many teams now use: vibe code the prototype, then re-implement under agentic-engineering discipline once you decide to ship. The two are not enemies — they are different tools for different stages of the same project.
Common Misconceptions About Vibe Coding
Misconception 1: “Vibe coding makes engineers obsolete”
Why this confusion arises: viral demos of “anyone can build an app” reach a much broader audience than the quieter discussion of where vibe coding falls short. The reasoning piggybacks on existing anxieties about AI replacing professional work.
The reality: vibe coding eliminates typing and boilerplate, not architecture decisions, business-requirement translation, or accountability. Most public commentary from senior engineers — including Karpathy — emphasizes that high-leverage design judgment becomes more valuable, not less. The work being automated is the part of programming that already felt mechanical.
Misconception 2: “Vibe coding means literally never reading the code”
Why this confusion arises: Karpathy’s “forget the code even exists” line is memorable and got quoted everywhere, often without his subsequent caveats. The catchy framing stuck because it captured an attitude shift, but it overstated the practice.
The reality: in real workflows, most vibe coders skim diffs rather than read every line — closer to “spot-checking a fluent translation” than “trusting blindly.” Production-bound code typically gets a more careful pass. The “don’t read” framing describes a feeling, not an absolute rule.
Misconception 3: “Vibe coding is for beginners; pros don’t use it”
Why this confusion arises: the framing “you don’t need to know how to code” makes vibe coding sound like a beginner shortcut. The reasoning is intuitive but wrong about who actually benefits most.
The reality: experienced engineers tend to gain the most because they spot AI mistakes instantly and steer prompts effectively. Beginners often miss the mistakes and ship bugs. Karpathy, Simon Willison, and many other senior engineers actively use vibe-style flows. “Easy to start” is not the same as “best for beginners.”
All three misconceptions feed each other. Believing that vibe coding eliminates engineers encourages skipping code review, and skipping review is exactly what lets AI mistakes through — the failure mode that leaves beginners poorly served by the tool. The cleaner mental model is the more nuanced one: vibe coding shifts where human attention is most useful, but it does not remove the need for that attention.
Real-World Use Cases
- Internal tools: forms, dashboards, simple CRUD apps for a single department.
- UI/UX prototyping: instant mockups for user testing.
- Data analysis scripts: pandas/Polars one-offs for ad-hoc questions.
- Migrations: rewriting a codebase from one framework to another, with iterative review at each step.
- Non-engineer automations: marketers and analysts shipping their own scripts.
- Education: “I want to see how X works” turned into runnable code in seconds, accelerating exploration of unfamiliar libraries.
Another way to think about vibe coding is as a different unit of work. Traditional programming’s unit of work is the line or function: write it, test it, iterate. Vibe coding’s unit of work is the feature or behavior: describe it, accept it, observe it. Many of the productivity gains and quality risks come from this shift in granularity. When the unit gets larger, you do more useful work per minute, but you also see less of what is happening inside each unit.
This change in unit-of-work also explains why vibe coding feels different to long-time programmers. The mechanical satisfaction of typing in a function and watching it work is replaced by the higher-level satisfaction of getting closer to a working system, even though you may not always know exactly which lines did the trick. For some this is a welcome trade; for others it removes part of what made programming enjoyable. Both reactions are common in the 2026 discourse.
One area where vibe coding has reshaped expectations especially fast is internal-tools development. Tasks that used to take a week of engineering time — building an admin dashboard, ingesting a new data source into a Slack alert, wiring up a small approval workflow — can now reasonably be done in a single afternoon. Many businesses report that operations teams now ship the kind of small automations they previously had to escalate to engineering, which both speeds up the business and reshapes how engineering teams allocate their time.
Frequently Asked Questions (FAQ)
Q1. Can I build a production app with vibe coding?
Prototypes and internal tools are fine. For external-facing or payment-handling production code, combine vibe coding with code review, automated tests, and a security audit. Karpathy himself reframed his 2025 stance for production use cases in 2026, recommending “agentic engineering” — AI does the work, but oversight and verification remain professional. The practical rule of thumb many teams use is that anything in front of a customer or moving money requires the full agentic-engineering treatment, while internal tooling can stay closer to pure vibe coding.
Q2. Which AI models are best for vibe coding?
As of 2026 the strongest options are Claude Sonnet 4.6 and Claude Opus 4.6, GPT-5, and the model bundles inside Cursor. For coding-benchmark performance (e.g., SWE-bench Verified), Claude Opus 4.6 and OpenAI o3 are popular. For everyday speed and cost, Sonnet 4.6 and GPT-5 are the workhorses. Many production setups route between models based on the difficulty of the prompt, escalating to Opus 4.6 only when easier models hesitate.
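The routing idea in the last sentence can be sketched as a heuristic dispatcher. The model labels and the difficulty heuristic here are illustrative assumptions, not any vendor’s API.

```python
# Escalation routing: send easy prompts to a cheap model, hard ones to a
# strong one. The heuristic below is a crude, invented stand-in.

CHEAP, STRONG = "cheap-model", "strong-model"

def estimate_difficulty(prompt: str) -> int:
    """Longer, multi-concern prompts score as 'harder'."""
    score = len(prompt) // 200
    score += sum(w in prompt.lower() for w in ("refactor", "migration", "concurrency"))
    return score

def pick_model(prompt: str, threshold: int = 2) -> str:
    """Escalate to the strong model only when the prompt looks hard."""
    return STRONG if estimate_difficulty(prompt) >= threshold else CHEAP

print(pick_model("Make the button blue."))                                      # -> cheap-model
print(pick_model("Plan a concurrency-safe migration refactor of the auth layer."))  # -> strong-model
```

Real setups replace the heuristic with signals like prior failure on the cheap model or a classifier, but the dispatch shape is the same.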
Q3. How did vibe coding become a household term?
Andrej Karpathy’s February 2025 X post — “There’s a new kind of coding I call ‘vibe coding’” — captured a real shift in developer practice and spread quickly. The phrase was named Collins English Dictionary’s Word of the Year for 2025, signaling that it had reached a wider audience well beyond the engineering community. By the time mainstream press covered it, the underlying tools had already evolved several times.
Q4. Should beginners learn vibe coding instead of “real” coding?
Treat them as complements. A baseline grasp of variables, control flow, and types lets you spot AI mistakes; without it, you may ship bugs you cannot recognize. The modern recommendation is to learn fundamentals while practicing vibe-style flows, not to skip one for the other. Computer-science programs in 2026 increasingly weave AI tools through the curriculum rather than banning them, on the theory that students will use these tools in the workforce regardless.
Q5. Who owns the code, and who is liable for bugs?
As of 2026 the default is “the human who prompted and accepted the code is responsible.” Open questions remain around training-data licensing (e.g., GPL contamination) and AI vendor liability. For commercial use, review the AI vendor’s terms and run a code-scanning tool to flag any risky license patterns.
Conclusion
- Vibe coding, coined by Andrej Karpathy in February 2025, is a development style where humans describe intent in natural language and the LLM produces the code, with the human reviewing by feel rather than line-by-line.
- The toolchain centers on AI-native IDEs (Cursor, Windsurf, Claude Code) and instant-preview builders (v0, Bolt, Lovable, Replit Agent).
- Karpathy reported in late 2025 that 80% of his code was AI-generated; surveys suggest a large majority of US developers had adopted some form of vibe coding by 2026.
- It excels at prototypes, internal tools, and exploratory work; production use requires the discipline of agentic engineering — review, tests, security audit.
- By 2026 Karpathy was describing the term as already passé for professional use, replaced by the more disciplined “agentic engineering.”
- It is a culturally significant term: Collins English Dictionary’s Word of the Year for 2025, indicating the practice has crossed firmly into mainstream awareness.
The bigger picture is a generational shift in what software engineering looks like. Vibe coding demonstrates that LLMs can produce useful code fluently enough that the bottleneck moves from “writing code” to “deciding what to build and verifying that it works.” Whether you call the next phase “agentic engineering” or something else, the through-line is the same: human attention concentrated on intent and oversight, with the mechanical layer of programming increasingly handled by models. The skills that retain value are exactly the ones that mattered most to senior engineers all along.
References
- Wikipedia, “Vibe coding” — https://en.wikipedia.org/wiki/Vibe_coding
- Andrej Karpathy on X (Feb 2025) — https://x.com/karpathy/status/1886192184808149383
- The New Stack, “Vibe coding is passé” — https://thenewstack.io/vibe-coding-is-passe/
- IBM, “What is Vibe Coding?” — https://www.ibm.com/think/topics/vibe-coding