AI & Development · 12 min read

Vibe Coding

Why building apps without engineers sounds great until your prototype meets real users, real data, and real attackers


Key Takeaway

Vibe coding (building software by describing what you want to AI) went from an Andrej Karpathy tweet to Collins Word of the Year in under nine months. 92% of developers now use AI coding tools daily, and 25% of Y Combinator's latest batch runs on 95% AI-generated code. But the data tells a different story than the hype: 45% of AI-generated code contains OWASP Top-10 vulnerabilities, vibe-coded projects accumulate technical debt 3x faster, and real incidents (from 1.5 million exposed auth tokens to Amazon's 6.3 million lost orders) show what happens when speed replaces engineering. We break down where vibe coding works, where it breaks, and how to bridge the gap.

What Vibe Coding Is (and Why Everyone's Talking About It)

In February 2025, Andrej Karpathy (co-founder of OpenAI, former head of AI at Tesla) posted something that caught fire. He described a new way of coding where you "fully give in to the vibes, embrace exponentials, and forget that the code even exists." He called it vibe coding. The idea: you talk to an AI, describe what you want, and it writes the software. You don't read the code. You don't review the diffs. You just run it, see if it works, and if something breaks, you paste the error back in and let the AI fix it. Nine months later, Collins Dictionary named it their Word of the Year for 2025. That should tell you how far past the developer bubble this thing has gone.

The Tools That Made It Real

Vibe coding isn't one tool. It's a category now. Bolt and Lovable let you describe an app and get a running prototype in your browser. No local setup, no terminal, nothing. Replit Agent takes it further: hand it a project description and it plans, codes, tests, and deploys on its own. v0 from Vercel generates frontend components from prompts, and it's become the go-to for developers who want speed without giving up control. Then there's Cursor and Claude Code for developers who want AI working inside their actual codebase, not generating throwaway projects. These tools are real. They work. And they've dropped the cost of going from "idea" to "working demo" from weeks to hours.

The adoption numbers are hard to ignore. 92% of US developers now use AI coding tools daily. GitHub reports 46% of all new code is AI-generated. 25% of Y Combinator's Winter 2025 batch has codebases that are 95% AI-generated, and every one of those founders is highly technical, choosing AI speed on purpose. Gartner predicts 60% of all new code will be AI-generated by the end of 2026.

Where Vibe Coding Genuinely Works

I'm not here to tell you vibe coding is bad. We use AI coding tools every day at Byte Dimensions. We wrote about that in our piece on AI-assisted development. For certain jobs, vibe coding is genuinely great. Prototypes and demos top the list. You need to show a client or investor what something could look like? You don't need production-grade code for that. You need a working demo, fast. Vibe coding gets you there in hours instead of weeks. Internal tools that three people use and don't touch sensitive data? Vibe-coded is fine. Hackathons, side projects, learning exercises where a junior dev wants to see how a framework works? Perfect. And MVPs for validation: testing whether anyone actually wants your product before you spend six figures building it properly. That's a smart use of the technology.

Good Use Cases for Vibe Coding

  • Prototypes and investor demos: speed matters more than code quality
  • Internal tools with limited users and no sensitive data
  • Hackathon projects and proof-of-concepts
  • Learning projects for developers exploring new frameworks
  • MVP validation: test the idea before you commit to building it right
  • Landing pages and marketing sites
  • Personal projects and side experiments

The Three-Month Wall

Here's what the hype leaves out. Vibe-coded projects hit a wall. Developers and researchers call it the "Spaghetti Point," and it shows up around month three. The codebase has grown beyond what anyone, human or AI, can hold in their head. The AI's context window only sees fragments. You add a new feature, and two existing ones break. Velocity drops to near zero because the team spends all their time fighting fires instead of shipping. A developer on Reddit put it bluntly: "AI will fix one thing but destroy 10 other things in your code." That tracks with the data. Vibe-coded projects accumulate technical debt 3x faster than traditionally developed software. And there's a deeper problem researchers call "comprehension debt." The people responsible for the code don't actually understand what it does. They prompted it into existence. They never read it line by line. And now they can't debug it without asking the AI, which doesn't remember what it produced last week.

The Security Numbers Are Ugly

This is where it stops being an academic debate. Georgetown's Center for Security and Emerging Technology found that 86% of AI-generated code failed basic XSS defense mechanisms. Across the board, 45% of AI-produced code samples contain OWASP Top-10 vulnerabilities: injection flaws, broken authentication, missing input validation. AI co-authored code has 1.7x more major security issues than code written by humans. Escape.tech scanned 5,600 publicly accessible vibe-coded applications. What they found: over 2,000 high-impact vulnerabilities, more than 400 exposed secrets (API keys, database credentials, auth tokens), and 175 instances of personal data left in the open. Those are live production systems. Actual user data. Hanging in the wind. And the trend is accelerating. CVE entries attributed to AI-written code jumped from 6 in January 2026 to more than 35 by March.

If your vibe-coded app handles user data, payment information, or authentication, stop and get a security review before you launch. 45% of AI-generated code has known vulnerabilities, and the AI doesn't flag them. This isn't paranoia. Live apps with paying customers have been breached because nobody checked what the AI wrote.
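To make the XSS failure concrete: the fix is usually a one-line escape that AI-generated templates often skip. This is a minimal Python sketch using the standard library's `html.escape`; the `render_comment` function is illustrative, not from any of the audited apps.

```python
import html

def render_comment(username: str, comment: str) -> str:
    """Build an HTML snippet for a user comment.

    Escaping user-supplied strings before interpolating them into markup
    is the basic XSS defense that much AI-generated code omits.
    """
    safe_user = html.escape(username)
    safe_comment = html.escape(comment)
    return f"<p><b>{safe_user}</b>: {safe_comment}</p>"

# A script-injection attempt comes out inert:
print(render_comment("mallory", "<script>steal()</script>"))
# → <p><b>mallory</b>: &lt;script&gt;steal()&lt;/script&gt;</p>
```

In a real app you'd rely on a templating engine that escapes by default rather than hand-rolled f-strings, but the principle is the same: untrusted input never reaches the page unescaped.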

When It Goes Wrong: Real Incidents

These aren't hypotheticals. Moltbook, an AI social network where the founder stated publicly that he "didn't write one line of code," left its Supabase database wide open. Row Level Security was never configured. 1.5 million authentication tokens and 35,000 email addresses exposed. Anyone could impersonate any user on the platform.

Lovable, one of the most popular vibe coding platforms, shipped a bug that inverted access control logic in its generated code. Authenticated users got blocked. Anonymous visitors got full access to everything. 170 production applications affected. 18,000 users exposed. It earned its own CVE: CVE-2025-48757.

Replit's AI agent deleted 1,206 executive records and 1,196 company records during an explicit code freeze. The instructions said, in all caps, not to make changes. The agent later admitted to "panicking." Months of business data, gone.

And then there's Enrichlead, a startup built entirely with Cursor where the founder wrote zero lines of code. After launch: API keys maxed out through unauthorized usage, customers bypassed subscription paywalls, random database entries appeared. The root cause? Authorization was enforced client-side only. The AI built something that worked for honest visitors and fell apart the second anyone tried anything unexpected.
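The Enrichlead class of bug is worth spelling out. Hiding a button in the UI is not a security boundary; anyone can call the endpoint directly. Here's a hypothetical Python sketch of the server-side check that was missing — the `User` type and `export_report` function are made up for illustration.

```python
# Hypothetical sketch: why client-side checks alone fail.
# Names (User, export_report) are illustrative, not from any real codebase.

from dataclasses import dataclass

@dataclass
class User:
    id: int
    plan: str  # "free" or "pro"

def export_report(user: User) -> str:
    # The server re-checks entitlement on every request, regardless of
    # what the client-side UI showed or hid. A client that calls the
    # endpoint directly still hits this check.
    if user.plan != "pro":
        raise PermissionError("paid feature: upgrade required")
    return "report-data"

# The paywall holds even when the request bypasses the UI:
try:
    export_report(User(id=1, plan="free"))
except PermissionError as e:
    print(e)  # paid feature: upgrade required
```

The vibe-coded version did the `plan != "pro"` check only in browser JavaScript, so editing one variable in DevTools was enough to bypass the paywall.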

Amazon: When AI Code Hits Production at Scale

In March 2026, AI-assisted code changes at Amazon contributed to a series of outages. On March 2, an incident caused 120,000 lost orders and 1.6 million website errors. Three days later, a separate outage caused a 99% drop in orders across North American marketplaces. 6.3 million lost orders. Amazon's response was telling. Dave Treadwell, SVP of e-commerce services, launched a 90-day code safety reset across 335 critical systems. Senior engineers now have to sign off on all AI-assisted changes from junior and mid-level developers. When one of the world's most sophisticated engineering organizations has to pump the brakes on AI-generated code, that says something about where we are.

Timeline: Vibe Coding's Rise and Its Incidents

Feb 2025: Karpathy Coins the Term

Andrej Karpathy posts about "vibe coding" on X, describing a workflow where you "fully give in to the vibes" and let AI write all the code. The term goes viral instantly.

Mar 2025: Y Combinator Reveals AI Code Stats

YC CEO Garry Tan reveals that 25% of the Winter 2025 batch has codebases that are 95% AI-generated. TechCrunch confirms all founders involved are highly technical.

Nov 2025: Collins Word of the Year

"Vibe coding" is named Collins Dictionary's Word of the Year for 2025, beating out "broligarchy" and "task masking." The term has crossed from tech jargon into mainstream vocabulary.

Early 2026: Moltbook Breach

AI social network Moltbook, built without writing "one line of code," exposes 1.5 million authentication tokens and 35,000 email addresses. Root cause: Row Level Security never configured.

Early 2026: Lovable Access Control Inversion

Lovable-generated code inverts authentication logic across 170 production apps: authenticated users blocked, anonymous users granted full access. 18,000+ users affected. Assigned CVE-2025-48757.

Feb 2026: Replit Agent Deletes Production Data

Replit's AI agent wipes 1,206 executive records and 1,196 company records during an explicit ALL-CAPS code freeze. Agent admits to "panicking." Months of business data lost.

Mar 2, 2026: Amazon AI Code Incident #1

AI-assisted code changes contribute to an Amazon.com incident causing 120,000 lost orders and 1.6 million website errors.

Mar 5, 2026: Amazon AI Code Incident #2

A separate AI-related outage causes a 99% drop in orders across North American marketplaces. 6.3 million orders lost. Amazon initiates a 90-day code safety reset across 335 systems.

The Prototype-to-Production Gap

AI gets you roughly 80% of the way to a working application. That last 20% is where the actual engineering lives: security, error handling, scaling, edge cases, compliance, monitoring. And that 20% takes 80% of the effort. A security firm called LaSoft audited an actual vibe-coded SaaS product that was live and taking money from customers. They found nine critical issues. Not minor nitpicks. Production API keys baked into the public JavaScript bundle. No database transactions on multi-step operations, so customers could end up in broken half-states. N+1 database queries that brought the server CPU to 100% with just 200 simultaneous connections. Payment webhooks without idempotency keys, causing duplicate charges every time the provider retried. No server-side input validation at all. No GDPR compliance: no data export, no right-to-erasure, nothing. No backups. Not a single test. Not a line of documentation. And here's the thing. The app worked. If you clicked through the happy path, it looked fine. The problems only showed up when actual people did unpredictable things at scale.
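The N+1 pattern from that audit is easy to reproduce in miniature. This sketch uses an in-memory SQLite database; the table and column names are made up, but the shape of the bug is exactly what the auditors described: one query for the parent rows, then one more query per row, versus a single JOIN.

```python
# Illustrative sketch of the N+1 query pattern, using in-memory SQLite.
# Table and column names are invented for the example.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'lin');
    INSERT INTO orders VALUES (10, 1, 9.5), (11, 1, 4.0), (12, 2, 7.25);
""")

def totals_n_plus_one():
    # One query for the users, then one more query PER user: 1 + N round
    # trips. Fine in a demo with five rows, deadly under real load.
    users = db.execute("SELECT id, name FROM users").fetchall()
    return {
        name: db.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
            (uid,),
        ).fetchone()[0]
        for uid, name in users
    }

def totals_batched():
    # One JOIN does the same work in a single round trip.
    rows = db.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """).fetchall()
    return dict(rows)

assert totals_n_plus_one() == totals_batched() == {"ada": 13.5, "lin": 7.25}
```

Both functions return identical results, which is why the bug survives code review by people who only check outputs: the difference is invisible until the row count and the connection count climb.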

What the SaaS Audit Actually Found

  • Production API keys exposed in the public JavaScript bundle, visible in DevTools
  • No database transactions: multi-step operations left users in broken partial states
  • N+1 queries: 200 simultaneous visitors brought the database to 100% CPU in 90 seconds
  • Payment webhooks without idempotency: duplicate charges on every retry
  • Single server, no backups, no health checks, no recovery plan
  • Unrestricted file uploads: any file type accepted, stored in a publicly accessible directory
  • No server-side input validation: SQL injection vectors wide open
  • GDPR non-compliance: no data export, no right-to-erasure, no processing agreements
  • Not a single automated test and no documented business logic

When to Vibe Code and When to Stop

Here's a simple rule. If nobody depends on your software working correctly, vibe code all you want. Ship fast, learn fast, break things. But once actual people rely on it with live data, actual money, or any expectation of uptime? It needs engineering. Not just code generation. Actual engineering. Authentication that holds up against people actively trying to break it, not just happy-path demos. Input validation on the server, not just the client. Database architecture that won't collapse under load. Monitoring that tells you something broke before your users do. Tests that catch regressions before they ship. The smartest founders we work with use vibe coding to get to a running prototype in days, then bring in engineers to turn that into something production-grade. They don't throw the prototype away. They use it as a spec. "This is what it should do." And then they build it properly.

How We Handle This at Byte Dimensions

We use AI every day. Claude, Cursor, Copilot. They're in every project. We wrote about how that works in practice in our piece on AI-assisted development. The difference between what we do and pure vibe coding is what happens after the AI generates code. Every line goes through review. We run security checks. We write tests. We design database schemas that don't fall over under load. We build interactive prototypes fast, often in days, because AI handles the scaffolding. But when those prototypes turn into production applications, we do the engineering work that AI still can't do reliably: security architecture, proper authentication, input validation, monitoring, backup strategies, GDPR compliance. For founders who've already vibe-coded an MVP and need to take it to production, that's a big part of what we do. We audit what exists, figure out what can stay and what needs rebuilding, and get it to production-grade without starting from scratch. The prototype isn't wasted work. It's the clearest possible spec for what the product should do.

Already vibe-coded something that's getting traction? Don't panic and don't start over. Get a security audit first. That's your biggest risk. Then identify the three or four architectural problems that will block you from scaling (usually database design, authentication, and server-side validation). Fix those, add test coverage for business-critical paths, and you've got a real foundation to build on.
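"Test coverage for business-critical paths" can sound abstract, so here's a hypothetical sketch of what the first such test looks like: pin down one piece of revenue logic with a few assertions so the AI (or anyone else) can't silently break it later. The `prorated_charge` function is invented for illustration.

```python
# Hypothetical sketch: a regression test pinning down one business-critical
# path (subscription proration). The function is illustrative.

def prorated_charge(monthly_cents: int, days_used: int, days_in_month: int) -> int:
    """Charge for the fraction of the month actually used, in cents."""
    if not 0 <= days_used <= days_in_month:
        raise ValueError("days_used out of range")
    return round(monthly_cents * days_used / days_in_month)

def test_proration():
    assert prorated_charge(3000, 15, 30) == 1500   # half the month, half the price
    assert prorated_charge(3000, 0, 30) == 0       # nothing used, nothing billed
    assert prorated_charge(3000, 30, 30) == 3000   # full month, full price

test_proration()
```

A handful of tests like this over billing, authentication, and data deletion paths is cheap to write and catches the "fix one thing, break ten others" regressions the Reddit comment above describes.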

The Bottom Line

Vibe coding is real and it isn't going away. The tools keep getting better, and the productivity gains for prototyping and exploration are legitimate. We use AI-assisted coding in every project at Byte Dimensions and we're faster because of it. But there's a gap between "it works on my screen" and "it works for ten thousand strangers who include people actively trying to break it." That gap is called software engineering. AI hasn't closed it. Not yet. The best approach in 2026 isn't "vibe code everything" or "never vibe code." It's knowing where one ends and the other begins. Start fast. Validate the idea. Then build it right.


Sources

  • Andrej Karpathy — Original "vibe coding" post on X (February 2025)
  • TechCrunch — "A quarter of YC startups have codebases almost entirely AI-generated" (March 2025)
  • Garry Tan (Y Combinator CEO) — "25% of W25 batch, 95% of code is LLM generated" (X, March 2025)
  • Collins Dictionary — Word of the Year 2025: "Vibe Coding" (November 2025)
  • GitHub — 46% of all new code is AI-generated (2025 Developer Survey)
  • Georgetown CSET — 86% of AI-generated code failed XSS defense mechanisms
  • Escape.tech — Security scan of 5,600 vibe-coded apps: 2,000+ vulnerabilities found
  • LaSoft — "We Audited a Vibe-Coded SaaS Product and Found 9 Critical Issues"
  • The New Stack / Tom's Hardware — Amazon AI-assisted code outages and 90-day safety reset (March 2026)
  • Fortune — "An AI agent destroyed this coder's entire database" (March 2026)
  • Red Hat Developer — "The Uncomfortable Truth About Vibe Coding" (February 2026)
  • Wiz Security — Moltbook breach analysis: 1.5M authentication tokens exposed
Reno Toonen

Founder & Lead Developer at Byte Dimensions

Has been shipping AI-assisted code in production since 2023, using Claude, Copilot, and Cursor daily. Regularly turns vibe-coded prototypes into production-grade applications. Runs Byte Dimensions, where AI speed meets actual software discipline.

Published April 1, 2026

