
Engineering

Vibe coding is real.
But it still needs a pilot.

AI coding tools can now build in weeks what used to take months. The catch? Speed without judgment is just a faster way to build the wrong thing.

Something shifted in the last year. Founders who couldn’t write a line of code six months ago are now shipping working prototypes. Cursor, Windsurf, Claude — these tools are genuinely impressive, and the pace of improvement hasn’t slowed down. Vibe coding is real.

I use these tools every day. I’m not here to tell you they’re a gimmick or that you need to go back to writing everything by hand. That ship has sailed, and honestly, good riddance to a lot of the manual work.

But I’ve also seen what happens when founders lean on them without understanding what they can and can’t do. And the failure mode is almost always the same: the code works, the demo looks great, and then six months later you’re staring at a codebase that can’t be safely extended without pulling everything apart.

What vibe coding tools are actually good at

They’re exceptional at execution within a well-defined scope. Give a good coding agent a clear problem, the right context, and a constrained surface area — it’ll write code faster than any human and often better than a junior developer. Boilerplate, CRUD operations, API integrations, UI scaffolding. All of that? Genuinely accelerated.

Where they fall apart is judgment. They don’t know your business. They don’t know that the data model you’re asking them to scaffold will need to support multi-tenancy in six months. They don’t know that the auth pattern they’re implementing — technically correct for the simple case — is going to be a security nightmare when you add enterprise SSO. They optimize for “this works” without any view of “this will need to scale.”
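To make the multi-tenancy point concrete, here is a minimal sketch of the trap. All names here (`Project`, `orgId`, and so on) are hypothetical and purely illustrative, not from any real codebase: the single-tenant shape a coding agent tends to default to, versus the tenant-scoped shape a multi-tenant product eventually needs.

```typescript
// V1: what a tool scaffolds for the simple case. No tenant boundary at all,
// which is fine until a second organization signs up.
interface ProjectV1 {
  id: string;
  name: string;
  ownerEmail: string;
}

// V2: every row carries its tenant, and every query is scoped by it.
// Retrofitting orgId onto live production data is the expensive part.
interface ProjectV2 {
  id: string;
  orgId: string; // the tenant boundary, present from day one
  name: string;
  ownerEmail: string;
}

// A scoped accessor makes the tenant filter hard to forget.
function projectsForOrg(all: ProjectV2[], orgId: string): ProjectV2[] {
  return all.filter((p) => p.orgId === orgId);
}

const db: ProjectV2[] = [
  { id: "p1", orgId: "acme", name: "Onboarding", ownerEmail: "a@acme.com" },
  { id: "p2", orgId: "globex", name: "Billing", ownerEmail: "b@globex.com" },
];

console.log(projectsForOrg(db, "acme").map((p) => p.id)); // only acme's rows
```

The code itself is trivial either way; the point is that nothing in the V1 prompt would ever surface the question. Someone has to know to ask it.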

The tool does exactly what you ask. The problem is that early-stage founders often don’t yet know the right questions to ask.

And that’s not a knock on founders. It’s just the nature of the 0-to-1 phase. You’re figuring out the product while simultaneously trying to build it. The architectural implications of today’s decisions aren’t obvious yet.

The failure mode no one talks about

Here’s what I see happen. A non-technical founder uses Cursor or a similar tool to get to a working V1. The product looks good. Users start coming in. Investors get interested. And then the codebase hits a wall — some combination of performance issues, security gaps, or a feature request that requires restructuring something that was never designed to be restructured.

At that point, hiring an engineer to fix it is harder than starting fresh. They’re inheriting code they didn’t write, built on assumptions they don’t understand, using patterns that were chosen because the AI defaulted to them, not because they were right for the product.

I talked to a founder recently who had built a reasonably sophisticated B2B SaaS product almost entirely with AI coding tools. Smart person, great product instincts, no engineering background. When they finally brought on their first engineer, that engineer spent the first six weeks doing nothing but untangling the data model. Six weeks of a senior engineer’s time, at $200k+ annual salary, just to get to a place where new features could be built safely. That’s expensive.

What this looks like in practice

A founder building a workflow automation product vibe-coded their way to a working prototype in about three weeks. Impressive. But they’d designed each workflow as a self-contained object with logic embedded directly in the record. Made sense for the demo — each workflow just ran itself.

The problem surfaced when they needed to add a feature their first paying customer asked for: the ability to share workflow templates across an organization and let teams customize them. That feature required a fundamentally different data model — templates, instances, inheritance. The existing structure couldn’t support it without a full rewrite of the core data layer.

Three weeks to build. Four weeks to unwind one architectural assumption.
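A rough sketch of what that restructuring looks like, with hypothetical names standing in for the founder's actual schema: V1 embeds the logic in each workflow record, while V2 splits it into a shared template plus per-team instances that inherit defaults and override selectively.

```typescript
// V1: a self-contained workflow. Runs itself fine in a demo, but there is
// nothing to share across an organization and nothing for a team to customize.
interface WorkflowV1 {
  id: string;
  steps: string[];
}

// V2: a template owns the defaults; an instance references it and overrides.
interface WorkflowTemplate {
  id: string;
  steps: string[];
  settings: Record<string, string>;
}

interface WorkflowInstance {
  id: string;
  templateId: string;
  overrides: Partial<Pick<WorkflowTemplate, "steps" | "settings">>;
}

// Inheritance: resolve an instance by layering its overrides on the template.
function resolve(
  template: WorkflowTemplate,
  instance: WorkflowInstance
): WorkflowTemplate {
  return {
    id: instance.id,
    steps: instance.overrides.steps ?? template.steps,
    settings: { ...template.settings, ...(instance.overrides.settings ?? {}) },
  };
}

const tmpl: WorkflowTemplate = {
  id: "t-approval",
  steps: ["submit", "review", "approve"],
  settings: { notify: "email" },
};

const salesCopy: WorkflowInstance = {
  id: "w-sales",
  templateId: "t-approval",
  overrides: { settings: { notify: "slack" } },
};

console.log(resolve(tmpl, salesCopy)); // inherits steps, overrides one setting
```

The migration is painful not because V2 is complicated, but because every existing workflow record has to be split into a template and an instance while the product keeps running.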

Where AI tools help vs. where judgment is irreplaceable

Think of the build process as a simple two-by-two: what the tools handle well on one axis, and what still requires a human who's seen what happens when things go wrong on the other.

Items in the top-right quadrant are where wrong decisions compound. AI tools don’t know your growth trajectory — a human has to make these calls.


The bottom-left quadrant is where vibe coding shines. Fast, low-risk, and honestly better than most junior developers on a good day. The top-right is where it can quietly destroy your future. The tools will happily implement auth, data models, and infrastructure patterns — they just won’t know if those patterns are right for where you’re going.

What the right setup looks like

I’m not arguing for slowing down. The whole point of building AI-native is that you move faster. But speed with intention beats speed alone.

The architectural decisions — data model, auth strategy, multi-tenancy approach, API design — those get made deliberately, by someone who’s seen what happens when they go wrong. Then the execution layer gets handed to the tools. Let them write the scaffolding, the tests, the integrations. That’s where they save enormous amounts of time.

The metaphor I keep coming back to is aviation. Modern commercial planes can fly themselves in most conditions. Autopilot is real, it works, and it makes flying more reliable, not less. But you still need a pilot — not because the autopilot is incompetent, but because judgment in edge cases is what separates a safe landing from an incident report. The autopilot doesn’t know what it doesn’t know. That’s the pilot’s job.

Vibe coding tools are autopilot. Extraordinary within their envelope. You still need someone in the left seat who’s been in enough edge cases to know when to take the controls.

Founder Takeaway

Use AI coding tools — they’re genuinely powerful and the productivity gains are real. But before you start generating code, make sure someone with architectural experience has signed off on the foundational decisions: your data model, your auth and permissions design, your approach to multi-tenancy, your API contract.

Those decisions are cheap to make correctly at the start. They’re expensive to unwind after you’ve built on top of them. The tools will execute anything you point them at. The question is whether what you’re pointing them at is the right thing.

Get the architecture right first. Then let the tools fly.