A primer on vibe coding
Issue 251: How to collaborate with code generation tools more like a creative partner than a command line
Vibe coding is a phenomenon that became popular with the continued advancement of code generation this year. I highly recommend Carly Ayres's piece on vibe coding, which she wrote while at Figma. For me, the term is perfectly fitting. I grab my cold brew, put on a multi-hour synthwave playlist, and start making software in inconsequential ways. No unit testing, no continuous integration... just building for the vibes.
But what happens when you want the vibes to be something practical? In general, many of these tools operate the same way: you type into a text dialog with a prompt of what you want to build. The aspirational promise is that they can take your input and produce exactly what you envision—what people call “one-shot prompting.”
Those who are already avid vibe coders may know many of these techniques, but my hope is you’ll learn something new or sharpen what you already practice.
There’s no shortage of one-shot prompting tools—Bolt, Cursor, Lovable, Replit, Vercel v0, and Figma Make, to name a few. Though they share similar interactions (enter a prompt and automagically generate an idea), each offers unique features that might suit different workflows. Personally, I use Cursor and Replit interchangeably. My recommendation is to give them all a try and figure out which one is best for you. Instead of optimizing for any one tool, these tips are general principles you can apply across your vibe coding workflow.
My vibe coding tips
Build context
One of the most important steps in vibe coding is setting up proper context. Just like you’d onboard a new teammate, your LLM benefits from clear documentation and configuration files that define how your system works. Writing markdown files that describe architecture, naming conventions, and workflows helps establish shared norms. Config files, whether for frameworks, environments, or preferences, act as a signal to the model, giving it a clearer sense of your intent and expectations. The more structure you provide, the more predictably it can collaborate.
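As a concrete illustration, here is the kind of context file I have in mind. The filename, stack, and every convention below are hypothetical; the point is simply to write down the same norms you’d give a new teammate (tools like Cursor can pick these up from a rules file or README):

```markdown
# Project context for AI assistants

## Architecture
- Next.js frontend with an Express API layer; everything in TypeScript.
- State lives in React context; no external state library.

## Conventions
- Route handlers go in `src/routes/`, one file per resource.
- Functions are camelCase; components are PascalCase.
- Every new endpoint needs a matching test in `tests/`.

## Workflow
- Small, focused changes only. Do not rewrite unrelated files.
- Ask before adding a new dependency.
```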
IA before AI
Plan before you prompt. A clear sense of your information architecture (IA) and conceptual model is foundational to effective vibe coding. IA is the skeleton of your system: the way components relate, how data flows, and how functionality is structured. When you define these clearly, you give the AI a map instead of asking it to guess the terrain. This reduces the chances of logic loops, misnamed functions, or cascading rewrites later on. AI works best when it’s building on a strong conceptual foundation; otherwise, it can paint itself into a corner. Vibe coding isn’t about skipping planning; it’s about translating a well-thought-out structure into working software, one prompt at a time, until you get to a high-vibe flow state.
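One way I make that map tangible is to sketch the conceptual model as plain types before prompting at all. Here’s a hypothetical sketch for a note-taking app (every name here is made up for illustration); the point is that the entities and relationships are decided by you, not guessed by the model:

```typescript
// Conceptual model for a hypothetical note-taking app.
// Defining entities and relationships up front gives the AI
// a map of the system before any feature prompts.

type User = {
  id: string;
  email: string;
};

type Notebook = {
  id: string;
  ownerId: string; // relates a Notebook to its User
  title: string;
};

type Note = {
  id: string;
  notebookId: string; // every Note lives in exactly one Notebook
  body: string;
  updatedAt: Date;
};

// A relationship made explicit: which notes belong to a notebook.
function notesIn(notebookId: string, notes: Note[]): Note[] {
  return notes.filter((n) => n.notebookId === notebookId);
}
```

A page of types like this, pasted into your first prompt, prevents the model from inventing its own (often conflicting) data model halfway through.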
100 shot > 1 shot
One-shot prompting is overrated. I prefer small prompts with clear intent. Just like writing clean code, I treat prompting like making baby pull requests—small, focused changes that move the system forward without creating unnecessary complexity. This method allows for tighter feedback loops, more precise outcomes, and easier debugging.
Sometimes, I’ll drop an entire sequence of steps into my prompt to scaffold the interaction. Instead of saying, “Build an auth system,” I break it down like this:
You're helping me build an authentication system for a web app.
Let's start with step-by-step requirements. Don't generate any code yet. First, help me define the high-level components:
1. What modules are required for a simple email/password auth system?
2. What are the user flows (signup, login, forgot password)?
3. What backend services or storage will we need?
4. What are the edge cases we should prepare for?
From there, you can refine the specifics:
Great. Based on those components, generate the backend route handlers in Express for signup and login only. Use TypeScript. Don’t worry about middleware or validation just yet.
This approach not only gives the LLM a clear path to follow, but also mirrors how you’d collaborate with a teammate—through context-rich, incremental discussion. Success doesn’t come from writing the perfect prompt up front; it comes from building momentum through intentional iteration.
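For illustration, here is roughly the shape of code that second prompt might produce. This is a hedged sketch, not production auth: the in-memory store and the scrypt parameters are my assumptions, and in a real Express app these functions would be mounted on routes rather than called directly:

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from "node:crypto";

// In-memory store standing in for a real database.
const users = new Map<string, { salt: string; hash: string }>();

function hashPassword(password: string, salt: string): string {
  return scryptSync(password, salt, 32).toString("hex");
}

// Signup: reject duplicates, store a salted hash, never the raw password.
export function signup(email: string, password: string) {
  if (users.has(email)) return { ok: false, error: "email already registered" };
  const salt = randomBytes(16).toString("hex");
  users.set(email, { salt, hash: hashPassword(password, salt) });
  return { ok: true };
}

// Login: constant-time comparison against the stored hash.
export function login(email: string, password: string) {
  const record = users.get(email);
  if (!record) return { ok: false, error: "unknown email" };
  const candidate = Buffer.from(hashPassword(password, record.salt), "hex");
  const stored = Buffer.from(record.hash, "hex");
  return timingSafeEqual(candidate, stored)
    ? { ok: true }
    : { ok: false, error: "wrong password" };
}
```

Wiring these into Express with `app.post("/signup", ...)` and `app.post("/login", ...)` is exactly the kind of follow-up you’d ask for in the next prompt, one small step at a time.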
Be sequential
Avoid compound prompts. The moment you write “and” or “or,” it’s a signal to slow down and simplify. LLMs work best when each task has a clear, unambiguous objective. When you combine multiple requests, like “Build a login page and generate tests” or “Use either OAuth or email/password,” you increase the chances of vague, partial, or inconsistent outputs.
Instead, treat each prompt as a single, focused instruction. Break large goals into atomic steps, just like you would with modular code or well-scoped tickets. Prompt one thing at a time: define the endpoint, then write the handler, then add validation, then write the tests. You can even number your steps and guide the model through them in order.
Sequential prompting makes it easier to course-correct, debug, and layer improvements without needing to untangle a mess of assumptions. It’s not just about getting better output. It’s about making your process legible to both you and the model.
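To see what “one thing at a time” buys you, here’s a hypothetical validation step added as its own prompt, after the endpoint and handler already exist. The function name and rules are illustrative assumptions; because it’s a separate, atomic change, a bug here can’t get tangled up with a bug in the handler logic:

```typescript
// Step 3 of a sequential build: input validation only.
// The endpoint and handler were earlier steps; tests come next.

type ValidationResult = { valid: true } | { valid: false; reason: string };

export function validateSignupInput(
  email: string,
  password: string
): ValidationResult {
  // Deliberately simple email check; a real app might use a library.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    return { valid: false, reason: "malformed email" };
  }
  if (password.length < 8) {
    return { valid: false, reason: "password must be at least 8 characters" };
  }
  return { valid: true };
}
```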
Debug together
The rite of passage for vibe coding is when you ask the LLM to change the style of a button and it rewrites your entire codebase. It’s a shared pain, but also a turning point when you stop treating the LLM as a code vending machine and begin treating it like a pair programmer learning your system.
Debugging isn’t just about fixing what’s broken. It’s about aligning mental models between you and the model. One of the most effective things you can do is ask the LLM to reason about what it’s doing. Instead of fixing things manually, prompt it to identify what went wrong, why it might’ve happened, and how to fix it.
Here are a few ways I debug with the model:
- Validate TypeScript routing logic: “Check if any of the route handlers in this file are missing type annotations or are using deprecated syntax that might break in Next.js.” (These are the most common errors for me.)
- Check framework assumptions: “You used `app.get()` in this Express route—are there any middleware or headers missing that are expected by the auth layer?”
- Ask it to compare its own output: “Compare this new version of the function with the one from earlier. What’s changed, and why?”
- Have it walk through control flow: “Step through this function line-by-line and tell me what each part does. Where might this fail if the input is malformed?”
- Catch side effects: “You updated the UI component, but did you also change any state management logic that might have unintended consequences?”
The more you treat the model like a collaborator with questions, reviews, and sanity checks, the more resilient your code becomes. Vibe coding isn’t about outsourcing responsibility. It’s about shared authorship. Debugging together makes that collaboration real.
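As an example of the control-flow question, here’s a hypothetical function you might ask the model to step through, already in the defensive shape a good walkthrough should lead to (an unguarded `JSON.parse` would throw on malformed input, which is exactly the failure you’re probing for):

```typescript
// A parser you might ask the model to walk through line by line.
// The walkthrough should flag two failure modes: malformed JSON,
// and well-formed JSON with the wrong shape.

export function parseUserPayload(raw: string): { email: string } | null {
  let data: unknown;
  try {
    data = JSON.parse(raw); // throws on malformed input
  } catch {
    return null;
  }
  if (
    typeof data === "object" &&
    data !== null &&
    typeof (data as { email?: unknown }).email === "string"
  ) {
    return { email: (data as { email: string }).email };
  }
  return null; // well-formed JSON, wrong shape
}
```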
Recap
AI is incredibly good at programming and retrieving information, but it’s not a mind reader. That’s your job. For now, the potential of an LLM is contingent on the quality of the prompter (you). As the human in the loop, your role is to provide the knowledge and context the model can’t guess. When you get that right, the collaboration feels fluid. When you don’t, the vibes die down quickly.
If the output isn’t working, don’t just blame the model. Ask yourself: was the prompt specific? Was the task clear and atomic? Did you build enough context before asking for code?
Simply put, vibe coding is sketching with code in the best flow state possible. I’ve been vibe coding my entire career and most of it was done manually with no AI. You’re working on an idea and making it tangible through interaction. Prompt, respond, tweak, repeat. The more you treat it as a creative back-and-forth, the more the model becomes a partner, not a tool to wrestle with. Happy vibing.
Tunes
In order to vibe code, you need great music to keep the flow state. Here are a few recommendations: Tron: Legacy[1], Tron: Legacy Reconfigured, Ghosts I-IV by Nine Inch Nails, The Animatrix soundtrack, _+ by BT, the Gryff playlist on SoundCloud, Daft Punk: Alive 2007, The Mix-Up by The Beastie Boys, Atlas by FM-84, or this 10-hour Cyberpunk tribute[2].
Introducing Tapestry
Speaking of vibe coding, I built an app. Tapestry is high-touch recruiting in the intelligence era. Using only Replit and Visual Electric, I shipped this alpha while waiting in line for coffee, between respawns in Call of Duty, and over a few vibe coding sessions. All told, I’d guess it was less than a day of work.
If you want to follow along, give it a watch on GitHub. I also have referrals to Replit and Visual Electric if you’d like to try them yourself.
Hyperlinks + notes
A collection of references for this post, updates, and weekly reads.
Faster, Smarter, Cheaper: AI Is Reinventing Market Research by a16z
A great talk by Amrita Saraogi coming up at Button Conference: How to design the personality of AI tools
How Top AI Tools Onboard New Users in 2025 by Kate Syuma
Approaching AI as a design leader: rethinking the customer journey with a layer of AI-first by Jehad Affoneh
Jeremy Dodd will be speaking at ValueUX, don’t miss it! ValueUX Conference
[1] Incredible soundtrack, terrible movie.
[2] Too short in my opinion.