The secret to vibe coding. And the only skill that matters in the age of AI.

Original can be found here: https://x.com/jesseposner/status/2025680970784137238?s=20

The most important new skill in software engineering isn’t prompt engineering. It’s the ability to extract process from practice in real time while the practice is happening.

I’ve been building with AI since the day ChatGPT was released to the public. But something changed in the last week, across nine repos and 500+ commits, that feels like a permanent unlock. A methodology emerged: concurrent agent workstreams managed by a meta-layer, specific scaffolding patterns, verification protocols, prompt templates. None of it was planned. All of it was discovered by doing the work and then naming what happened. That act of naming is the skill.

The shape

You build something. It fails, or succeeds in a way you didn’t expect. You name what happened. Not a vague retrospective, not “lessons learned.” You name the specific thing: “the agent skipped the design loops because the prompt framed them as suggestions.” That naming produces a pattern: prompts must make critical steps into hard requirements with explicit checkpoints. The pattern becomes a choice you make going forward. The choice becomes process. And then the process itself is subject to the same cycle.
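That named pattern can be made mechanical. Here is a minimal sketch, with entirely invented names and wording (nothing here is from the original workflow), of a prompt builder that frames each critical step as a hard requirement with an explicit checkpoint instead of a suggestion:

```python
# Hypothetical sketch: turn critical steps into hard requirements with
# explicit checkpoints, so an agent cannot silently skip them.

def build_agent_prompt(task: str, critical_steps: list[str]) -> str:
    """Frame each critical step as a MUST plus a checkpoint the agent
    has to report on before continuing."""
    lines = [f"Task: {task}", "", "Hard requirements (not suggestions):"]
    for i, step in enumerate(critical_steps, 1):
        lines.append(f"{i}. MUST {step}.")
        lines.append(
            f"   CHECKPOINT {i}: before continuing, state how you "
            f"satisfied requirement {i}."
        )
    lines.append("")
    lines.append("Do not mark the task complete until every checkpoint is reported.")
    return "\n".join(lines)

prompt = build_agent_prompt(
    "Add retry logic to the sync job",
    [
        "run the design loop and record the chosen approach",
        "run the full test suite and paste the summary line",
    ],
)
```

The point of the structure, not the exact wording, is the pattern: requirement and checkpoint travel together, so skipping one is visible in the agent's own output.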

This is not reflection. Reflection happens after. This happens during. You’re building and watching yourself build at the same time, and the watching changes the building. The Greeks had separate words for this. Episteme: knowledge you can state. Techne: knowledge that only exists in the act of making. Phronesis: the practical wisdom to know which situation calls for which response. What I’m describing is all three at once, collapsing into a single recursive act. You’re writing software, designing the process for writing software, and evolving the process for designing processes, all in the same afternoon.

The scale

The meta-process is too complex for one person to hold.

Right now I’m working across five tmux windows, each with multiple panes. An automated agentic benchmarking lab runs for weeks in one. A kernel fix feeds into a brainstorm for the next feature stream in another. A research agent is reading documentation in a third. A planning agent is structuring the next phase in a fourth. A flex pane sits ready for code reviews as the other streams produce PRs. The meta-agent in each window helps me track what’s happening in that space. And when I need to see the whole board, the strategic AI visualizes the layout, maps the dependencies, identifies the critical path, and tells me what to spin up next.

This is not just writing code. Every one of those panes might be brainstorming, researching, planning, or meta-planning. Each phase of the full cycle of building software (what should we build, what does the landscape look like, how should we structure it, how should we structure the process for structuring it) can be delegated to its own agent in its own space. The meta-agent helps me oversee the entire workflow, not just the implementation.

The cognitive load isn’t coding. It’s orchestrating. Context management at a scale that exceeds human working memory. I can name patterns as they emerge. What I cannot do is hold the state of every active workstream while crafting the next agent prompt while remembering that a different stream shipped code I haven’t reviewed yet.
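That orchestration state is small enough to externalize. A minimal sketch, with invented names throughout, of the kind of registry a meta layer could hold so that "what is running, what shipped, what's unreviewed" lives outside human working memory:

```python
# Hypothetical sketch: externalize workstream state so the orchestrator,
# not the human, tracks which streams shipped code that is still unreviewed.

from dataclasses import dataclass, field

@dataclass
class Workstream:
    name: str
    status: str = "running"  # running | blocked | shipped
    unreviewed_prs: list[str] = field(default_factory=list)

class Board:
    """The 'whole board' view: every active stream in one place."""

    def __init__(self) -> None:
        self.streams: dict[str, Workstream] = {}

    def spin_up(self, name: str) -> Workstream:
        ws = Workstream(name)
        self.streams[name] = ws
        return ws

    def needs_attention(self) -> list[str]:
        """Streams that shipped code nobody has reviewed yet."""
        return [s.name for s in self.streams.values() if s.unreviewed_prs]

board = Board()
board.spin_up("benchmark-lab")
kernel = board.spin_up("kernel-fix")
kernel.status = "shipped"
kernel.unreviewed_prs.append("PR-104")  # invented identifier
```

A dictionary and a dataclass are obviously not the real system; the design point is that "what needs attention" becomes a query against recorded state rather than a fact someone has to remember.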

The partnership

So the meta lane has a collaborator. A strategic AI that holds the context I can’t.

When I need to send an agent into a new workstream, I don’t write the prompt alone. I describe the intent; the AI drafts the prompt structure, tells me where to place instructions in the document, and explains why. When that agent is running and I have a gap in my attention, the AI offers options: think through the strategic layer, or check on a different workstream. When I say “let’s check on that other lane,” it reads the code, runs the tests, summarizes the architecture, and tells me whether the lane is done. I never loaded that context. The meta lane carried it.

The meta-agent also runs check-ins on the meta-process itself. What’s working, what isn’t, where do we course correct. Because its context is kept purely strategic (never polluted with implementation detail), it can manage complex projects over weeks and months without losing the thread. And when context does run thin, we handle it explicitly: clean handoffs before compaction, markdown logs of progress and decisions, memory files that survive session boundaries. And the meta-process has its own meta-process for persistence and cohesion.
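A sketch of what a "clean handoff before compaction" could look like mechanically (the file name and fields are my own invention, not from the original setup): strategic state is serialized to a markdown memory file that the next session reads before doing anything else.

```python
# Hypothetical sketch: persist strategic context across session boundaries
# as a markdown handoff file written before the context window is compacted.

from pathlib import Path
from datetime import date

MEMORY_FILE = Path("HANDOFF.md")  # invented name

def write_handoff(decisions: list[str], next_steps: list[str]) -> None:
    """Dump only strategic state (decisions and next steps), never
    implementation detail, so the file stays small and readable."""
    body = [f"# Handoff ({date.today().isoformat()})", "", "## Decisions"]
    body += [f"- {d}" for d in decisions]
    body += ["", "## Next steps"]
    body += [f"- {s}" for s in next_steps]
    MEMORY_FILE.write_text("\n".join(body) + "\n")

def read_handoff() -> str:
    """First thing a fresh session loads, before any new work."""
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

write_handoff(
    decisions=["Separate vision and implementation into different context windows"],
    next_steps=["Review the kernel-fix PR", "Spin up the planning agent for phase 2"],
)
```

Keeping the file strategic-only mirrors the point above: the memory survives compaction precisely because it never absorbed implementation detail.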

The recursion

This is the recursion that matters. You’re using an AI to manage your process for managing AIs, and that AI maintains its own continuity through deliberate context engineering. The human provides direction, taste, and the judgment calls that only come from using the product. The meta lane AI provides working memory, context management, and a structured surface for decisions to happen on. Neither side is complete without the other. The human without the meta lane drowns in cognitive load. The meta lane without the human optimizes for the wrong thing. The system is the relationship.

Every concrete pattern I’ve found is an instance of this relationship at work. A spike proved the technology worked and simultaneously proved we didn’t want it. The insight isn’t “do spikes.” It’s that contact with a working prototype produces knowledge that specifications cannot. An agent said “all tests pass” when it was 90% done. The insight isn’t “verify your agents.” It’s that most people building with AI cannot distinguish a real result from a plausible echo of one, because they’re looking at the self-report instead of the artifact. A single context window couldn’t hold the why and the how without one compressing the other. I watched intent erode into implementation, token by token, named it “vision compression,” and the name produced the architecture: separate them into different context windows. Each pattern emerged the same way. Build, encounter surprise, name it, let the name produce a principle. Not designed. Discovered. And discovered in partnership with an AI that remembers what I said three hours ago and asks the right question when my attention is free.

The process is a product. It evolves through use, not through design. You do the thing. You name the thing. You look at the pattern. You name the pattern. You choose the pattern. And then you hold it loosely, because the next round of contact with reality will reshape it. Named problems become workable. Unnamed problems become technical debt with philosophical implications. The difference between building with AI and being built by AI is whether you can name what’s happening while it’s happening, and that naming is only possible when something is holding the context you can’t.

The distinction between writing software and thinking about writing software has collapsed into a single act, and that act requires a partnership between human judgment and machine memory that neither side can perform alone. The people who figure that out first won’t just ship faster. They’ll build things the others can’t even spec, because the spec emerges last, from a process that only exists in the doing of it.

