You're the Broker, Not the Builder
AI coding assistants are good enough to do the building. Your job is to be the broker between the problem and a scalable, resilient solution. Here's what that looks like in practice.
TL;DR
- Your job with AI coding tools isn’t building. It’s brokering between the problem and a scalable, resilient solution.
- Make it plan. Read the plan. Iterate. This is where you add the most value, and it’s your safety net when context resets.
- You’re not the QA. Bake test expectations into instructions and automation. If AI is writing the tests, raise the bar.
- Fix the pipeline, not the data. The AI will find the fastest “done.” Your job is pushing past quick fixes toward durable solutions.
- Use the processes you already know you should. Release branches, semver, local testing. AI variance is real, and process protects against it.
- Commit your work. The boring last step is the one that saves you from losing everything to a forgotten branch.
In January I wrote a post called "I Was Wrong About AI." The core of it was a mindset shift. I'd been treating AI tools like a fancy autocomplete, and once I stopped doing that, the results changed in kind. That post was about accepting AI as a real collaborator. This one is about what happens after you accept it.
Because once you start trusting AI to do the building, a different question takes over. If the AI is writing the code, what exactly is your job? I’ve spent the last three months figuring that out, and the answer is simpler than I expected.
You’re the broker. Your job is standing between the problem and a solution that is scalable, resilient, and right. The AI will find the fastest “done” if you let it. Every time. Your job is defining the requirements, constraints, and quality bar so that shortcuts aren’t available.
Make It Plan. Then Read the Plan.
This is the highest-leverage habit I’ve developed. When you give an AI agent a task, it wants to start writing code immediately. It will tell you it doesn’t need a plan. Override it.
Make it write the plan in markdown first. The schema. The data flow. The architecture decisions. Which files it intends to touch and why. Then read that plan and push back on it. This is where you add the most value, and it costs you five minutes.
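To make "write the plan first" concrete, here's the kind of skeleton I ask for. The headings are my own convention, not a feature of any tool; adapt them to your project.

```markdown
# Plan: <feature name>

## Goal
One paragraph describing what "done" looks like.

## Schema / data changes
Tables or fields touched, and why.

## Data flow
Where the data enters, where it gets transformed, where it lands.

## Files to touch
- path/to/file — what changes and why

## Open questions
Anything the agent should ask before writing code.
```

A plan in this shape is also exactly what you hand to a fresh context later, so the five minutes spent on it pays off twice.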
I use Claude Code’s plan mode constantly. I’ll describe a feature, ask it to spec it out, then iterate on the spec before a single line of code gets written. The conversation sounds like a design review. “What happens when this field is null? Why are you creating a new table instead of extending the existing one?” Those questions are cheap to ask at the planning stage and expensive to answer after implementation.
The plan serves a second purpose too. When context compacts, when the agent gets amnesia mid-thread, when you pick it up the next morning, the plan is still there. You can hand it to a fresh context and say “here’s what we agreed on, keep going.” Without it, you’re starting from scratch every time the window resets.
And don’t stop reading once the plan is approved. Follow along as it works. AI will try to hide complexity behind clean summaries. It’ll touch ten files and tell you it “refactored the auth module.” Expand that. Read the diff. Buried in there are decisions it made without asking you. Maybe it changed a database index. Maybe it moved a validation from the controller into a context module. Those might be fine decisions. They might not. But if you don’t look, you’re trusting choices you never actually reviewed.
The planning phase is where you set direction. The execution phase is where you make sure it didn’t quietly veer off course.
You’re Not the QA.
It’s tempting to treat AI like a junior developer who needs their work checked line by line. I did that for months. It doesn’t scale, and honestly, it’s a miserable way to work.
Instead of reviewing every output manually, bake your expectations into the process. Write clear test expectations into your instructions. Put them in project memories so they persist across sessions. Set up pre-commit hooks that enforce your standards before code ever hits a branch.
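As one sketch of what "enforce your standards before code hits a branch" can look like: the pre-commit framework lets you run local commands on every commit. This example assumes a Python project with ruff and pytest installed; swap in whatever formatter and test runner your stack uses.

```yaml
# .pre-commit-config.yaml — illustrative local hooks, not a prescription
repos:
  - repo: local
    hooks:
      - id: format-check
        name: formatter (check only)
        entry: ruff format --check .
        language: system
        pass_filenames: false
      - id: tests
        name: run test suite
        entry: pytest -q
        language: system
        pass_filenames: false
```

The point isn't these particular tools. It's that the standard lives in the repo, where it fires on the agent's commits the same as yours.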
Here’s the thing that changed my thinking on this. If you’re using AI to write code, and the AI can also write tests, maybe 100% test coverage isn’t an unreasonable bar anymore. For a human, that’s recreational. Nobody actually maintains 100% coverage on a real codebase.
But the AI doesn’t get bored. It doesn’t skip the edge case because it’s Friday afternoon. Set the coverage threshold high, wire it into your automation, and let the machine hold itself accountable.
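Wiring the threshold into automation can be as small as a few lines of config. Assuming a Python project using pytest with the pytest-cov plugin (and `myapp` standing in for your package name):

```toml
# pyproject.toml — fail the suite if coverage drops below the bar
[tool.pytest.ini_options]
addopts = "--cov=myapp --cov-fail-under=100"

[tool.coverage.report]
show_missing = true  # list uncovered lines so the agent knows exactly what to target
```

With `--cov-fail-under`, a missed edge case is a red test run, not something you have to notice in review.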
I keep a running list of test patterns in my project instructions. Every time I catch something the AI missed, I don’t just fix it. I add the pattern so it doesn’t happen again. The goal is building a system where my manual review is the last line of defense, not the only one.
Fix the Pipeline, Not the Data.
AI agents want to succeed. That’s their whole thing. They will find the fastest path to “done” and take it without thinking about what comes next.
Here’s what that looks like in practice. You’re running your test suite and a test fails because a database record has bad data. You ask the AI to fix it.
What does it do? It fixes the record. Maybe it writes a migration to clean up the bad data. Test passes. Done.
But that’s not actually done. The bad data came from somewhere. There’s a code path that created that record with missing or malformed fields, and that code path is still there. Next week the same problem shows up again. You spend twenty minutes confused about why the same issue is back before you realize you never fixed the actual cause.
The broker’s job is asking “what happens next time?” every time the AI presents a fix. Not because the AI is being lazy. It genuinely thinks it solved the problem. But it’s optimizing for the immediate task, not for the system over time.
Push past the quick fix. Make it find the pipeline that produced the bad data and fix that instead.
Use the Processes You Know You Should.
Every good engineer knows the processes they should be following. Release branches. Semantic versioning. Changelogs. Running the full test suite locally before pushing.
Most of us cut corners on these because the overhead feels expensive relative to the work. When you’re the one writing every line of code, the ceremony of proper release process can feel like it doubles the effort for marginal benefit.
That calculus flipped. With AI handling the mechanical work, maintaining these processes costs almost nothing. Creating a release branch, bumping the version, writing a changelog entry. That’s a thirty-second instruction to your agent. There’s no excuse to skip it anymore.
And the need for process actually went up. AI agents vary wildly in quality from prompt to prompt. The same agent, the same model, can produce senior-level work in one session and junior-level work in the next. You can’t predict it.
Process protects you against that variance. When every change goes through the same pipeline (branch, test, review, merge) it doesn’t matter whether the agent was having a good day or a bad one. The process catches what the agent misses.
I use git worktrees to isolate feature work. I run fly deploy against staging before production. I tag releases. None of this is glamorous. All of it has saved me from shipping something broken.
Commit Your Work.
This is the shortest section because the lesson is simple, and I’ve learned it the hard way more than once.
You’re deep in a thread. The agent just finished a solid chunk of work across a dozen files. It’s good. You’re pleased.
And then you think, “let me just try one more thing.” You chase a tangent. The context gets long. You start a new thread to continue. And somewhere in there, you forget to commit.
A week later you’re cleaning up worktrees or pruning branches and you realize that good work is gone. Not because anything went wrong. Because you forgot the boring last step.
The broker’s last job is making sure the work actually lands. Open the PR. Merge the branch. Close the loop. It’s the step that turns a productive afternoon into shipped work instead of a cautionary tale.
AI keeps getting better. The variance between good and bad sessions is shrinking. The tools understand more context, make fewer mistakes, need less hand-holding. Every month the argument for “just let it build” gets stronger.
But the broker role doesn’t shrink with it. It gets more important. As AI gets more capable, the cost of turning it loose with bad requirements goes up. A mediocre agent with clear constraints and a solid plan will outperform a brilliant agent with vague instructions every single time. The leverage is in the brokering, not the building.
This isn’t babysitting. It’s making sure that the work you ship is work that actually needed to be built, built the way it needed to be built. That’s the job. And it’s a good one.
David Kerr is the founder of Kerrberry Systems. He builds custom software for businesses that want a partner, not a vendor. Find him on LinkedIn or GitHub.