I Was Wrong About AI

For a year I said AI tools only added 20% efficiency. I was measuring the wrong thing.

David Kerr, Founder, Kerrberry Systems

Developer working with AI coding tools

TL;DR

  • My 20% efficiency claim was true — but I was using AI wrong, like a power drill to hammer nails
  • AI tools have improved significantly, and my low expectations were holding me back
  • The shift: bad junior → solid mid-level developer who can refactor, understand context, and write tests
  • Plan first in markdown — schemas, diagrams, feature docs — then review at a high level
  • Run parallel threads, use AI to review itself, and invest heavily in automated checks
  • The new question isn’t “can AI code?” — it’s “how do I work with AI effectively?”

For the past year, I’ve been telling anyone who would listen that AI coding tools only add about 20% efficiency.

I wasn’t wrong about the number. I was wrong about what I was measuring.

The 20% Trap

My original take was simple: AI is good at generating well-defined blocks of code. Give it a clear, isolated task — write a function that does X, generate a test for Y — and it performs reasonably well. But ask it to understand a larger system and implement something that fits? It falls apart.

So I treated it like a fancy autocomplete. A slightly smarter code snippet generator. And with that framing, a 20% efficiency gain felt about right.

But here’s the thing about mental models: once you’ve decided what something is, you stop noticing when it becomes something else.

Resetting Expectations

A few months ago, I made a deliberate choice to re-commit to these tools. Not as a skeptic looking for flaws, but as someone actually trying to get work done.

What I found surprised me.

The tools had gotten better — significantly better. But more importantly, I had been holding them back. My expectations were so low that I wasn’t even asking the right questions. I was using a power drill to hammer nails and complaining it wasn’t very good at it.

When I actually engaged with the tools properly — giving them context, working with them instead of just at them — the results were different. Not just faster. Different in kind.

From Bad Junior to Solid Mid-Level

Here’s the reframe that finally clicked for me:

A year ago, AI coding assistants felt like a bad junior developer. They’d write code that technically worked but missed the point. They’d make obvious mistakes. They needed constant supervision. The cognitive overhead of checking their work ate into any efficiency gains.

Now? They feel like a solid mid-level developer.

They understand context. They can hold a design in their head and implement pieces that actually fit together. They catch edge cases I forgot to mention. They suggest approaches I hadn’t considered.

Here’s a concrete example: I was building out some UI and realized I’d asked the AI to make the same update across three different pages. The old me would have just done it — three prompts, three changes, move on. Instead, I stopped and asked it to refactor the shared pieces into a reusable component with tests.

It did. Correctly. It understood which parts were common, extracted them cleanly, and wrote tests that covered the actual behavior. That’s not autocomplete. That’s a developer who understands the codebase.
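To make the shape of that refactor concrete, here's a minimal sketch in TypeScript. It is not the actual code from that project — the component, names, and markup are hypothetical — but it shows the pattern: markup that had been duplicated across three pages becomes one shared function with a test against its real behavior.

```typescript
// Hypothetical example: three pages each rendered this status badge
// inline. Extracting it means one update instead of three.
type Status = "active" | "pending" | "archived";

const STATUS_LABELS: Record<Status, string> = {
  active: "Active",
  pending: "Pending review",
  archived: "Archived",
};

// The shared piece, extracted once.
export function renderStatusBadge(status: Status): string {
  return `<span class="badge badge-${status}">${STATUS_LABELS[status]}</span>`;
}

// A test that covers the actual behavior: class name and label
// both come from the status, so a change in one place shows up everywhere.
const badge = renderStatusBadge("pending");
console.assert(badge.includes("badge-pending"), "status class missing");
console.assert(badge.includes("Pending review"), "label missing");
```

The type union is doing quiet work here too: a fourth page can't pass a status the shared component doesn't know about without the compiler objecting.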

I still review everything. I still make the architectural decisions. But the ratio of “time spent fixing their work” to “time saved” has flipped completely.

What I’ve Actually Learned

Beyond the mindset shift, I’ve developed some concrete practices that make AI collaboration work:

Plan first, always. When I’m working on something larger than a single building block, I start by having AI build out a plan in markdown. Schema definitions. Entity relationships. Sequence diagrams. The AI can fully document a feature to the point where my job becomes thinking and signing off at a high level.

This mirrors something I did constantly at Amazon: feature reviews where a mid-level developer would present their design to a senior before committing engineering time. Except now that mid-level developer can produce documentation in minutes, and I can ask probing questions — “how does this handle the case where X?” — without scheduling a meeting.
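For readers who want a picture of what "a plan in markdown" means here, a doc in this style might look like the following. The feature and details are invented for illustration — the point is the structure: schema, relationships, sequence, and open questions, all reviewable before any code exists.

```markdown
# Feature: Saved Searches (hypothetical example)

## Schema
- `saved_search(id, user_id, query, created_at)` - one row per saved query

## Entity relationships
- User 1-to-many SavedSearch

## Sequence
1. Client POSTs the query to `/saved-searches`
2. Server validates, persists, and returns the new record
3. Client refreshes its saved-search list from the response

## Open questions
- How does this handle duplicate queries for the same user?
- Is there a per-user limit, and what happens at the limit?
```

The "Open questions" section is where the probing-questions review happens: answer them in the doc, not in the code.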

Work in parallel threads. I’ve stopped thinking in single linear tasks. Instead, I run multiple workstreams simultaneously — different features, different layers of the stack — and treat each as its own context. The key is keeping threads from fighting each other. Clean boundaries, clear scope.

To manage this, I use AI to summarize progress across threads so I don’t lose track of what needs my review. It’s project management, but the project manager is also doing the work.

Make AI review itself. Before I look at generated code, I ask the AI to review it. “What edge cases did you miss? What would break this? What would a senior engineer push back on?” It catches a surprising amount. Not everything — I’m still the final reviewer — but it filters out the obvious issues so my attention goes to the subtle ones.

Invest in automated checks. This one took me too long to learn. Every hour spent improving your test suite, linter rules, or type coverage pays dividends. Because the alternative is you being the one who catches every issue manually. The more you can offload to automation, the more you can trust the AI’s output without babysitting every line.

What This Means for My Work

I’m using AI tools daily now — for client work, for this website, for exploratory coding. Not as a replacement for thinking, but as a genuine collaborator.

Does this mean I’ll charge less? No. It means I can deliver more. The bottleneck was never typing speed — it was the cognitive load of holding an entire system in my head while also writing the code. With a good pair-programming partner handling the mechanical parts, I can focus on the parts that actually require experience and judgment.

The Lesson

If you tried AI tools last year and wrote them off, it might be time to try again. Not because the hype is real — most of it is still overblown — but because the tools have genuinely improved, and your own mental model might be due for an update.

The question isn’t “can AI code?” anymore. It’s “how do I work with AI effectively?” That’s a much more interesting problem.

And honestly? I’m still figuring it out.


David Kerr is the founder of Kerrberry Systems. He builds custom software for businesses that need it and is slowly learning to accept help from robots. Find him on LinkedIn or GitHub.
