
The Engineer Who Won't Use AI Is the New Engineer Who Won't Write Tests

AI Adoption · Jacob Hartmann · March 16, 2026

Somewhere around 2020, I sat in a sprint retrospective and listened to a senior engineer, one of the best I'd worked with, explain why unit tests were a waste of time. His code was clean. He reasoned about it carefully. He'd been shipping production software for fifteen years, and his defect rate was lower than that of people who wrote tests religiously. He wasn't wrong about any of that. He was wrong about what it meant.

Within three years, his position went from respected contrarian to career liability. Not because he got worse at his job, but because the profession moved and he didn't. Today, an engineer who refuses to write tests doesn't get hired at serious shops. Nobody debates whether tests are necessary anymore. We just do them.

I'm watching the exact same pattern repeat with AI tooling. But faster. And the engineers who are digging in on the wrong side of this are going to have a much shorter window to course correct.

The Objections Sound Familiar

I talk often with engineering teams inside both PE-backed and bootstrapped companies. The range of AI adoption across these teams is staggering. Some have integrated agentic coding tools into every workflow and fundamentally changed how they estimate and deliver. Others have a handful of individuals using Copilot autocomplete and a majority who haven't changed a single habit.

When I ask the holdouts why, I get a set of objections that is startlingly consistent. I've heard every one of them before, almost verbatim: about testing, about version control, about code review, about CI/CD. The specific technology changes. The resistance script doesn't.

"The output quality isn't good enough." This is the most common objection and the most revealing. It's true that AI-generated code requires review and often requires correction. It's also true that the first unit testing frameworks were clunky, that early CI pipelines broke constantly, and that the initial version of every transformative development practice was rough. The engineers who adopted those practices early didn't wait for perfection. They recognized directional value and adapted their workflows to capture it. The ones who waited for the tools to be flawless are the ones who got dragged into adoption five years later by mandate.

If you're evaluating AI coding tools by pasting a prompt into ChatGPT, reading the output, deciding it's not as good as what you'd write yourself, and concluding the technology isn't ready, you have fundamentally misunderstood what these tools are for. They're not a replacement for your judgment. They're a multiplier on your throughput. The gap between "not as good as my best work" and "useless" is where all the leverage lives.

"It doesn't work for my domain." I've heard this from infrastructure engineers, embedded systems developers, security teams, data engineers, and ML platform builders. Each one is convinced their work is uniquely unsuited to AI assistance. Some of them are partially right. AI tools are weaker in some domains than others. But "weaker" is not "useless," and the domains where these tools struggle today are shrinking by the month.

More importantly, this objection usually reveals that the engineer tried one tool, in one mode, on one task, found it lacking, and generalized. That's not an evaluation. That's a single data point dressed up as a conclusion. The engineers who are getting real value from AI tools experimented with multiple approaches across multiple tasks before forming a view. The ones who aren't getting value tend to have tried the least.

"I'm concerned about code quality and security." This is a legitimate concern applied as a blanket excuse. Yes, AI-generated code can introduce vulnerabilities. Yes, it can produce subtly wrong implementations that pass cursory review. These are real risks that require real mitigation, through review processes, testing (there's the irony), and thoughtful integration patterns. They are not reasons to reject the entire category of tooling.

We don't refuse to use open-source libraries because they might contain vulnerabilities. We vet them, we scan them, we maintain them. The same discipline applies here. If your response to risk is avoidance rather than management, you're not being cautious. You're being brittle.

"It's a philosophical issue. I take pride in my craft." This is the objection that worries me most, because it's the hardest to argue with and the most professionally dangerous. I respect the impulse. I genuinely do. The engineer who cares deeply about the quality and integrity of their work is exactly the kind of engineer you want on your team.

But pride in craft has to include pride in outcomes. A cabinetmaker who insists on hand-cutting every joint when a precision CNC router would produce identical results in a tenth of the time isn't demonstrating superior craftsmanship. They're demonstrating an attachment to process that has become disconnected from purpose. The craft is building excellent software that solves real problems. The tools you use to get there are means, not ends.

The Timeline Is Compressed

The testing adoption curve took roughly a decade to go from "optional practice some teams do" to "baseline professional expectation." Version control took maybe fifteen years. CI/CD took about eight.

AI tooling is going to complete this arc in two to three years. Maybe less.

The reason is simple. The productivity differential is too large to ignore, and the competitive pressure in PE-backed environments is too intense to allow it. When one portfolio company's 15-person team is delivering at the pace that used to require 40, the operating partners at the next portfolio company are going to notice. And they're going to ask their CTO why the engineering org isn't keeping up. "Our senior engineers prefer not to use those tools" is not going to be a satisfying answer.

I've already seen this play out in two engagements this year. In both cases, the team had a clear split: a group of engineers who had integrated AI tools deeply into their workflow and a group who hadn't. The productivity gap wasn't subtle. It wasn't 10 or 20 percent. The AI-adopting engineers were delivering features in days that the non-adopters estimated in weeks. Same codebase, same complexity, same level of seniority.

When the numbers look like that, the conversation stops being about preferences and starts being about performance.

This Is a Leadership Problem

If you're an engineering leader reading this and thinking "I agree, but I can't force my senior engineers to change how they work," I'd push back on that framing. You force them to write tests. You force them to go through code review. You force them to use the team's chosen version control workflow and deployment pipeline. You have done this for years because you understood that individual preferences yield to team-level standards when the standards serve a clear purpose.

AI tool adoption is the same category of decision. It belongs in your engineering standards, your onboarding process, and your expectations for professional development. Treating it as optional is treating a competitive advantage as a suggestion.

That doesn't mean mandating a specific tool or a specific workflow. It means setting a clear expectation that every engineer on your team should be proficient with AI-assisted development, should be actively experimenting with how to integrate it into their work, and should be able to demonstrate that they're using these tools effectively. The specifics of how they use them can vary. The fact that they use them shouldn't.

For the engineers who are struggling to adopt, invest in them. Pair them with teammates who have figured it out. Set aside time for experimentation. Create a safe space to be bad at it before getting good at it. This is a skill transition, and skill transitions require support.

For the engineers who refuse to adopt after genuine support and a reasonable timeline, you need to have the same conversation you'd have with any engineer who refused to meet a professional standard. That conversation might end with a performance improvement plan. That's not cruel. That's honest. Letting someone's career quietly stall because you were too uncomfortable to name the problem is worse.

The Window Is Closing

I want to be direct about who I'm really writing this for. If you're a senior engineer who has been skeptical of AI tooling, who has tried it and been unimpressed, who feels a genuine discomfort with the idea that a language model can do meaningful parts of your job, I understand. I do. The ground is shifting under a profession that many of us chose because we loved the deep, focused, solitary work of solving hard problems with code.

But the ground is shifting whether you like it or not. And the engineers who will thrive in the next era of software development are not the ones who ignore AI. They're the ones who learn to direct it, who develop taste for what it does well and judgment about where it falls short, who use it to eliminate the tedious work so they can focus on the genuinely hard problems that still require a human mind.

That's not less craft. It's more.

The window to make this transition on your own terms, at your own pace, with room to experiment and learn, is open right now. It won't stay open forever. Two years from now, AI proficiency will be table stakes, the same way testing is today. The question isn't whether you'll adopt these tools. The question is whether you'll do it while you still have the luxury of learning, or whether you'll do it under duress after the market has already made the decision for you.

I know which one I'd choose.
