In December 2025, Harvard Business Review surveyed 1,006 global executives about AI and workforce decisions. Sixty percent had already reduced headcount or slowed hiring in anticipation of AI's impact. The number who had done so because AI was actually performing the work? Two percent.
That gap is the entire story of engineering leadership in 2026. The cuts are happening. The expectations are shifting. And almost nobody is willing to say so clearly enough that their engineers can actually respond to it. The fear your team feels is not irrational. It's a reasonable reading of a situation that management has decided is easier to obscure than to explain.
The Bifurcation Is Already Complete
The argument that PE-backed engineering teams are splitting into two camps (those who treat AI as a force multiplier and those who treat it as an insult to their craft) is not a forecast. It happened. At multiple companies. With public documentation.
Block cut 4,000 of its 10,000 employees in February 2026. Not because the company was struggling. Block posted $10.36 billion in gross profit. Jack Dorsey cut them because he concluded that AI-augmented smaller teams could do more. One engineering team went from eight people to one. The head of engineering told remaining staff that output expectations were going up. Some engineers reportedly worried that quality would suffer. Others, presumably, started figuring out how to become the one person who replaces eight.
Klarna halved its workforce from over 7,000 to under 3,000 between 2022 and 2025, with CEO Sebastian Siemiatkowski publicly criticizing other tech leaders for sugarcoating AI's impact on jobs. He expects to hit fewer than 2,000 by 2030. The financial results validated the bet: 38% year-over-year US revenue growth and rising profits during the reduction. Klarna IPO'd on the NYSE while shrinking.
Then there's IgniteTech, which is the case nobody in polite enterprise circles wants to talk about. CEO Eric Vaughan replaced nearly 80% of his staff after they refused to adopt AI tools. He gave them time, training, reimbursement for prompt-engineering courses. He was met with what he described as mass resistance and even sabotage. The most resistant group wasn't sales or marketing. It was the engineers, the people who understood AI's limitations well enough to focus on what it couldn't do rather than what it could amplify. By end of 2024, IgniteTech was running at nearly 75% EBITDA margins.
That last detail matters. It tells you what other PE operating partners are actually looking at.
What the New Bar Actually Looks Like
If you run an engineering org and you haven't defined what "AI-augmented performance" means for your team, your engineers are filling in the blanks themselves. Most of them are filling in something terrifying and vague, which is worse than the truth.
Here's what the explicit version looks like, because a few leaders have been willing to say it.
Shopify CEO Tobi Lütke published a memo in April 2025 that became the canonical document for AI-augmented expectations. AI usage is a baseline expectation for all employees, including leadership. Teams must demonstrate that AI cannot do a job before requesting new headcount. AI proficiency is built into performance reviews as a core career skill. Employees need to improve by 20 to 40 percent annually to, in Lütke's words, "re-qualify." He called that threshold "not even terribly ambitious." The memo was widely read not as a productivity initiative but as a filter: a deliberate reshaping of who would thrive at Shopify and who would self-select out.
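The compounding on that range is worth making explicit, because it's what turns a modest-sounding annual number into a filter. A quick sketch: the 20 to 40 percent figures come from the memo as reported, but the multi-year projection is my extrapolation, and "improvement" here stands in for whatever metric a team actually adopts.

```python
# Compounding Lütke's 20-40% annual improvement range over multiple years.
# The multi-year projection is an extrapolation for illustration, not from the memo.
for rate in (0.20, 0.40):
    for years in (1, 3, 5):
        multiple = (1 + rate) ** years
        print(f"{rate:.0%}/yr over {years} yr(s) -> {multiple:.2f}x baseline")
```

At the low end, the bar compounds to roughly 2.5x in five years; at the high end, over 5x. "Not even terribly ambitious" starts to read differently.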
Accenture went further. CEO Julie Sweet stated in September 2025 that the firm would "exit" employees who could not be reskilled for AI. The company laid off 11,000 while simultaneously growing its AI and data workforce from 40,000 to 77,000. That's not a reduction. That's a swap.
The traditional engineering metrics (lines of code, ticket velocity, PR throughput) are becoming inadequate as performance indicators. When AI reportedly generates the vast majority of code at companies like OpenAI, the relevant measure shifts to decision velocity, the ability to orchestrate AI systems effectively, and the judgment to know when the AI's output is wrong. Engineering analytics platforms are already building frameworks around AI adoption impact tracking and human-AI collaboration efficiency. A major financial services company tracked engineers against their own pre-AI baselines and found that engineers using AI tools showed a 30% increase in PR throughput compared to 5% for those who weren't. The gap is widening. It is measurable. And it maps cleanly to the question of who stays and who doesn't.
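If you want to see what baseline-relative tracking looks like mechanically, here's a minimal sketch. Everything in it is hypothetical (the source doesn't publish a methodology): the engineer names, the weekly PR counts, and the specific comparison window are all invented for illustration.

```python
from statistics import mean

# Hypothetical weekly PR counts per engineer, before and after AI tool adoption.
# All names and numbers are illustrative; the point is the shape of the comparison:
# each engineer is measured against their OWN pre-AI baseline, not against peers.
engineers = {
    "eng_a": {"pre": [4, 5, 4, 6], "post": [6, 7, 6, 7], "ai_user": True},
    "eng_b": {"pre": [3, 4, 3, 4], "post": [3, 4, 4, 3], "ai_user": False},
}

def baseline_delta_pct(record: dict) -> float:
    """Percent change in mean weekly PR throughput vs. the engineer's own baseline."""
    pre, post = mean(record["pre"]), mean(record["post"])
    return (post - pre) / pre * 100

for name, record in engineers.items():
    tag = "AI" if record["ai_user"] else "no-AI"
    print(f"{name} ({tag}): {baseline_delta_pct(record):+.0f}% vs own baseline")
```

The design choice that matters is the self-baseline: it separates "AI made this person faster" from "this person was always faster."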
The Structural Pressure Is Not Optional
If you're at a PE-backed company, this isn't a cultural trend you can opt out of. The pressure is coming from the cap table.
Vista Equity Partners, which manages over $100 billion in software-focused buyouts, told its LPs in November 2025 that it plans to reduce its own workforce by as much as a third through AI adoption. Vista also scores each portfolio company on AI adoption and is mandating that portcos build AI agents and shift to usage-based pricing. As of early 2026, 100% of Vista's majority-owned portfolio companies are leveraging AI in core operations.
When the PE sponsor itself is eliminating roles and grading you on AI fluency, the ambiguity is gone. The expectation is structural.
BCG's January 2026 analysis breaks AI adoption into three phases: deploy (hand out licenses), reshape (redesign workflows and team structures), and invent (build AI-native products). Most portfolio companies are stuck in deploy, which BCG's data shows rarely creates measurable P&L impact. The value shows up in reshape, which requires changing role definitions, reducing headcount in augmented functions, and rebuilding orgs around AI fluency. PE-backed companies that systematically build AI capabilities show nearly twice the return on invested capital compared to those that don't.
That's the number your board is looking at. Not your sprint velocity. Your ROIC delta.
Why Nobody Will Say This to Your Face
Remember that HBR number from the top of this piece: 60% cutting in anticipation, 2% cutting based on evidence. The silence around those decisions is just as stark.
Executive outplacement firm Challenger, Gray & Christmas found that only 75 job cuts in the US during the first half of 2025 were explicitly linked to AI, against over 744,000 total layoffs. Companies are hiding behind "operational optimization" and "restructuring" because attributing cuts to AI invites scrutiny, and because (here's the uncomfortable part) nearly 60% of executives admitted that blaming AI simply sounds better to investors than admitting to financial mismanagement.
This creates the worst possible outcome. Engineers sense the shift. They see the tools improving quarterly. They read the same headlines you do. But instead of getting a clear framework ("here is the new bar, here is how we measure it, here is your runway"), they get silence and vague reassurances. The fear metastasizes into rumor, disengagement, or the kind of quiet resistance that one in three employees apparently already practices: actively sabotaging AI rollouts.
The results of premature, uncommunicated cuts are already visible. Forrester's 2026 Predictions report found that 55% of employers regret their AI-driven layoffs. Gartner predicts half of the companies that attributed cuts to AI will need to rehire for similar roles by 2027. Cutting without defining the new standard hollows out institutional knowledge before AI can actually replace it.
Fear Is a Signal. Use It.
Here is what I think most engineering leaders get wrong about this moment. They treat fear as a management problem to be soothed rather than a signal to be channeled.
Your engineers should be scared. Not because AI is going to replace all of them tomorrow (it isn't), but because the performance bar is moving and most of them don't know where it's moving to. The fear is directionally correct even if the timing is imprecise. An engineer who was right about AI's limitations in 2022 may be wrong about where those limits sit in 2026. The research on AI-augmented teams supports this: AI helps teams explore more solutions faster, but the gains accrue to people who engage critically with AI outputs, not to those who either ignore the tools or follow them blindly.
The kind thing, the actually humane thing, is not to pretend the world hasn't changed. It's to be specific about what changed and what you expect now. Define the AI-augmented performance standard. Make the criteria measurable. Give people genuine runway to clear the bar, not a vague "we encourage you to explore AI tools" buried in a Slack channel nobody reads.
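What "measurable" could look like in practice: a sketch of a rubric, with every criterion, threshold, and timeline invented for illustration. The point is the specificity, not these particular numbers.

```python
# A hypothetical AI-augmented performance rubric. Every criterion, threshold,
# and cadence here is an assumption for illustration, not any company's standard.
AI_AUGMENTED_BAR = {
    "adoption":   "AI tooling used on the majority of eligible tasks, per tool telemetry",
    "throughput": "PR throughput at or above +20% vs. the engineer's own pre-AI baseline",
    "judgment":   "AI-generated defects caught in code review, not in production",
    "runway":     "two review cycles (12 months) to clear the bar, with training budget",
}

for criterion, definition in AI_AUGMENTED_BAR.items():
    print(f"{criterion:>10}: {definition}")
```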
Some of your engineers will clear it and become radically more valuable. Some won't. That's not cruelty. That's the reality of a technological shift that compresses into 18 months what previous transitions spread across a decade.
The cruelty is the silence.