
PE Should Require AI Headcount Reduction Targets in the First 100 Days

Research · Jacob Hartmann · March 17, 2026

And here's why the ones doing it wrong are still doing better than the ones not doing it at all.

I'm going to make an argument that will make a lot of people uncomfortable. That's fine. Comfort is not the point. Accuracy is.

Every PE value creation plan includes cost optimization. Every one. You don't acquire a company with borrowed money at 12x EBITDA and then shrug about operating leverage. That's not how the math works. And right now, in early 2026, AI has made a specific, quantifiable version of that cost optimization possible inside engineering organizations. Not theoretically possible. Not "emerging." Possible today, with tools that exist, at companies that are already doing it.

The argument I want to make is this: operating partners should set explicit headcount reduction targets, somewhere in the range of 10 to 20 percent in year one, tied to AI-assisted productivity gains. The savings should be reinvested into the remaining team (higher comp, better tooling, more senior hires) or dropped to EBITDA. The companies doing this quietly are already outperforming. The refusal to name the number is a failure of leadership, not compassion.

But I also want to be honest about something: the academic evidence does not cleanly support the version of that argument that most PE operators want to hear. The research tells a more complicated, more interesting, and ultimately more useful story. If you're an operating partner reading this, the nuance is where the real alpha lives.

What PE Actually Does Post-Buyout (and Why This Conversation Matters Now)

The playbook has shifted. A meta-analysis of PE technology-driven value creation by Liepert (2024) confirms what anyone in the industry already feels: pure financial engineering, the leverage-and-pray model, is running out of runway. The firms generating returns are doing it through operational and technology-driven improvements. IT cost reduction linked directly to EBITDA expansion. Tech-enabled process redesign. Digital enablement that opens new revenue streams.

This isn't new. What's new is the speed at which AI has compressed the timeline for these interventions. Liepert's work across multiple papers documents how IT cost optimization and automation can raise EBITDA by 10 to 20 percent in portfolio company contexts. That range used to require 18 to 24 months of painful ERP migrations and vendor renegotiations. Today, a well-executed AI integration strategy can start producing measurable gains in 90 days.

Verbouw, Meuleman, and Manigart (2025) reinforce the broader pattern: PE value creation has moved from financial engineering toward operational improvement. And Åstebro (2021), in what remains one of the best inside looks at AI use in PE, shows how the technology reshapes talent mix and accelerates due diligence by roughly 25 percent. The framing, even back then, was augmentation rather than elimination. I think that framing was right in 2021. I think it's incomplete in 2026.

The Case for Naming the Number

Here's the part where I make people angry.

I've watched enough 100-day plans unfold to know the difference between one that has teeth and one that's a PowerPoint exercise. The plans with teeth have numbers. Specific numbers attached to specific initiatives with specific owners and specific deadlines. "We will reduce engineering headcount by 15% over twelve months, primarily through attrition and role consolidation enabled by AI tooling, and we will reinvest 60% of the savings into compensation increases for retained staff and 40% into EBITDA." That's a plan. "We will explore opportunities to leverage AI for efficiency improvements across the engineering organization" is a sentence designed to survive a board meeting without anyone having to commit to anything.

The companies I've seen move fastest on this are not running around firing people on day one. They're doing something smarter. They're setting a hiring freeze, communicating transparently that AI is going to change team structure, and letting natural attrition do most of the work. This is exactly the Klarna model, which I'll come back to because it's the single most instructive case study in the market right now, for both what to do and what not to do.

The PE operating partner who refuses to name a number isn't being compassionate. They're being avoidant. And avoidance in a portfolio company with a five-year hold period is expensive. Every quarter you spend "exploring" AI's potential instead of restructuring around it is a quarter of margin improvement you'll never get back.

Let me name three specific reasons why explicit targets matter.

First, targets create accountability. Without a number, the CTO can nod along in board meetings about AI adoption while changing nothing about team structure, hiring plans, or capacity allocation. I've seen this happen at portfolio companies where the operating partner said all the right things about AI and then watched engineering headcount grow by 8% in year one because nobody connected the AI strategy to the HR plan. Those are real dollars walking out the door.

Second, targets force honest conversations about which roles AI actually replaces. Not in theory. In practice, at this company, with this codebase, with these people. When you say "we're going to reduce by 12%," suddenly the VP of Engineering has to sit down and figure out where those 12% come from. That exercise, even if the final number shifts, produces organizational clarity that the "let's see where AI helps" approach never generates.

Third, targets signal seriousness to the team. Engineers are not stupid. They read the same headlines everyone else does. They know AI is going to change team structures. A company that says "we're going to be thoughtful about this" while hiring three more backend developers is sending a message that leadership either doesn't understand what's happening or doesn't have the nerve to act on it. Neither interpretation builds trust.

Now Here's Where the Evidence Complicates Things

I promised honesty, so here it is: the academic research does not support naive headcount quotas tied to AI.

The strongest large-scale study on AI and firm-level outcomes comes from Babina, Fedyk, He, and Hodson (2024), published in the Journal of Financial Economics. Their finding is striking: AI adoption is associated with firm growth in sales, employment, and market valuations, driven mainly by product innovation. Not by operating cost cuts. Not by shrinking headcount. The firms getting the most value from AI are growing, not cutting.

This isn't an isolated finding. Enholm, Papagiannidis, Mikalef, and Krogstie (2021), in a comprehensive literature review on AI and business value, concluded that AI generates performance gains through automating tasks and augmenting workers, improving processes, and enabling new products and services. The word "layoffs" doesn't feature prominently in the value creation narrative.

Rožman, Oreški, and Tominc (2023) found that AI-supported workload reduction leads to lower perceived stress and higher engagement, which in turn drives higher performance. Cut too aggressively and you don't just lose people, you lose the engagement multiplier that makes the remaining people productive.

And Wamba-Taguimdje et al. (2020), studying AI's influence on firm performance, found that value comes from reconfiguring processes, skills, and business models. Not from treating AI as a headcount eraser.

Even within PE specifically, Sharma et al. (2024) and Sahani (2024) frame AI as augmenting professionals rather than replacing them, focusing on deal-making and analytics rather than systematic workforce reduction.

So where does that leave my argument?

The Productivity Paradox: Why Naive Cuts Fail

Before I reconcile the position, I need to talk about what's actually happening when companies hand their engineers AI tools and expect headcount reductions to follow automatically. Because the data here is damning.

Faros AI published what might be the most important piece of engineering operations research in 2025: the AI Productivity Paradox report. They analyzed telemetry from over 10,000 developers across 1,255 teams. The headline findings should be tattooed on every operating partner's forearm: developers using AI completed 21% more tasks and merged 98% more pull requests. Sounds great. But PR review times ballooned by 91%. Organizational delivery metrics stayed flat.

Read that again. Individual output up. Organizational throughput unchanged. AI made every developer faster at the thing that wasn't the bottleneck.

This is Amdahl's Law applied to software organizations, and it's the reason why a PE firm that sets a headcount target without understanding the delivery pipeline is going to destroy value, not create it. If you cut 15% of your engineers because AI is making the remaining 85% more productive at writing code, but your bottleneck is code review, QA, and deployment, you've just made the bottleneck worse with fewer people to staff it.
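
The bottleneck logic above is easy to sketch. The following is a toy model, not Faros AI's data: the stage names and capacities are illustrative assumptions, chosen to show why doubling coding output can leave organizational throughput exactly where it was.

```python
# Hypothetical delivery pipeline for a 40-person engineering org.
# Capacities are in PRs per week; all numbers are illustrative.

def org_throughput(stage_capacity):
    """End-to-end delivery is limited by the slowest stage (Amdahl's
    Law applied to a pipeline): total flow = min stage capacity."""
    return min(stage_capacity.values())

baseline = {"coding": 120, "review": 80, "qa": 90, "deploy": 100}

# AI doubles coding output but leaves review and QA untouched.
ai_assisted = {**baseline, "coding": baseline["coding"] * 2}

assert org_throughput(baseline) == 80
assert org_throughput(ai_assisted) == 80   # delivery metrics stay flat

# Expand the downstream bottlenecks first, and the AI speedup lands.
rebalanced = {**ai_assisted, "review": 140, "qa": 130}
assert org_throughput(rebalanced) == 100   # deploy is now the constraint
```

The asserts are the whole argument: until review and QA capacity grow, the AI-assisted pipeline delivers exactly what the baseline did.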

The 2024 DORA State of DevOps Report reinforced this. As AI adoption increased across their sample, delivery throughput dropped by an estimated 1.5% and delivery stability fell by 7.2%. More AI, worse outcomes. The 2025 report showed improvement (AI adoption now correlates positively with throughput), but stability still suffers. The pattern is clear: AI accelerates code production but exposes weaknesses downstream. Without robust testing, mature version control, and fast feedback loops, more code just means more chaos.

Stack Overflow's 2025 survey adds another layer. Despite 84% of developers using AI tools, positive sentiment has actually dropped from over 70% in 2023 to around 60%. Only 3% of developers report high trust in AI-generated code. The people using these tools every day are growing more skeptical, not less.

So if you're an operating partner who sets a 15% headcount reduction target and your CTO delivers it by simply not backfilling attrition while handing everyone a Copilot license, you're going to get exactly what the research predicts: more code, worse code, slower delivery, frustrated engineers, and a CTO who tells you it's "still ramping up" for four consecutive board meetings until you've burned 18 months of your hold period.

The Klarna Lesson: Get the Sequence Right

Klarna is the case study everyone points to, and for good reason, but most people draw the wrong lesson from it.

The facts: Klarna reduced headcount from roughly 5,500 in 2022 to around 3,000 by 2025, primarily through natural attrition and a hiring freeze. Their AI customer service agent replaced the work of 700 to 850 full-time support staff. Revenue grew 108% while operating costs stayed flat. Revenue per employee rose 152%. Average compensation for remaining staff increased nearly 60%. They IPO'd successfully. The numbers are remarkable.

But here's what most commentators leave out: Klarna's CEO Sebastian Siemiatkowski publicly admitted by early 2025 that the company had gone too far. Customer satisfaction dipped. Complaints increased. The AI systems couldn't handle nuanced support interactions. Siemiatkowski's exact admission was that cost had been too dominant an evaluation factor. By mid-2025, Klarna was rehiring human customer service agents and implementing a hybrid model.

The lesson isn't "don't cut." The lesson is "don't confuse cutting with strategy." Klarna's mistake wasn't reducing headcount. It was reducing headcount in customer-facing roles where AI genuinely wasn't ready to handle complexity, while the underlying organizational redesign hadn't been completed. They optimized for cost before they optimized for capability.

The lesson for PE is this: the sequence matters enormously. You don't start with a headcount number. You start with a capability model. What does this engineering organization need to deliver in 12, 24, 36 months? Which of those capabilities can AI genuinely absorb today (not theoretically, not with the next model release, but today)? Which require humans? What does the team structure look like when you rebuild around that reality?

The headcount number falls out of that analysis. It doesn't get set beforehand and then imposed.

Reconciling the Argument: What I Actually Think PE Should Do

I started this piece with a provocative claim: PE should require AI headcount reduction targets in the first 100 days. Now I'm going to tell you what I actually mean by that, informed by the evidence.

I don't mean operating partners should walk into a portfolio company and announce "you're cutting 15% of engineering by December." That's the naive version, and the research is right that it doesn't work.

What I mean is this: by day 100, the value creation plan should include a specific, numbers-attached model for how AI changes the engineering organization's cost structure and capability profile over the hold period. That model should include projected headcount changes, even if those changes are "net reduction of 12% through attrition, with reinvestment of 80% of savings into compensation and tooling." The number might be 8%. It might be 25%. It depends on the company. But the number needs to exist.
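
To make the "numbers-attached" requirement concrete, here is the arithmetic behind a target like the one above. Team size, fully loaded cost, and the 12% / 80% figures are assumptions for illustration, not data from any specific deal.

```python
# Illustrative savings model for an AI-driven restructuring target.
# All inputs are assumptions; adjust to the portfolio company at hand.

team_size = 40
avg_fully_loaded_cost = 200_000    # USD per engineer per year (assumed)
reduction = 0.12                   # net headcount reduction target
reinvest_share = 0.80              # share of savings reinvested

gross_savings = team_size * reduction * avg_fully_loaded_cost
reinvested = gross_savings * reinvest_share   # comp increases + tooling
to_ebitda = gross_savings - reinvested        # dropped to the bottom line

print(f"gross savings: ${gross_savings:,.0f}")
print(f"reinvested:    ${reinvested:,.0f}")
print(f"to EBITDA:     ${to_ebitda:,.0f}")
```

Trivial arithmetic, but that is the point: a value creation plan that can't fill in these four inputs by day 100 doesn't have a model, it has a slogan.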

The research from Li, Qiu, and Shen (2017) on organization capital and post-merger performance is instructive here. High-performing acquirers earn better long-term results by combining targeted cost moves with capability building and growth actions: they cut cost of goods sold while increasing SG&A (investing in capabilities and sales), and they improve asset turnover and innovative efficiency. The discipline of cutting costs funds the investment in what matters.

That's the model. Not "cut heads to save money." Rather: "restructure the team around AI capabilities, reduce roles that AI has made redundant, reinvest the savings into making the remaining team significantly more effective and better compensated, and drop the remainder to EBITDA."

Here's what that looks like in practice for a typical PE portfolio company with a 40-person engineering team:

Months 1 through 3: Assess the current delivery pipeline end to end. Identify where AI tools can absorb work today (not tomorrow). Implement AI tooling with proper training, workflow redesign, and measurement. Critically, fix the downstream bottlenecks (code review, testing, deployment) before accelerating upstream code production.

Months 3 through 6: Institute a targeted hiring freeze. As people leave through natural attrition (which, in most engineering orgs, runs 15 to 20% annually), don't backfill roles where AI has absorbed the work. Do backfill roles where human judgment remains essential, but hire differently: more senior, higher paid, with explicit AI fluency.

Months 6 through 12: Evaluate whether the team is delivering more with less. If yes (and it should be, if you've done the upstream work), formalize the new team structure. Increase compensation for retained engineers. The Klarna model of significantly higher per-capita comp is exactly right: you want fewer people who are paid well enough to stay, not a skeleton crew of people who are already job-hunting.

By month 12, a well-executed version of this plan should produce a team that's 10 to 20% smaller, 30 to 40% more productive on meaningful delivery metrics (not just lines of code), with average compensation up 20% or more. The net effect on EBITDA depends on the starting point, but Liepert's research suggests the range for technology-driven EBITDA improvement is 10 to 20% in PE contexts, and AI-assisted restructuring is the fastest path to the upper end of that range.

The Organizational Design Problem Nobody Wants to Talk About

Here's the part that makes this genuinely hard, and that most AI-and-PE content conveniently ignores.

Angwin and Meadows (2015) identified five distinct post-acquisition integration strategies, each differing in how aggressively the acquirer intervenes in structure, systems, and culture. Their key insight: matching integration style to deal context is critical for long-run outcomes. The same logic applies to AI-driven restructuring. There is no universal playbook. A 200-person engineering org at a healthcare SaaS company serving regulated customers requires a fundamentally different approach than a 15-person team at an e-commerce platform.

Malik, Budhwar, and Kazmi (2022), studying AI-assisted HRM, argue for reframing human resources toward strategic, data-driven decisions rather than pure cost-cutting. Applied to engineering: the question isn't "how many people can we cut?" It's "what is the minimum viable team to deliver our product roadmap at the quality level our customers require, given the AI tools available today?"

That reframe matters because it changes the conversation from adversarial (management vs. engineers, cost vs. capability) to architectural (what's the right design for this organization given current technology?). Engineers, even skeptical ones, can engage with an architectural question. Nobody engages productively with "we're cutting your team by 15% because AI."

Cannas et al. (2023), studying AI in supply chain and operations, found that AI projects reduce costs, inventory, lead times, and operating expenses through process optimization and better decisions, not through headcount reduction per se. The headcount changes are downstream effects of the process changes. Start with the process. The people decisions follow.

What About Atlassian?

I'd be remiss not to mention what happened just last week. Atlassian announced a 10% reduction in its workforce, roughly 1,600 employees, explicitly to fund further investment in AI and enterprise sales while reorganizing around their System of Work initiative. CEO Mike Cannon-Brookes stated they focused on retaining employees with the skills to thrive as an AI-first company, including strong performers, graduates, and people with transferable skills.

This is a publicly traded company, not a PE portfolio company, but the pattern is identical. Name the number. Tie it to a strategy. Reinvest in the capabilities that matter. Don't pretend it's not happening. Atlassian didn't frame this as "exploring AI efficiencies." They said: we are shrinking, we are reorganizing, and here is why.

That's leadership. It's not painless. But the alternative is worse: the slow bleed of unclear expectations, frozen hiring that nobody explains, and a team that reads between the lines and loses trust. That path is worse for everyone, including the people who eventually get cut anyway, just later and with less severance.

The Real Failure Mode

The biggest risk in this conversation is not that PE firms set AI headcount targets and destroy value through over-cutting. That's a real risk, and Klarna's customer service debacle illustrates it vividly. But it's a risk that self-corrects, because the feedback loop is fast. When customers start complaining, you hire people back. It's expensive and embarrassing, but it's recoverable.

The bigger risk is the one that doesn't self-correct: the PE firm that spends 18 months "exploring" AI while competitors restructure. The portfolio company where the operating partner keeps asking about AI in board meetings but never connects it to a specific number, a specific plan, a specific timeline. The CTO who runs an "AI tiger team" that produces impressive demos and zero organizational change. That company exits at 8x instead of 12x, and the difference, which might be $50 million or $200 million depending on the company, represents the cost of leadership's unwillingness to name what was obvious to everyone in the room.

Valkama et al. (2013) found that deal-level buyout returns are driven by acquisitions made during the holding period, deal size, and sector growth, with traditional governance levers mattering less than often assumed. Translated: the PE firms that win are the ones that make decisive moves during the hold period. Not the ones that wait for consensus.

AI-driven team restructuring is the decisive move of this cycle. Every quarter you delay it is a quarter of compounding productivity gain you forfeit to competitors who didn't.

What I'd Tell an Operating Partner on Monday Morning

Set the target. But set it right.

Don't announce a headcount number and hand it to the CTO as a mandate. Instead, commission a 60-day engineering capability assessment that maps every team, every workflow, and every delivery bottleneck against current AI tool capabilities. Bring in someone who has done this before, someone who understands both the technology and the organizational dynamics.

From that assessment, build a restructuring model with a specific number. Pressure-test it against the delivery roadmap. Make sure you're not cutting the people who are your bottleneck (senior reviewers, infrastructure engineers, anyone who holds critical system knowledge). Make sure you're investing in the downstream capabilities (testing, deployment, monitoring) that AI-accelerated development will stress.

Then communicate it clearly. Tell the engineering team: this is our plan, this is why, and this is what it means for the people who stay (more money, better tools, more interesting work). Set a hiring freeze on the roles AI will absorb. Let attrition do the heavy lifting. Backfill selectively and at a higher level.

And measure it. Not just headcount. Measure delivery throughput, deployment frequency, change failure rate, customer-reported defects, and engineer retention. If the numbers go the wrong direction, adjust. The target is a starting point for a plan, not a suicide pact.
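
A simple guardrail check makes "if the numbers go the wrong direction, adjust" operational. The metric names, baseline values, and 5% tolerance below are illustrative assumptions; the mechanism is what matters: compare each metric against its pre-restructuring baseline and flag anything that degraded.

```python
# Hypothetical post-restructuring guardrail check.
# For each metric: (baseline, current, higher_is_better).
metrics = {
    "deploys_per_week":    (12.0, 15.0, True),
    "change_failure_rate": (0.15, 0.22, False),
    "lead_time_days":      (4.0, 3.5, False),
    "engineer_retention":  (0.85, 0.88, True),
}

def regressions(metrics, tolerance=0.05):
    """Return the metrics that degraded by more than `tolerance`
    (relative to baseline), normalizing for direction."""
    flagged = []
    for name, (baseline, current, higher_is_better) in metrics.items():
        delta = (current - baseline) / baseline
        if not higher_is_better:
            delta = -delta          # so negative always means "worse"
        if delta < -tolerance:
            flagged.append(name)
    return flagged

print(regressions(metrics))   # ['change_failure_rate']
```

In this illustrative run, throughput, lead time, and retention all improved, but change failure rate jumped from 15% to 22%: exactly the downstream-quality signal the DORA research predicts, and exactly the trigger to slow down and reinvest in testing before cutting further.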

The evidence says AI creates value through augmentation, process redesign, and innovation rather than through crude headcount cuts. That's true. It's also true that augmentation, process redesign, and innovation, done honestly, produce a team that's meaningfully smaller and meaningfully better. Refusing to name that outcome in advance doesn't make you compassionate. It makes you slow.

Name the number. Get the sequence right. And move faster than the company in the next portfolio over, because they're already doing it, whether they're saying so publicly or not.
