The Labor Question
Market commentary treats AI as a labor-saving tool. The evidence points in a different direction. An eight-month ethnographic study by researchers at Berkeley Haas, published in Harvard Business Review in February 2026, found that AI tools did not reduce work at the company they studied—they intensified it. Employees worked at a faster pace, took on broader scope, and extended work into more hours. The self-reinforcing cycle was clear: AI accelerated tasks, which raised speed expectations, which increased AI reliance, which widened scope further.
Inside software teams, the more important effect is that the gains are uneven. AI increases the output of employees who already know how to frame problems, review work, and keep moving; it does far less for employees who cannot.
The same logic extends beyond engineering. A high-agency engineer ships more. A high-agency support employee can resolve more issues at a higher standard. A high-agency customer success manager can cover more accounts without lowering the level of service. A high-agency employee in sales or operations can prepare faster, follow through faster, and move with less administrative drag.
The productivity data supports this. Microsoft Research found that developers using GitHub Copilot completed tasks 55.8% faster in a controlled experiment, with measurable improvements in code quality. But the gains are not automatic—the same research shows it takes roughly 11 weeks for developers to fully realize the productivity benefits, which means the return depends on the person, not just the tool.
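Taking the two figures above at face value, a quick sketch shows why the ramp matters for the return calculation. The 55.8% gain and the 11-week ramp come from the text; the linear ramp-up is an illustrative assumption, not something the research specifies.

```python
# Ramp-adjusted productivity gain, using the figures cited in the text:
# 55.8% faster at full proficiency, reached after roughly 11 weeks.
# The linear ramp shape is an illustrative assumption, not from the study.

FULL_GAIN = 0.558   # 55.8% task-speed improvement (from the text)
RAMP_WEEKS = 11     # weeks until the full benefit is realized (from the text)

def gain_at_week(week: int) -> float:
    """Assumed linear ramp: fraction of the full gain realized by a given week."""
    return FULL_GAIN * min(week / RAMP_WEEKS, 1.0)

# Average realized gain over the first 11 weeks under the linear assumption:
avg_ramp_gain = sum(gain_at_week(w) for w in range(1, RAMP_WEEKS + 1)) / RAMP_WEEKS
print(f"Average gain during ramp: {avg_ramp_gain:.1%}")  # prints "Average gain during ramp: 30.4%"
```

Under this assumption, a team realizes only a little over half the headline gain during its first quarter of adoption, which is why the return depends on who is using the tool and for how long, not on the tool alone.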
For management teams and investors, the implication is straightforward. AI does not simply reduce labor demand across the board. It increases the return on employees who can use the tools with speed and independence. One well-paid, high-agency employee can now do work that previously required several average employees.
Same tools, different workforce, different output.
The hiring variable that matters most is orientation. A high-agency employee is not thinking about job security. They are thinking about financial impact on the company, about improving operations, about doing more with less, about raising the bar for customers. That orientation—toward impact rather than self-preservation—is what determines whether AI tooling produces outsized returns or sits unused.
The operating model that follows is smaller teams, higher-agency employees, and higher compensation for employees who can carry more scope. "Fewer, better, higher paid" is the shorthand, but the real point is leverage: moving from average-paid employees who protect their position to higher-paid employees who raise the standard.
The result is a more efficient company with a higher internal bar. Quality standards rise because fewer people are carrying more responsibility. Productivity standards rise because the weaker forms of coordination work become harder to justify. Operating standards rise because a smaller, better-paid team has to ship cleanly and consistently.
Fewer, Better, Higher Paid
What remains valuable: senior engineers who can design systems and review AI-generated work, product leaders with strong domain expertise, support and success employees who can use AI to handle more customer volume without lowering service quality, and infrastructure people who make release processes reliable.
What compresses first: routine implementation, manual QA, repetitive support work, low-value follow-up work, and documentation or reporting tasks that can be drafted or automated once standards are clear.
How compensation shifts: a smaller team may cost less in aggregate even while paying far more for the remaining seats. The relevant measure is output per experienced employee, not salary bands in isolation. The argument is not to underpay fewer people. It is to pay more for employees who can carry far more productive scope.
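The aggregate arithmetic in that paragraph can be made concrete. A minimal sketch, with every number hypothetical (headcounts, salaries, and the 3x scope multiplier are illustrative assumptions, not figures from the text):

```python
# Back-of-envelope comparison of two team designs.
# All figures below are hypothetical assumptions for illustration,
# not data from the article.

def team_economics(headcount: int, avg_salary: int, output_per_person: float):
    """Return total cost, total output, and output per salary dollar."""
    cost = headcount * avg_salary
    output = headcount * output_per_person
    return cost, output, output / cost

# Conventional team: 45 people at average pay, baseline output of 1.0 each.
conv_cost, conv_out, conv_eff = team_economics(45, 120_000, 1.0)

# Smaller, higher-paid team: 15 people paid 50% more,
# each assumed to carry 3x the productive scope.
lean_cost, lean_out, lean_eff = team_economics(15, 180_000, 3.0)

print(f"Conventional: cost ${conv_cost:,}, output {conv_out:.0f} units")
print(f"Lean:         cost ${lean_cost:,}, output {lean_out:.0f} units")
# Under these assumptions the smaller team matches total output
# at half the aggregate cost, despite far higher per-seat pay.
```

The point of the sketch is the relevant measure named above: output per experienced employee, not salary bands in isolation. Change the assumed scope multiplier and the comparison flips, which is exactly why the bet rides on who fills the remaining seats.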
The distinction matters because headcount reduction alone does not work. Klarna automated two-thirds of its customer service interactions with AI in 2024, handling 2.3 million conversations and reducing resolution time from 11 minutes to 2 minutes. But the company's CEO later admitted that quality suffered, and Klarna began rehiring human staff. The lesson is not that AI cannot handle the volume. It is that removing people without raising the bar for the people who remain produces a worse outcome, not a cheaper one.
In PE settings, headcount is treated mainly as a cost lever. In an AI-accelerated organization, it is also a leverage decision.
Implications for PE Operating Models
If this dynamic holds, it changes several parts of the post-close playbook.
The 100-Day Talent Plan Looks Different
The traditional 100-day plan asks which open roles need to be backfilled. A higher-leverage model changes the question. Which roles should actually remain open? Where should savings be redeployed into higher-agency employees, better tooling, and stronger retention across engineering, support, success, sales, and operations?
This is not simply a headcount reduction exercise. It is a redesign of how the team operates, with fewer roles, clearer standards, and more investment in the people who can carry broader scope.
Management Assessment Changes
Traditional CTO assessment focuses on recruiting, team management, and technical credibility. Those still matter. The additional question now is whether this person can run a smaller, faster, AI-augmented organization without losing quality.
Managing a 15-person high-leverage team is different from managing a 45-person conventional one. Communication is tighter, expectations are higher, and weak performers are more expensive. Not every CTO who worked well in the older model will work equally well in this one.
The GP-Level Capability Gap
Many PE operating teams are still benchmarking against pre-AI org charts. That makes it harder to judge whether a smaller team is under-resourced or simply better tooled, and whether a higher compensation band reflects inefficiency or leverage.
The larger firms are moving. Vista Equity Partners reported that by January 2026, 30 portfolio companies were already generating revenue through agentic AI, with another 30 to 40 expected by mid-2026. Thoma Bravo now deploys dedicated AI transformation teams into portfolio companies to modernize sales, marketing, and R&D. But evaluating AI-augmented team structures requires different judgment than evaluating traditional org charts—and that capability gap at the GP level is real.
The central operating question is not how many engineers a company employs. It is how much reliable output the organization gets from each experienced employee.
The point is not to predict a single future org chart. It is to recognize that AI is already changing the economics of technical leverage. Firms that evaluate teams on that basis will make better staffing and underwriting decisions.