When AI Progress Creates Power Struggles
Leaders chase momentum, teams chase output, managers get torn between the two
If your AI rollout is running into “resistance” or “change fatigue,” pause before you blame the workforce. In a lot of companies, the issue is not the people but a systemic conflict of incentives.
Does this sound familiar?
Teams protect delivery by building parallel processes.
People nod in meetings and route around the initiative to keep work moving.
Functions retreat into silos to defend control of data and decisions.
The rollout is not only changing workflows, it’s changing status and power, so friction should be expected.
A 2025 enterprise AI adoption survey reinforces this. The study found that 68% of the C-suite say AI adoption has caused division at their company, and 42% say it’s “tearing their company apart.” It also describes barriers that include “power struggles, conflicts, silos, and even sabotage.”
Let’s look closer at the scorecards to understand why this happens.
The scorecards
The Leaders
How they’re measured: Confidence and momentum, a credible narrative, visible progress, especially on AI and productivity.
What they’re struggling with: Pressure to show movement in a noisy market, uncertainty about what “good” looks like, and the temptation to reward activity that signals progress rather than results that prove it.
The Delivery Teams
How they’re measured: Output, reliability, quality, and hitting performance targets, often on timelines that do not flex because a new initiative started.
What they’re struggling with: More priorities are added while nothing is removed, plus the practical reality that AI work is not only about the tools. It is data readiness, process redesign, training, and governance, all of which tend to be under-scoped and under-resourced.
The Managers
How they’re measured: Adoption and stability at the same time. Leaders want momentum, teams need capacity, and managers are evaluated on whether both appear true.
What they’re struggling with: Being accountable for translation without the authority to make tradeoffs. They are asked to keep delivery steady, absorb ambiguity, manage tension across functions, and maintain belief in the program, even when the scorecards above and below them are pulling in opposite directions.
This is why the middle becomes the highest-tension point in AI rollouts. It is where the vision meets the work, and it is also where capacity breaks first.
Microsoft’s 2025 Work Trend Index describes a “capacity gap,” noting that 53% of leaders say productivity must increase, while 80% of the global workforce, including leaders, say they lack enough time or energy to do their work.
Senior leaders are under pressure to show AI progress that reads well externally, to investors, boards, and the market, even when the day-to-day work of running the business does not pause. To gain real momentum on adoption, the governance and incentive problem has to be addressed.
What alignment looks like in practice
Measure displacement, not activity. If AI is meant to save time, require the initiative to name what will stop, what will shrink, and what metric will prove it. “We launched a pilot” is motion, not a benefit.
Make tradeoffs visible. If leadership wants speed, something else has to give. Without a subtraction plan, teams will create one informally, and it usually looks like shortcuts, shadow processes, or quiet noncompliance.
Resource the middle layer like it is a critical dependency. If managers are the translation layer, they need capacity, clarity, and decision rights. Otherwise you are giving them the impossible task of reconciling two conflicting sets of metrics.
AI adoption is not primarily a belief problem. It’s an alignment problem. When leaders are rewarded for motion, teams are punished for missed output, and managers are left to absorb the contradiction, the organization will fracture, even if the technology works.

