Thomson Reuters identified 400 internal “AI champions” across all departments and seniority levels to accelerate adoption of artificial intelligence, a strategy that proved more effective than top-down mandates, according to insights shared by the company. The initiative, launched in 2023, aimed to overcome a key obstacle to AI implementation: not technological limitations, but human resistance to change.
MIT Sloan Management Review research highlights a recurring theme in AI adoption: the biggest impediment isn’t the technology itself, but the individuals within organizations who haven’t yet embraced it. The research frames organizations not as hierarchical structures, but as complex networks of relationships, ambitions, and competing interests. Introducing AI into this dynamic doesn’t simply alter workflows; it fundamentally reshapes power structures.
The shift can be unsettling. Roles that once held significant influence may diminish in importance, while others gain relevance. Tasks previously requiring collaborative effort can now be completed by a single individual equipped with the right AI tools. Those whose authority stemmed from controlling information, managing approvals, or possessing specialized institutional knowledge often feel the impact most acutely.
Rather than imposing AI adoption through directives, Thomson Reuters focused on building coalitions from the ground up. The “AI champions” were tasked with sharing practical use cases, modeling desired behaviors, and advocating for AI among their peers. By November 2024, the company reported that employees had completed foundational AI training and were actively integrating generative AI tools into their work.
Middle management consistently emerges as a critical battleground for AI adoption. McKinsey research indicates that managers and senior practitioners, whose existing methods are often reasonably effective, are frequently the most resistant to change. The perceived learning curve and disruption to established routines contribute to this reluctance.
To address this, one CEO of a major conglomerate required 100 business leaders to each sponsor an AI project with defined revenue or cost targets. These targets were then incorporated into the following year’s budget, creating a direct link between AI adoption and individual accountability. This approach transformed resistance into investment by ensuring managers had a vested interest in the outcome.
Leadership buy-in is paramount, but it must extend beyond mere advocacy. Executives who delegate AI implementation to IT departments while maintaining their own traditional work habits send a damaging signal. A McKinsey survey found that when leaders publicly share their own AI learning experiences, including acknowledging areas where they still need to develop expertise, it reduces psychological barriers for their teams. Demonstrating personal use of AI, such as a Chief Marketing Officer utilizing AI-driven analytics or a sales manager incorporating AI forecasting into weekly reviews, reinforces the importance of AI in a way that policy documents cannot.
Organizations that are successfully advancing AI adoption are those that proactively map out existing alliances and identify key stakeholders before launching initiatives. This involves determining who needs to be actively engaged, who requires a voice in the process, and who might quietly obstruct progress if ignored. This strategic approach recognizes that AI adoption is as much a political process as a technological one.
As Thomas H. Davenport and Randy Bean have reported of Vanguard, investments in AI are beginning to pay off, but the political dynamics within organizations remain a crucial factor in determining success. The organizations that prioritize building the necessary political conditions for adoption are the ones most likely to realize the full potential of AI. Without that groundwork, AI risks remaining merely optional, perpetually sidelined by those who feel threatened by its implications.