The AI Transformation 100

Ideas from people we admire that JUST MIGHT improve how you lead and work

Authors

Rebecca Hinds
Rebecca Hinds, Ph.D., is the Head of the Work AI Institute at Glean and a leading expert on the future of work. Her work explores how AI and other emergent technologies are transforming organizations. Her research and insights regularly appear in publications like Harvard Business Review, Inc., CNBC, TIME, and Forbes.
Bob Sutton
Bob Sutton is an organizational psychologist and professor emeritus of Management Science and Engineering at Stanford University. He’s a New York Times bestselling author of eight books including The No Asshole Rule, Good Boss, Bad Boss, and The Friction Project. His research and writing focus on leadership, organizational change, and how to build better workplaces.

Expert insights from

Manjari Agochiya
GenAI Strategy Lead, Uber
Michael Arena
Former Chief Talent Officer, General Motors and Dean, Biola University
Andy Ballester
Co-founder, GoFundMe and EyePop.ai
Matt Beane
Professor, UC Santa Barbara
Lindsey Cameron
Professor, The Wharton School, University of Pennsylvania
Eric Colson
Advisor and Former Chief Algorithms Officer, Stitch Fix
Kelly Daniel
Prompt Director, Lazarus AI
Al Dea
Founder, The Edge of Work
Erica Dhawan
AI Expert, Author, Digital Body Language
Tadeu Faedrich
Senior Engineering Manager, Booking.com
Melinda Fell
Founder, Complete Leader
Kyle Forrest
Future of HR Leader, Deloitte
Tony Gentilcore
Co-founder and Head of Product Engineering, Glean
Liz Gerber
Professor, Northwestern University
Amandeep Gill
Product Manager, Cvent
Cyril Gorlla
Co-founder, CTGT
Adam Grant
Professor, The Wharton School, University of Pennsylvania
Hilary Gridley
Head of Core Product, Whoop
Alexandre Guilbault
VP of AI, Telus
Nan Guo
Senior VP of Engineering, Zendesk
Reid Hoffman
Co-founder, LinkedIn and Partner, Greylock
Sonya Huang
Partner, Sequoia Capital
Arvind Jain
Founder and CEO, Glean
Phil Kirschner
CEO, PK Consulting
Perry Klebahn
Professor, Stanford d.school
Lindsay Kohler
Behavioral Scientist and Author
John Lilly
Board Member, Figma, Duolingo, and Code for America
Paul Magnaghi
Head of AI Strategy, Zoom
Michael McCarroll
CEO, Teamraderie
Sharon Milz
CIO, TIME
Lauren Pasquarella Daley
Associate VP, Jobs for the Future
Michael Pfeffer
CIDO and Associate Dean, Stanford Health Care
Hatim Rahman
Professor, Kellogg School of Management, Northwestern University
Brian Rain
Agile Coach, Slalom
Kristi Rible
Founder, The Huuman Group
Daniel Rock
Professor, The Wharton School, University of Pennsylvania
Daan van Rossum
Founder, Lead with AI
Brandon Sammut
Chief People Officer, Zapier
Ammen Siingh
Lead Solutions Engineer, Reddit
Aishwarya Srinivasan
AI Expert, Content Creator
Rebecca Stern
Interim Chief Learning Officer, Udemy
Federico Torreti
Senior Director of AI, Oracle
Adam Treitler
Manager of People Technology and Analytics, Pandora
Phil Willburn
VP of People Analytics, Workday
Howie Xu
Chief AI and Innovation Officer, Gen
Chris Yeh
Co-founder, Blitzscaling Ventures and Co-author, Blitzscaling
Lily Zhang
VP of Engineering, Instacart
And more

Introduction

There’s no shortage of AI promises. Faster productivity and innovation. Lower costs. CEOs making bold claims about “rightsizing” teams and “unlocking efficiencies.” Vendors pitch 10x gains while customers wait for real results. Yet, so far, in too many organizations, the hope and bluster outstrip reality.

A 2025 Boston Consulting Group study1, which surveyed more than 280 finance executives with AI experience in large organizations, found the median reported ROI from AI initiatives is just 10%—well below the 20% or more many were targeting.

Beware, however, of treating this, or any other study, as definitive. Estimates of AI success swing wildly across studies depending on the maturity of companies sampled, research methods used, and performance metrics assessed. It’s also notoriously hard to separate the AI hype that people report from what’s actually happening in their organizations.

That’s why we wanted to cut through the noise and see how AI is already improving how people lead and work (and where it shows promise). Our goal was to uncover practical ways that AI can make your work better, by amplifying the good parts and dampening the bad.

We collected insights from more than 100 leaders, technologists, and researchers across business, healthcare, government, and academia, and conducted live interviews with 35 of them. The result is The AI Transformation 100: our collection of 100 concrete ideas for using AI to improve how work gets done. We developed these ideas to help you navigate the messy human and organizational realities that determine whether AI solutions can be prototyped, implemented, and scaled.

Key Takeaways

In this report, we’re not serving up future‑of‑work fairy tales or breathless predictions about 2035. Many of these moves are already delivering results. Some are promising prototypes or hunches. And others are risks and hard-won lessons about what can break, backfire, or blow up.

Five big lessons emerged from our investigation:

A. AI amplifies, so point the megaphone carefully
AI doesn’t fix broken systems. It amplifies, warts and all. Drop it into a broken bureaucracy and the red tape and bottlenecks will get even worse. Put it in the hands of a curious team and you’ll get faster breakthroughs.
B. Don’t automate the soul out of work
Sure, AI can handle the grunt work. But when you start automating the craft, the judgment, the human touch, jobs collapse into hollow shells. And the soullessness can alienate your clients, customers, and employees alike.
C. Leaders can’t phone it in
AI adoption isn’t powered by mandates, training portals, or inspirational memos. It spreads when leaders roll up their sleeves and use the tools themselves. When executives model AI in their own work—drafting, debugging, questioning—it encourages others to experiment and incorporate it into their work too.
D. Structure eats AI for breakfast
Even the best tools fail when organizations use the wrong structures for the work they do. Centralize everything and you smother AI in approval queues. Decentralize everything and you get a mess of redundant bots, shadow projects, and coordination problems. The leaders who make headway flex their structures: they centralize when risk, coordination of complex work, and governance demand it. And they decentralize, delegate, and get out of the way when speed and learning matter more.
E. Make AI part of the work, not a side hustle
AI doesn’t stick when it’s treated as an “add on.” The companies getting traction embed AI in the daily rhythms that already drive work: sprints, customer queues, project reviews. When it’s part of the flow, AI can reduce harmful friction. When it’s bolted on, it often adds another layer of complexity.
THEME 01

Division of Labor

Who should do what—and why?
For centuries, the division of labor has been the backbone of organizational design: break complex work into smaller tasks and assign them to different people. AI is now stretching—and sometimes erasing—those boundaries. And, in some cases, it’s reshuffling who does what, which tasks get automated, and which stay human.
“Reps had to scavenge through internal systems to dig up past interactions, then scour the web for press releases and news mentions—hours of energy-sapping work before the real conversation even started. Now, an AI assistant does the digging. What once took a couple of hours now takes five minutes.”
Sharon Milz,
CIO, TIME

Start by using AI to cut administrative sludge

A smart place to begin rethinking the division of labor with AI is by tackling the least-loved burdens—like administrative sludge.

A 2024 survey of over 13,000 knowledge workers across six countries found more than half (53%1) of their time disappeared into administrative sand traps such as scheduling and rescheduling meetings, writing status updates, and chasing down routine decisions that were stalled and stuck in bureaucracy. A 2025 study2 of 1,500 workers in 104 occupations, led by Yijia Shao and her Stanford colleagues, confirmed this is the kind of work people want AI to take off their plates: repetitive, boring, low-value, sometimes mind-numbing chores that still need to get done—and get done quickly and well.

At Time Magazine, one source of sludge was prepping for client meetings. CIO Sharon Milz shared, “Reps had to scavenge through internal systems to dig up past interactions, then scour the web for press releases and news mentions—hours of energy-sapping work before the real conversation even started. Now, an AI assistant does the digging. What once took a couple of hours now takes five minutes.”

Try this

Ask employees to nominate the most joyless, soul-draining parts of their day: chasing approvals, filling out duplicative reports, copy-pasting data between tools. Collect submissions through a Slack channel, form, or email alias. Then feed them into an AI tool to cluster patterns, spot quick-wins, and rank order tasks that are ripe for automation.
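As a rough illustration, here is a minimal sketch of what that clustering-and-ranking step could look like, assuming nominations are collected as plain text and you have access to an LLM API. The OpenAI Python SDK, model name, and prompt wording below are illustrative placeholders, not the only way to do it.

# Minimal sketch: cluster and rank employee-nominated "sludge" tasks with an LLM.
# Assumes the openai package is installed and OPENAI_API_KEY is set; the model
# name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

nominations = [
    "Chasing approvals for travel expenses",
    "Copy-pasting CRM notes into weekly status decks",
    "Rescheduling recurring syncs when priorities shift",
    # ...collected from a Slack channel, form, or email alias
]

prompt = (
    "Group these employee-nominated tasks into themes, then rank the themes by how "
    "ripe they are for automation (repetitive, rules-based, low judgment). "
    "Give each theme a one-line rationale.\n\n- " + "\n- ".join(nominations)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your organization has approved
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)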


Tackle your sludge at its source: unstructured data

Why is administrative sludge so hard to avoid and remove? Often because the information needed to move work forward is trapped in unstructured data: emails, PDFs, call transcripts, wikis, chats, support tickets, CRM notes. With data scattered across formats and systems, even simple tasks turn into digital scavenger hunts.

A 2023 report by IDC1 estimates that 90% of the data generated by companies is unstructured. That’s why Box CEO Aaron Levie points to2 an obvious place to start: “Work that requires a heavy amount of unstructured data and information—documents, visual data on a screen, video content.”

Try this

Follow Workday VP of People Analytics Phil Willburn’s playbook. He told us how he cut out briefing decks and weekly update docs. Instead, unstructured data (including Slack conversations and project plan information) now flows into one AI system. His team no longer spends hours compiling updates. Now, when Willburn heads into a steering committee meeting, he asks AI to compile the brief. If he needs more detail, Willburn queries the AI and drills straight into the source. That shift has wiped out a mountain of low-value work for his team—including piecing together information, late-night slide making, and answering the boss’s barrage of “quick questions” and stray musings.


Build an agent to cut your meeting drudgery

Once your unstructured data is AI searchable, go after one of the biggest sludge factories: meetings.

Meetings spew out mountains of unstructured information: spoken words, half-baked ideas, gossip, disagreements, interruptions, offhand decisions, and unspoken cues. Parsing it all is nearly impossible. AI can catch what humans miss and save everyone from archaeology digs through transcripts and scattered notes. At Glean, one popular agent used by employees is the Daily Meeting Action Summary. It pulls action items from every meeting you had that day and delivers them in a single Slack digest.

Try this

Use AI to extract action items and decisions from meetings, not just generate transcripts. And remember, the real prize isn’t efficiency. It’s freeing up human attention for the work that matters most, like creative problem-solving, decision-making, and relationship-building.
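If you want to prototype something similar yourself, here is a minimal sketch, assuming you already have a meeting transcript as text plus a Slack bot token. It is not Glean's implementation of the agent described above, and the model, channel, and file names are placeholders.

# Minimal sketch: pull action items out of a meeting transcript and post a Slack digest.
# Assumes the openai and slack_sdk packages are installed and that OPENAI_API_KEY
# and SLACK_BOT_TOKEN are set; the channel, model, and file names are placeholders.
import os
from openai import OpenAI
from slack_sdk import WebClient

def extract_action_items(transcript: str) -> str:
    """Ask an LLM for action items and decisions only, not a full summary."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": "List the action items (with owners, if stated) and the "
                       "decisions made in this meeting transcript:\n\n" + transcript,
        }],
    )
    return response.choices[0].message.content

def post_digest(text: str, channel: str = "#daily-actions") -> None:
    """Deliver the digest to a single Slack channel."""
    slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    slack.chat_postMessage(channel=channel, text=text)

if __name__ == "__main__":
    with open("meeting_transcript.txt") as f:
        post_digest(extract_action_items(f.read()))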

“You’re not looking at the woman in the room to take notes anymore…I don’t have to look around and wonder, okay, who’s gonna remember all this? Now I can actually focus.”
Kelly Daniel,
Prompt Director, Lazarus AI

Use AI to reduce “office housework”

Every workplace runs on invisible labor: scheduling follow‑ups, taking notes, tracking action items, nudging people to hit deadlines. It’s essential but low‑reward work. And for decades, it’s been disproportionately dumped on women1.

AI can take on part of that hidden load. As Kelly Daniel, Prompt Director at Lazarus AI, told us:

“You’re not looking at the woman in the room to take notes anymore…I don’t have to look around and wonder, okay, who’s gonna remember all this? Now I can actually focus.”

Try this

Audit your team’s “invisible work.” Make a list of all the recurring low‑reward tasks that sap energy but rarely get rewarded, like note taking, deadline reminders, or follow‑up tracking, and reassign them to AI.


Make it easy for customers to talk to a real human

Some of the most vexing sludge accumulates in customer support. Much of this frustrating friction (for both employees and customers) piles up because employees need to dig through unstructured data such as old tickets, inconsistent documentation, and scattered wikis to answer questions that customers ask again and again. It’s repetitive and rules-based work, seemingly perfect for AI.

Klarna’s leaders thought so too. In 2023, this “buy now, pay later” company claimed its AI assistant could replace 700 human agents and started culling customer support staff. Yet by 2025, in a show of thoughtful leadership, they’d reversed course. “We just had an epiphany: in a world of AI, nothing will be as valuable as humans,” said Sebastian Siemiatkowski1, CEO of Klarna. “We’re doubling down—investing in the human side of service: empathy, expertise, and real conversations” said a Klarna spokesperson2.

Treating all customer support as interchangeable sludge can backfire. For some requests, most customers only care about a fast and accurate response. But when the challenge is complicated or unprecedented, or a valued customer is upset, your organization benefits by offering easy access to human judgment, empathy, warmth, and trust.

Siemiatkowski later reflected3, “It’s so critical that you are clear to your customer that there will always be a human if you want.” Indeed, 89%4 of the 1,011 U.S. consumers surveyed by cloud platform Kinsta say customers should always have the option of speaking to a human.

Try this

Audit your customer support process. How difficult is it for customers to figure out how to talk to a real person? How many clicks does it take? If it’s a confusing path, and more than two or three clicks are required, you’ve likely buried the human too deep.


Don’t let AI strip the humanity out of customer relationships

AI can draft instant replies and fully automate interactions with customers. But trust doesn’t come from such cold efficiency. It comes from the messy, emotional, and time-consuming human parts of conversation like small talk about your favorite foods and offhand jokes about your hometown team.

As one leader at Navan Travel, part of the global travel company Navan, described1 a sales rep’s meeting with the CEO of a top-10 bank: “We met with the CEO of a top-10 bank last week. The rep knew the CFO went to Villanova and loves the Knicks. The CEO went to Boston University and grew up nearby. Somehow, he tied it all together into a joke about the Pope going to Villanova and divine intervention helping the Knicks win the championship. That kind of connection-building? AI’s not there yet.”

Try this

Don’t let AI turn every interaction into a drive-thru transaction. Use a simple rule of thumb: If the customer cares most about speed of resolution, let AI handle it. But only if it’s capable and it’s legal and ethical to do so. And don’t fool yourself into believing it is a quick and easy fix if your AI experience is lousy. As a randomized field experiment2 by Shunyuan Zhang and Narakesari Das Narayandas of Harvard Business School found, AI agents boost customer satisfaction for routine issues when the system works well. But they fall flat when customers face repeated problems or systemic failures during interactions with chatbots—fast, polished, and unhelpful replies add to frustration. And the researchers found that once customers have a bad experience with a chatbot, they continue to feel dissatisfied even after being transferred to a human agent, in part because they don’t believe they are talking to a human.

If your goal is to make someone feel respected, valued, and understood, keep it human. The small talk, offhand jokes, and other “inefficient” detours are how people decide if they like you, if they trust your motives, and if they want to keep doing business with you. They slow the transaction, but speed up the relationship. They’re good for business, and good for the soul.


Protect your distinct style, charm, and voice from generic and soulless AI

The more you lean on AI, the easier it is to lose what makes your work yours. Without clear guardrails, AI can quietly dilute your style, flatten your creativity, and make your output sound like everyone else’s.

Social media creator Aishwarya Srinivasan told us that she’s drawn a hard line: no more than 10% of her posts can be AI-generated. She’s decided that balance is just enough to get a meaningful efficiency boost without diluting her voice or sacrificing authenticity or connection with her audience.

Try this

Identify the parts of your work that carry your distinct value—the things only you can deliver, whether it’s craft, perspective, or creative judgment. Then set clear, measurable boundaries for AI’s role. For example: “no more than 10%,” “only in first drafts,” or “never for client pitches.” Make the division of labor explicit.


Beware of automating work that fuels intrinsic motivation

Just because you can automate something doesn’t mean you should. Some tasks are slow on purpose. Some are enjoyable. Some are deeply human. One Stanford study1 found that 41% of Y Combinator AI startups are automating tasks that workers would rather keep manual.

At one edtech company, a content writer described2 the fallout from a new mandate that all content had to be AI-generated: “Being forced to use AI has turned a job I liked into something I dread. As someone with a journalism background, it feels insulting to use AI instead of creating quality blog posts about education policy.”

A 2025 University of Groningen study3 across 20 European countries found a similar pattern. Workers in highly automated jobs reported less purpose, less control, and more stress—even when the work was technically easier. Workers said they felt like extensions of machines rather than skilled contributors.

Professor Matt Beane at the University of California at Santa Barbara told us that he keeps seeing this play out in software development. As AI takes over more coding, senior engineers are pushed into oversight roles: reviewing, prompting, debugging. Some like it. Others miss the craft of writing and building code.

Try this

Before rolling out AI, ask your team which tasks they enjoy and which ones they’d happily hand over. If you’re still convinced that automating work they find meaningful is the right move, replace it with other work that offers challenge, skill, and satisfaction so the job doesn’t become repetitive and mind-numbing.

THEME 02

Expertise

Who ought to be the experts now? How do you blend specialists and generalists?
For decades, expertise was locked behind titles, tenure, and credentials. Experts once had a monopoly on specialized skills. But AI is upending that, putting powerful tools in the hands of non-experts. With a few prompts, a generalist can now crank out code, draft legal language, and spin up a marketing campaign. That shift creates both opportunity and risk. AI can democratize skills. Or it can churn out a wave of overconfident amateurs who are clueless about the limitations and mistakes in their work.
“If you bring experts in too early, they’ll tell you all the reasons it won’t work. AI let [the non-engineers] show what was possible, fast.”
John Lilly,
Board Member, Duolingo

Let generalists build first, bring experts in later

At Duolingo, board member John Lilly told us about two non-engineers with no chess background who used AI tools to prototype a working chess feature in just four months. He explained to us: “They weren’t engineers. They weren’t chess experts. But they built something real, and it outpaced other internal initiatives.”

AI flipped the order of the workflow. Instead of experts weighing in early and shooting down ideas at the whiteboard stage, they stepped in later, once there was something real to react to. Lilly explained: “If you bring experts in too early, they’ll tell you all the reasons it won’t work. AI let [the non-engineers] show what was possible, fast.”

Google is taking a similar approach. Head of Product Madhu Gurumurthy says1 they’re moving from lengthy Product Requirements Documents (PRDs) to prototypes first. With AI-powered “vibe coding,” teams now prototype in code before drafting long proposals—speeding iteration and killing fewer ideas prematurely.

Try this

Identify one area where progress keeps stalling while your team waits for experts to weigh in. Try reducing such bottlenecks by handing AI tools to generalists and letting them build early prototypes. Then invite the experts in to evaluate, refine, and (sometimes) reject those unfinished creations.

“Results come when AI engineers are embedded directly into business units. Sitting side by side with sales, ops, or support teams, they co-develop solutions with the people closest to the problems.”
Daniel Rock,
Professor, Wharton School of the University of Pennsylvania

Embed AI experts in business units

The flip side of letting generalists build first is knowing when experts need to sit shoulder-to-shoulder with the people closest to the work. A useful rule of thumb: if the workflow is exploratory and the cost of failure is low, let generalists build first. But if the workflow is critical—where errors could break compliance, expose data, or disrupt core operations—experts should be embedded from the start.

Wharton professor Daniel Rock has seen a clear pattern across AI-native organizations, as well as legacy firms like a Fortune 500 CPG company and a Fortune 100 insurance company successfully using AI to transform operations. “Results come when AI engineers are embedded directly into business units. Sitting side by side with sales, ops, or support teams, they co-develop solutions with the people closest to the problems.”

It’s not a new idea. Procter & Gamble used a similar approach during its design thinking push, embedding designers on business teams rather than in a separate silo. The U.S. Army’s Rapid Equipping Force1 did it too during the conflict in Afghanistan: They stationed technologists in combat zones to identify problems and develop prototype solutions with soldiers in settings where they worked, not for them based on suspect assumptions about soldiers’ needs.

Try this

Pick two or three frontline teams—like sales, ops, or customer support—and embed an AI engineer part-time for one quarter. Have them sit in standups, shadow day-to-day work, and co-build solutions on the spot. Don’t offload everything to them: teams should still learn to handle the low-effort wins with macros or no-code tools. Use the engineer’s time for the high-impact problems that require real engineering expertise and judgment to get right.


Beware of vibe coding the last mile (especially if you’re a novice)

Vibe coding lets novice software engineers generate code that looks usable without first mastering the fundamentals. But it often produces what Andy Ballester, co-founder of GoFundMe and now EyePop.ai, and Cyril Gorlla, co-founder of enterprise AI company CTGT, call1 “AI slop”: cheap, auto-generated, buggy code that demos well but collapses at scale.

Both acknowledge that AI makes it easy to churn out code that’s functional but often mediocre. True excellence and elegance still require human craft and years of experience. A study2 by Stanford Professor Erik Brynjolfsson and colleagues backs this up: hiring for early-career developers has fallen, while demand for mid- and senior-level engineers (35 years old and older) has continued to climb.

Several leaders we’ve spoken to describe this as the “last-mile problem”. With AI, many can get from zero to a prototype. But far fewer can carry it across the finish line: scaling, securing, and shipping production-ready systems. That final stretch takes experience and judgment you can’t vibe-code your way into.

Try this

Use AI coding tools to speed prototyping, but hold senior engineers accountable for the last mile. Put them in charge of reviewing architecture, adding error handling, running load tests, locking down security, and refining what AI and juniors start, so you don’t end up shipping “AI slop.” Try these suggestions from3 Brian Rain, an agile coach at Slalom.

  1. When reviewing AI- or junior-written code, shift the focus from syntax to substance. Ask senior engineers to review for intent, design alignment, and business purpose—not just whether the code runs. Encourage them to explain why the code works (or doesn’t), and how it fits into the bigger system, rather than only flagging if it technically executes.
  2. Prioritize technical debt prevention by encouraging senior engineers “to reject or refactor AI-generated solutions that accrue hidden complexity.”
“Our brains are built for survival, not objectivity. We’re wired to run from the rustling bush, not sit and analyze whether it’s a rabbit or a tiger. AI doesn’t have that wiring—it can calmly explore millions, even billions, of possibilities without flinching. It’s great at broadening the consideration set and ranking what looks most promising. Then humans step in with context—brand, values, and strategy—to make the final call on what deserves to exist.”
Eric Colson,
former Chief Algorithms Officer, Stitch Fix

Let AI generate the options. But have experts make the calls

The last mile isn’t just an engineering problem. In creative and knowledge domains, the test is whether the work resonates, aligns, and holds up under scrutiny. That final judgment still needs to belong to experts.

At Stitch Fix, an online clothing retailer, they’ve used algorithms1 to scan inventory and customer preferences to flag unmet needs—styles, colors, patterns, or fabrics that customers want but the assortment doesn’t offer. AI then generated design suggestions based on those gaps. But instead of letting the system greenlight production, Stitch Fix routed those suggestions to human designers, who decided which ones dovetailed with the brand, met quality standards, and resonated with customers. The AI stretched the creative option set; the experts cut it back to what’s worth doing.

Eric Colson, former Chief Algorithms Officer at Stitch Fix had this to say: “Our brains are built for survival, not objectivity. We’re wired to run from the rustling bush, not sit and analyze whether it’s a rabbit or a tiger. AI doesn’t have that wiring—it can calmly explore millions, even billions, of possibilities without flinching. It’s great at broadening the consideration set and ranking what looks most promising. Then humans step in with context—brand, values, and strategy—to make the final call on what deserves to exist.”

Try this

Beware of letting AI close the loop on its own when expert judgment is required. Use it to provoke your experts with raw options, patterns, and experiments. Then rely on their judgment to decide what will resonate with customers and what’s worth shipping.


Beware the temptation to ignore experts

Just as with AI slop, AI can make mediocre work look deceptively polished. At Udemy, Rebecca Stern, Interim Chief Learning Officer, saw teams using AI to generate learning plans for employee development—complete with outlines, objectives, and quizzes. On the surface, the plans looked polished. Underneath, they missed the basics of instructional design (for example, sequencing concepts out of order, misaligning objectives and assessments).

That’s the danger of cutting experts out of the loop. When AI skips steps, the cracks often don’t show until the work is live. And by then, fixes are harder, costlier, and painfully public.

Try this

When work depends on deep expertise, require teams to show the reasoning and steps behind the plan, not just the AI-generated draft. And when AI is encroaching on the kind of expertise that takes years of training or certification to master, create principles, as The New York Times1 has done, requiring journalists to disclose when they use generative AI and explain how human oversight shaped the final result.

THEME 03

Roles

Which roles ought to be created? Expanded? Shrunk? Eliminated?
The division of labor decides who does the work. Expertise decides whose knowledge counts. Roles make those choices official. With AI, some roles are stretching, others are shrinking, and a few entirely new ones are emerging. In some cases, though, the “new” roles are simply familiar jobs with AI capabilities bolted on.

Appoint AI drudgery czars

Some organizations have created formal roles dedicated to hunting down and automating the piles of administrative sludge we’ve discussed.

At Glean, members of the “Glean on AI” team work across functions internally to spot manual processes worth automating, then build a roadmap to turn them into AI-driven agents. A parallel “AI Outcomes” team runs the same playbook with customers. The model borrows from Palantir’s “forward-deployed engineer” role (engineers who work side-by-side with clients, fine-tuning products directly on the clients’ premises).

Quora has taken a similar approach1, assigning teams to systematically identify automation opportunities and convert them into AI solutions.

Try this

Appoint an AI drudgery czar (or equivalent) inside each team or function. Their job is to identify needlessly high‑friction work and automate it. Give them real authority to find it, fix it, and free people up for more impactful work.


Equip your “peer-to-peer” AI champions for success

Some of the most important new roles in AI adoption aren’t formal titles at all—they’re the “AI champions” or internal “AI influencers” who spread new learnings and habits peer to peer.

Babson College professor Rob Cross has found that while top-down change typically reaches just 30–35%1 of employees, pairing those formal efforts with internal influencers can be twice as effective. That’s why it’s important to find your AI champions: the people who are curious, credible, and well-connected enough to inspire others to experiment.

At PwC, part of the AI rollout included a “train the trainer”2 model. The firm designated AI champions inside each line of service. These champions volunteered to coach peers, run hands-on exercises, and adapt AI use cases to their team’s daily work.

Try this

Identify your employees who are already experimenting with AI and have influence in their teams. Within each team, ask: “Who do you go to when you want to learn how to do something with AI?” Give your champions early access to new AI models, real training, and a platform to share wins and lessons learned. Make sure champions are spread across business units so adoption grows laterally, not just top-down.


Back AI champions who do smart things, not the ones who spew out smart talk

Often, the best AI champions don’t raise their hands or get nominated by their boss—they reveal themselves through action. That’s how Uber found theirs. GenAI Strategy Lead Manjari Agochiya launched an open call for AI use cases. She told us, “It surfaced 150 ideas, but more importantly, it revealed several stakeholders across various domains at Uber who were thinking about AI, implementing it in experimental ways and were excited to share their journeys with everyone at Uber. There were around 53 in Marketing and Performance Marketing, around 30 in Legal, and several small teams across the Community Operations, Engineering, and Customer Obsession teams.” Those people who raised their hands with excitement became Uber’s first real network of AI champions.

Udemy took a similar approach. During its company-wide “UDays,” where employees came together to build AI prompts and prototypes, the team didn’t just track attendance; they watched behavior. Interim Chief Learning Officer Rebecca Stern told us, “We looked at who showed up curious. Who helped others learn. Who shared tools and tips. That lens surfaced 31 early AI champions across the company.” The doers, not the talkers.

Try this

Run a prompt-a-thon or agent-a-thon where the goal isn’t just to collect ideas—it’s to watch the behavior. Who dives in? Who collaborates? Who leaves buzzing with new ideas? Most important of all, who implements the best ideas AFTER the challenge ends? Those are your AI champions.


Use “Fleet Fixers” and “Fleet Supervisors” to coordinate multi-agent systems

As companies deploy multi-agent systems (fleets of AI agents that code, test, and coordinate with one another), University of California at Santa Barbara Professor Matt Beane and colleagues Jonathan Hassell, Brendan Hopper, and Steve Yegge note1 that the “middle” of software work is collapsing. Routine coding tasks are increasingly automated, while the real human value is shifting to oversight, coordination, and orchestration. Instead of writing every line of code, humans are now needed to manage how these AI agents interact.

As a result, Beane and team describe new human jobs emerging in this bot economy, including:

  • Fleet Supervisors act as an “air traffic controller for bots.” They monitor live agent activity and coordinate deployment so dozens (or even hundreds) of agents don’t crash into each other or stall out mid-task.
  • Fleet Fixers focus on debugging conversations between machines. Beane and team liken them to a “family therapist” for bots. They can step in when agents miscommunicate, loop endlessly, or generate conflicting outputs, tracing where interactions went wrong, and resolving issues.
Try this

In your next multi-agent project, assign one person as the Fleet Supervisor and another as the Fixer. Have them keep a running log of every time they need to step in. If those logs start filling up with the same problems—loops, duplicate work, or agents talking past each other—that’s your signal. It’s time to make those roles official and give someone the job of keeping your bot ecosystem from eating itself. And, when investing in AI agents, opt for solutions that have strong built-in agent guardrails.
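For the running log itself, a lightweight script is enough. Here is a minimal sketch; the field names and failure categories are assumptions to adapt to your own agents, not a standard.

# Minimal sketch of a Fleet Supervisor / Fleet Fixer intervention log.
# Field names and categories are illustrative assumptions; adapt them to your agents.
import csv
import os
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Intervention:
    timestamp: str
    agent: str            # which agent (or pair of agents) needed a human
    category: str         # e.g., "loop", "duplicate_work", "conflicting_output"
    description: str      # what went wrong, in one line
    resolution: str       # what the supervisor or fixer did about it

LOG_PATH = "fleet_interventions.csv"

def log_intervention(agent: str, category: str, description: str, resolution: str) -> None:
    """Append one intervention so recurring failure patterns become visible."""
    row = Intervention(datetime.now(timezone.utc).isoformat(), agent, category, description, resolution)
    is_new = not os.path.exists(LOG_PATH) or os.path.getsize(LOG_PATH) == 0
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(row).keys()))
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(row))

# Example entry:
log_intervention(
    agent="test-writer <-> code-reviewer",
    category="loop",
    description="Agents kept re-requesting the same fix without converging",
    resolution="Paused both, restated acceptance criteria, restarted the task",
)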


Consider merging or melding specialized roles to reduce handoffs

AI is starting to collapse the walls between once-specialized jobs. In some places, work that used to require a lineup of experts—researcher, writer, designer, coder—can now be done by one person with the right AI tools.

In software, a single engineer with AI can ideate, write code, generate tests, and deploy—tasks that once required product managers, QA testers, and release engineers. In marketing, one person with AI can research, draft campaigns, design assets, and schedule posts—collapsing analyst, copywriter, designer, and campaign manager into a single role.

Krishna Mehra, a former Head of Engineering at Meta, argues1 this is a chance to “rebundle” roles. Startups are skipping layers of project managers and bloated teams and instead hiring “full-stack builders”: people who can take an idea from concept to deployment, leveraging AI at every step. As Mehra notes, these adaptive, end-to-end roles are already powering companies like Cursor (20 people, $100M ARR) and Midjourney (10 people, $200M ARR) as of mid-2025.

Try this

Map your team’s workflows and focus on the baton passes. Where do people in three or four roles each “touch” the work before it ships? Mehra recommends starting small. Spin up a tiger team and challenge them to ship something—an internal tool, a campaign, a feature—using as few role handoffs as possible. Document what worked, what broke, and what AI made easier.


Experiment with moving from many specialized “directly responsible individuals” to one generalist

Rebundling how work gets done also means rethinking who’s accountable for it. Before AI, big projects often had multiple owners, or what Apple calls DRIs (directly responsible individuals). An engineering lead drove technical execution. A product lead set priorities. An operations lead handled logistics.

AI is starting to change that equation. Dylan Field, CEO of Figma, has noted1 that “areas that were seen as distinct phases in the product development process are now merging. Product is also blurring with design and development and potentially even parts of research.”

Tony Gentilcore, Co-founder and Head of Product Engineering at Glean, suggests that it’s time to rethink project ownership structures. Splitting DRIs made sense when each function had to manually manage its own part of the work—product chasing requirements, engineering tracking dependencies, ops managing timelines. But AI is starting to take over those mechanics. Systems can now generate status updates automatically, route tasks to the right owner, and flag dependencies before they block progress. With that administrative load off the table, having three separate DRIs, for example, can add unnecessary coordination costs.

Try this

Take one active project with multiple DRIs. Run a 90-day pilot with a single DRI accountable for the whole effort, while AI tools handle the coordination—status reports, task routing, dependency tracking. Compare speed and clarity against your old structure.

“If you were to build your function from scratch today, how would it look and operate?”
Al Dea,
Founder, The Edge of Work

Redraw roles so AI can optimize work across silos

The org chart is how we carve up roles. It slices work into boxes so decisions don’t overwhelm people. But pattern-finding and connection-mapping algorithms don’t operate within boxes. They can spot connections that traditional roles were not designed to manage.

In a 10-month study1 Rebecca Hinds did with Stanford’s Melissa Valentine, they saw such connection-mapping at a digital retailer that used AI to optimize inventory. Human merchants owned narrow categories like “plus-size denim” or “plus-size dresses.” That’s how the org chart divided the work. The AI tools (and experts), however, identified cross-category patterns that humans missed (like spikes in denim sales that predicted spikes in dress sales).

Coordination troubles flared because no one had the mandate to act on insights that crossed silos. These problems waned after leaders redrew roles—expanding managers’ responsibilities to cover broader product lines—so it was clear which managers had the visibility and authority to act on AI’s system-wide insights.

The same shift is happening elsewhere2. Some companies are consolidating product lines and platforms so AI can analyze relationships across usage, renewal, and expansion. AI might reveal that adoption of one feature predicts renewal rates or upsell potential months later. But in a traditional org chart, product managers own adoption while customer success managers own renewal. Because accountability is split, neither side can act on the full picture. Valuable AI insights fall into the cracks.

Try this

Conduct an honest evaluation of your role. As Al Dea, Founder of The Edge of Work, suggests, “If you were to build your function from scratch today, how would it look and operate?”

Then ground your vision in reality. Audit where AI is surfacing cross-cutting patterns—like when customer churn shows up in product usage data, support tickets, and contract renewals, but no single role owns the whole problem. Give one leader accountability and authority over the full span—not just fragments.

Yes, AI can write your PRDs [product requirements document], map your backlog, and even spit out a half-decent launch plan before you finish your coffee.

Turn your product managers into diplomats and conflict wranglers

AI is also transforming the role of product manager (PM). As Amandeep Gill, a PM at Cvent, puts it1, “Yes, AI can write your PRDs [product requirements document], map your backlog, and even spit out a half-decent launch plan before you finish your coffee.”

But one part of the job hasn’t changed, and won’t any time soon: creating common ground among people who are prone to disagree because of their roles and personalities. Sales pushes for speed. Engineers demand stability. Finance tightens budgets. And some people believe they are the smartest person in the room and are rarely wrong about anything. The PM is the one charged with finding common ground, soothing touchy egos, brokering trade-offs, and getting the group to move forward together. AI can draft the docs, but it can’t get a room of disagreeable people to (sometimes begrudgingly) agree on schedules, priorities, actions, and goals.

Try this

Use AI for the first-pass documentation: drafting PRDs, backlogs, and launch plans that are largely routine and formulaic given the raw inputs already available. That frees product managers to spend their time where it really counts: driving alignment, coaxing compromises from stubborn stakeholders, and keeping focus on the bigger picture. As Gill puts it, engineering and sales need PMs in the room, “preferably before the chairs start flying.”

“Solution architects have to digest so much technical matter on the front end…if you can augment that and give them the tools…they can then invest in building the trust needed so that they’re creating work ties and people are pulling them in sooner or faster.”
Michael Arena,
Dean of the Crowell School of Business at Biola University

Redesign sales and service jobs so your people spend more time with customers, not covering more territory

The solution architect is another role being reshaped by AI. Historically, people in these jobs spent months memorizing product specs and technical documentation before they were credible with customers. As Michael Arena, former Chief Talent Officer at General Motors and Dean of the Crowell School of Business at Biola University, explained to us: “Solution architects have to digest so much technical matter on the front end…if you can augment that and give them the tools…they can then invest in building the trust needed so that they’re creating work ties and people are pulling them in sooner or faster.”

But Arena also warned us of a trap. Using AI only to enable sales and service workers to handle more accounts (for example by summarizing discovery call notes in Gong and auto-generating systems architecture diagrams in Lucidchart AI) treats them like throughput machines. The bigger payoff comes when people reinvest that new-found time into customers—building trust, uncovering needs, and shaping long-term growth.

Try this

When AI shortens the technical ramp for sales and service roles, use the freed capacity to crank up customer engagement. Redesign metrics so people’s success is measured by trust and relationship-building, not just account volume.


Make your finance leaders responsible for allocating computing resources

AI gobbles up so much computing power that it is changing, and ought to change, what CFOs do. Traditionally, finance leaders have been stewards of capital and labor. Now, in many companies, computing resources have been added to the list.

Lily Zhang, VP of Engineering at Instacart, told us that compute allocation—who gets access to GPUs, model licenses, and tokens—is increasingly where the biggest productivity gains or bottlenecks are created. She pointed to Meta’s CFO Susan Li, who has said1 compute is now one of the most strategically scarce resources the company manages.

Allocation decisions have important implications. Do you give every developer unlimited AI tokens, as Shopify has2, reasoning that an extra $1,000 per engineer per month is cheap if it delivers a 10% productivity lift? Or do you ration usage, as other companies do, and risk stifling adoption?

Try this

Expand the CFO’s role to explicitly cover compute allocation. Don’t bury it in IT budgets. Ask which business outcomes deserve scarce model access, and allocate accordingly. The goal isn’t to squeeze ROI out of every token today, but to treat compute as a strategic resource that the CFO manages in concert with goals set by the CEO, other members of the executive team, and the board.

“The best people are the ones that can drive the biggest transformation…but often organizations want to keep their best folks in operations.”
Alexandre Guilbault,
VP of AI, Telus

Invite and entice your best people to test AI pilots

Keeping your best people out of AI pilots is a mistake. As Telus VP of AI Alexandre Guilbault told us: “The best people are the ones that can drive the biggest transformation…but often organizations want to keep their best folks in operations.”

Too many AI pilots end up staffed with whoever has “extra capacity” instead of the people who could make the biggest impact. The best performers are often left out—sometimes because they’ve already mastered the task being tested as part of the pilot and dismiss AI as unnecessary. More often, it’s because they’re buried in crucial work and leaders hesitate to pull them away to try a prototype or pilot.

When AI efforts exclude top performers, two problems crop up:

  • The system learns from the habits of average performers instead of the practices that make your best people great.
  • The influencers everyone else looks to have no stake in the outcome—so when the tool rolls out, they’re the first to shrug or resist. That stifles the spread of tools that can speed up and improve the quality of work done throughout your organization.
Try this

Rotate your best people into AI pilots, even if it hurts short-term execution. Flatter them, pay them, give them a few days off. Do what it takes to get them to sign up and give their all to the task. The short-term cost is a down payment for building systems your top performers will use and refine—and the rest of the organization will trust and benefit from.

“This [prompt engineering] feels like something everyone will need to be competent at—not work that falls solely to those with a specialized job title within the next few years.”
Kelly Daniel,
Prompt Director, Lazarus AI

Decide if prompt engineering is a role or a universal skill

As AI reshapes how work gets done, new responsibilities are popping up quickly: writing effective prompts, managing outputs, catching hallucinations, fine-tuning model behavior. Leaders face a tough call: when does a new capability deserve a formal role, and when should it stay a baseline skill?

Take prompt engineering. Some companies are hiring full-time prompt engineers. Others are folding it into everyone’s job. Kelly Daniel, Prompt Director at Lazarus AI, is betting on the latter. She told us: “This feels like something everyone will need to be competent at—not work that falls solely to those with a specialized job title within the next few years.”

This is a classic challenge in moments of transformation: codify a role too early, and you risk creating something that’s obsolete before it matures. Wait too long, and critical work drifts with no clear owner.

Try this

For every new AI task, ask: does it demand dedicated focus, deep expertise, and accountability? If yes, make it a role. If it’s something everyone should learn, treat it as a skill. Or if you do hire dedicated prompt engineers, try making them temporary roles that are designed to disappear in, say, a year.


Don’t create new AI roles as knee-jerk solutions

When AI creates new kinds of work or responsibilities, some leaders respond by inventing a shiny new role. The Chief AI Officer is a good example. Some leaders told us that, at its best, this role is designed for impact and filled by a star—someone who can set strategy, attract other top talent, shape governance, and unite teams across functions to build and implement AI tools. But too often it’s a ceremonial title—window dressing with no budget, authority, or real influence inside or outside the company.

Try this

Pressure-test every shiny new AI title with three questions:

  • Will this role drive work that would otherwise stall?
  • Will the person have the budget, authority, and access to make change happen?
  • Will creating the new role add more coordination tax and spark turf wars?

More roles mean more cost, complexity, and confusion. In many cases the smarter move is to embed AI responsibilities into existing roles, instead of handing out another title (especially if it is hollow and powerless).

THEME 04

Control

Who has power? And when should powerful people loosen the reins or get out of the way?
AI is shifting who calls the shots, whose judgment counts, and which decisions live in what roles and functions. Savvy leaders are now redrawing the lines of control: vertically (where AI sits in the hierarchy) and horizontally (which teams own which AI decisions).

Done well, such redesigns remove bottlenecks and speed execution and innovation. Done poorly, they fuel turf wars, create red tape, stall projects, and send AI’s potential to die in committee.

Don’t lead an AI implementation if you don’t know how AI works

It sounds obvious, but too many leaders try to design AI structures and controls without understanding the technology or how it changes work. When leaders don’t understand AI, they make kneejerk moves. Some centralize everything under one department “for risk management” and create bureaucratic AI choke points where ideas suffocate in approval queues (often staffed by people who don’t understand AI either). Others swing to the other extreme, throwing the doors open in the name of “agility” and ending up with a random scatter of uncoordinated and non-strategic AI projects (and gaping security holes too).

Instead of theorizing and making proclamations from on high, Reid Hoffman, co-founder of LinkedIn and Partner at Greylock, built an AI clone of himself and then held conversations with it. He explained1, “conversing with an AI-generated version of myself can lead to self-reflection, new insights into my thought patterns, and deep truths.” NVIDIA CEO Jensen Huang interrogates the technology to understand how it works by asking the same question to different AI models to get the best response and even has them critique one another. He’s said2, “In areas that are fairly new to me, I might say, ‘Start by explaining it to me like I’m a 12-year-old,’ and then work your way up into a doctorate-level over time.”

Try this

If you don’t use AI yourself, you’re not qualified to decide how your teams should. Get a mentor and get hands-on like Hoffman and Huang do before you start handing down rules. Follow Wharton Professor and AI expert Ethan Mollick’s “ten-hour rule”3: Spend at least ten focused hours working directly with AI so you can experience what he calls the “jagged frontier”—the uneven, often unpredictable line between what AI can and can’t do. Prior to redesigning structures or issuing new mandates, sit with the teams building or deploying AI and watch where approvals, politics, or outdated processes are slowing them down.

Or if you don’t have the time, will, or background to do so, delegate the authority to colleagues who do. Putting the right people in charge isn’t only better for your company; it also means you’ll get credit for work that goes well rather than blame for work that goes badly.

“Unlike tools or rules that shift with each new model, your AI principles don’t expire. They give the policy a human-centered, durable backbone even as the technology changes.”
Kristi Rible,
CEO, The Huuman Group

Implement nuanced AI policies—and keep updating them

Too many companies have charged ahead with AI without writing down the guardrails and rules. A 2024 survey by Vanta1 of 2,500 IT and business leaders in the U.S., U.K., and Australia found that only 36% work in organizations that have an AI policy or are in the process of developing one.

Without guardrails, AI turns into the Wild West. Some workers freeze up because they are unsure what’s allowed and what isn’t, and worry about crossing invisible lines that trigger a reprimand or even get them fired. And when companies have inflexible and otherwise misguided AI policies, people may engage in constructive defiance to get their work done. That’s one of the main reasons a survey2 by security company Anagram found 45% of workers used AI tools that were banned by their organizations.

A single static company-wide policy isn’t the solution. New tools emerge, regulations shift, and risks change faster than any single rigid policy can cover. As one SVP at a Fortune 20 company told us, they’ve prioritized a living governance system: cross-functional steering committees, legal reviews, and guardrails that flex depending on the context (for example, healthcare data versus store operations). They’ve also built an internal portal where employees can access approved tools, request licenses, and see each tool tagged by risk level (from “safe to use” to “proceed with caution”).

When we spoke with her, Kristi Rible, CEO of The Huuman Group, emphasized that AI policies should be paired with principles that explain why guardrails and rules are in place—like “humans remain accountable for decisions” or “transparency first.” “Unlike tools or rules that shift with each new model, your AI principles don’t expire. They give the policy a human-centered, durable backbone even as the technology changes.”

Try this

Treat your AI policy like a living system. It should be easy to access, list approved tools, flag their risk level, and explain the training required. Pair it with a governance rhythm—steering committees, regular reviews—so the rules evolve as fast as the technology does. And anchor it in a set of enduring principles—like accountability, transparency, and human oversight—so the guardrails stay relevant even as the tech shifts.
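One lightweight way to make that list easy to access is to keep it as structured data that an internal portal or chatbot can serve. The sketch below is a hypothetical illustration; the tools, risk tiers, and training requirements are placeholders, not recommendations.

# Minimal sketch of an approved-AI-tool registry with risk levels and training notes.
# Entries, risk tiers, and policy text are illustrative placeholders, not recommendations.
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    risk_level: str         # "safe to use" | "proceed with caution" | "restricted"
    approved_uses: str      # what the tool is cleared for
    required_training: str  # what people must complete before using it

REGISTRY = [
    ApprovedTool("Internal enterprise assistant", "safe to use",
                 "Search, drafting, meeting summaries", "15-minute onboarding module"),
    ApprovedTool("Public chatbot (consumer tier)", "proceed with caution",
                 "Brainstorming with non-confidential data only", "Data-handling refresher"),
    ApprovedTool("Autonomous coding agent", "restricted",
                 "Pilot teams only, with senior-engineer review", "Pilot-program training"),
]

def lookup(name: str):
    """Let employees (or an internal portal) check a tool's status before using it."""
    return next((t for t in REGISTRY if t.name.lower() == name.lower()), None)

print(lookup("Autonomous coding agent"))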


Flex your organizational hierarchy

Hierarchy is one of the main ways skilled leaders control how AI decisions get made—but they treat it as flexible rather than a fixed one-size-fits-all org chart.

University of Michigan Professor Lindy Greer's research1 shows the best leaders flex the hierarchy to fit shifts in tasks and interpersonal dynamics. Teams coordinate and perform better when members report, for example, “there is a clear leader on some tasks, and on other tasks, we operate all as peers.” For stable challenges and environments, hierarchy provides clarity and coordination. But for fast-changing situations, a rigid pecking order creates bottlenecks. So, effective leaders “flatten the hierarchy.” They delegate decision-making and empower people closer to the work, so teams can act and adjust on the fly.

AI makes such flexibility even more critical because it rapidly changes both the speed and the scope of decisions. AI surfaces signals in real time—too fast for a rigid chain of approvals. And these signals often cut across teams, products, or platforms—too broad for a single local manager to handle well.

Try this

Ask three questions before choosing to drive or delegate AI work.

  1. Where is speed more important than control and company-wide consistency? Push those AI decisions down to the front lines. Let local teams act quickly on things like dynamic pricing and customer support responses.
  2. Where have you completed the “idea generation” phase of innovation and moved to “implementation?” Developing AI solutions, as with all creative work, entails first generating, prototyping, and testing many ideas. In this phase, you get more ideas by “flattening” the hierarchy and encouraging variation. A more top-down approach (“activating the hierarchy”) works better for selecting the best ideas and assuring they are implemented consistently.
  3. Where is integration and risk management more important than local speed? Pull those AI decisions up the hierarchy. Top-down decisions may be superior, or essential, in areas including data governance, reorganizations, and cross-platform investments.

If AI is crucial to your strategy, AI leaders ought to be in the C-suite

Where AI sits in your organization's hierarchy determines the attention, resources, prestige, and, yes, power of such leaders and teams. If technology leaders (including those focused on AI) are buried three layers down, AI takes a backseat to other operational and strategic matters. But when they are in the C-Suite, AI decisions and implementations move to the top of the list.

That's why CIOs have spent the past decade clawing their way closer to the top of the org chart. From 2015 to 2023, the Wall Street Journal reported that the percentage of CIOs reporting directly to CEOs jumped from 41% to 63%1. Positioned at the top, IT has the potential to influence investment decisions, product strategy, and the organization's competitive direction.

At Blue Shield of California, in 2022, then-CIO Lisa Davis made a successful case that she ought to report to the CEO. Her increased influence helped her IT team to drive enterprise-wide technology transformations with greater speed, visibility, and measurable business outcomes. Davis explained2: “That would have never happened if IT was sitting in a back office.” She added, “Generative AI has just reinforced the need to have a technology and digital leader that understands business mission and outcomes, and how they are connected together.”

Try this

Review your current organizational design. Where do your top AI leaders reside today, and what influence do they have? If your company's future depends on AI to propel growth, transformation, and strategy, then AI leaders ought to be in the C-suite. And consider using “softer” signals to bolster the power and prestige of top AI leaders, such as moving their offices close to the CEO's, nudging the CEO to talk about your AI leaders, and inviting them to present your AI strategy at board meetings.

Try this

Beware that flattening your organization can make it harder to match AI's speed

Many tech giants are flattening their organizations, cutting management layers (in theory) to shrink bottlenecks and speed up AI-driven decisions. Across Corporate America, layers of management are being stripped out; median spans of control have jumped1 from roughly 1:5 in 2017 to about 1:15 in 2023 and appear to still be widening2.

Whether delayering helps or hurts depends on the kind of work your teams are doing. As Dean of the Crowell School of Business at Biola University and former Chief Talent Officer at General Motors Michael Arena explained to us, “What you really want [to ask] is, what work am I in? Am I heads down or heads up?…if you push the span of control too far…it breaks.” Research by Arena and colleagues3 found that “managers leading larger teams, particularly those with more than seven direct reports…[have a] relentless workload [that] reduces their availability, creating bottlenecks.”

Arena suggests distinguishing between:

  • Heads-down work: Coding, call center tasks, data processing, operating machinery or vehicles. This type of work enables wider spans of control because the work is routine, predictable, and coordination (and even employee supervision) can often be handled by AI.
  • Heads-up work: Product design, strategic planning, code reviews. This type of work depends on collaboration, judgment, and constant alignment. Flattening here risks overloading managers or starving teams of the time they need to sync up their less predictable and more improvisational work.
Try this

Before flattening or delayering, audit how your teams spend their time. Map whether they're in heads-down or heads-up phases of work. If they're heads-down, widen spans of control and let AI absorb more of the execution load. If they're heads-up, keep spans tighter and reinvest AI's gains into coordination, judgment, and relationship-building. And beware that, despite the myths, flatter is often slower rather than faster, and can burn out your best leaders.

Try this

Don't treat “standardized” and “centralized” AI as dirty words

Another way to draw the lines of control is deciding how centralized—or decentralized—your AI efforts should be. One common approach is to centralize AI into a Center of Excellence (CoE). It brings consistency, shared standards, and governance. But if everything needs to flow through the CoE, you risk creating bottlenecks as every team queues up for support. On the other hand, if you lean too much on decentralization, you get the opposite problem: redundant projects, incompatible models, and fragmented security.

One CIO at a large U.S. university described their hybrid model to us. A Center of Excellence within the university's central IT department owns core AI responsibilities including integration, risk, and data infrastructure. But schools and units have the authority to run decentralized pilots for speed and experimentation. He recommended centralizing AI efforts when scale and governance matter most, and decentralizing where agility and learning are more important.

Try this

Map your AI activities against two dimensions: risk and need for experimentation:

  • High risk / low need for experimentation (e.g., data security, compliance, enterprise integrations): Centralize in a CoE for consistency and control.
  • Low risk / high need for experimentation (e.g., prompt testing, local workflow automations): Decentralize to teams so they can move fast and try new things.
Try this

Break down the walls between IT and HR

Another lever of control is redrawing the borders between functions. In 2025, Moderna merged its Human Resources and Technology departments under one leader: the Chief People and Digital Technology Officer. Tracey Franklin (Moderna's former HR chief), who moved to this new position, explained the redesign1: “Merging HR and Digital isn't just about consolidation—it's a deliberate move to close the gap between the people who shape culture and those who build the systems that support it.”

Try this

Don't let HR and IT manage AI in isolation. Work on solutions—whether through merged leadership, cross-functional governance, or joint planning—that bring together the leaders and teams responsible for managing people with those responsible for building and running your information systems.

Try this
“We're urging more companies to start thinking about what their org chart looks like assuming that everyone will have at least 5-10 agents running under them. This requires a very different way of thinking, but will become crucial, especially as the number of agents will start to take over.”
Daan van Rossum,
Founder, Lead with AI

Sketch what your org chart will look like when people manage more AI agents and fewer (if any) humans

As Moderna merged its HR and IT functions under a single leader, it also deployed more than 3,000 GPT agents1 across a variety of roles. In the coming years, it's likely that most employees won't just use AI—they'll lead small fleets of it. McKinsey2 calls this the rise of the agentic organization, where humans and AI agents operate side by side, each contributing judgment, execution, and learning. They've already seen teams of two to five humans supervising “agent factories” of 50 to 100 specialized agents running end-to-end processes. While this is not the reality for most organizations, it's a useful exercise to start sketching out.

Try this

Take inspiration from Daan van Rossum, founder of Lead with AI, which helps executives rethink their organizations for AI. He told us: “We're urging more companies to start thinking about what their org chart looks like assuming that everyone will have at least 5-10 agents running under them. This requires a very different way of thinking, but will become crucial, especially as the number of agents will start to take over. These also end up in discussions about the mindsets that need to change, especially in larger organizations where there's still a huge sense of wanting control and getting them to let go of the idea of very fixed org charts into much more fluid systems.”

Try this
THEME 05

Coordination and Silos

How does work move through—or get stuck—inside your organization?
Good coordination is what prevents an organization from wobbling off course. When it works, handoffs are clean, people know what to do and when to do it, time and money are saved, and everyone moves in the same direction. When it fails, you get all the familiar problems: delays and ordeals for colleagues and customers, duplicated effort, frustration and confusion, and strategies that are lost in translation. AI raises the stakes. Now you have to orchestrate not only people—but people and machines together.

Fix your systems before expecting AI to improve coordination

You can't just bolt AI onto a broken system and expect it to work. If you drop it into a flawed legacy system, the same old coordination failures that hurt productivity, innovation, and well-being—and generally drove people crazy for decades—will persist, or get worse.

In healthcare, for example, work is often fragmented into poorly connected and specialized roles and silos, leading to botched handoffs and breakdowns in information sharing. Such problems are amplified by well-meaning laws, rules, and norms for protecting patient confidentiality. These same issues now make it hard to train AI tools to improve quality and coordination in healthcare systems.

Northwestern Professor Hatim Rahman described a hospital project where his PhD student is studying the use of AI to improve access to medical diagnostics. To train AI models, large numbers of ultrasound images are required, especially images of the same patients taken over time. But decades of efficiency-driven practices in healthcare have taught clinicians to take as few images as possible. Another roadblock is getting written permission from patients to use their scans to train AI models. On top of that, imaging units that have taken different pictures of the same patient over time may lack incentives to cooperate, or have a history of hostility that undermines cooperation. And imaging techs who suspect that management will use the data to evaluate them, increase their workload, or eliminate their jobs may resist supporting AI projects. As a result, collecting the images required to train AI models is taking far longer than healthcare leaders anticipated.

Try this

Before deploying AI, ask: What do we need to change in the organization—not just the tech—to train, use, and trust AI tools? Which rules and routines that once served us well now block progress? Which long-ignored problems might AI finally force us to fix? That may mean breaking down silos, loosening overly restrictive rules, and addressing the employee mistrust that was festering long before AI showed up.

Try this
“If you throw AI at an existing process that has gaps, you’re just scaling [dysfunction]—or bad decisions—at a higher velocity.”
Adam Treitler,
Manager of People Technology and Analytics, Pandora

Map how work is really done before you automate

Even after you tackle the structural problems that constrain AI, you’ll still run into the messy day-to-day reality of how work gets done IN PRACTICE rather than IN THEORY. Most work is propped up by informal fixes, shortcuts, workarounds, rule bending, and judgment calls that aren’t documented—and yet skilled employees use these constantly and can’t do their job without them.

For example:

  • “The system says to decline the refund, but this customer spends $2M a year so we’ll make an exception.”
  • “The system says we can’t do it. I’ll just Slack Priya directly because I know she’ll approve it.”
  • “I’ll skip step 4 so we can hit the deadline.”

If AI models don’t know how people in your organization get work done DESPITE rather than BECAUSE of the formal system, they’ll faithfully scale processes that function only because of hidden workarounds (despite a lousy “official” design), replicating and spreading broken handoffs, redundancies, and poorly written policies and decision rules. “If you throw AI at an existing process that has gaps, you’re just scaling [dysfunction]—or bad decisions—at a higher velocity,” says Adam Treitler, Manager of People Technology and Analytics at Pandora.

At WestRock, internal audit VP Paul McClung first envisioned a one-click AI solution to automate their audit process. But when he mapped1 the workflow on a whiteboard, the gaps (and informal solutions) became obvious: data scattered across systems, undocumented judgment calls, and informal detours and workarounds that glued things together. In the end, his team used AI to automate specific parts of workflows (such as building risk and control matrices and generating client request lists) rather than automating the entire process.

Try this

Before deploying AI, map the workflow to understand how it really works, not just how it’s documented. Capture every step, detour, and workaround. Then decide: is this a process you can automate end-to-end, or one where it’s smarter to optimize and link a few tasks and stop short of full automation?

Try this

Stop thinking in disconnected tasks, start understanding the work

There are two powerful tools for understanding how work actually happens in organizations. First, “journey maps” take an outside-in view: showing how customers or users experience a process across touchpoints. Second, “process maps” take an inside-out view: documenting the actual sequence of tasks, decisions, and handoffs that staff perform.

To best apply AI, you need to understand both. Journey maps1 uncover the human experience: how customers or clients travel through a system—the steps they take, and the points that feel easy, frustrating, or surprising. Process maps2 reveal how work moves through the organization from the perspective of employees who operate the system.

A 2024 study3 of stroke rehabilitation mapped how patient care unfolded from both perspectives for 130 patients. Instead of a single clean sequence, they uncovered nine different variants—including cases where discharge readiness (when clinicians judge a patient stable enough to leave hospital care) was recorded before key functional assessments (tests of how independently a patient can move, speak, or perform daily tasks). The study shows how pairing journey maps with process maps exposes both the patient experience and the messy reality of operations. That clarity is essential before layering in AI or automation.

Try this

Help each function build a process map from the insider’s view, and a journey map from the client or customer’s view. For example, Stanford d.school students4 shadowed airline customers from arrival to boarding and again after landing—discovering that waiting at the luggage carousel was the worst part of the experience. That revelation surprised airline executives, who said they never check luggage, so they hadn’t thought much about that part of the journey.

Then ask: Which steps are structured enough for AI? Could AI agents span the steps before and after to smooth handoffs? Pair the process map with a journey map that captures how customers or colleagues actually experience those same steps. Side by side, the two maps reveal both the operational reality and the human experience.

Try this

Use AI to stress-test the happy path before the handoff

Handoffs are classic coordination choke points. In product-engineering handoffs, the product team often presents the “happy path”: focusing on the ideal user journey where nothing breaks. Engineering counters with all the ways it could go wrong. This same dynamic shows up in other handoffs too—like when IT rolls out a slick new workflow and the security team points out the change just opened six fresh vulnerabilities.

It’s not just that people in different functions have different personalities. It’s that they focus on and are rewarded for different parts of the work. Product teams are rewarded for ambitious plans, like launching flashy new features under aggressive deadlines. Engineering teams are rewarded (and accountable) for making things work and spotting risk. The result? Meetings between product managers and engineers bog down in back-and-forth debates about edge cases and technical landmines.

Teamraderie CEO Michael McCarroll explained to us how they use AI to short-circuit this dynamic for the online learning experiences that his company develops. Before meeting with engineers, product managers at Teamraderie now run their designs through AI to:

  • Generate edge cases and “what-if” scenarios that expose likely failure points.
  • Draft acceptance tests so engineers can edit instead of starting from scratch.

As a result, before the meeting happens, the work has been thoughtfully critiqued and issues addressed. Engineers focus on architecture and scalability—where they add the most value—instead of being the first line of defense and being perceived as pushing back against product managers.

Try this

Try incorporating AI into your product-engineering handoff. Stress-test the happy path and surface failure scenarios before you meet so engineers can spend less time swatting bugs in the meeting and more time building the system right.

Try this

Use AI to get the right people on projects—and find them fast

Staffing projects is another coordination choke point. At one consulting firm, consultants were staffed for projects through a clunky, manual process: keyword searches across résumés, multiple database checks, and rounds of verification. The process was slow and left room for bias. If your résumé didn’t have the “right” words, you wouldn’t appear in search results. And when they were under time pressure, managers defaulted to picking people they knew best and had worked with in the past.

After adopting a Work AI platform (Glean), managers can simply type a natural-language query such as: “Show me three candidates with retail banking experience.” The system searches across the company’s knowledge base—résumés, project records, even work artifacts—and produces a ranked shortlist, complete with summarized skills.

It’s an early step toward what Stanford professors Melissa Valentine and Michael Bernstein call “flash teams”1: project-ready virtual groups that are assembled in real time based on the skills a project needs most, not just who happens to be available or visible.

Try this

Audit how you staff projects and where friction and failure creep in. How much depends on manual searches, outdated systems, or who managers already know? Figure out where and how AI can help you identify and assemble team members—inside and outside your organization—who might otherwise stay invisible.

Try this

Use AI to nudge teams to go broad and narrow at the right times

Once teams are staffed and in place, the questions become: Are they collaborating in the right way for the task at hand? Are they talking about the right things at the right time? Do they know how and when to “go broad” (diverge and discuss a wide range of ideas) and “go narrow” (discuss the few ideas they have converged on implementing) at the right junctures in their innovation journeys?

In their study of 117 remote software development teams on an online platform (www.gigster.com), Katharina Lix, Sameer Srivastava, and Melissa Valentine found1 that high-performing teams adjusted their “discursive diversity” (the range of perspectives, language, and ideas in the conversation) based on the task at hand. When the work called for creativity, they showed higher discursive diversity—drawing on a broad range of language and perspectives. But when the work shifted to execution, their discursive diversity narrowed, keeping conversations more focused and streamlined.

AI isn’t there yet, but early tools are starting to point in this direction—analyzing participation patterns, spotting when conversations stall, and even suggesting adjustments in real time.

Try this

Run your meeting transcript (with permission) through an AI tool and ask it to assess participation. Did a handful of people dominate, limiting the team’s discursive diversity? Did the group circle around the same points instead of converging on a decision? Use those insights to learn more about whether your team is collaborating in the right mode for the task at hand—broad and exploratory for creativity, or narrow and focused for execution.
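
If you want a rough baseline before handing the transcript to an AI tool, a few lines of code can estimate how concentrated the airtime was. This is a minimal sketch, assuming a plain-text transcript where each line starts with the speaker’s name and a colon; the file name and the 40% “dominating” threshold are hypothetical.

  from collections import Counter

  def airtime_shares(transcript_path):
      """Estimate each speaker's share of words in a 'Name: utterance' transcript."""
      word_counts = Counter()
      with open(transcript_path, encoding="utf-8") as f:
          for line in f:
              if ":" not in line:
                  continue  # skip lines without a speaker label
              speaker, utterance = line.split(":", 1)
              word_counts[speaker.strip()] += len(utterance.split())
      total = sum(word_counts.values()) or 1
      return {speaker: count / total for speaker, count in word_counts.most_common()}

  # Hypothetical file; flag anyone holding more than 40% of the airtime.
  for speaker, share in airtime_shares("team_meeting_transcript.txt").items():
      print(f"{speaker}: {share:.0%}" + ("  <- dominating?" if share > 0.4 else ""))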

Try this

Use AI to prevent loudmouths (and other toxic types) from hijacking conversations

Meetings are powerful coordination tools when they enable people to hear from varied colleagues and to develop nuanced understandings of how each other’s work fits together. Meetings also enhance coordination when people with varied expertise blend their knowledge to smooth handoffs and speed work through the organization. But such learning and problem solving is impossible when a single voice hogs the airtime and drowns out quieter (and often wiser) voices. Blabbermouths are often clueless that they are silencing, annoying, and wasting others’ time. Even worse, a single rude and insulting member can destroy cooperation, fuel destructive conflict, and drive members to leave meetings and quit teams.

At Stanford, researchers built a Deliberation Platform1 that uses AI to improve group dynamics in real time. The system tracks participation and nudges people who haven’t spoken in a while. It also calculates a live “toxicity score.” If toxic behavior by a member is detected, the system asks the group to confirm—and if they agree, it mutes the offender’s mic.
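
To make the mechanism concrete, here is a minimal sketch of the kind of facilitation loop such a system might run on each new utterance. The thresholds, function names, and the toxicity scorer are hypothetical placeholders, not the Stanford platform’s actual implementation.

  import time

  NUDGE_AFTER_SECONDS = 300      # nudge anyone silent for 5 minutes (hypothetical threshold)
  TOXICITY_MUTE_THRESHOLD = 0.8  # ask the group to confirm a mute above this score

  def facilitate(utterance, speaker, last_spoke, participants, meeting_start,
                 score_toxicity, ask_group_to_confirm_mute):
      """One pass of a facilitation loop: track airtime, nudge the silent, flag toxicity."""
      now = time.time()
      last_spoke[speaker] = now

      # Nudge participants who have not spoken recently (or at all).
      nudges = [p for p in participants
                if now - last_spoke.get(p, meeting_start) > NUDGE_AFTER_SECONDS]

      # Score the utterance; if it looks toxic, the group (not the system) decides on a mute.
      toxicity = score_toxicity(utterance)  # assumed to return a score between 0 and 1
      mute = toxicity > TOXICITY_MUTE_THRESHOLD and ask_group_to_confirm_mute(speaker)

      return {"nudge": nudges, "toxicity": toxicity, "mute_speaker": mute}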

Try this

Don’t think of AI as just a meeting notetaker. Use it as a facilitator—nudging for equal airtime, surfacing quieter voices, and flagging toxic behavior before it derails the meeting. You might give Stanford’s Online Deliberation Platform a try for meetings with 8 to 15 people. It’s free and easy to use.

Try this

Shut down the notetaker bot free-for-all

AI meeting bots can save time. But just like other kinds of software bloat1, teams that use too many different bots pile on the coordination tax.

When we interviewed Phil Kirschner, CEO of PK Consulting, he described a scene that keeps playing out across his clients: five people join a meeting, and three different AI notetakers show up—perhaps Otter, Fireflies, Fathom. Each spits out a transcript with slightly different wording, timestamps, and action items. Then collaborators may waste time debating which record is “right” rather than moving work forward. And the team ends up with “my notes” and “your notes” instead of a shared record on which everyone can align.

Try this

Treat AI meeting bots the way you treat human participants: give them clear roles and boundaries. Decide upfront who will be the official recorder, who’s flagging action items, and who’s summarizing decisions. Ask the team to agree on the AI notetaker, and provide assurance about handling mistakes. If you are the boss and they can’t agree, make and communicate the decision.

Try this

Cut the bottlenecks out of AI approvals and help your lawyers default to “yes”

Efforts to bring new AI tools into a company often are slowed—or come to a screeching halt—because of a three-headed bottleneck: legal, security, and procurement. A seemingly simple request can drag on for weeks, even months, when staff in such functions are overloaded, overly cautious, don’t communicate their concerns and suggested solutions quickly and clearly, or focus on only their function rather than on what other functions need to do their work.

Zapier tackled1 such bottlenecks by assigning a dedicated product manager to work across procurement, legal, and engineering to fast-track approvals. Shopify went further2: they built a culture where their lawyers start from “yes.” When Shopify VP of Engineering Farhan Thawar pushed to adopt AI back in 2021, he didn’t ask legal if they could do it. He told them they were likely going to do it, and asked: “How can we do it safely?”

It’s a reminder of the power of what Patty McCord, Netflix’s former Chief Talent Officer, called “fully formed adults3.” Netflix built the same kind of culture: no travel policy, no annual reviews, unlimited vacation. The bet was simple: hire people you trust to use judgment and put the company first. Don’t hire “problem people,” and move them out quickly if you get it wrong. That’s what makes it possible to ditch the red tape and slash the coordination tax.

Try this

Sit down with your General Counsel, Chief Information Security Officer, and head of procurement and make AI adoption an explicit strategic priority. Frame the conversation as “How do we do this safely?” Default to “how can we” rather than “why can’t we” (while acknowledging there will be some “no”s). That shift helps turn legal, security, and procurement from gatekeepers into enablers, and reduces one of the biggest coordination taxes in AI adoption.

Try this

Co-develop AI solutions with AI vendors

Too many AI vendors still act like traditional software providers: ship the product, configure a few settings, and walk away.

That model doesn’t work for AI. AI performance isn’t just about the model—it’s about your business: your messy data, undocumented workflows, cultural quirks, and edge cases that no off-the-shelf system can anticipate. The best AI platforms don’t just deliver technology. They act more like consultants:

  • They invest the time to understand how your organization really works.
  • They co-develop solutions with you, not for you.
  • They adapt their products to your workflows, not the other way around.

A recent report1 by researchers at MIT** found that only 5% of custom-built enterprise AI tools ever make it to production, while vendor partnerships that embed and iterate alongside business teams are nearly twice as likely to succeed. Although flaws in this research suggest that the low success rate isn’t representative of typical AI initiatives, the MIT finding that working and iterating with vendors increases the odds of success dovetails with advice and evidence from other sources, including Gartner2 and McKinsey3.

Try this

In the age of AI, don’t accept vendors who just ship software and walk away. Look for partners who embed themselves as true business partners: understanding your data, adapting to your workflows, and co-developing solutions that fit your reality, not some generic template.

Try this

**The MIT report is based on 300 publicly disclosed AI initiatives, structured interviews with representatives from 52 organizations, and survey responses from 153 senior leaders collected across four major industry conferences, which is a modest sample. Critics point out that the report defines “success” quite narrowly: deployment beyond pilot stage with measurable KPIs, including P&L impact, evaluated six months post-pilot. This excludes many other forms of value that AI initiatives might produce (e.g. cost avoidance, efficiency, quality, customer satisfaction, risk reduction, etc.). Also, some argue six months might be an insufficient time window for many AI projects to show impact, especially in large, regulated or slow-moving sectors. Effects may accumulate more slowly.

Try this
“We made sure that we are embedding the solutions rather than bolting on—not another layer, and not asking people to go learn this new UI. It’s the difference between handing somebody a gym membership versus putting a bench press in their backpack.”
Ammen Siingh,
Lead Solutions Engineer, Reddit

Use AI to reduce the toggle tax

As employees start using AI, they often rack up more “toggle tax,” the coordination burdens that pile up as people constantly hop between apps to find the right doc, message, or task. A study1 led by Rohan Narayana Murty and his colleagues, reported in Harvard Business Review, found that workers at Fortune 500 companies switch apps over 1,200 times a day. The authors calculate that, “Over the course of a year, that adds up to five working weeks, or 9% of their annual time at work.”

Multiple leaders we’ve spoken with call this the “swivel chair problem.” Employees don’t want to swivel to another platform just to use AI. At Reddit, that philosophy has guided their approach. “We made sure that we are embedding the solutions rather than bolting on—not another layer, and not asking people to go learn this new UI. It’s the difference between handing somebody a gym membership versus putting a bench press in their backpack,” said Ammen Siingh, a Lead Solutions Engineer at the company.

Try this

Pinpoint where employees start their day (email, browser, CRM) and embed AI there. Stop asking people to go find AI. Embed it where they already work.

Try this

Tame AI sprawl with super agents

Workers are drowning in apps, dashboards, and digital tools, leading to what UC Santa Barbara Professor Paul Leonardi calls “digital exhaustion.” Over the past year, as generative AI tools have become mainstream in organizations, an Asana survey1 of 13,000 knowledge workers found that digital exhaustion has continued to increase. One of the main culprits is that workers need to grapple with a growing list of disconnected AI copilots and tools, each with its own login, quirks, and learning curves.

At one Fortune 20 retailer, a senior executive told us their solution is “super agents.” A super agent is a single front door to many smaller AI agents behind the scenes. Instead of juggling separate bots for individual tasks like submitting PTO requests, interacting with HR policies, or submitting a new vendor contract, employees can interact with one single agent that knows who they are, what they do, and what information matters most.

Truly autonomous agents will be able to do this on their own. Rather than a collection of narrow task bots, they’ll operate as adaptive collaborators: handling the logistical tedium, remembering context across workflows, and routing complex requests to the right sub-agents automatically.
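
At its simplest, a super agent is a router. The sketch below shows one naive way the “single front door” could dispatch requests to specialized sub-agents; the sub-agents, keywords, and routing rule are hypothetical stand-ins for whatever intent detection a real platform would use.

  def handle_pto(request, employee):
      return f"Routing to PTO agent for {employee}: {request}"

  def handle_hr_policy(request, employee):
      return f"Routing to HR policy agent for {employee}: {request}"

  def handle_vendor(request, employee):
      return f"Routing to vendor contract agent for {employee}: {request}"

  # Keyword matching is a crude stand-in for real intent classification.
  SUB_AGENTS = {
      ("pto", "vacation", "time off"): handle_pto,
      ("policy", "benefits", "hr"): handle_hr_policy,
      ("vendor", "contract", "procurement"): handle_vendor,
  }

  def super_agent(request, employee):
      """Single front door: pick the sub-agent whose keywords match the request."""
      text = request.lower()
      for keywords, agent in SUB_AGENTS.items():
          if any(keyword in text for keyword in keywords):
              return agent(request, employee)
      return "No matching sub-agent; escalating to a human."

  print(super_agent("I need to submit a PTO request for next Friday", "Priya"))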

Try this

Don’t roll out a hodgepodge of disconnected AI apps and agents. Think about how employees will access them as they work. If the experience is overwhelming, AI use will stall, efficiency will suffer, people will make mistakes, and digital exhaustion will spread. Instead, consider a unifying layer—an integrated platform and autonomous agents—that can understand who’s using AI, what they’re trying to accomplish, and route them to the right tool automatically.

Try this

Make AI part of the work, not a side project

Treating AI as a side experiment is a surefire way to run up coordination taxes. Teams end up juggling two parallel workflows: the “real one” to do their jobs and the “AI experiment” that heaps on additional burdens and yet provides little value (at least in the short-term).

At Zendesk, Nan Guo, Senior VP of Engineering, told us how they avoid the side-project trap by embedding AI directly into their most common coordination rhythm: the sprint. In agile development, a sprint is a short, time-boxed period—usually two weeks—when teams plan, build, and ship work. Instead of asking engineers to go hunt for “AI use cases,” Zendesk leaders insert AI into their usual scrum cadence. As Guo explained: “The Scrum is our standard way of doing day-to-day work. We picked one sprint in October so people actually applied GenAI in their real workflows. That way we could see the impact, not just something additional on the side.”

Try this

Identify your team’s core coordination cycle—sprints, project reviews, customer support queues, whatever—and embed AI there. When AI becomes part of the real flow of work, you will cut the coordination costs and continuously learn how to use it to improve productivity, decision-making, and collaboration—and in the places that matter most.

Try this
“People will use AI tools to offload, to decrease their collaborative overloading, [but] increase other people’s.”
Michael Arena,
Dean of the Crowell School of Business, Biola University

Don’t let bots pass the friction from you to others

AI is supposed to take work off people’s plates. But too often, it just shifts the burden from one person to another.

We experienced this when scheduling interviews for this project. The invite came from “Kiran,” an AI assistant. Kiran’s job was to make scheduling easy. Instead, “she” barraged Bob with reminders—even when he had an out-of-office message. At first, he thought Kiran was an unusually rude human assistant. Then he realized Kiran was a bot.

Kiran was optimized for the sender—getting a meeting booked—not for us as the recipients. Michael Arena, Dean of the Crowell School of Business at Biola University and former Chief Talent Officer at General Motors, warned, “People will use AI tools to offload, to decrease their collaborative overloading, [but] increase other people’s.” That means one team’s efficiency gain might become another’s hidden tax.

And that hidden tax isn’t just about wasted time—it’s also about trust. As researchers Ian McCarthy, Timothy Hannigan, and André Spicer have warned1, the careless use of AI can produce what they call “botshit”: made-up or misaligned content that people use uncritically. When bots act without oversight, they don’t just spread misinformation—they spread friction. A scheduling bot that spams your inbox or a chatbot that serves up bad data may look efficient from one angle, but it’s quietly pushing cognitive and reputational costs onto others.

The costs can pile up fast, and they often land on you. Research2 by Paul Leonardi shows that when bots screw up, the blame doesn’t land on them—it lands on you. People don’t hold the AI accountable. They hold the human who unleashed it.

Try this

Before deploying an AI agent, ask: Does this bot reduce work for the person receiving the message? Or does it just shift the burden onto them? In an effort to make YOUR work more efficient, are you unwittingly weaponizing friction in ways that—throughout your organization and network—heap more rather than less work on your colleagues and customers?

Try this
“We didn’t want people finding documents they shouldn’t have access to… you can only find what you have access to.”
Tadeu Faedrich,
Senior Engineering Manager, Booking.com

Use AI to break the bad silos, and keep the good ones

AI can help dismantle the silos that slow teams and individuals down. But not every silo is bad. Some exist for good reason, such as protecting HR files, legal documents, or payroll data. And in many cases where people do creative work, it helps to protect them from distraction, interruption, and bad advice from people in other parts of the company—such as Disney’s “Imagineering1” subsidiary and Lockheed’s “Skunkworks2.” The challenge is designing AI that can tell the difference between silos that protect necessary privacy, confidentiality, and creative focus versus silos that unnecessarily slow work down, undermine quality, and frustrate employees and customers.

That’s why, for example, rules about “permission layers” matter so much: who can and cannot access which information, and when. The best AI tools enforce such rules relentlessly. Product managers in most companies should be able to instantly pull design files across teams. But they should be locked out of HR records such as performance reviews or salary data.

At Booking.com, HR teams were “extremely nervous” about AI search for exactly this reason. As Tadeu Faedrich, Senior Engineering Manager at Booking.com, told us, “We didn’t want people finding documents they shouldn’t have access to…you can only find what you have access to.”

In the end, they chose a platform that respected the same rules as their internal systems: if you couldn’t open a file yesterday, you wouldn’t suddenly see it in search today.

Try this

When evaluating AI tools, stress-test how they handle permissions as hard as you stress-test features. Many organizations are running on a mess of outdated or overly broad access rules. A good platform3 should flag where access is too open (for example, files set to “anyone with the link” or documents shared with large catch-all groups). It should also alert admins when risk increases, such as when an employee steadily accumulates more permissions to access sensitive documents than their role requires.
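
To see what that stress test looks like in practice, here is a minimal sketch over a hypothetical export of sharing settings; the field names, group names, and sample files are assumptions, not any particular platform’s schema.

  CATCH_ALL_GROUPS = {"all-employees", "everyone", "company-wide"}  # hypothetical group names

  files = [  # hypothetical export of sharing settings
      {"name": "Q3 salary bands.xlsx", "link_sharing": "anyone_with_link", "groups": []},
      {"name": "Design system components.fig", "link_sharing": "restricted", "groups": ["all-employees"]},
      {"name": "Performance review - J. Doe.docx", "link_sharing": "restricted", "groups": ["hr-team"]},
  ]

  def audit(files):
      """Flag files whose access rules are broader than their contents probably warrant."""
      findings = []
      for f in files:
          if f["link_sharing"] == "anyone_with_link":
              findings.append(f'{f["name"]}: open to anyone with the link')
          for group in f["groups"]:
              if group in CATCH_ALL_GROUPS:
                  findings.append(f'{f["name"]}: shared with catch-all group "{group}"')
      return findings

  for finding in audit(files):
      print(finding)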

Try this

Measure AI’s hidden coordination costs, not just its output

AI tools can make people feel like they’re moving faster: drafting code, writing copy, or generating analyses on their own. But speed at the individual level can create hidden costs for the team.

In one randomized controlled study1, 16 experienced open-source developers worked on 246 tasks; the tasks they completed with AI assistants took 19% LONGER than the tasks they completed without AI. The AI assistants created all kinds of additional obstacles for developers. Fewer than half of the AI’s suggestions were usable. Developers spent extra time reviewing, rewriting, and cleaning up code. The tools also stumbled on context-heavy problems like working in large, mature codebases with over a million lines of code. Developers reported that AI often made “weird changes” in unrelated parts of the code, missed backward-compatibility quirks, or failed to handle subtle dependencies.

Try this

Don’t measure only how much AI produces. Measure how much additional work it heaps on people. For software developers, this means tracking bugs introduced by AI, cleanup hours logged, and integration delays. In other roles, it takes the form of campaign copy that looks polished but drifts off-brand, contracts that miss compliance requirements, or financial reports that don’t align with auditing standards.

Then build guardrails (structured code reviews, automated integration tests, and team-level checkpoints) so AI accelerates real progress instead of just kicking costs downstream. And, as Anthropic recommends2, don’t hesitate to ask a generative AI tool, “why are you doing this? Try something simpler.” It explains that Claude “tends toward more complex solutions by default but responds well to requests for simpler approaches,” which will reduce the coordination costs.
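
As a starting point, the hidden costs can be tracked with something as simple as a running ledger per team. This is a minimal sketch; the metric names and sample numbers are hypothetical placeholders for whatever your issue tracker and time logs actually capture.

  from dataclasses import dataclass

  @dataclass
  class AIReworkLedger:
      """Running tally of the cleanup work that AI output creates for a team."""
      bugs_from_ai: int = 0
      cleanup_hours: float = 0.0
      integration_delay_days: float = 0.0

      def log(self, bugs=0, cleanup_hours=0.0, delay_days=0.0):
          self.bugs_from_ai += bugs
          self.cleanup_hours += cleanup_hours
          self.integration_delay_days += delay_days

      def summary(self, hours_saved_by_ai):
          net = hours_saved_by_ai - self.cleanup_hours
          return (f"Bugs traced to AI: {self.bugs_from_ai} | "
                  f"Cleanup: {self.cleanup_hours:.1f}h | "
                  f"Integration delay: {self.integration_delay_days:.1f} days | "
                  f"Net hours saved: {net:+.1f}h")

  ledger = AIReworkLedger()
  ledger.log(bugs=3, cleanup_hours=6.5, delay_days=1)  # hypothetical sprint data
  print(ledger.summary(hours_saved_by_ai=10))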

Try this
THEME 06

Hiring, Promoting, and Firing

Who gets jobs? Who stays? Who goes? How can AI improve the process?
AI is beginning to shape every major career decision. It is already influencing who gets hired, who advances, and who’s let go. And the impact is likely to keep getting stronger. In September of 2025, Walmart CEO Doug McMillon said, “Maybe there’s a job in the world that AI won’t change, but I haven’t thought of it.” If implemented intentionally and based on evidence rather than untested fantasies, AI can reduce bias, surface overlooked talent, enable employees to do better and more humane work, and help organizations make fairer and faster decisions. If not, it can hard-code old inequities into new systems, undermine performance, and even automate thoughtless layoffs at scale.

Don’t cut jobs until the gains you believe AI will create are demonstrated

Some executives see a flashy AI demo, read a press report about a single leader or company seeing success with AI, or scribble a few numbers on a napkin, and start plotting headcount cuts. Some even brag about the labor costs these half-baked and untested ideas will save. It’s a quick way to spook employees, erode trust, and wind up with fewer people to implement the same broken processes.

One Vice President at a Fortune 500 technology organization explained to us that his company takes a different approach. Their rule: no role changes until the AI productivity gains are measurable in the real world. “We don’t do any of this work until we have measurable productivity,” he explained. He added, as with most innovations and pilot programs in organizations, “about 80% of the things that we start on don’t get anywhere near the productivity gains that were imagined at the beginning.” And “that’s really not too surprising with technology” and so when an AI-based innovation doesn’t work out as they hoped, they focus on “shared accountability” and working together to solve the problem, and on getting “away from finger-pointing.”

Policies like this send a clear signal to employees: changes in jobs here aren’t based on hype, fantasy, or cool-sounding but untested ideas. We won’t change or eliminate your work until we’ve shown the new way is better. This “test your ideas before you change” approach also helps companies redesign jobs and systems to integrate AI-based solutions—without sacrificing productivity, innovation, or the quality of employee and customer experiences. It prevents organizations from becoming littered with ill-conceived changes that satisfy executives’ whims but waste money and make people miserable.

Try this

Before making workforce cuts in the name of AI, gather hard evidence about what works and what doesn’t. Run pilots, track the impact, and only redesign or eliminate roles once the payoff is proven. (We know this sounds obvious, but the best leaders and companies are often masters of the obvious1.)

Try this

Use AI to root out bias throughout the employee lifecycle

Every hiring, promotion, and review process introduces bias—because these processes are designed and completed by imperfect human decision-makers1 who routinely ignore, twist, misremember, and forget key information. John Lilly, who sits on the board of organizations including Figma, Duolingo, and Code for America, described to us how, at one company he is working with, interviews could sometimes be skewed as each interviewer took notes in a different style, making it harder to compare candidates fairly. To counter this, they’re piloting an AI tool that captures notes in a consistent template, reducing variation and leveling the field.

Bias infects performance reviews. A 2022 study (by AI human resources firm Textio) of 25,000 performance reviews2 found women were 22% more likely than men to receive personality-based feedback. AI models can help standardize inputs and flag loaded language. Yet AI can magnify and entrench bias if the models aren’t audited and governed. A 2025 study3 of more than 300,000 job postings found that “most models tend to favor men, especially for higher-wage roles.” Large language models tended to recommend (equally qualified) men rather than women for interview call-backs in male-dominated, higher-paying fields like engineering and finance. And LLMs tended to recommend women rather than men for lower paying female-dominated jobs such as in personal care and community service.

Try this

Audit your talent flow end to end, including sourcing, screening, interviews, reviews, and promotions. At each step, ask three questions: What inputs shape the decision? Who decides? Where can subjectivity creep in? That’s where bias hides.

Try this
“Using AI as a helper removes friction—including writer’s block—and ensures that 100% of Zapier team members benefit from this essential cultural touch point.”
Brandon Sammut,
Chief People Officer, Zapier

Use AI to create “How to work with me” manuals

Onboarding is a make-or-break moment of the employee lifecycle. A BambooHR survey1 found organizations have just 44 days to convince a new hire to stick around. And when badly onboarded employees do stay2, they are less effective at their work, less well-equipped to work with colleagues, less satisfied with their jobs, and less committed to their companies.

At Zapier, AI helps newcomers make the best of that time. As Chief People Officer Brandon Sammut has shared3, new hires answer a few questions and AI generates a personalized “How to Work With Me” ReadMe, a short user manual4 for working with each employee. The draft is written in the employee’s own voice; employees edit it and share it in Slack so colleagues can quickly learn how best to collaborate with them. Sammut’s People Team estimates that this AI-supported workflow saves one to two hours per new employee. As importantly, he told us, “using AI as a helper removes friction—including writer’s block—and ensures that 100% of Zapier team members benefit from this essential cultural touch point.”

Try this

Give new hires simple prompts or agents to create “How to Work With Me” guides. These auto-generated ReadMes surface working styles and hidden rules, helping teams build trust and sidestep missteps from day one. Zapier’s “AI Team Readme Creator5” or the “Manual of Me6” might help you do so.

Try this

Use AI to help new hires understand how work really gets done

Most onboarding fails because the real rules of work aren’t in the handbook. They live in hidden systems, unspoken norms, and backchannel networks. New hires who don’t learn them early risk chasing the wrong priorities, duplicating work, or burning out before they find their footing.

At Glean, when Rebecca Hinds was a newcomer, she used AI to surface that hidden context by asking five simple prompts1:

  1. How does my role connect to the company’s top-line objectives?
  2. What do the highest performers here do differently?
  3. What do these acronyms, buzzwords, and phrases mean?
  4. Who actually influences projects related to my role?
  5. Who here shares my personal interests or life stage?

When Hinds asked the Glean Assistant the second question, the answer was: coachable. The highest-performers weren’t just skilled. They chased feedback, adapted fast, and treated every assignment as an opportunity to improve. That shifted how she approached her own ramp-up, focusing less on proving herself and more on learning fast.

Try this

Audit your onboarding process. Ask whether new hires can answer the five critical questions above within their first week. Where they can’t, make sure you’ve centralized the right unstructured data—Slack threads, project docs, meeting notes—so AI can surface the answers instead of leaving new hires to guess.

Try this

Ask employees to set at least one AI growth goal

Most companies talk about AI adoption at the organizational level—rollouts, pilots, big initiatives—but stop short of making it personal. At Workday, VP of People Analytics Phil Willburn described a different approach: every employee is asked to set a quarterly goal to build their AI skills. That simple expectation shifted AI from an abstract company initiative into an individual commitment. It created accountability and normalized experimentation. Plus, each employee then received personalized insights into their own AI usage, so they could better connect their goal with indicators of skill building.

Try this

Make AI part of employee development plans. Ask every employee to set a specific goal for learning or applying AI, so building these skills becomes a visible and shared commitment.

Try this

Let AI brag for you

One of the hardest parts of performance reviews and promotions is remembering—or celebrating—your own wins. At Zapier, Full Stack Engineer Jordan Raleigh and his team1 used AI to solve that problem with a “brag doc” system.

When someone posts a win in Slack and adds an emoji, AI grabs it, generates a short summary, and saves it to a personal brag database. Over time, each person builds a ready-made record of achievements. “We can use these docs as reference during the company’s goal cycles, promotion processes, and performance reviews. It’s even been helpful to tackle imposter syndrome,” says Raleigh.
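
The mechanics behind a system like this don’t need to be fancy. Below is a minimal, framework-free sketch of the pattern: watch for a chosen emoji reaction, summarize the win (here with a trivial placeholder instead of a real model call), and append it to a personal brag file. The emoji name, message format, and summarizer are all hypothetical, not Zapier’s actual implementation.

  import datetime
  import json
  import pathlib

  BRAG_EMOJI = "tada"  # hypothetical trigger emoji

  def summarize(text):
      """Placeholder for an LLM call; here we just keep the first line, trimmed."""
      return text.strip().splitlines()[0][:140]

  def log_brag(message, author):
      """Append an emoji-flagged win to the author's personal brag doc."""
      if BRAG_EMOJI not in message.get("reactions", []):
          return
      entry = {"date": datetime.date.today().isoformat(), "summary": summarize(message["text"])}
      path = pathlib.Path(f"brag_docs/{author}.json")
      path.parent.mkdir(exist_ok=True)
      entries = json.loads(path.read_text()) if path.exists() else []
      entries.append(entry)
      path.write_text(json.dumps(entries, indent=2))

  log_brag({"text": "Shipped the new onboarding flow a week early!", "reactions": ["tada"]}, "jordan")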

Try this

Don’t burden employees by asking them to dig up every win and kudo at review time. Instead, use AI to log each employee’s accomplishments as they happen, so proof of impact is always up to date and easy to find.

Try this
THEME 07

Learning and Development

How are skills taught, learned, updated, and unlearned—and how should they be?
Traditional learning and development has been episodic and standardized: annual training modules that feel like box-checking exercises, annual performance reviews that bleed time and suffer from recency bias, and the occasional “upskilling” course that feels bolted on. AI has the potential to flip that model and turn learning and development into something continuous, personalized, and wired into the work people actually do.
“The next phase of AI transformation will belong to organizations that pair machine intelligence with human discernment, training teams to slow down, question assumptions, and use AI as a collaborator, not a crutch.”
Erica Dhawan,
AI expert and author, Digital Body Language

Use AI as a thinking partner, not a substitute for thinking

AI can make you feel more productive while quietly making you dumber.

That’s the upshot of a study of healthcare workers1 that Wharton Professor Adam Grant told us about: when Polish endoscopists started using AI to help detect cancer during colonoscopies, their performance on non-AI procedures got worse. In other words, the more these healthcare workers relied on AI, the less sharp they became when the AI wasn’t there. Grant’s takeaway: “whether you get cognitive debt or cognitive dividend depends on whether you use AI as a crutch or a learning tool.”

Erica Dhawan, AI expert and author of Digital Body Language and the forthcoming book Use Your Brain, sees the same pattern at the organizational level. As she put it to us, “The smartest leaders approach AI transformation not as a tech upgrade but as a judgment upgrade. The next phase of AI transformation will belong to organizations that pair machine intelligence with human discernment, training teams to slow down, question assumptions, and use AI as a collaborator, not a crutch.”

If you treat AI as a substitute for thinking and professional judgment—reflexively leaning on its outputs—you rack up cognitive debt: slower execution in the long run, eroded skills, and misplaced confidence. If you treat AI as a learning tool and collaborator—one that helps you critique, compare, and interrogate—the same system rewards you with a cognitive dividend. It sharpens your judgment and expands your skills, even if your work sometimes takes longer in the short term.

Try this

Consider running a “Human-in-the-Loop Review” before AI-driven decisions. Dhawan has seen how a little friction can go a long way. She explained to us, “At one global manufacturer I studied, teams now run a ‘Human-in-the-Loop Review’ before every AI-driven decision, asking: ‘What does the data miss?’ and ‘What would we decide if the AI were wrong?’ This simple discipline has reduced costly missteps and rebuilt trust across silos.”

Try this
“You spend half the meeting [with a mentee] just trying to figure out what they’re talking about…they’re lacking metacognitive skills to plan, diagnose, and understand the problem they’re tackling.”
Liz Gerber,
Professor, Northwestern University

Use AI pre-coaching to enrich mentorship

Mentorship is powered by the human touch. But AI can free mentors to spend their time where their expertise matters most.

As Northwestern professor Liz Gerber explained to us, one of the biggest skills novices lack is metacognition: the ability to step back and ask, “Am I even working on the right problem?” Novices who lack this skill default to safe, surface-level tasks such as polishing slides, coding another feature, or tweaking formatting. The harder, riskier work—clarifying assumptions, diagnosing risks, and stress-testing strategy—goes untouched. “You spend half the meeting [with a mentee] just trying to figure out what they’re talking about…they’re lacking metacognitive skills to plan, diagnose, and understand the problem they’re tackling.”

Her PhD students experimented1 with AI “pre-coaching” to fix this problem, working with coaches who mentored novice entrepreneurs. Here’s how it worked. Before the coaches had one-on-one meetings, the novices ran through an AI-guided prep: articulating goals, surfacing risks, and reflecting on progress. The system didn’t just collect updates. It used a library of common pitfalls to uncover the novices’ blind spots (like skipping validation or ignoring distribution) and asked the novices targeted follow-ups. One first-time founder admitted, “I felt called out—but in a good way,” after realizing he’d been hiding in the comfort zone of coding instead of facing the real blocker: the lack of a distribution plan for his start-up.

That prep transformed the subsequent one-on-ones between coaches and novice entrepreneurs. Instead of wasting half the meeting clarifying the problem, conversations jumped straight to higher-value ground: critiquing prototypes, designing validation experiments, debating strategy, or addressing emotional blockers including perfectionism and fear of failure. Mentors in the study reported feeling more focused and impactful; novices said the AI’s nudges felt sharper and more “real” than static frameworks such as Lean Canvas2.

Try this

Pilot an AI “triage” solution in your mentoring or coaching programs. Have employees complete a short AI-guided prep before one-on-ones, summarizing goals, blockers, and progress. The possible result: less time circling the problem, more time solving it.

Try this

Have the juniors mentor the seniors

Just as AI is reshaping mentorship, it’s also upending apprenticeship. Traditionally, workplace learning followed an apprenticeship model: juniors learned the ropes from seniors. But with AI, that script flips. Many junior employees enter with more AI fluency—they’ve picked it up in college or side projects. Seniors still bring the judgment and scar tissue from years of experience, but when it comes to AI, reverse mentorship is often the answer.

University of California at Santa Barbara Professor Matt Beane calls this “inverted apprenticeship.” On paper, it sounds promising. In practice, his research shows that 90% of the time it fails. Seniors learn just enough to scrape by, while juniors get stuck with the grunt work and eventually burn out.

Yet Beane and his collaborator Callen Anthony (NYU) found1 a notable exception: what they call seeking. In seeking, senior experts don’t delegate AI tasks downward. They dig in and work side-by-side with their junior colleagues. They share the early stumbles, the trial-and-error, and the messy problem-solving. That joint struggle is what builds lasting expertise for both groups—and stronger bonds between novices and experts.

Try this

Be wary of making juniors the “AI help desk” for seniors. Instead, create “seeking” opportunities. Pair senior experts with junior employees with AI expertise and give them a real problem to crack together. Make sure seniors are in the struggle early, experimenting and learning alongside their juniors.

Try this

Question the hype
Growing entry-level jobs might be better than cutting them

The doom narrative says AI will wipe out entry-level roles, and recent research1 shows that entry-level jobs are indeed being cut. Shopify is betting on the opposite strategy. After successfully bringing in 25 engineering interns, CEO Tobi Lütke asked Head of Engineering Farhan Thawar how big they could scale the program. Thawar reports, “I originally said we could support 75 interns. Then I took it back. I updated my answer to 1,0002.”

Thawar has explained3 that in the post-LLM era, interns bring something else: they’re “AI centaurs.” They experiment fearlessly, chase shortcuts, and don’t waste time reinventing the wheel. “I want them to be lazy and use the latest tooling,” Thawar says. “We saw this happen in mobile—interns were mobile-native. Now they’re AI-native.”

At Shopify, leaders believe they need interns with fresh eyes who don’t have old habits to unlearn, and that their curiosity can drag an entire team into the future. But this strategy only works, we should add, in high-trust cultures where young talent gets real problems to solve.

Try this

Build a culture where entry-level employees aren’t sidelined, but trusted to solve real problems with AI. Encourage them to be “lazy” in the best sense: using tools to find smarter, faster ways to work. Pair them with senior mentors, give them visible projects, and let their AI-native instincts ripple across the org.

Try this
“Senior devs give those junior devs an impossible task per unit time…the junior devs come screaming across the finish line in 48 hours with code that is functional. And then the senior dev says, ‘There are three problems—go find them. I’m not going to tell you what they are.’”
Matt Beane,
Professor, University of California, Santa Barbara

Let junior developers build buggy code, then ask them to find the bugs

The apprenticeship model isn’t just inverted in some organizations. It’s morphed into a two-speed system—where juniors race ahead with AI, then seniors force them to slow down and wrestle with the bugs.

Traditionally, junior developers learned slowly: sitting in code reviews, pair programming, and watching seniors catch mistakes. With AI-assisted “vibe coding,” developers now produce polished-looking code in hours. It compiles, it runs, it looks fine—until it collapses under real-world complexity.

Some teams embrace a two-speed model. As University of California at Santa Barbara Professor Matt Beane described to us, they let “junior devs” sprint with AI, then force them to hit the brakes. “Senior devs give those junior devs an impossible task per unit time…the junior devs come screaming across the finish line in 48 hours with code that is functional. And then the senior dev says, ‘There are three problems—go find them. I’m not going to tell you what they are.’”

Beane says this works because juniors learn faster. In just a few days, they’ve done the building and now face the kinds of tricky bugs and design flaws that would normally take months to surface. Instead of just watching seniors catch mistakes, they have to find and fix the problems themselves. Beane added, “They get in four days what that entire team normally could have done in, say, three weeks…and those junior people are learning way faster about really high-order subtle stuff than they ordinarily would have.”

Try this

Let junior developers use AI to build quickly but then slow them down on purpose. Don’t tell them what’s broken. Make them find the problems. Prompts like “There are three things wrong” can shift the learning from passive review to active discovery.

Try this
“Break your complex problems into ten smaller items and then let AI handle six of them. This isn’t quite the way many engineers naturally think, but once you start seeing the benefits, you start developing new habits for breaking things down differently. One more thing to consider during such a process is whether humans should augment AI, or AI should augment humans.”
Howie Xu,
Chief AI and Innovation Officer at Gen

Coach people to use AI to break big challenges into smaller pieces

Many engineers are trained to think end-to-end: take a big problem, own it, and ship a solution. But with AI in the mix, the skill of decomposition—breaking a big challenge into smaller pieces to figure out which parts can be handled by machines—becomes more valuable.

Research in engineering design has shown1 that experts tend to use “breadth-first” decomposition (outlining all the major pieces of a problem before drilling into detail). Novices, on the other hand, dive deep into one part and try to perfect it end-to-end. The experts’ breadth-first habit appears to be becoming a key part of AI literacy. “Break your complex problems into ten smaller items and then let AI handle six of them. This isn’t quite the way many engineers naturally think, but once you start seeing the benefits, you start developing new habits for breaking things down differently. One more thing to consider during such a process is whether humans should augment AI, or AI should augment humans,” says Howie Xu, Chief AI and Innovation Officer at Gen.

When you break a problem apart, you’re not just managing complexity. You’re drawing the boundary between human judgment and machine execution. Every decomposition is a quiet negotiation of power.

Try this

When scoping a project, have engineers break it down into discrete modules before writing any code. Try applying the “Rule of Halves2”: ask engineers to select half of those modules to hand off to AI. And use a similar strategy for other kinds of work too. A marketing campaign can be broken into creative concepts, copy, visuals, audience targeting, and analytics—half of which AI can accelerate. Or take something as personal as planning a wedding: seating charts, invitations, menu options, music playlists—many of those subtasks can be sped up or automated with AI.

As you decompose the work, take Howie Xu’s (Chief AI and Innovation Officer at Gen) advice and be deliberate about who’s augmenting whom. If humans augment AI, put more of your focus on training the machine, improving its performance through richer labels, better feedback loops, and careful curation of edge cases. But if AI is augmenting humans, put more of your focus on training your people: helping them build the skills, pattern recognition, and judgment to know when to trust the machine, when to question it, and when to override it.

Try this

Use AI to rehearse difficult conversations

AI isn’t just reshaping technical learning. It’s becoming a coach for the interpersonal side of work, too. Several leaders, including Kyle Forrest, who leads Deloitte’s “future of HR” efforts, described using AI to prepare for difficult conversations. Forrest told us he sometimes role-plays those moments with an AI assistant, testing how to frame tough messages before speaking with a colleague or senior leader.

At Whoop1, Head of Core Product Hilary Gridley gave her team a “30-day GPT challenge.” On Day 29, employees are encouraged to use AI to rehearse difficult conversations. They’re advised to pose a prompt such as “I need to talk to my teammate about missed deadlines without making them defensive. What’s a calm way to start that conversation?” Or, “How might they respond, and how can I handle each reaction with empathy?” [Check out Gridley’s free “30 Days of GPT Learning Guide2” if you want to try it with your team.]

A 2023 exploratory study3 by Anne Hsu and Divya Chaudhary tested a similar approach. Hsu and Chaudhary had 13 users try an AI-based web application they developed for conflict resolution. Each typed what they’d say in workplace conflict scenarios—some real, some hypothetical—and the AI flagged language likely to trigger defensiveness (e.g., blame, judgment, exaggeration). It suggested more neutral, needs-based phrasing instead.

Users in the study reported they trusted the AI’s guidance, said it helped them rethink their wording, and felt calmer and more confident about having the actual conversation. Many used it to rehearse how they would handle real conflicts in a safe, low-stakes way, describing it as both stress-relieving and a skill-building exercise.

Try this

Use AI as a safe practice partner for high-stakes conversations. It can flag landmines, suggest calmer wording, and give employees confidence before they walk into the real thing.

Try this

Use AI to practice saying “no” (without sounding like a jerk)

One of the hardest soft skills is learning to say “no.” For “givers,” as Adam Grant calls them, it can feel nearly impossible. They keep saying yes until they’re exhausted, resentful, and quietly underperforming.

Yet the best organizations need people who can guard their time without wrecking their relationships.

As part of that 30-day GPT challenge1 led by Hilary Gridley, Whoop’s Head of Core Product, employees use AI to practice saying no, with prompts like:

  • “What’s a tactful way to decline this while still being helpful?”
  • “What questions could I ask to understand if this is actually urgent?”
  • “What might this person feel when they read my response—and how could I soften that?”
Try this

Let AI help you practice saying no. Use it to draft firm but respectful responses, so employees can draw boundaries without torching relationships.

Try this

Treat AI learning like a gardener, not a carpenter

U.C. Berkeley professor Alison Gopnik contrasts1 two mindsets: carpenters try to blueprint and control every outcome, while gardeners focus on creating the conditions for growth. As McKinsey consultants Bob Sternfels and Yuval Atsmon argue, “The most successful managers focus on identifying the sprouts—employees, teams, or departments that are experimenting with new technologies and showing promising early results. They ask, ‘Where is innovation already happening? Who is solving problems in surprisingly effective ways?’” We heard a similar argument from multiple leaders who told us AI spreads more like a garden—through peer networks, visible examples, and small wins. They argued that their job as a leader wasn’t to micromanage the blueprint for adoption step by step, but to make the soil fertile.

Try this

Don’t just “carpenter” AI adoption with mandates and blueprints. Garden it. Create visible, informal spaces—Slack channels, lunch-and-learns, internal newsletters—where employees can demo real AI use cases, swap ideas, and spark experiments their peers can replicate. Doing so can create healthy blends of formal and informal adoption.

Try this

Hold up a mirror
Show people someone like them using AI

Several of the leaders we interviewed argued that one-size-fits-all training isn’t as effective as seeing AI used by someone who has a similar job to yours and faces similar challenges. This observation dovetails with research on interpersonal behavior1 and the spread of social movements2 showing that new ideas spread faster when people see them modeled by others they view as similar to themselves—for example, people who went to the same school, are the same gender, race, or age, or, in this case, have a similar role or title.

At Indeed, Megan Myers, Director of Brand and Advertising, hosts3 “power user panels” where peers do live demos of how they weave AI into their work. At Deloitte, the firm created “Savvy User Profiles4”—brief write-ups that spotlight employees who are successfully integrating the company’s GenAI assistant into their daily work. Each profile highlights the problems they solve, the steps they take, and the hours they save—which helps colleagues see what effective AI use looks like in practice. And at Zendesk, Nan Guo, Senior VP of Engineering, told us how her team ran a six-week “How Did I Do It?” campaign where engineers posted everyday wins in a shared Slack channel and leaders picked winners. Campaigns like this help make AI solutions concrete rather than abstract—something that colleagues a lot like you use to do better work.

Try this

Build an “AI adoption mirror.” Find your superusers in each role. Document their workflows with screenshots, before/after comparisons, and outcomes achieved. Publish those profiles in Slack, newsletters, or team meetings so peers can see how people like them use AI well. Behavioral research5 shows that people are more likely to adopt new habits when they see someone similar already doing it first. The more concrete, the better. And have leaders amplify these stories so employees know they aren’t side hustles—they’re valued contributions to how the organization works.

Try this

Run hack-a-thons, agent-a-thons, and prompt-a-thons that reward “quiet” improvements, not just flashy impact

Hack-a-thons, agent-a-thons, and prompt-a-thons have become a staple of corporate AI rollouts. They’re fun, build awareness, and surface AI champions and use cases. Too often, though, prizes go strictly to the flashiest ideas—the biggest demo, the most visible win—while quieter work such as improving workflows or helping peers adopt AI gets overlooked.

At Udemy, Interim Chief Learning Officer Rebecca Stern described a recent organization-wide AI effort called “UDays,” monthly days dedicated to learning, where employees came together to build AI prompts and prototypes. Instead of giving prizes only for flashy impact, they put $1–2K on the table and split it across three categories:

  1. Highest impact experiment
  2. Best before-and-after prompt
  3. Best peer feedback
Try this

Build incentives at three levels:

  1. Use: reward employees for experimenting with AI in their daily work.
  2. Improvement: recognize those who make prompts, workflows, or agents measurably better.
  3. Co-creation: celebrate the people who help peers adopt or refine AI practices.
Try this

Design AI incentives that fit your team’s DNA

During Udemy’s UDays (monthly days dedicated to learning), Interim Chief Learning Officer Rebecca Stern noticed that people on sales teams jumped in with both feet—they seemed more motivated to win the prize money than folks from other functions. As Stern put it: “They already live in an incentive-driven world—leaderboards, quotas, wins. They knew how to play, and they played to win.” Some other departments lagged. The same prize meant different things depending on the culture of the team.

Try this

Run “prompt-a-thons” or “agent-a-thons” to give employees hands-on practice. But consider how to match incentives to the way each team already operates—quota-driven groups may respond to prizes and rankings, while creative teams may value recognition, storytelling, or time to showcase their experiments.

Try this
“Buzz is far more convincing than any outreach email from IT.”
Ammen Siingh,
Lead Solutions Engineer, Reddit

Inject AI where everyone else can experience FOMO

Not every incentive needs to be cash or prizes. The fear of missing out (FOMO) can be an equally strong motivator.

That’s what they harnessed at Reddit. Lead Solutions Engineer Ammen Siingh described how they picked a spot where friction was already high: a Slack channel where employees constantly posted internal questions and waited—sometimes days—for a response. Instead of letting people slog through the backlog, they dropped in an AI agent powered by Glean. The bot answered instantly—accurate, helpful, and with no human effort.

What was once a big bottleneck became nearly frictionless. People across the company started asking, “Why don’t we have that agent too?” Within weeks, more than 300 employees requested access. Six months later, the AI agent was implemented company-wide. As Siingh put it: “Buzz is far more convincing than any outreach email from IT.”

Try this

Embed AI in one high-friction, high-visibility workflow where everyone feels the pain. When the fix is visible and useful, curiosity and peer pressure can nudge adoption forward.

Try this
“The moment you start monitoring usage—active users, number of conversations—you get compliance, not utility. People pretend to use the tool so the metrics look good. That’s surveillance, not adoption.”
Federico Torreti,
Senior Director of AI, Oracle

Beware of vanity metrics, and the “AI theater” they create

One of the easiest traps in AI rollouts is chasing “vanity metrics.” That’s what Oracle’s Senior Director of AI, Federico Torreti, calls measures such as logins, minutes spent in the tool, and number of chats—numbers that make dashboards look impressive but don’t measure whether AI is actually improving work. As Torreti put it: “The moment you start monitoring usage—active users, number of conversations—you get compliance, not utility. People pretend to use the tool so the metrics look good. That’s surveillance, not adoption.”

At Zendesk, Nan Guo, Senior VP of Engineering, explained to us how instead of fixating on tool usage, they built a scorecard of six engineering productivity metrics, including five operational metrics they measure at the team level: cycle time, code review cycle time, merge frequency, change failure rate, and number of deploys (how often teams successfully release code to production). “We didn’t want to just say, ‘Look, everyone’s using the tool.’ We wanted to know whether it was actually moving the needle on productivity,” she told us.

The sixth metric, from an engagement survey, captures the emotional side of productivity: how engineers feel about the tools they’re using, whether they believe those tools actually help them do better work, and where frustration or friction is building. As Nan put it, “We look not just at the data, but at how people feel. Do they trust the tools? Do they think they’re helping or getting in the way?”
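Two of those operational metrics, deploy count and change failure rate, are straightforward to compute once deploys are logged. Here is a minimal sketch in Python with made-up records; the field names and figures are illustrative, not Zendesk’s implementation:

```python
from datetime import date

# Hypothetical deploy log for one team over a few weeks.
# Field names and values are illustrative only.
deploys = [
    {"day": date(2025, 1, 6),  "caused_incident": False},
    {"day": date(2025, 1, 9),  "caused_incident": True},
    {"day": date(2025, 1, 14), "caused_incident": False},
    {"day": date(2025, 1, 21), "caused_incident": False},
]

deploy_count = len(deploys)
# Change failure rate: share of deploys that led to an incident or rollback.
change_failure_rate = sum(d["caused_incident"] for d in deploys) / deploy_count

print(f"Deploys this period: {deploy_count}")             # 4
print(f"Change failure rate: {change_failure_rate:.0%}")  # 25%
```

The value of a scorecard like this is the trend over time: if AI assistance is working, deploy frequency should rise while the failure rate holds steady or falls.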

Meanwhile, at Oracle, Torreti told us how leaders there are now measuring “AI intent diversity”: the range of tasks employees actually use AI for. When people apply AI across multiple streams of work (drafting, analyzing, debugging, summarizing), it’s a sign it’s becoming part of how their work gets done—not just a novelty or a way to prove productivity through AI theater.
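To make “intent diversity” concrete, here is a minimal sketch in Python that counts how many distinct kinds of work each person uses AI for. The usage log, field names, and task categories are hypothetical, not Oracle’s implementation:

```python
from collections import defaultdict

# Hypothetical AI assistant usage log: each interaction is tagged with a task category.
usage_log = [
    {"user": "ana", "task": "drafting"},
    {"user": "ana", "task": "summarizing"},
    {"user": "ana", "task": "debugging"},
    {"user": "ben", "task": "drafting"},
    {"user": "ben", "task": "drafting"},
    {"user": "ben", "task": "drafting"},
]

def intent_diversity(log):
    """Count distinct task categories per user (higher means broader AI use)."""
    tasks_by_user = defaultdict(set)
    for record in log:
        tasks_by_user[record["user"]].add(record["task"])
    return {user: len(tasks) for user, tasks in tasks_by_user.items()}

print(intent_diversity(usage_log))  # {'ana': 3, 'ben': 1}
```

Ana and Ben log the same number of interactions, but Ana’s use spans three kinds of work while Ben repeats a single task, which is exactly the difference that raw usage counts hide.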

Try this

Beware of tracking AI via raw usage metrics. Measure how many different types of work employees are using AI for. Supplement with adoption surveys that ask how AI fits into daily workflows. These reveal whether AI is adding value, not just filling meaningless dashboards.

Try this

Put metrics in the hands of teams, not just management

It’s not enough to avoid vanity metrics. The real challenge is designing measurement systems that assess and drive effective behavior change. At Zendesk, Senior VP of Engineering Nan Guo explained to us how deliberate her team was about measuring AI adoption by employees:

  • No names. Individual leaderboards were off-limits, because public rankings can undermine trust, intrinsic motivation, and engagement, and can cause people to focus on gaming the system instead of genuinely improving performance.
  • No direct cross-team comparisons. Metrics only make sense when linked to the nuances of each team’s work. As Guo put it: “UI cannot compare with infrastructure—they are not doing the same thing.”
  • Give metrics data to the teams. Most importantly, Zendesk put the metrics in the hands of teams, not just management. When engineers saw their own baselines and trends, they felt more empowered because they had information about where their teams were improving and where they needed to focus more effort.

Phil Willburn, VP of People Analytics at Workday, underscored the point, telling us: “I’m a big believer in democratized data. When we enable managers with insights, employees should have access to their own data too.”

He emphasized that access must come with clear communication about what metrics exist and how they benefit employees, alongside aggregation and role-based controls that prevent a perception of a “monitoring” dynamic. In other words, don’t create incentives to “perform for the metric” (e.g., spamming prompts to raise AI-usage counts). Instead, design measures that support learning and real behavior change.

Try this

Design metrics that teams can own. Focus on trends and improvement, not rankings. Put the results in the hands of employees themselves, so they feel accountable for progress rather than being analyzed and micro‑managed from above. But don’t share individual‑level metrics.

Try this

Don’t turn AI adoption into another stack-rank exercise

We heard plenty of stories about companies that turned AI usage into a surveillance sport—tracking, measuring, and even stack‑ranking employees. The new CEO of a software firm marched in and announced that AI use would now factor into performance reviews. And at one Fortune 100 firm, leaders sent out weekly emails ranking who was using AI the most. As we’ve seen, that kind of surveillance mindset breeds anxiety and gaming, not learning and better work.

But peer input, done well, can make AI use feel natural—something everyone does and expects everyone else to do, not just the early adopters or tech enthusiasts, and without turning it into a zero-sum leaderboard. At Shopify, employees rate each other quarterly on how they use AI tools. In a memo to employees1, CEO Tobi Lütke wrote, “Learning to use AI well is an unobvious skill. My sense is that a lot of people give up after writing a prompt and not getting the ideal thing back immediately. Learning to prompt and load context is important, and getting peers to provide feedback on how this is going will be valuable.” The point at Shopify isn’t to crown winners and losers. It’s to make AI fluency visible and social, part of how colleagues earn credibility with each other.

At Zoom, giving people ownership over their AI insights drove adoption. Paul Magnaghi, Head of AI Strategy, described: “With our sales operations, building trust [around AI tools] has been critical.” Early on, the company started using conversational intelligence within Zoom to analyze the effectiveness of sales reps’ conversations. “At first, reps were worried about managers using the data to monitor them. We flipped that on its head—opening up the intelligence to the reps themselves and enabling them to query AI tools directly. Now they can ask inquisitive questions like, ‘Was I effective in this conversation?’ We’ve seen that level of trust grow as a result.”

Try this

Build AI into peer feedback cycles, but drop the individual peer‑to‑peer rankings. Ask employees to share concrete examples of how colleagues used AI to solve a problem, improve a workflow, or help a teammate. That shifts the focus from surveillance to storytelling, from compliance to recognition, and from empty metrics to shared learning.

Try this
THEME 08

Innovation

How does (and should) AI change how new ideas are generated, selected, and implemented?
As Stanford Business School’s Charles O’Reilly has argued, organizational innovation requires both creativity and implementation—good ideas alone aren’t enough, and flawless execution of bad ideas is a waste of time and money. AI is reshaping both sides of that equation: how ideas are generated, how they’re selected, and how they move from concept to reality.

Ask yourself, is AI the wrong tool for the problem?

Not every problem is an AI problem. We’ve heard plenty of stories about teams slapping “AI” onto a project just to win budget approval, or assuming AI is required when a simpler fix would do. David Lloyd, Chief AI Officer at Dayforce, described how their organization runs proposals for new AI use cases through a mandatory review where the first question is: “Does the problem actually require AI?”

Try this

Before greenlighting any AI project, press your team to answer: “Does this really need AI?”

And if you are looking to provide real human interaction, with eye contact, a gentle smile, and a human touch, AI might give you some hints about how to do it, but can’t do it for you. In the words of GPT‑4o, “Trusting an LLM to comfort you is like asking a Zombie for a hug.” And while an LLM can explain to you how to fix that broken toilet, it’s up to you—or a plumber—to get your hands dirty to fix the thing.

Try this

Don’t fall for the jargon monoxide spewed out by (lousy) AI vendors

AI vendors—at least the crummy ones—love to cloak shaky products in buzzwords that sound smart but mean little. As Sahin Ahmed, Data Scientist at Publicis Media, puts it1: “Lately, it seems like everyone is jumping on the AI bandwagon, tossing around buzzwords like they are confetti at a tech parade.” He warns2: “Be wary of companies that use jargon‑heavy phrases without clear explanations of what their AI actually does.”

Try this

Before you greenlight or buy anything with “AI‑powered” stamped on it, strip away the label and ask: if the term “AI” vanished from their pitch, would the product still be impressive—or even necessary? Then, run it through our five‑part AI Washing Gut Check, inspired by Ahmed’s advice:

  1. The Outlandish Promises Test: Be skeptical of any vendor that can’t show working AI today but promises a revolution tomorrow. If every feature you ask about lives “on the roadmap,” you’re funding their R&D, not your transformation.
  2. The AI Residue Test: After meeting the vendor, remove every instance of the word “AI” from the pitch and read it again. If the story collapses, congratulations—you just wiped off the gloss and found what’s really underneath.
  3. The Reference Ghosting Test: Ask to speak to the vendor’s current customers. If they dodge the request or send you to a “strategic partner” (usually a friendly collaborator, not a real paying user), that’s code for “no happy customers yet.”
  4. The Human‑in‑the‑Loop Test: Genuine enterprise AI enhances judgment, not replaces it. If the vendor’s pitch assumes all or most humans disappear, they’re selling fantasy, not augmentation.
  5. The Missing Metric Test: Every AI claim should tie to something specific you can measure—fewer bugs, faster sales cycles, higher forecast accuracy. If all you hear are adjectives like “smarter” or “transformative,” you’re not buying results—you’re buying adjectives.
Try this

Build an AI sandbox to spur safe experiments

Opening up AI innovation to everyone can be risky. That’s why Stanford Medicine built a secure AI “playground” called “SecureGPT” where clinicians, administrators, staff, and members of the IT team can safely test ideas while protecting privacy and security. Michael Pfeffer, CIDO and Associate Dean, told us how Stanford doctors and nurses use it to test prompts in their daily work—without waiting for formal approvals and with every experiment anonymously tracked by the system.

For example, a team led by Chief Data Scientist Nigam Shah developed “ChatEHR1” by categorizing and ranking the experiments performed in the AI playground. This tool enables clinicians “to ask questions about a patient’s medical history, automatically summarize charts and perform other tasks.” As Shah explained, “ChatEHR is secure; it’s pulling directly from relevant medical data; and it’s built into the electronic medical record system, making it easy and accurate for clinical use.” A pilot group of 33 healthcare workers tested ChatEHR’s ability to speed their search for information about each patient’s “whole story.” A Stanford physician who was one of these early users reported that ChatEHR freed him and his colleagues to “spend time on what matters—talking to patients and figuring out what’s going on.” Now, ChatEHR is available to every Stanford clinician.

Try this

Before you stand up a secure AI sandbox, ask, “Where is your place for failing?2” When he was Managing Director at IDEO, Harvard Business School Executive Fellow Diego Rodriguez used to pose that question to clients. Every innovative organization needs an answer—a space where people can take risks, learn new skills, screw up, and keep learning.

Then build that space for AI. Stand up a secure space where teams can experiment without waiting for help from your IT team. Let anyone—engineers, PMs, interns, skeptical veterans—test ideas and log outcomes. Track the experiments and scale the ones that work.

Try this
“Our philosophy has been let 1000 flowers bloom…It could be that PM somewhere in the org, or that new investor who just joined a week ago and says, ‘Isn’t there a better way to do customer calls?’”
Sonya Huang,
Partner, Sequoia Capital

Don’t hoard the best tools for experts

We’ve heard about multiple companies reserving their strongest AI tools and models for engineering or data teams. Shopify does the opposite: anyone, in any function, can use every model. Their thesis: high‑value use cases can come from anywhere—and some of their fastest adoption came from sales and support, not engineering.

AI breakthroughs don’t just happen in innovation labs. As Sonya Huang, a Partner at Sequoia Capital, has emphasized, the best AI ideas don’t always come from the obvious places. “Our philosophy has been let 1000 flowers bloom…It could be that PM somewhere in the org, or that new investor who just joined a week ago and says, ‘Isn’t there a better way to do customer calls?’”

Try this

Don’t ration the best AI models or tools. Your biggest wins may come from the least obvious places. As Huang says, “It’s hard to bet against individual ingenuity in the long run.”

Try this

Give people dedicated time for AI experimentation

One theme that came up often in our interviews: people need time to experiment with AI. Cameron Adams, co-founder and Chief Product Officer of Canva, has reflected1, “Team members told us they needed more time to explore and experiment with AI, rather than trying to squeeze learning into an already busy schedule.” Across organizations, leaders echoed the same challenge: the biggest barrier to using AI isn’t fear or lack of interest—it’s lack of time.

So in July 2025, Canva gave 5,000 employees an entire week off their day jobs for an “AI Discovery Week.” For five days, “Canvanauts” (the company’s employees) participated in workshops, hackathons, and open exploration, building confidence, surfacing fears, and uncovering new use cases.

Try this

Carve out protected time for AI learning. It doesn’t need to be a week. Start with a day or even an afternoon a month. Provide access to tools, guided sessions, and space to experiment. Your employees need time and permission to figure out how it can change their work.

Try this

Protect the “blank page”

AI is often pitched as the cure for “blank page syndrome.” But the evidence is mixed. Some studies suggest that early AI input can boost creativity1. Others show it can narrow thinking. Consider the MIT study of students who wrote SAT-style essays, which was reported2 in a New Yorker article on how “AI Is Homogenizing Our Thoughts.” The MIT researchers found that LLM users showed reduced alpha wave connectivity in the brain—a signal linked to original thinking and creative flow. In addition, the essays written by the LLM users “tended to converge on common words and ideas” and their “output was very, very similar” to one another.

The difference seems to be in whether people use AI to generate or prematurely narrow ideas in the early stage of the creative process:

  • AI can jumpstart the creative process when it offers multiple, varied, and unexpected starting points.
  • But when it anchors people to its first output, it can short-circuit the messy, high-value parts of creativity where the best ideas often emerge.

That’s why some creators set deliberate boundaries between when to invite AI in and when to keep it out. Behavioral scientist and author Lindsay Kohler avoids anchoring by keeping her rough early drafts messy, fragmented, and fully human. That’s where her ideas take shape. AI comes in later to help her tighten the structure, polish dialogue, refine pacing, and search for academic papers for targeted answers.

Try this

Set team norms around when to use AI in creative work. For example, the first draft must be human. When you do bring in AI, use it for divergence. Ask it to generate ten wild‑card ideas, surface blind spots, or explore edge cases.

Try this

Build speed bumps to protect the creative process

Nobel Prize winner Daniel Kahneman’s research1 shows the human mind runs in two modes of thought:

  1. System 1 (fast thinking): intuitive, quick, and automatic.
  2. System 2 (slow thinking): deliberate, effortful, and analytical.


In creative work, both systems matter. System 1 helps generate rapid connections and bold ideas, while System 2 slows things down for reflection, testing, and refinement. The best ideas emerge when people and teams shift fluidly between the two.

AI thrives in fast mode—pulling in data, surfacing options, and executing at a pace humans can’t match. As Chris Yeh, co-author of Blitzscaling, explained to us, AI can speed up the tasks around creativity—research, prompts, scaffolding ideas—but creativity itself is still messy and inefficient. As he put it:

“AI is like instant coffee. It’s always available, it’s fast, and it’s good enough in a pinch. But if you live on nothing but K-cups, you risk forgetting how to brew the real thing—and you miss out on the richer flavor that comes from time and craft.”

That’s why—similar to behavioral scientist and author Lindsay Kohler—he cautioned against skipping incubation. His rule: he doesn’t bring in AI until his own thinking has had time to percolate.

Try this

When building AI into creative or strategic processes, protect the slow mode. Build in deliberate pauses, schedule checkpoints, and resist the temptation to ship the first AI‑generated answer.

Try this

Be explicit about where AI should speed you up, and where it shouldn’t

Protecting slow thinking isn’t just about adding pauses. It’s about knowing where in the process it matters most. That’s what Perry Klebahn has learned from teaching Launchpad1, a start‑up accelerator he has led at the Stanford d.school for the past 15 years. Klebahn has been experimenting the last few years with how AI can help (and hinder) the 10 to 15 founding teams that he teaches each year. He constantly presses and coaches these founders to think about stages in the start‑up process that benefit from AI’s speed versus those that require human slowness to generate insight and originality.

Try this

Instead of asking, “Where can AI help?” ask, “What kind of thinking do I need right now—fast or slow?” Then match the tool to the tempo.

  • When you need speed: Use AI to blast through tasks that benefit from pattern recognition, synthesis, or scale—summarizing 50 interviews, generating visual options, or pressure‑testing copy.
  • When you need slowness: Hit pause for work that depends on human texture, like sense‑making, judgment, or emotional resonance.
Try this

Beware of “the easy come, easy go” syndrome
You might not stick with and struggle enough to develop promising ideas

Robert Cialdini’s classic book Influence1 documents that the more effort a person puts into a relationship, project, or idea, the more strongly committed they will be to making it work in the future. Psychologists call this the “labor leads to love” effect: People stick with hard-won courses of action to justify their effort to themselves and others.

Consistent with Cialdini’s argument, Stanford lecturer Perry Klebahn told us that the speed and ease of prototyping enabled by AI—the lack of labor—means founders in his Launchpad accelerator aren’t working as hard or struggling as much to develop their promising ideas as founders did in the past. Founders of the 125-plus start-ups launched pre-AI were less prone to this “easy-come, easy-go” syndrome. They were more committed, it seems, because developing those promising ideas was slower, harder, and more frustrating.

Klebahn described a telling symptom of this lack of commitment: Founders often talk about AI‑generated concepts in the third person, as “this idea.” In the past, students using slower methods said “my idea” or “our prototype.” Because AI‑aided ideas come together so easily, founders push less to refine, test, or sell them. An MIT study2 echoed this: students who used AI to write essays said they “felt no ownership whatsoever” over their work.

There is, however, a big advantage to this “easy come, easy go” effect. The lack of effort required to generate prototypes with AI means that people will be less likely to become irrationally committed to bad ideas—and it will be easier for them to pull the plug on ideas that seemed good at first but that further testing reveals to be bad or impractical.

Try this

Use AI to accelerate idea generation, but slow the process where it matters—testing, iterating, and pitching—so people invest in making the ideas their own. But beware of becoming irrationally committed to ideas just because they took a lot of effort to identify or test.

Try this

Plan for the majority of your AI experiments to fail

Many AI initiatives won’t live up to your hopes, hunches, and expectations. Several of the leaders we spoke to pegged the failure rate for AI initiatives at around 80%. As one emphasized, they don’t redesign or reallocate work until AFTER they have convincing evidence that an initiative will succeed and that AI can be used to augment or replace what human workers do in the company.

Indeed, a recent survey1 of 1,000 leaders in medium and large companies suggests that many leaders later regret the rash personnel decisions they’ve made about replacing people with AI. The survey by UK-based software firm Orgvue found that 39% of the leaders they surveyed reported making “employees redundant as a result of deploying AI.” Yet, when those leaders looked back on the layoffs they made, “55% admit they made wrong decisions about those redundancies.”

Try this

Budget and staff with (roughly) the 80/20 ratio in mind. Remind your teams upfront: the majority of our AI experiments will fail. Partner with vendors who have a proven track record so your odds improve. Define “kill criteria” before you start, set time limits on tests, and make it expected—and healthy—for people to shut down weak ideas. That way failures cost less and winners get the oxygen to grow. And you will avoid eliminating or redesigning jobs prematurely, and the employee performance problems and fear that will follow.

Try this
“AI will flatter your ideas.”
Hilary Gridley,
Head of Core Product, Whoop

Beware the AI flattery trap

In April 2025, OpenAI rolled back1 a product update after discovering its latest model had developed a bad case of AI sycophancy—a tendency toward overly agreeable or flattering language. Research by Anthropic2 has found that when people rate AI responses, they tend to reward answers that agree with their own views, even if those answers are less accurate. As a result, models trained on human feedback learn that flattering users earns higher scores—which unintentionally teaches them to be sycophantic even at the expense of truthfulness.

That’s poisonous for innovation. A sycophantic AI won’t challenge your thinking. It’ll butter it up. As Hilary Gridley, Head of Core Product at Whoop, told us, “AI will flatter your ideas.” That’s why she recommends you stop asking, “Is this a good idea?”, a question that begs for validation. Instead, ask, “Under what circumstances would this not go well?” or “Who might not receive this positively?”

Indeed, a 2025 study3 found that short “provocations” like “What might be missing?” or “Generate the opposite perspective” helped users avoid rubber-stamping AI outputs. Those tiny nudges restored critical thinking, forced reflection, and turned AI from a lazy shortcut into a real thinking partner.

And when those provocations come from a Work AI platform that understands employees’ workload, skills, and workstyles, they get even smarter. They might reveal that:

  • The VP doesn’t yet trust your team, so a bold “innovative” pitch lands as reckless rather than visionary.
  • A moonshot idea drains energy from teammates who are still buried in unfinished experiments.
  • A sleek new concept sounds tone‑deaf to customers still frustrated by last quarter’s product flaws.

In some cases, human judgment will be helpful for vetting the contextual, relational, and emotional factors that will determine whether an idea lands or backfires.

Try this

Follow Gridley’s advice and have an AI assistant stress‑test your next big idea—not to judge if it’s good, but to surface where it could go wrong. Then bring that list to your team and use human judgment to evaluate which risks are real barriers to innovation.

Try this

Make probabilistic bets on AI projects like venture capitalists do

Given the high failure rate of AI projects, Alexandre Guilbault, Telus VP of AI, urges leaders to think about AI bets like a venture capital portfolio. You might back $20M worth of opportunities, knowing that maybe only $4M will pan out—but those wins more than cover the misses.

Guilbault suggests using “risk-weighted ROI.” On paper, two AI projects might each promise $10M in benefit. But if your best estimate is that one has a 90% chance of success and the other only 30%, they’re not equal bets. Weighted properly, that “$10M” project with only a 30% probability of success is really a $3M project.

Of course, no one’s crystal ball is clear. Venture capitalists and AI leaders (just like other human beings) are terrible at estimating which bets will win—there’s too much noise and uncertainty. But even rough probabilities force better conversations. Thinking probabilistically shifts the mindset from fantasyland—where every shiny project is a winner—to reality.
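To see how the math reshapes a portfolio, here is a minimal sketch in Python. The projects, benefits, and probabilities are made up for illustration; they are not Telus figures:

```python
# Hypothetical AI project portfolio: headline benefit (in $M) and estimated probability of success.
projects = [
    {"name": "Support-ticket triage agent", "benefit_m": 10, "p_success": 0.9},
    {"name": "Fully automated forecasting", "benefit_m": 10, "p_success": 0.3},
    {"name": "Contract-review copilot",     "benefit_m": 5,  "p_success": 0.5},
]

# Risk-weighted ROI: projected benefit multiplied by the probability of success.
for p in projects:
    p["risk_weighted_m"] = p["benefit_m"] * p["p_success"]

# Rank bets by expected value rather than headline benefit.
for p in sorted(projects, key=lambda x: x["risk_weighted_m"], reverse=True):
    print(f"{p['name']}: ${p['benefit_m']}M headline -> ${p['risk_weighted_m']:.1f}M risk-weighted")

portfolio_expected = sum(p["risk_weighted_m"] for p in projects)
print(f"Portfolio expected value: ${portfolio_expected:.1f}M")  # $14.5M against $25M of headline promises
```

On paper this portfolio promises $25M; weighted by the odds, the realistic expectation is closer to $14.5M, which is the number to budget and staff against.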

Try this

Don’t expect every AI project to return your full investment—let alone much more. Multiply each project’s projected return by your best estimate of its probability of success, and make funding calls based on that math. And if you don’t know the odds? Use the benchmark several leaders we spoke to hovered around: only about 20% of early AI initiatives succeed.

Double down on winners, prune and learn from losers fast, and celebrate results at the portfolio level, not the project level.

Try this

Treat your employees as “customer zero”
Especially the toughest critics

If you’re building AI tools to sell to customers, don’t wait for outsiders to expose the cracks. Adobe doesn’t1. For example, when it introduced the generative AI tool Firefly in 2023, it turned thousands of employees into “customer zero.” They weren’t just finding bugs—they were surfacing unexpected use cases, flagging risks, and refining features. By the time it reached customers, it had survived some of the toughest critics: the employees who knew the product inside out.

Try this

Treat your employees as “customer zero.” Put new tools in their hands early and use their blunt feedback and insights to strengthen the rollout.

Try this

Innovate where AI can’t replace what people do

AI is making speed and efficiency table stakes. Faster and cheaper are no longer a competitive edge—in many cases, they’re the baseline.

The real advantage lies in the parts of work machines can’t replicate: trust, care, and emotional connection. Airbnb CEO Brian Chesky has leaned into this1, expanding into offerings like private chefs and photographers. These aren’t about shaving minutes off a process. They’re about creating moments of delight that no algorithm can replace.

Try this

Audit your product or service. Highlight the touchpoints where a human makes the experience better—by listening, surprising, or caring. Protect those. Double down on them. Make them the center of how you compete.

Try this
THEME 09

Leadership

How should leaders think about and do their jobs now? What is the same? What is more important than ever? What old behaviors do they need to abandon?
In the AI era, leadership is under renovation. Some parts of the job—like planning, analysis, and routine communication—can be automated or heavily augmented. Others—like earning trust, modeling civility, making hard calls, and showing up with presence—are more vital than ever. The challenge is knowing which parts to hand off to machines, and which to double down on as fully human.
“[AI adoption] spreads when people see their higher‑ups using the technology.”
Melinda Fell,
Founder, Complete Leader

If you use AI, your team will too
And you will fuel a virtuous cycle of teaching and learning

AI adoption doesn’t spread through policy memos or executive pep talks alone. As Melinda Fell, Founder of Complete Leader, told us, “it spreads when people see their higher-ups using the technology.” Research by Worklytics1 found that manager behavior predicts AI adoption more than any other factor: if a manager uses AI five or more times per week, adoption rises to 75%. At one of the Fortune 50 companies where we did interviews, leaders aren’t just talking about AI; they are making it visible. Executives demo tools in staff meetings, keep AI tools on display in one-on-one meetings, and show, in real time, how AI supports their work.

A Senior Vice President there told us: “I’ve had examples of leaders saying they have their desktop [AI assistant] open literally all the time…and they’re showing, demonstrating, and role modeling how they’re using the tools.”

Try this

Don’t just tell people to use AI. Lead by example and prove that you’re using it too. Pull up your AI app or interface during a meeting, share the prompts you’re experimenting with, and talk openly about your mistakes too.

Try this

Build an AI rhythm into your organization

Just like people, organizations run on rhythms. Meetings, budgets, performance reviews—all of them set the pace for how work gets done. The trouble is, most companies treat AI as a side project—something dropped into the schedule here and there. Without a steady beat, adoption fizzles. With one, AI becomes part of the organization’s heartbeat—the “drumbeat that the organization marches to,” as venture capitalist John Lilly puts it. Shared rhythms help people build AI into their routines, know when to focus attention on it, and synchronize AI efforts across the company.

One HR executive at a Fortune 20 retailer described how the organization has created such an AI rhythm:

  • At the top: The organization’s CEO keeps AI as a standing topic in his monthly meeting with hundreds of VPs.
  • Across functions: A cross‑functional steering committee meets regularly to hash out adoption, governance, and use cases across finance, HR, tech, and business.
  • On the ground: Every staff meeting ends with the same ritual: a standing agenda item focused on AI. Leaders rotate to showcase what they’re trying, what’s working, and what’s not.
Try this

Create an operating rhythm for AI that runs through multiple layers of your organization: executive forums, cross‑functional councils, team meetings, and individual routines.

Try this

Examine what your system rewards—AI will give you more of it, whether you want it or not

Many of the leaders we spoke to described AI as a magnifying glass. It amplifies the patterns that already live in your culture, data, and workflows. As Northwestern Professor Hatim Rahman told us, “AI gives us a tool to amplify more of what you want.” Even though, if you think about your goals more deeply, you may not “want” more of it at all!

If your culture prizes speed over rigor, AI will crank out sloppy work at scale. If your sales culture rewards closing at all costs, AI will double down on short‑term wins while eroding trust with customers. If your managers reward presenteeism over outcomes, AI will crank out activity metrics and dashboards that look busy but don’t move the business forward.

Try this

Before rolling out AI, run an “amplification audit.” Ask:

  • What behaviors get rewarded here—speed, consensus, risk‑taking, accuracy?
  • If AI doubled those behaviors, would that be good or bad?
  • What safeguards do we need to prevent the bad from scaling?
Try this

Avoid imposing bold (yet slippery) “hand‑waving” metrics from on high

It’s easy for executives to toss around numbers like “2x productivity” or “10x efficiency” when talking about AI. Howie Xu, Chief AI & Innovation Officer at Gen, warned us against such “hand‑waving” metrics.

He explained that top‑down AI metrics are slippery and easy to misinterpret. “2x productivity” may sound bold, but, as he put it, “70% of the impact might have nothing to do with AI.” He added, “Chasing these abstractions and top‑line metrics alone, if not used carefully, may lead to confusion about what a real AI transformation is about and, in the worst case, create even more bureaucracy.”

When leaders set vague goals like “2x productivity,” the organization scrambles to make the numbers look real. Every team interprets them differently, no one knows what success means, and hours get wasted on dashboards, massaged spreadsheets, and glossy success stories.

Try this

Measure AI with the same KPIs you already use to run the business. Not sweeping multipliers, but workflow‑level improvements that are concrete and credible:

  • CEOs: decision velocity, meeting effectiveness, return on investment, ability to attract top technical talent.
  • Engineering: faster development cycles, cleaner code, fewer hours lost to incident documentation.
  • Sales: sharper prospect research, higher outreach velocity, improved win rates.
  • Support/IT: quicker ticket resolution, more deflections, higher satisfaction scores.
  • HR: shorter onboarding ramps, easier policy navigation, less manager time wasted on routine questions.
Try this
“You could predict which patients are more likely to no‑show … You could double book them … and now the doctor potentially has to see two patients, so both get less time. Or you call a ride service and bring the patient who’s more likely to no‑show here…same algorithm, different workflows.”
Michael Pfeffer,
CIDO and Associate Dean of Stanford Health Care and the School of Medicine

Don’t use AI just to crank out more work
Use it to make work better

Another common trap is using AI to crank out more work: more emails fired off in less time, more patients seen per day, more accounts covered per rep, while ignoring or downplaying the harm that comes from focusing too narrowly on making work faster and cheaper.

Michael Pfeffer cautioned us against a narrow focus on efficient throughput without understanding workflow. He gave the example of AI that can predict which patients are likely to miss appointments. A cost-effective strategy is to double-book those patients—which maximizes system throughput but leaves patients shortchanged when both arrive for an appointment. A better approach, Pfeffer suggested, is to reduce the barriers that cause people to miss appointments in the first place, such as arranging transportation for people who struggle to get rides. He adds, “You could predict which patients are more likely to no-show…You could double book them…and now the doctor potentially has to see two patients, so both get less time. Or you call a ride service and bring the patient who’s more likely to no-show here…same algorithm, different workflows.”

Try this

When AI creates time savings, don’t automatically assume that adding more volume to the system is the best answer. Humans suffer from “addition sickness1”: a bias to add more rather than subtract or improve. Beware of the knee‑jerk reaction to squeeze in another patient, send another email, or tack on another project. Put explicit guardrails in place so the “extra capacity” is invested in higher‑quality and more humane work—listening, advising, designing—rather than just cranking the wheel faster. For example:

  • In healthcare, set policies that ensure freed‑up time goes to longer patient visits, not double‑booking.
  • In sales, track whether AI frees reps to spend more time in discovery calls, not just more outbound emails.
  • In engineering, measure whether AI support reduces time in incidents and increases time spent designing better systems.
Try this
“Saving three hours [because AI has helped draft a] report doesn’t ensure those three hours are used productively.”
Alexandre Guilbault,
VP of AI, TELUS

Figure out where the time savings from AI actually happen

Recent large‑scale research1 on 25,000 workers found that only about 3% to 7% of time savings from generative AI actually translated into earnings. As Alexandre Guilbault told us, “Saving three hours [because AI has helped draft a] report doesn’t ensure those three hours are used productively.” People use AI to finish work faster but then keep the extra time for themselves rather than for the benefit of their organization. As Guilbault said, “[People might just be] spending more time at the coffee machine. Which is not necessarily a bad thing, but doesn’t necessarily impact the bottom line.”

Some may even pad that reclaimed time by looking busy: firing off Slack messages or doing other nonsense that makes it seem like they’re hard at work. A 2025 survey by SAP2 across more than 4,000 working adults found that one-fifth of respondents said they would rather hide an hour saved with AI than give managers a reason to expect more, fueling such “productivity theater.”

Try this

Don’t assume time saved is time well spent. Prove it. Survey your employees to see what they’re actually doing with AI’s “extra hours.” Are engineers clearing more backlog items? Are sales reps spending more time in customer conversations? If the answers don’t add up to tangible outcomes, chances are those hours are leaking into busywork.

Try this
“Involving workers directly not only surfaces the day‑to‑day realities executives often miss, but also helps identify new pathways so frontline roles aren’t left out or written off.”
Lauren Pasquarella Daley,
Associate VP, Jobs for the Future

Stop using “higher‑order work” as a cop‑out

Leaders love to trot out the line: “AI will take the busywork so you can focus on higher-order tasks.” Udemy’s Interim Chief Learning Officer, Rebecca Stern, explained to us how the slogan can be especially misleading for frontline roles. Junior employees and those in operations-heavy jobs spend their days on tactical tasks: scheduling shifts, processing orders, answering support tickets, keeping the physical machinery running. For them, there’s often no “strategic thinking,” “innovation,” or other so-called higher-order work waiting to be unlocked. With no “higher-order” tasks to move into, instead of opportunity they worry about being expendable.

Try this

Beware that the slogan “AI will take the busywork so you can focus on higher‑order work” can alienate and fuel fear among employees who do mostly routine and tactical tasks. Instead, Lauren Pasquarella Daley recommends running “task‑mapping” workshops with employees at all levels to see which tasks AI might replace, displace, augment, or elevate. “Involving workers directly not only surfaces the day‑to‑day realities executives often miss, but also helps identify new pathways so frontline roles aren’t left out or written off.”

Try this

Make it clear to employees what AI means for their job security

In April 2025, Shopify CEO Tobi Lütke told employees1 that “reflexive AI usage is now a baseline expectation.” The memo notified employees that AI skills would be part of performance reviews. Teams also had to prove they’d exhausted AI before requesting more headcount.

This type of decisive leadership—a clear, company‑wide commitment to AI adoption—can fuel AI implementation and use. But if it’s not paired with psychological safety and transparency, it can backfire. At one large real‑estate firm we know, an executive described to us how their CEO declared that everyone must “default to AI” and began tracking weekly usage of their internal GenAI tool. Employees read this type of message as “use AI or else” and as a signal that their jobs—or their coworkers’ jobs—would vanish if they didn’t keep up (or practice the art of “AI theater”).

Try this

Make expectations clear without triggering survival anxiety. Communicate how employees will be trained, supported, and valued through the transition. If layoffs are coming, or are possible, be upfront about that too. Destructive fear and uncertainty can be dampened by communicating to employees2 that their jobs are safe for some specified period—say, for the next month, three months, or six months. This predictability spares your people from constant worry, which reduces their distress and helps them stay focused on their work.

Try this
“There’s an opportunity for employees to use AI tools to help maximize one‑on‑one time with their manager by suggesting what to talk about and summarizing key updates.”
Lily Zhang,
VP of Engineering, Instacart

Use AI to help managers handle flatter organizations and bigger teams

Many leaders we spoke to described flattening as the fad of the moment, and, in many cases, managers are inheriting more direct reports.

As Lily Zhang told us, “there’s an opportunity for employees to use AI tools to help maximize one‑on‑one time with their manager by suggesting what to talk about and summarizing key updates.”

AI can’t replace the human connection in leadership, but if you need to stretch your span, it can take over the coordination load—surfacing what each person is working on, flagging blockers, and suggesting talking points.

Try this

If you have many direct reports, or the number is growing, figure out how to use AI to strip away the administrative drudgery, identify pain points and challenges, and speed up and refine your communications to team members. Free yourself from digging through dashboards and status docs so you can focus on what leaders are uniquely poised to do: setting direction, coaching, and building trust.

Try this

Sure, use AI to train a personal bot
But let people know when it isn’t really you

In 2024, Harvard Professor Raj Choudhury’s team trained an AI1 bot to mimic Zapier CEO Wade Foster’s communication style. When employees interacted with this digital doppelganger, they could only tell the difference between the real CEO and the bot 59% of the time.

Yet, when employees believed a response came from the bot, they rated it as less helpful—even when it was actually from CEO Foster himself. Your people don’t just want your words—they want you behind those words.

This is a new rendition of an old phenomenon. Long before AI, leaders delegated emails and even their “personal” texts to assistants or had staff ghostwrite their bylined social posts. When employees or followers find out, the reaction is the same: less trust, less weight, less belief that the words really matter.

Try this

If you use AI to extend your communication, say so. Be transparent about when it’s you and when it’s the bot. Otherwise, you risk eroding the trust you want and need from your team.

Try this

Do your homework, but ditch the soulless AI presence

AI-generated all-hands scripts and memos can make out-of-touch leaders sound overly slick, inauthentic, and disconnected from employees and customers. CEO and author Nancy Duarte, who has helped clients give better speeches and tell better stories for decades, is finding that, in this AI era1, many of the leaders she works with “lack the ability to walk into any room prepared to make people care.” While the best communicators still do A LOT of homework, they do it with the knowledge that people crave the imperfect, unfiltered presence of a leader—they want the pauses, the rough edges, those bits of humanity that convey, “I’m real, I’m here with you, and you can trust me.”

That human touch matters less in some jobs, though. Wharton Professor Lindsey Cameron found that many gig workers actually prefer being managed by algorithms. In her seven‑year study2 of ride‑hail drivers, many said they valued the schedule flexibility of AI‑driven apps—hundreds of quick interactions a day, constant feedback, and freedom from biased or micromanaging bosses. Cameron shows that these systems create what she calls “choice‑based consent”: even within tight constraints, workers feel a sense of agency and mastery that keeps them engaged in the job. For them, algorithmic management and communication systems work because the job revolves around completing tasks quickly at an attractive piece‑rate, not emotional connection among managers and co‑workers.

Try this
  • Swap overly scripted all‑hands for live Q&A sessions. But do your homework: brief panelists on your goals for the conversation and on “tone, timing, and roles.”
  • Trade slick keynote decks for messy, real‑time discussions with your team. Be prepared with facts, stories, and backstage knowledge about people’s hopes and concerns. And remember that AI can help you prepare for such discussions, but, as Duarte puts it, “what it can’t replace is your ability to read the room.”

Name the j‑curve

Leaders often search for that AI quick win. But the reality is that when an organization implements any new tool or process, productivity usually dips before it improves. Northwestern Professor Hatim Rahman reminded us: “There needs to be the willingness and importance to invest in the difficulty of implementing a new technology like AI. Economists call this…the productivity j‑curve.”

The first wins—drafting emails, summarizing notes, generating boilerplate code—come fast. But then the harder work begins: redesigning workflows, retraining employees, rethinking roles. That’s when productivity hits the bottom of the curve. Rahman warns, “There has to be a willingness to view this as a long‑term change process of an organization.”

Try this

Name the productivity j‑curve before it hits. Tell teams to expect that AI may make work slower before it makes it faster—because the flow of work through your organization needs to be rewired and people need time to learn the new system and to do fast, effective, and innovative work together.


Acknowledge—and learn from—ballyhooed changes that let people down before

AI doesn’t arrive in your organization on a blank slate. Most workplaces are littered with the ghosts of past “transformations” that were supposed to make things easier: ERP systems that slowed everyone down, new tools that doubled the clicks, reorganizations that created more confusion than clarity. Some of these changes cost people jobs—and then later, their employers figured out their skills were essential.

Hatim Rahman notes, “A lot of times workers have experienced changes where they’ve been told something is going to improve their lives, and it hasn’t. It’s either made their job more complicated or made them redundant. So it’s not surprising that they’re skeptical.”

If you ignore such past stumbles, you’ll lose trust. Kyle Forrest, who leads Deloitte’s “future of HR” efforts, pointed us to Deloitte research1 showing that workers in high‑trust companies are more than twice as likely to feel comfortable using AI as those in low‑trust companies.

Try this

Before you roll out new AI tools, name and discuss how past changes went wrong. Describe what you’ve learned, what you’ll do differently, and how you’ll measure whether it’s working this time. Acknowledge and listen to people’s skepticism, fears, and suggestions.


Use three questions to gauge when to try bold experiments or tread carefully

So how do you decide what’s safe to try? Start with these three questions:

  1. Is it reversible? If it flops, how easy is it to stop or undo? Think Jeff Bezos’ “one‑way” versus “two‑way” doors. If it’s a two‑way door—easy to reverse—go faster.
  2. What’s the objective risk? How much harm could this do to people, performance, or reputation if it fails? High upside with low downside is usually worth a bet.
  3. What’s the political risk? Sometimes the danger isn’t the idea failing—it’s succeeding and upsetting the wrong powerful, selfish people.

These three questions won’t guarantee success. But they’ll give you a rough compass for where to place your bets, and where to hold back.

And above all, remember AI isn’t going to fix work.

You are.