How to Get Your Exec Team Aligned on AI
When everyone in the room is "pro-AI" and nothing is moving, the problem isn't the technology.
The alignment problem is almost never technical. It's political, definitional, and about competing priorities wearing the same vocabulary. I've been in these rooms. The CTO wants to build. The CFO wants a business case first. The COO is worried about the team's reaction. The CEO just saw a competitor's announcement and wants to move now. Everyone nodding. All talking about completely different things.
AI exposes four gaps that have usually been there for a while: competing priorities, definitional confusion, fear of accountability, and no shared frame for what "good" looks like.
Most strategic conversations run on inherited vocabulary
Executives know what it means to approve headcount, make a build/buy call, enter a new market. There are models for these decisions. There are precedents.
AI doesn't have that yet. When you say "we need an AI strategy," the CTO hears "we need an AI team," the CMO hears "AI-generated content," and the CFO hears "show me the ROI projections before I approve anything." Same meeting. Same words. Entirely different conversations.
The first thing I do in any enterprise AI adoption conversation is slow everything down and create shared definitions. Not shared enthusiasm. Shared definitions.
The four gaps I see every time
Every company has a version of these.
Definitional gap: No agreement on what "AI" means in your specific context. LLMs? The ML models already in your product? RPA? Automation broadly? Until this is pinned down, every conversation talks past itself.
Priority gap: Sales wants personalization. Engineering wants code assist. Legal wants guardrails. Finance wants a payback period. These don't conflict — but without coordination, you end up with four disconnected pilots that don't add up to anything. (I wrote about how to pick the right first use case — that decision usually surfaces the priority gap fast.)
Accountability gap: When an AI model misbehaves or an automation produces a bad outcome, who's responsible? In most companies, nobody has a clear answer. That makes everyone cautious about committing to anything real.
Mental model gap: Some execs think about AI as automation (replace tasks). Some think augmentation (make people better). Some think product moat (build competitive differentiation). These produce wildly different decisions. You need to know which frame is operating before you start arguing about tactics.
How to actually run the alignment conversation
I do this in two sessions when I have room, one long session when I don't.
First session: surface the gaps. Don't try to resolve anything. Each exec gets five minutes with two questions: "What does AI success look like for your function?" and "What's your biggest concern?" Write it on a whiteboard. The gaps become obvious on their own.
Second session: build the common frame. Use what came up in session one to co-create three things — a shared definition of what you're actually talking about (one crisp sentence, not a mission statement), a prioritization lens for what to do first, and an answer to who owns what.
The goal isn't full agreement. It's a shared frame that allows productive disagreements instead of unproductive confusion.
The question most teams skip
Who owns this when it goes wrong?
Everyone gets aligned on vision. The moment you ask about accountability, the room gets quiet.
That question determines whether anything actually happens. Without a clear owner, every initiative becomes a committee, every committee produces a report, and that report sits on a shelf next to the last AI strategy document.
I push for a single named owner before we finalize any strategy — a fractional AI lead, a product leader, someone with actual authority to make decisions. The AI strategy is only as real as the person who has to stand up and defend it in six months.
What real alignment actually looks like
Not heads nodding in the same direction.
It looks like an exec team where each person can explain in their own words what you're doing and why. A clear owner with real authority to decide. A shared definition of success with specific metrics attached, not vibes.
And it looks like a leadership team that has made explicit what it is not doing — at least for now. The companies that move fastest on enterprise AI adoption aren't the ones that said yes to everything. They're the ones that said no to most things and committed fully to two.
When the alignment is real, the strategy writes itself. The hard part was always the room.

Need a Fractional Head of AI?
I help companies build an AI operating system — shared context across teams, AI handling the repetitive work, and your people focused on what actually matters.