What I Tell Every Company in the First Meeting
Every first meeting is the same.
Different companies, different industries, different budgets — but within fifteen minutes I'm saying the same five things I said in the last conversation.
The mistakes are identical even when the companies aren't.
Most companies come into an AI conversation carrying the same set of wrong assumptions. And until those assumptions get cleared, nothing useful gets built.
Here's what I actually say.
Prioritization is the real problem, not AI
Every company I talk to has more AI use cases than time to test them. Automating support. Summarizing sales calls. Personalizing emails. Scoring leads.
All of them sound reasonable, and all of them will fail if you run them simultaneously.
The first job of an AI strategy is killing the wrong use cases fast enough to focus on one — not picking the right one from a list. Companies that run five AI experiments at once end up with five half-finished proofs of concept and no clear signal about what's actually working.
Pick one. Run it until it succeeds or fails conclusively. Then pick the next one.
Your data isn't as ready as you think
"We have tons of data" is what I hear in almost every first meeting. That's usually true. The data being in a usable state is usually not.
Structured data lives in three different CRMs with inconsistent field names. Customer feedback is split across Zendesk tickets, NPS surveys, and a spreadsheet someone made in 2019. Sales call recordings are in a tool that doesn't have an API.
The gap between "we have data" and "we have data an AI system can actually learn from" is where most projects stall. It's work that has to happen before the glamorous work can start.
I ask to see the actual data in the first meeting. Not a deck about the data. That conversation resets timelines by about two months, almost every time.
The ROI you're expecting isn't the ROI you'll measure
Companies usually describe AI goals as cost savings. Fewer headcount hours on task X. Lower cost per support ticket. Reduced spend on agency Y.
That's a reasonable place to start, but it's rarely where the real return shows up.
What AI actually changes is harder to quantify: decisions made with better information, response times that weren't previously possible, personalization at a scale that wasn't cost-effective. A sales team using AI to research accounts spends 40 minutes on prep that used to take half a day. That doesn't show up cleanly in a spreadsheet.
If you measure ROI only through cost savings, you'll undercount the return and optimize for the wrong things. I've watched companies abandon AI initiatives that were clearly working because the savings column didn't justify the cost.
Build your measurement framework before you build the system. Include speed, quality, and decision-making outcomes alongside cost.
You need an owner, not a committee
AI projects die by committee. Marketing has opinions about brand voice. Legal has concerns about outputs. Engineering thinks the timeline is unrealistic. The executive sponsor loses interest after the first demo.
One person with decision-making authority and accountability for the outcome is the fix — not a better steering committee.
That person doesn't need to be technical. They need to understand the business problem, have authority to make tradeoffs, and care enough to push through when it gets hard. In most companies, that's a product manager or a senior ops lead.
When I ask "who owns this?" and the answer is "the AI working group," I know we're not ready to build yet.
This is going to take longer than you think
The first timeline I hear is almost always optimistic by a factor of two. Six-week pilots that take three months. Q3 launches that slip to Q1. Quick wins that require two months of data cleanup first.
AI work just looks like this. You're integrating with systems that weren't built for it, working with data that wasn't structured for it, building something that doesn't exist yet. You can't estimate it accurately until you've built it.
The companies that get AI right aren't the ones that hit their original timelines. They're the ones that stay in motion when the timeline shifts instead of calling it a failed experiment.
Every first meeting ends with the same question: "So how do we get started?"
Pick one problem, one owner, and a definition of success — before writing a line of code.
The companies that start there tend to finish.

Need a Fractional Head of AI?
I help companies build an AI operating system — shared context across teams, AI handling the repetitive work, and your people focused on what actually matters.
15+ years in tech · 12+ AI products shipped · 3 Fortune 500 brands