Your First 90 Days with an AI Strategy
Every company serious about AI has a slide deck.
Usually 30-something slides. A 2x2 matrix, a capability map, a strategic roadmap. Professionally designed. Almost never going to ship.
Strategy without a concrete 90-day arc doesn't have an address. It's aspirational. Inert. The companies that actually build AI products treat the first 90 days as a sequenced execution problem, not a planning one. Most don't.
Days 1-30: Find the one thing that already hurts
The instinct in month one is to audit. Map workflows. Evaluate vendors. Run workshops. Produce a prioritized opportunities list.
Don't.
The gap between "this workflow is inefficient" and "AI can fix it" is enormous, and crossing it requires real knowledge of what current AI can and can't do. Most prioritization exercises produce lists that are either impractical or already solvable with existing software.
What actually works: find the one workflow that hurts most and has a clear input-output structure. Not "improve customer satisfaction." Something like: we spend 14 hours a week manually classifying support tickets into 12 categories, and we're wrong 30% of the time.
That's an AI problem. It has a specific input, a specific output, and a measurable baseline. Start there.
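What "a specific input, a specific output" means in practice can be sketched in a few lines. Everything here is illustrative: the category names and the `classify_ticket` stub stand in for a real model, but the shape — text in, one category out — is the point.

```python
# Hypothetical sketch of a workflow with clear input-output structure.
# The categories and the keyword heuristic are placeholders for a model.

CATEGORIES = ["billing", "login", "shipping", "refund"]  # e.g. 12 in practice

def classify_ticket(text: str) -> str:
    """Specific input (ticket text) -> specific output (one category)."""
    lowered = text.lower()
    for category in CATEGORIES:
        if category in lowered:
            return category
    return "other"  # tickets that don't match any category
```

If you can't write the problem down as a function signature like this, it's probably not yet an AI project — it's still a goal.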
Days 30-60: Prove it with the smallest possible thing
This is where most AI strategies die.
"Implement AI across customer success" is not a project. It's a program. Programs need governance, headcount, and budget cycles. They don't ship in 60 days.
The best AI strategists I know are maniacally scope-constrained during this phase. Don't build a full system. Don't evaluate five vendors. Pick one model, run it against last month's tickets, measure accuracy against your human baseline. A two-week experiment, not a two-quarter initiative.
When you show 90% classification accuracy against a human baseline of 70%, the next conversation is completely different. You're not pitching AI anymore — you're expanding something that's already working. That's a different room to walk into.
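The two-week experiment can be as small as this. A hedged sketch, assuming you already have last month's tickets with ground-truth categories, the labels humans assigned, and the labels one model produced — the data below is made up to show the shape of the comparison, not real results.

```python
# Minimal pilot evaluation: one model, last month's tickets, accuracy
# measured against the same ground truth as the human baseline.

def accuracy(predicted: list[str], actual: list[str]) -> float:
    """Fraction of labels that match the ground truth."""
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

# Illustrative data standing in for last month's labeled tickets.
ground_truth = ["billing", "login", "refund", "billing", "shipping"]
human_labels = ["billing", "login", "billing", "refund", "shipping"]
model_labels = ["billing", "login", "refund", "login", "shipping"]

print(f"human baseline: {accuracy(human_labels, ground_truth):.0%}")
print(f"model:          {accuracy(model_labels, ground_truth):.0%}")
```

No vendor evaluation, no infrastructure, no program. Two lists and a division.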
Days 60-90: Instrument before you scale
The thing that works in a pilot almost always needs real engineering before it becomes a product.
Most companies skip this. They go from pilot to scale and then wonder why real-world performance doesn't match the demo.
Before you expand: instrument. Add logging. Define the fallback when the model fails. Figure out what "good enough" looks like at 10x volume. If the system can't explain its decisions to a skeptical internal stakeholder, it's not ready to grow.
An AI system without observability is a science project with a good deck. Instrumentation is what turns a promising pilot into something a company can actually run.
The job isn't the strategy
Most AI strategy engagements stop after month one. The audit, the prioritization, the roadmap — that's where the deliverables live.
The real leverage is in days 30-90. Staying scope-constrained during piloting. Knowing which vendors can deliver in practice, not just in the sales cycle. Recognizing when a pilot is ready to become a product versus when it should be shut down cleanly.
Strategy that stops at the roadmap is a consulting deliverable. Strategy that runs through implementation is the whole job.
The slide deck was never the problem. Stopping there is.

Need a Fractional Head of AI?
I help companies build an AI operating system — shared context across teams, AI handling the repetitive work, and your people focused on what actually matters.
15+ years in tech · 12+ AI products shipped · 3 Fortune 500 brands