
What I Look for in the First Week of an AI Engagement

Colin Gillingham · 3 min

ai-consulting · ai-implementation · fractional-ai · ai-strategy · ai-leadership

The first week of an AI engagement is a diagnostic, not an orientation.

In seven days I can tell you with reasonable confidence whether the work is going to succeed. Not because I'm running sophisticated analyses. Because the signals are almost always obvious.

Here's what I'm watching.

Who shows up to the first call

Patterns tell me more than titles.

If the person who scheduled the call sends a delegate, that's data. If I ask "what does success look like in six months?" and get five different answers from five people in the room, that's the most valuable thing I could learn all day.

Misalignment at the top is the most common reason AI projects fail. Not technical debt or bad data or the wrong model. People who can't agree on what they're trying to do.

The first call is where that surfaces, if you let it.

What they say the blocker is

Every organization has a story about why they haven't moved faster. I listen carefully to what they name as the blocker.

"We're waiting on legal" means: there's a champion but no executive sponsor with real authority.

"Our data isn't clean enough" usually means: someone read a blog post and used it as a delay. The data is rarely as bad as people fear.

"We tried something last year and it didn't work" is the most useful. I want to hear everything — how they measured it, who ran it, why they think it failed. That postmortem is a window into organizational DNA.

What a company believes about its own past failures tells me more than any technical audit.

Whether anyone has touched the tools

I ask people to walk me through what they actually use at 2pm on a Tuesday. Not their tech stack on paper — what tabs are actually open.

Companies that make real progress almost always have a few people already experimenting — using Claude to draft, building automations in Zapier, playing with something they read about. No formal permission needed, just curiosity.

Companies that stall often have no grassroots adoption at all. AI is something that happens to other companies, or something leadership wants to "implement." Nobody's playing with it.

The divide between organizations where curiosity is alive and organizations where it's waiting to be granted is the real leading indicator.

What the first no looks like

Every engagement hits a constraint in week one. A data source that requires approval. A process that needs a committee. Something technically available but practically locked.

How a team handles that first no tells me a lot. Some route around it. Others treat it as evidence that the whole thing is too hard.

Problem-solving energy matters more to me than a frictionless path.

What they're actually afraid of

This one takes longer than a week, but the seeds show up early. Companies fear different things about AI: replacement, compliance exposure, being wrong in public, moving too fast. The fear shapes everything downstream.

I ask versions of "what would make this feel too risky?" early. The answer tells me which conversations to have and which stakeholders to bring in before things get further along.

The first week is about understanding what you're working with, not building. The organizations that let you see clearly are the ones you can help.


Need a Fractional Head of AI?

I help companies build an AI operating system — shared context across teams, AI handling the repetitive work, and your people focused on what actually matters.

15+ Years in Tech · 12+ AI Products Shipped · 3 Fortune 500 Brands