
The AI Audit: What to Do Before You Build Anything

Colin Gillingham · 4 min read
ai-consulting · ai-strategy · ai-implementation · fractional-ai · enterprise-ai

Most companies start their AI journey by buying something.

A ChatGPT Enterprise seat. A copilot add-on. A vendor demo that looked great on a Thursday afternoon. None of these are bad choices in isolation. But most companies are making them before they can answer a single question about what they're trying to accomplish.

The AI audit exists for that gap: not a bureaucratic gate, but a forcing function that surfaces five questions every company needs to work through before they start.

I run this with every client on day one. The answers tell me almost everything. And most companies have never thought about them.

Where does a human still make a decision they wish they didn't have to?

This is the most important question, and almost never the first thing anyone brings up.

"Where could AI improve things?" is too open. It lets people gesture at everything and commit to nothing. I want the specific pain: the thing someone does 40 times a week that feels mechanical but takes judgment. The meeting that exists only to sync on something that should have been automated years ago.

That's your first real use case. Anything chosen without that kind of specificity is a guess.

What data exists, where does it live, and how clean is it?

AI is downstream of data. If your customer records are split across three CRMs and a spreadsheet someone's maintained since 2019, the AI problem is actually a data problem, and AI will make it louder.

I've worked with companies that wanted to build predictive churn models before they'd ever defined "churned" consistently across their own systems. Models can't do what humans do: squint at ambiguous labels and adjust on the fly.
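To make the churn example concrete, here's a minimal sketch of the kind of consistency check an audit might run. The two system schemas, the 90-day window, and every record below are hypothetical, invented purely for illustration:

```python
from datetime import date

# Hypothetical records for the same two customers, pulled from two systems.
crm = {
    "cust_1": {"last_purchase": date(2024, 1, 10)},
    "cust_2": {"last_purchase": date(2024, 6, 1)},
}
billing = {
    "cust_1": {"subscription_active": True},
    "cust_2": {"subscription_active": False},
}

AS_OF = date(2024, 6, 30)

def churned_per_crm(record, as_of=AS_OF, window_days=90):
    """CRM's definition: no purchase in the last 90 days."""
    return (as_of - record["last_purchase"]).days > window_days

def churned_per_billing(record):
    """Billing's definition: subscription no longer active."""
    return not record["subscription_active"]

# Customers the two definitions disagree about.
disagreements = [
    cid for cid in crm
    if churned_per_crm(crm[cid]) != churned_per_billing(billing[cid])
]
print(disagreements)
```

Here both customers come back in `disagreements`: the CRM calls cust_1 churned (no recent purchase) while billing calls them active, and vice versa for cust_2. Until a single definition wins, any model trained on either label is learning a coin flip.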

Sometimes the most useful audit output is: you need six months of data cleanup before this is worth starting. That's not a delay; that's the right call.

What does good look like, and can you measure it?

Ask a team what success looks like for their AI project and you get two types of answers: a metric or a vibe.

"Cut time-to-response from 48 hours to 4" is a metric. "We want our team to feel more supported" is a vibe.

You can build toward a metric. You can't evaluate against a vibe.
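The time-to-response metric above can be checked in a few lines, which is exactly what makes it a metric. A quick sketch, with made-up ticket timestamps; only the 4-hour target comes from the example:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical (received, responded) timestamp pairs for three tickets.
tickets = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 11, 30)),
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 2, 9, 0)),
    (datetime(2024, 5, 2, 14, 0), datetime(2024, 5, 2, 15, 0)),
]

response_times = [responded - received for received, responded in tickets]
median_response = median(response_times)  # timedeltas sort, so median works
target = timedelta(hours=4)

print(median_response <= target)  # a yes/no answer a vibe can't give you
```

If the definition of done can be reduced to a comparison like that last line, you can track it weekly and know when the project has actually worked.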

If a team can't land on a measurable definition of done, that's the most important thing the audit found. You're not ready to build; you're still figuring out what you're solving.

Who owns this when I leave?

I come in, scope the work, and hand it off. If the answer to "who owns this" is "we'll figure that out," I push back hard.

I've watched what happens when AI projects succeed and there's no internal owner: the tooling decays, nobody updates the prompts, the model drifts, and a year later someone's calling me to "fix" a project that was working fine when I left.

Every AI initiative needs one person, not a committee, who understands it well enough to make decisions when no consultant is in the room.

Find that person before you build. If they don't exist, grow them or hire them.

What's the cost of being wrong?

If the AI is wrong (misroutes something, hallucinates an answer), what happens? Who catches it?

For some use cases, the cost is low. A customer service draft a human reviews before sending. A lead scoring model that ranks prospects imperfectly but still saves time. The economics work even with error.

For others, wrong has real consequences: medical referrals, financial approvals, decisions where a mistake costs something. These need a fallback. A human in the loop, at least until the system earns trust.
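The human-in-the-loop fallback can be as simple as a confidence threshold on every prediction. A minimal sketch; the 0.9 threshold and the function names are assumptions for illustration, not anything from a real system:

```python
REVIEW_THRESHOLD = 0.9  # hypothetical cutoff; tune per use case

def route(prediction: str, confidence: float) -> str:
    """Decide who acts on a model output: the system, or a person."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto: {prediction}"
    # Anything the model isn't sure about goes to a human reviewer.
    return f"human review: {prediction}"

print(route("approve referral", 0.97))  # confident: applied automatically
print(route("approve referral", 0.62))  # uncertain: a person catches it
```

For low-stakes use cases the threshold can start low; for medical referrals or financial approvals it can start at 1.0, meaning every decision is reviewed until the system earns trust.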

Mapping this before you build changes the design, the scope, and sometimes the decision about whether to build at all.


An audit is a compass, not a delay.

The five questions take about a day to answer honestly. The companies that skip them spend months building toward the wrong destination.

Colin Gillingham

Need a Fractional Head of AI?

I help companies build an AI operating system — shared context across teams, AI handling the repetitive work, and your people focused on what actually matters.

15+ Years in Tech · 12+ AI Products Shipped · 3 Fortune 500 Brands