
How to Pick Your First AI Use Case

Colin Gillingham · 4 min read

ai-strategy · ai-implementation · ai-consulting · enterprise-ai · ai-leadership

Most companies start by asking the wrong question: what's the most impressive thing AI can do for us?

The best first AI use case isn't the most ambitious one. It's the one that teaches you something and has a clear success metric you can measure before you build anything.

Here's the filter I use.

The metric has to exist before you build

If you can't describe what "better" looks like in one number or a simple yes/no, you don't have a use case. You have a wish.

"AI that improves customer experience" is a wish. "AI that reduces average support resolution time from 4 hours to under 2" is a use case. The metric doesn't need to be perfect, but it needs to exist before you write a single line of code.

I've seen companies build AI tools for six months and then realize they had no way to know if any of it worked. Define the metric first.
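One way to force this discipline: write the metric down as a tiny script before any AI work starts. A minimal sketch, assuming hypothetical support-ticket records with created/resolved timestamps (the ticket data and target here are illustrative):

```python
from datetime import datetime, timedelta

# Hypothetical ticket records: (created, resolved) timestamps.
tickets = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 14, 30)),
    (datetime(2024, 1, 2, 10, 0), datetime(2024, 1, 2, 12, 15)),
    (datetime(2024, 1, 3, 8, 0), datetime(2024, 1, 3, 11, 0)),
]

def avg_resolution_hours(tickets):
    """Average support resolution time in hours."""
    total = sum((resolved - created for created, resolved in tickets), timedelta())
    return total.total_seconds() / 3600 / len(tickets)

TARGET_HOURS = 2  # what "better" means, decided before building anything
baseline = avg_resolution_hours(tickets)
print(f"baseline: {baseline:.1f}h, target: under {TARGET_HOURS}h")
```

If you can't write this script today, you don't have the metric yet — and that's the real finding.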

The process has to be documented somewhere

AI doesn't invent process. It accelerates existing process.

If the humans doing the task today can't write down the steps they follow, the AI won't figure it out either. It'll just fail faster and more expensively. The first use case needs a documented workflow. Even a rough one. A Loom walkthrough, a Notion SOP, a spreadsheet someone maintains religiously. That documentation is the raw material.

If the process lives entirely in someone's head and changes every time, document it first. That's the prerequisite, not a detour.

The stakes have to be low enough to learn

Your first AI project is a learning project. You're figuring out what your data actually looks like, what your team's AI literacy is, where the friction in your tooling lives.

That requires being able to absorb mistakes.

"AI that makes autonomous credit decisions" is not a good first use case. "AI that drafts the first pass of credit memos for human review" might be. Same domain, different stakes. The draft can be wrong — a human catches it before it matters.

Build in a review layer while you're still learning. The alternative is a disaster that sets back AI adoption by two years.

What makes a use case worth doing first

Beyond those three filters, I look for one more thing: will building this teach you something generalizable?

A use case that processes one document type in one proprietary system teaches you almost nothing you can reuse. A use case that touches your CRM data, requires you to evaluate an LLM's output quality, and needs a simple UI for team feedback teaches you a lot. You come out with opinions about models, evals, and what your team can actually adopt.

Those opinions are worth more than the project itself.

The shortlist that actually works

When I work with a company in its first few weeks of AI strategy, three categories consistently show up as good starting points.

Internal document Q&A is technically simple, teaches you a lot about data quality and retrieval, and has a clear success metric. Draft generation for repetitive outputs — proposals, status updates, job descriptions — gives you a fast feedback loop and low risk. Classification and routing (tickets, leads, documents) puts AI in the decision-assist seat while humans review edge cases.

None of these are glamorous, and that's the point.
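The routing pattern in particular is easy to sketch: a classifier proposes a queue and a confidence, and anything below a threshold falls to human triage. A minimal sketch where `classify` is a stand-in for a real model (keyword rules here purely for illustration):

```python
# Decision-assist routing: confident predictions route automatically,
# uncertain ones land in a human-review queue.

def classify(ticket: str) -> tuple[str, float]:
    # Stand-in for a real classifier; returns (queue, confidence).
    text = ticket.lower()
    if "invoice" in text:
        return ("billing", 0.92)
    if "password" in text:
        return ("it-support", 0.95)
    return ("general", 0.40)

def route(ticket: str, threshold: float = 0.8) -> str:
    queue, confidence = classify(ticket)
    # Below-threshold cases are exactly the edge cases humans should see.
    return queue if confidence >= threshold else "human-review"

print(route("Invoice #1042 is wrong"))  # routed automatically
print(route("Something feels off"))     # falls to human triage
```

Tuning that threshold — and measuring how often humans overrule the model — is where most of the learning happens.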

Start where you'll learn, not where you'll impress

The companies that move fast with AI aren't the ones that started with the boldest use case. They started with something boring, learned from it, and moved into harder territory with real institutional knowledge behind them.

That foundation doesn't announce itself. It just makes everything after it faster.
