
Why the ROI Framing Kills Good AI Projects

Colin Gillingham · 4 min read

ai-consulting · ai-strategy · ai-strategist · ai-implementation · enterprise-ai

ROI sounds like rigor. In AI, at the pilot stage, it's a filter that kills the most valuable work first.

The companies reaching for ROI too early aren't being disciplined. They're using a metric designed for known investments to evaluate unknown ones.

You can measure ROI on warehouse automation because you have historical throughput data, known labor costs, and a clear baseline. You don't have that at the start of an AI pilot. You have a hypothesis.

How the ROI filter selects for mediocrity

When you demand ROI before a project gets funded, you don't get the best AI projects. You get the ones easiest to measure.

That's usually cost reduction. Automating something a human was already doing. Cutting headcount in a documented process. These get approved because the denominator (current cost) is known. The numerator (cost after AI) is estimable. The pitch writes itself.

The problem: most of the interesting AI work isn't cost reduction. It's capability expansion. Things you couldn't do before, delivered at a speed you couldn't match before.

A prospecting agent that researches every account overnight. A feedback loop that improves your model every 48 hours.

Neither of these has a tidy ROI calculation at the pilot stage. The baseline is zero because the capability didn't exist. The value is real, but distributed across product quality, speed, and decision confidence, not a line in your P&L.

The ROI filter kills them, not because they're bad ideas, but because they don't fit on a spreadsheet.

The forcing function that actually works

The better question at the pilot stage is: what does this teach us if it works, and what does it teach us if it fails?

That's learning value. And it's a better forcing function for frontier work than ROI.

A company I worked with wanted to pilot AI in their customer escalation workflow. The ROI case was thin. Savings from faster routing didn't justify the engineering cost on a 12-month horizon. But the learning case was strong. They'd find out whether their support data was clean enough to feed a classification model, whether agents would trust and act on AI recommendations, and whether the latency profile was acceptable for live workflows.

Those answers were worth more than the savings estimate, so they ran it. The second project paid for itself in six weeks.

The ROI framing does something more insidious, too: it trains the organization to think about AI purely as an efficiency play. Two years of cost-reduction projects later, your competitors who invested in capability expansion are doing things you structurally cannot.

What to ask instead

Before I greenlight any pilot now, I push for one answer: what does success here enable that we can't do today?

That question forces a concrete claim about the world after the work succeeds. "We'll save 20 hours a month" is easy to say. "We'll be able to do X for the first time" requires actually knowing what you're building.

The companies getting the most out of AI aren't the ones with the best ROI models. They're the ones who decided early that some investments are worth making without a spreadsheet, because the capability they'd gain was worth more than the certainty a calculation would give them.

ROI is a great tool for running a business, not for exploring a frontier.
