
AI ROI: Why Most Companies Measure It Wrong

Colin Gillingham · 4 min read

ai-consulting · ai-strategy · enterprise-ai · ai-implementation · ai-leadership

Most companies measure AI ROI with a cost-savings lens. That lens was built for a different kind of automation.

RPA. Robotic process automation, rules-based workflows. Tools that removed a human, saved a salary, and let you count the delta. Clean math.

AI isn't that kind of tool.

There are three things AI can produce: cost reduction, speed, and new capability. Companies that measure only the first category routinely shut down projects that were working, because the value landed somewhere the metrics weren't pointed.

The most undervalued category is capability

Speed is the easy second category. Things that used to take weeks happen in hours. Not because you removed headcount, but because you removed wait time. Faster cycles mean more attempts, and more attempts mean more learning.

New capability is harder to see and worth the most. AI lets you do things that were cognitively impossible at the required scale: personalizing every customer touchpoint, synthesizing thousands of support tickets into patterns, running A/B tests your team never had time to design.

If you're only measuring cost savings, you're optimizing for the smallest thing AI can do for you.

The wrong metric kills working projects

I've watched companies shut down AI initiatives that were working.

A customer service team deployed an AI triage system. Response time dropped from 4 hours to 22 minutes. Customer satisfaction went up 18 points. They handled 40% more volume without adding headcount.

The ROI report came back: "No headcount reduction, so cost savings don't materialize." Project deemed unsuccessful.

That's a measurement problem.

When you define cost savings as the success metric before the project starts, you've already told the team which direction to optimize. You'll miss everything else the system produces.

What to measure instead

Measure capability gained, not just cost eliminated.

Build your measurement framework around a real baseline first. What does the current process cost? How long does it take? What's the output quality? Capture this before you build anything. Skip it and you'll never prove ROI even when it's obvious.

Then track speed delta. How much faster is the AI-assisted version? Time is money, but it's also optionality. Faster cycles mean more learning.

Finally, connect AI output to a downstream business number. If AI is improving your pricing decisions, track revenue per customer. If it's improving support, track churn. "Our AI-assisted forecasting improved deal close rate by 6 points" is a story the board understands. "We saved 200 hours of analyst time" gets cut in Q3.
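To make the three categories concrete, here's a minimal sketch of that measurement framework in Python. The structure (baseline vs. after, three deltas) comes from the article; the specific numbers are illustrative, loosely borrowed from the triage anecdote above (4 hours to 22 minutes, satisfaction up 18 points), and the cost figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ProcessMeasurement:
    cost_per_cycle: float     # fully loaded cost of one run of the process
    hours_per_cycle: float    # wall-clock time from request to result
    downstream_metric: float  # the business number AI output connects to
                              # (close rate, CSAT, revenue per customer)

def roi_report(baseline: ProcessMeasurement, after: ProcessMeasurement) -> dict:
    """Compare an AI-assisted process against its pre-AI baseline
    across all three value categories, not just cost."""
    return {
        # Category 1: cost eliminated per cycle
        "cost_delta_per_cycle": baseline.cost_per_cycle - after.cost_per_cycle,
        # Category 2: speed delta, as a multiplier
        "speedup": baseline.hours_per_cycle / after.hours_per_cycle,
        # Category 3: movement in the downstream business number
        "downstream_lift": after.downstream_metric - baseline.downstream_metric,
    }

# Illustrative numbers only (hypothetical): 4h -> 22min, satisfaction 0.62 -> 0.80
before = ProcessMeasurement(cost_per_cycle=1200.0, hours_per_cycle=4.0,
                            downstream_metric=0.62)
now = ProcessMeasurement(cost_per_cycle=1100.0, hours_per_cycle=22 / 60,
                         downstream_metric=0.80)
print(roi_report(before, now))
```

The point of the structure is that a report like this can't come back as "no cost savings, project failed": all three categories are in the same output, so the speed and capability wins are visible even when the cost column barely moves.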

When leadership keeps asking about cost

They're not wrong to ask. But they're using a single-dimension lens on a multi-dimensional problem.

The reframe: you're not replacing labor, you're compressing the cost of quality at scale. A team that spent three weeks on competitive analysis now spends two days and produces a better output. That's not a headcount reduction. That's a quality-per-dollar improvement that compounds.

Picking the right first use case, one with a success metric tied to a real business outcome, makes this conversation much easier before it happens. I wrote about how to pick that use case if you want the filter.

The companies that figure out how to measure AI correctly are the ones that keep compounding their advantage. The ones measuring AI like ERP software will keep getting disappointing results, not because the AI isn't working, but because they're measuring the wrong thing.
