
What Responsible AI Actually Requires Operationally

Colin Gillingham · 4 min read
ai-strategy · ai-consulting · enterprise-ai · ai-implementation · ai-leadership

Responsible AI is a set of operational decisions about who reviews what and when.

Most companies write the principles first. Fairness. Transparency. Accountability. The language sounds serious. Then you ask who specifically reviews AI outputs before they reach customers, and the answer is some version of "we have a committee" — which means no one is doing it.

The review needs an owner

A FinTech company I worked with had their lending recommendation model running without a formal human review process. The responsible AI section of their values deck was three paragraphs. Their actual review process was a Confluence page nobody had updated since the pilot. When a regulatory audit arrived, they couldn't explain who had approved what.

The gap was ownership, not principles.

For any AI system making consequential decisions — customer-facing outputs, financial recommendations, hiring decisions — you need to document: who reviews this, at what frequency, and what authority they have to override. One named person, not a committee, not "cross-functional alignment."
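
As a sketch, that documentation can be a single machine-readable record per system. The field names and the example system below are hypothetical, not a standard schema; the point is that every field has one concrete answer.

```python
# A hypothetical review-ownership record. Field names and the example
# system are illustrative, not a prescribed format.
REVIEW_OWNERS = {
    "lending-recommendations": {
        "owner": "jane.doe",          # one named person, not a committee
        "frequency": "weekly",        # how often outputs are reviewed
        "override_authority": True,   # can the owner block or roll back outputs?
    },
}
```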

Output review is not model review

Most companies with any review process are reviewing the model. Accuracy rates, benchmark scores, evals. That's useful. It doesn't tell you what your AI is actually producing for real customers on an average Tuesday.

Output review is different. Sample 20 to 50 real production outputs every week or two. A human reads them. Not to catch every error — to catch patterns. The kind of systematic drift that doesn't appear in aggregate accuracy metrics until a customer escalates.
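
The sampling step itself is a few lines of code. A minimal sketch, assuming your logging layer can hand back recent production outputs as a list; the function name and data shape are illustrative.

```python
import random

def sample_for_review(outputs, k=30, seed=None):
    """Pick k production outputs for a human to read this cycle.

    `outputs` is whatever your logging layer returns -- assumed here to be
    a list of (output_id, text) pairs, purely as an illustrative shape.
    """
    rng = random.Random(seed)
    return rng.sample(outputs, min(k, len(outputs)))

# Usage: pull the last two weeks of outputs from your logs, then
# reviewed = sample_for_review(outputs, k=30)
```

The code is trivial on purpose. The hard part is the calendar commitment: a named human reads the sample every cycle.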

This is intentionally manual. Responsible AI requires human attention at recurring intervals, not just at deployment.

An escalation path that actually works

Who does an employee go to when something looks wrong? At most companies: a Slack message and a prayer.

I worked with a content company whose AI tool had been producing legally risky sentences for six weeks before someone mentioned it at lunch. A new employee. Uncomfortable. Mentioned it informally. There was no formal mechanism to raise a concern — just responsible AI language in the values document.

A real escalation path has three parts: a trigger (what counts as a problem), a destination (a specific person or channel), and a response time. No ticketing system required — just clarity on those three things.
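
Concretely, that can be one config entry per AI system. Everything below is a made-up example, not a prescribed format; what matters is that all three fields are filled in and written down somewhere findable.

```python
# One escalation entry per AI system. The trigger, destination, and
# response time here are hypothetical examples.
ESCALATION = {
    "support-reply-drafts": {
        "trigger": "any output making a legal, medical, or pricing claim",
        "destination": "#ai-escalations channel, owned by jane.doe",
        "response_time_hours": 24,
    },
}
```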

Incidents need a log

When something goes wrong — and it will — you need a record. Most companies fix the problem and move on.

Two reasons that's not enough. First, patterns: a single bad output is noise, but ten similar ones over three months means something is broken. You can't see the pattern if you're not keeping track. Second, accountability: if you can't answer "what went wrong with our AI systems in the last 12 months," you're not governing. You're hoping.
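
The log itself does not need infrastructure. A minimal sketch, assuming a flat append-only CSV file; the path and field names are illustrative, and a spreadsheet works just as well.

```python
import csv
import datetime

LOG_PATH = "ai_incidents.csv"  # hypothetical location
FIELDS = ["date", "system", "what_happened", "fix", "reported_by"]

def log_incident(system, what_happened, fix, reported_by):
    """Append one incident. Ten similar rows in three months is a pattern."""
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # file is new or empty: write the header first
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "system": system,
            "what_happened": what_happened,
            "fix": fix,
            "reported_by": reported_by,
        })
```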

The incident log is what separates genuine AI governance from a marketing exercise.

Start with operations, end with principles

The sequence most companies use is backwards. They write the principles document first because it's fast and it feels like progress. Then they try to make the principles real and discover the operational questions were never answered.

Answer four questions: who reviews what, how often, what triggers an escalation, and how incidents get logged. That's responsible AI. Write your principles document after, as a description of what you've actually built.

If you're earlier in this work, the starting framework for smaller companies is the right place to begin before you need the full operational layer.

A principles document describes your values; operations are how you prove them.
