3 things you need to do before putting AI in the loop
AI will confidently answer any question it gets. Make sure the humans at your company can follow a process before giving it to AI to amplify. Read: "Everything Starts Out Looking Like a Toy" #255

Hi, I’m Greg 👋! I write weekly product essays, including system “handshakes”, the expectations for workflow, and the jobs to be done for data. What is Data Operations? was the first post in the series.
This week’s toy: a brilliant bit of DIY marketing by Posthog - creating a fun developer toy called Deskhog to help developers think about new ways to use their eventing architecture. To be clear, you don’t need to care about Posthog to think this little computer in a 3D-printed box is cool.
Edition 255 of this newsletter is here - it’s June 16, 2025.
Thanks for reading! Let me know if there’s a topic you’d like me to cover.
The Big Idea
A short long-form essay about data things
⚙️ 3 things you need to do before putting AI in the loop
A cool thing you can do right now with LLMs (ChatGPT, Claude, Gemini, or similar) is ask them how they would serve your customer based on the public information in your knowledge base.
Go ahead: ask the bot a question a typical customer would ask your team and see what the response says. Depending on the complexity of the question and the quality of your online help center, you’ll get an answer that might be completely wrong. But it will sound confident and correct.
That’s the same thing that might happen if you use an AI process to help customers without first evaluating a few key steps in your existing customer process:
Do the humans who answer your customers have a clear idea of what they are doing, or are they confused?
Do the metrics you measure stay consistent each month or quarter, and are their outcomes congruent with your customers’ goals?
Do the dashboards you use to measure results clearly state the impact when these results change?
If you’ve got a clear view of every customer, understand your metrics, and have actionable data dashboards that demonstrate the impact of a change, you’re ready for AI to take the wheel and drive customer conversations autonomously (at least for some of the most typical questions).
But before you drive your customer support more autonomously, it would be wise to pay attention to these three steps and pave the way for better results when you ask AI to participate in the customer loop.
Let’s take a look at each of these ideas.
Great AI needs clear human thinking
One of the hallmarks of amazing customer service is the combination of the right tone with acknowledgement, information, and action.
When a customer contacts you, they are typically seeking:
information (when’s my order going to arrive)
action (could you please cancel my order)
escalation (I’d like to speak to your manager about my order because I don’t know how to get what I want)
Obviously, this is a simplification and some interactions are more complicated. But they start here, and the humans who work in customer service need to follow a set of rules (a procedure governed by a policy set by the organization). Great reps manage to thread the needle, while others get confused because the rules are not deterministic or require hidden information to solve the problem.
This sounds like the conditions that LLMs require: deterministic rules plus context on the customer to get to a good outcome. If you don’t have clear rules (policy and procedure) to answer customer questions, you’re going to get inconsistent results with humans or with AI.
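To make the idea of deterministic rules concrete, here is a minimal sketch of the kind of written policy a rep (or an AI) could follow without guessing. The request categories, field names, and the "cancel only before shipment" policy are all invented for illustration, not taken from any real support system:

```python
# A hedged sketch: deterministic routing of customer requests.
# Categories mirror the three kinds of requests above; the policy
# details (e.g. when cancellation is allowed) are hypothetical.

from dataclasses import dataclass

@dataclass
class Request:
    category: str      # "information", "action", or "escalation"
    order_status: str  # e.g. "processing", "shipped"

def route(req: Request) -> str:
    """Apply the same written policy a human rep would follow."""
    if req.category == "information":
        return f"Answer from order data: status is {req.order_status}"
    if req.category == "action":
        # Policy: cancellations are allowed only before shipment.
        if req.order_status == "processing":
            return "Cancel order"
        return "Explain return process"
    # Anything else goes to a person, not the bot.
    return "Escalate to human agent"
```

The point of the sketch is that every branch is decided by data you already have; if two reps (or a rep and a model) would answer differently given the same request, the rule isn’t deterministic yet.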
It’s hard to compare last month with this month
If you find yourself saying these words, your metrics might have changed or you might not be mapping them to key outcomes in the business. For most businesses, certain things are obvious: new logos gained, revenue counted, and leads converted. Others are less clear: “how many customers are happy”, “what’s a good customer vs. a bad customer”, “how can we tell when people are really engaged?”
One of the reasons it might be hard to compare this month with last month is that once you change a definition, you might not have the source data to fill in that information over time. The time series data that you need to answer a question might not exist. So you end up guessing, and the root cause is that the original metrics didn’t come close enough to counting the important facts of the business.
When you’re considering adding AI to the mix, take a look at the metrics you use to manage and confirm that they have a deterministic definition. If you can point at any one customer and calculate the answer given the available data, you have something AI could count for you.
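Here is what "point at any one customer and calculate the answer" might look like in practice. This is a sketch with an invented definition of "engaged" (a field name and threshold I made up), not a recommendation for your specific metric:

```python
# A hedged sketch of a deterministic metric: given any single customer's
# data, anyone (human or AI) computes the same answer. The field name
# and the 4-login threshold are illustrative assumptions.

def is_engaged(customer: dict, min_logins: int = 4) -> bool:
    """Engaged = logged in at least `min_logins` times in the last 30 days."""
    return customer.get("logins_last_30d", 0) >= min_logins

customers = [
    {"id": 1, "logins_last_30d": 7},
    {"id": 2, "logins_last_30d": 1},
]
engaged_count = sum(is_engaged(c) for c in customers)  # counts customer 1 only
```

Because the definition is a pure function of available data, the count is reproducible month over month, and a model asked to "find disengaged customers" is applying your rule rather than inventing one.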
Now think: if I uncovered something important that needed remediation, what happens next? This might be the most important part of your automation journey because if the answer is “ignore it” AI might help you do that more efficiently. If the answer is “spend lots of time solving that problem”, how are you going to react when you find more instances of that problem than you do today?
Dashboards often hide the actual problem
Consider your emotional reaction when you see a simple dashboard. If it’s a revenue number, when it goes up: hooray! When it’s a churn number or a dissatisfaction number and goes up: boo!
Your brain is wired to respond with a snap judgment to these numbers, and if the query behind them is filtered, or if the field being used to judge is wrong, the entire conclusion could be flawed.
That’s why it helps to establish a base metric that teams rely on with a shared, public definition. If anyone wants to check the number, they can create a report with that definition, or better, use an approved metric tied to a time grain (monthly count, daily count, yearly count of a thing).
The context of that number is everything.
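A metric "tied to a time grain" can be as simple as one shared roll-up function, so this month and last month are computed the same way from the same events. The event shape below is an assumption for illustration:

```python
# Sketch of an approved metric at a monthly grain: distinct active
# customers per (year, month). The event records are invented sample data.

from collections import defaultdict
from datetime import date

events = [
    {"customer_id": 1, "day": date(2025, 5, 3)},
    {"customer_id": 2, "day": date(2025, 5, 20)},
    {"customer_id": 1, "day": date(2025, 6, 2)},
]

def monthly_active_customers(events):
    """Count distinct customers per (year, month) grain."""
    grain = defaultdict(set)
    for e in events:
        grain[(e["day"].year, e["day"].month)].add(e["customer_id"])
    return {month: len(ids) for month, ids in grain.items()}

# monthly_active_customers(events) -> {(2025, 5): 2, (2025, 6): 1}
```

With the grain fixed in one place, a dashboard that shows the number dropping from 2 to 1 is comparing like to like, and the conversation can move to why it changed instead of how it was counted.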
Making sense of data for AI
Let’s think about AI as a helpful intern. You’d like to define a task, set a procedure for the AI to follow to deliver a great service experience, guided by a thoughtful policy. If the customer doesn’t appreciate the result or wants to talk to a person, they ought to be able to do so.
But the fundamental tenets of this experience need to be in place before you think about adding AI to scale the result. Your team needs to accurately identify, triage, and manage a problem based on a known procedure and an easy-to-find policy. That team must not be confused when they hit an edge case that’s not explicitly listed in your documentation.
By focusing on some clearly defined key metrics, that team will also be able to use the tools at their disposal to know if the trends happening are improved or not with the use of AI. They’ll know that answer because they have already built the tools to make that analysis today.
Before you place AI in the loop, ask yourself: does the team know how to solve this? Is it clearly defined for the measurement of that metric over time? And does the team know how to answer when the number changes and highlight key reasons for the change?
Here are a few suggestions to make your process ready for AI:
Clear rules for humans: Ensure your people know exactly how to solve common problems.
Stable metrics: Define metrics clearly and keep them consistent over time.
Transparent dashboards: Ensure your dashboards explicitly highlight meaningful changes and why they matter.
What’s the takeaway? Before trusting AI with your customer interactions, clarify your human procedures, stabilize your metrics, and simplify your dashboards. AI amplifies whatever foundation you provide (good or bad) so strengthen your basics first. With clear foundations, AI becomes a superpower; without them, it magnifies confusion.
Links for Reading and Sharing
These are links that caught my 👀
1/ AI Eats the World - Benedict Evans writes an annual must-read presentation on the tech industry. This year’s effort is on AI eating the world.
2/ How do you multimodal? - Netflix has long been at the forefront of establishing tech standards to describe digital content. They are now trying to define a data catalog for concepts that cross traditional boundaries. What do you call the Disney character who stars in a game and a movie? (Besides well-branded.) Check out how Netflix is thinking about this data model.
3/ There is no technology silver bullet - A former colleague always told me when I got excited about the next new thing: there is no silver bullet that will solve this problem. So it’s a great time to re-up this classic essay from Frederick Brooks (whom you might remember as the author of The Mythical Man Month, about software estimation) on the existence (or lack thereof) of a Technology Silver Bullet.
What to do next
Hit reply if you’ve got links to share, data stories, or want to say hello.
The next big thing always starts out being dismissed as a “toy.” - Chris Dixon
Another great newsletter, Greg... you've added a "prong" to my request classification. I've always sorted them into the "navigational," "transactional," and "informational" categories... now I need to add "elevational!"