Dashboard metrics don't tell the whole story
One number can't tell you everything that's going on. Without context, metrics can mislead or confuse you. "Everything Starts Out Looking Like a Toy" #118
Hi, I’m Greg 👋! I write essays on product development. Some key topics for me are system “handshakes”, the expectations for workflow, and the jobs we expect data to do. This all started when I tried to define What is Data Operations?
This week’s toy: Decker, an interactive environment modeled on HyperCard that is part presentation layer, part scripting language, and part toy. It lets you create and deliver self-contained HTML documents that execute in the browser. This is a cool prototype that could become something really interesting and mainstream with a bit of design love (it’s still cool even if it goes nowhere). Edition 118 of this newsletter is here - it’s November 7, 2022.
The Big Idea
A short long-form essay about data things
⚙️ Dashboard metrics don't tell the whole story
We’ve all had the experience of looking at a chart or a component on a dashboard or slide and thinking: so what? It’s hard to tell what one number means unless you understand the metric and its definition, you’ve been following that number since the last time it was updated, and you know when it will be updated next.
This points to a key problem with metrics in general and dashboards in particular: they don’t give you context or help you know what to do next.
Consider a typical dashboard tile: a label and a number. If you don’t know what “Quality Leads” are or what this metric looked like in the past, you’re in the dark when trying to evaluate it.
Nothing here demonstrates how this metric fits into the overall lead process.
Poorly labeled metrics are lying to you
These badly constructed metrics are not truthful for a few reasons:
The time horizon is unclear - you see a current number and it’s not clear what time period this covers
What’s a good value - you don’t know the history or variability of this number
Lineage is missing - it’s not clear how to relate one metric to another
Setting a time horizon is not difficult: you state the period the metric covers, typically by putting both the time horizon and the cadence in the metric’s name or description. You’ve probably seen a version of this metric renamed “Daily Quality Leads” or “Weekly Quality Leads.”
After you set the time expectation, it’s time to make clearer what this metric really delivers. I define a metric as “something that changes from x to y by when.” We also need to know the definition of the underlying data and make that definition available in a description or other informational area of the metric.
To sum up, we need to know the following (there’s a short sketch in code after this list):
the definition, e.g. “leads that were marked ‘Quality’ because a process changed their status or because those leads/trialers showed high product usage”
the time frame, e.g. “last 7 days” - if you don’t state it, readers are left to guess whether this is a continuous number or a discrete one updated every week. Make it clear when to expect the next update.
the headline number, e.g. the count
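To make this concrete, here’s a minimal sketch of how those three pieces might travel together as metric metadata. It’s an illustration, not any particular BI tool’s schema; the field names and values are invented:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Metric:
    """A headline number plus the context needed to read it."""
    name: str          # e.g. "Weekly Quality Leads"
    definition: str    # what counts toward this number
    window_days: int   # the time frame the number covers
    cadence_days: int  # how often the number refreshes
    value: float       # the headline number itself
    as_of: date        # when it was last computed

    def next_update(self) -> date:
        """Tell the reader when to expect the next refresh."""
        return self.as_of + timedelta(days=self.cadence_days)

quality_leads = Metric(
    name="Weekly Quality Leads",
    definition="Leads marked 'Quality' via status change or high product usage",
    window_days=7,
    cadence_days=7,
    value=132,          # illustrative number
    as_of=date(2022, 11, 7),
)

print(f"{quality_leads.name}: {quality_leads.value:g} "
      f"(last {quality_leads.window_days} days; "
      f"next update {quality_leads.next_update()})")
```

Even this small amount of structure answers the three questions above without the reader having to ask.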
Knowing the definition leads to the next question.
What’s a good number?
Now that you have an idea of what you’re looking at, there are two things that will elevate a metric into a valuable observation:
Knowledge of the past range and variability of this metric
An idea of what to do about it
When you consider a number, how will you know if it’s good or bad? If it’s a new metric or one with a sparse history, it may be hard to compare the current value with a previous value. And unless you look at the median or the average over a number of measurements, it’s difficult to see any seasonality or fluctuation.
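As a sketch of how that comparison could work, the function below judges a new value against the metric’s own recent history. The 1.5-standard-deviation band and the sample numbers are arbitrary placeholders, not a recommendation:

```python
from statistics import median, stdev

def judge(current: float, history: list[float], band: float = 1.5) -> str:
    """Compare a value to the metric's own recent range.

    Flags values more than `band` sample standard deviations away
    from the historical median; the band width is arbitrary.
    """
    if len(history) < 4:
        return "not enough history to judge"
    mid, spread = median(history), stdev(history)
    if current > mid + band * spread:
        return "unusually high"
    if current < mid - band * spread:
        return "unusually low"
    return "within the normal range"

# Illustrative weekly counts for the previous eight weeks.
print(judge(132, [98, 110, 105, 120, 99, 117, 108, 112]))
# -> unusually high
```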
Now look at the number. Even once you know whether it’s good or bad, how do you understand its impact in context? The “good” or “bad” threshold may have been set arbitrarily instead of being derived from some other part of the process.
If a metric feeds another metric (or is sourced by another metric), you’d want to be able to zoom out (or in) to see the bigger picture.
What would a better metric do?
Here’s one example of what a better metric might do to display more context, uncover the source of the data, and suggest its reliability.
Let’s take a look.
The first improvement is to clarify the time period, the population, and the change over time. Adding an indicator of change since the last period gives you a sense of the metric’s direction.
A better metric might also give you a sense of the population size, label the time period, and provide a trending indicator for that metric over time.
A better metric would also define itself and link to other related definitions and metrics. By showing us the lineage of this data – where we are in the chain of connected metrics that make up a funnel or a process – a better metric shows how it fits into the larger whole.
If you squint, you can also imagine the ability to jump to a list of records shown or visualized by this metric, along with suggestions for what to do next to change this metric.
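Here’s a rough sketch of what such a metric “card” could carry. Everything about it (the names, numbers, and lineage) is invented for illustration:

```python
def metric_card(name: str, current: float, previous: float,
                population: int, definition: str,
                fed_by=(), feeds=()) -> str:
    """Render a plain-text metric card: headline number, change vs.
    the last period, population size, definition, and lineage."""
    delta = current - previous
    arrow = "▲" if delta > 0 else "▼" if delta < 0 else "→"
    lines = [
        f"{name}: {current:g} {arrow} ({delta:+g} vs. last period)",
        f"Population: {population} records in the period",
        f"Definition: {definition}",
    ]
    if fed_by:
        lines.append(f"Fed by: {', '.join(fed_by)}")
    if feeds:
        lines.append(f"Feeds: {', '.join(feeds)}")
    return "\n".join(lines)

print(metric_card(
    "Weekly Quality Leads", current=132, previous=117, population=640,
    definition="Leads marked 'Quality' via status change or high usage",
    fed_by=["Weekly Raw Leads"], feeds=["Weekly Opportunities"],
))
```

The “Fed by” and “Feeds” lines are what make the card navigable: they’re the hooks for zooming out to the funnel or in to the records.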
Metrics as a generalized pattern for work
The above example uses one process to show how metrics connect in a business. But what if you used the same logic to think about more generalized patterns? Then you’d create links between different metrics owned by different groups to measure (and, with enough measurements, perhaps start to predict) how things will go.
This is an interesting idea to pursue, though linking the performance of metrics is hard because it’s a multivariate problem. One idea I’ve thought about is to link the individual metrics and the processes that reference them into a graph where you can explore outcomes by finding the nearness of a metric to a positive outcome. At worst, this might help people understand how their work relates to metrics owned by other people or departments. At best, it might start highlighting the most important metrics for the business and trending them.
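Here’s a toy version of that graph idea, using hop count along the lineage as a crude stand-in for “nearness” to a positive outcome. The metric names and edges are invented, and real relatedness would need weights rather than hops:

```python
from collections import deque

# Invented lineage: each metric lists the metrics it feeds.
feeds = {
    "Weekly Raw Leads": ["Weekly Quality Leads"],
    "Weekly Quality Leads": ["Weekly Opportunities"],
    "Weekly Opportunities": ["Closed-Won Revenue"],
    "Support Tickets": ["Churn Rate"],
}

def hops_to(outcome: str, start: str) -> int | None:
    """Breadth-first search: steps from `start` to `outcome`,
    or None if this metric never feeds the outcome."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, dist = queue.popleft()
        if node == outcome:
            return dist
        for nxt in feeds.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

for metric in feeds:
    print(f"{metric} -> Closed-Won Revenue: {hops_to('Closed-Won Revenue', metric)}")
```

Even this naive traversal surfaces something useful: “Support Tickets” returns None for the revenue outcome, a prompt to ask whether that metric connects to the business result anyone cares about.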
What’s the takeaway? Making metrics better by adding labels and definitions will help the whole team. You can use this knowledge to better establish the relatedness between metrics and have more context to explain when things go well or aren’t going as well as you’d like.
Links for Reading and Sharing
These are links that caught my 👀
1/ It’s hard to stand out in a world of similar - It’s not just you: more brands and products look alike than ever before. Admiration of good design, the pressure to conform, and other factors drive this outcome. Another factor is under-appreciated: time to market is faster when you use something that already exists. Whether template use is noticed explicitly or not, “grabbing something from the parts bin” is one of the faster ways to get started. You might not notice conformity when it happens in underlying services used by everyone, but the leap required to create something truly new is large.
2/ The moviegoing experience of the future is personal - Image generation for video is getting better, and fast. This example shows how the leap from prompts to video is going to be more seamless than we expect.
With tech like this, validating an “official” version of content is going to get harder over time. Will creators license their characters to be used in fan fiction, or will it be easy to sell access to generated content? With faster computers, things are going to get weird.
3/ Culture-market fit > strategy - Evan Armstrong argues that product-market fit is actually driven by culture. If people believe that your solution is better, they’ll buy it. This makes sense if you believe that product-buying decisions are essentially individual emotional decisions and that buyers are not often rational actors. I’ve often described this emotional feedback loop as needing the buyer to feel either that you’ve made them a rockstar or that you’ve taken away significant pain that can’t be removed any other way. Armstrong’s description is more elegant but similar.
What to do next
Hit reply if you’ve got links to share, data stories, or want to say hello.
Want more essays? Read on Data Operations or other writings at gregmeyer.com.
The next big thing always starts out being dismissed as a “toy.” - Chris Dixon