Finding Product Insights from Missing Data
When you build new products for external or internal customers, it's the missing data that provides the most insight. Read: "Everything Starts Out Looking Like a Toy" #194
Hi, I’m Greg 👋! I write weekly product essays about system “handshakes”, the expectations for workflows, and the jobs to be done for data. What is Data Operations? was the first post in the series.
This week’s toy: the folks at Figma made some fun cursors for April Fool’s Day. These are admittedly goofy and harken back to a time when we customized more things about every UI experience to make it seem familiar and fun. Will generative tools bring back similar custom fun? We can hope. Edition 194 of this newsletter is here - it’s April 15, 2024.
Brought to you by Apollo.io, an all-in-one solution providing RevOps teams with access to data, enrichment, outreach, call intelligence, scoring, calendaring, and email automation. They integrate seamlessly with your CRM and existing workflows. The quality of Apollo’s data is unmatched: they are ranked #1 for contact and company data accuracy on G2, with over 6,000 reviews and a 4.8-star rating. Learn more …
If you have a comment or are interested in sponsoring, hit reply.
The Big Idea
A short long-form essay about data things
⚙️ Finding Product Insights from Missing Data
In Star Trek: Discovery S04E08, the Discovery crew realizes that an impossible puzzle they’ve been trying to solve about an alien threat is related to a mineral the alien civilization is mining. This mineral, “boronite”, has a distinctive radioactive signature on the sensor array.
The crew doesn’t know where the alien tech is located but does know that the aliens are removing this material. By looking at nearby sectors for the absence of a signal they expect the material to emit, the team finds the alien technology even though the object never appeared on their sensors.
Whether you care about Star Trek: Discovery or not, thinking about the way that generative AI tools create results requires you to think about the absence of the signal you are seeking.
Asking “what should I be seeing that I’m not seeing?” is a good frame when you’re using AI to build expected content. ChatGPT – and other generative AI tools – look at the tokens generated so far and predict the most likely next token.
In simpler terms, this means that when you ask ChatGPT to be a partner in developing content by analyzing other data, you need to watch carefully to confirm whether what you’re seeing is an effective analysis or just rote repetition of the most likely next tokens.
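As a loose mental model (a toy bigram counter, nothing like a real transformer), next-token prediction can be sketched in a few lines: the model simply favors the highest-frequency continuation of the current token.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that, like a generative model,
# picks the most likely next token given the current one. Real models
# condition on far more context, but the failure mode is similar:
# the highest-probability continuation wins.
def train_bigrams(text):
    counts = defaultdict(Counter)
    tokens = text.split()
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    return counts

def most_likely_next(counts, token):
    if token not in counts:
        return None  # token never seen as a predecessor
    return counts[token].most_common(1)[0][0]

corpus = (
    "the customer wants to buy the product "
    "the customer wants to see a demo"
)
model = train_bigrams(corpus)
print(most_likely_next(model, "customer"))  # prints "wants" every time
```

If the text a model has seen always follows “customer” with “wants”, the model will too, which is exactly the rote behavior to watch for in summaries.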
For example, when you prompt the AI to summarize some text transcribed from a meeting like this:
Headline: Produce a summary headline indicating whether the customer wants to buy
You’ll likely get a generic response like:
Headline: Based on the customer’s engagement, they are likely to buy
Which is a lot less useful than the summary a human would write. When you no longer see rote repetition of the prompt and start seeing different individual results for each summary, you’ll know you’re getting a better summary. (Hint: remove the text “Headline:”)
If you provide a more generalized, abstract prompt like:
When you summarize the call, add a brief and informative summary at the top to share with a sales leader
You’ll get results that vary quite a bit more and come closer to matching the way a person would summarize the results of a meeting, while adding the structure that generative AI composes easily.
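One cheap way to check for the rote behavior described above is to look at the outputs themselves. The heuristics below are illustrative assumptions, not a tuned evaluation: flag a batch of summaries when the model parrots the prompt’s literal label, or when every call gets near-identical text.

```python
# Two cheap heuristics (illustrative, not a tuned evaluation) for
# spotting rote output across a batch of call summaries:
#   1. the model parrots the literal label from the prompt, or
#   2. most calls get the same summary text.
# The 0.5 distinctness threshold is an arbitrary starting point.
def looks_rote(prompt_label, summaries):
    parroted = all(s.startswith(prompt_label) for s in summaries)
    distinct_ratio = len(set(summaries)) / len(summaries)
    return parroted or distinct_ratio < 0.5

rote = ["Headline: Based on the customer's engagement, they are likely to buy"] * 4
varied = [
    "Acme's VP asked about SSO pricing before committing to a pilot",
    "Prospect raised a security concern; rep promised SOC 2 docs",
    "Budget holder was absent; follow-up demo scheduled for Friday",
]
print(looks_rote("Headline:", rote))    # True: same boilerplate every time
print(looks_rote("Headline:", varied))  # False: distinct, call-specific summaries
```

A check like this won’t tell you a summary is good, but it cheaply flags when the generative step has collapsed into repeating the prompt.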
Building an Internal Meeting Summary Product
Recently I started working on a data product to analyze sales meetings. The twin goals were to produce a workable summary and to create a transcript of the meeting itself. While AI tools easily transcribe meetings these days, creating a useful meeting summary is more challenging than simply asking the model to “summarize the meeting”.
Desired outcomes for this output include:
top line summary - what is the main focus of the sales call
identifying objections - if the prospect had questions or strong objections, list them and any counters offered by the sales rep
summarizing the call - identify the main points of the call including any items raised by the prospect that haven’t been listed yet
Overall, we want to identify enough detail to provide value without over-prompting the system to give us answers that look exactly like the tokens we provided in the prompt. To do this, we ask for abstract concepts in the prompt (“identify any concerns raised by the prospect”) rather than asking for a specific call out on a feature or service.
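A minimal sketch of how those three desired outcomes might be assembled into one prompt. The section wording here is hypothetical, not the production prompt; the point is that each instruction asks for an abstract concept rather than naming a specific feature or service.

```python
# Hypothetical prompt assembly for the meeting-summary product.
# Each section asks for an abstract concept ("concerns or objections")
# instead of seeding the exact tokens we want echoed back.
SUMMARY_SECTIONS = [
    "Open with a brief, informative summary of the call's main focus, "
    "written for a sales leader.",
    "Identify any concerns or objections raised by the prospect, along "
    "with any counters offered by the sales rep.",
    "List the main points of the call, including anything raised by the "
    "prospect that is not already covered above.",
]

def build_summary_prompt(transcript):
    instructions = "\n".join(f"- {section}" for section in SUMMARY_SECTIONS)
    return (
        "You are summarizing a sales call transcript.\n"
        f"{instructions}\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_summary_prompt("Rep: Thanks for joining ...")
```

Keeping the sections in a list also makes it easy to version and A/B test individual instructions later.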
The goal – like in the fictional Star Trek analogy – is to find the absence of signal where we expect it to be. We’re looking for the top items not already contained in the summary, and we’re also looking for a traditional summary that can inform someone without listening to the entire call.
To build our summary, why not ask a bot? I asked ChatGPT to draft a prompt for building a meeting summary, used that as a starting point, and refined from there.
Moving beyond a “One-shot” Strategy
Here’s what we’ve found so far as we examine the results provided by AI in summarizing sales meetings:
The summaries are often quite good! Although they don’t always convey the meaning you get when listening to a call or reading the call transcript, they do a good job of capturing the basic elements of the call and calling out prospect needs.
Too much prompting yields a boring summary. AI tools are so good at following the script (think of this as “Mad Libs” for computers) that if you provide a very structured prompt and don’t allow for “creativity” or variation in the result, you’ll receive back … exactly what you put in. When every customer is “enthusiastic” and “wants to buy” you probably don’t have a realistic sales summary.
Most AI models are not multi-modal, yet … What’s the difference between an experienced sales manager hearing an introductory call and a newly ramping salesperson? The sales manager will see and hear different things affecting the call, and may have a divergent opinion on whether that prospect can close. When AI analyzes text only, there’s a limit to the available context.
How do you move beyond a first take at meeting summaries?
Here are the things to try next:
Carefully change prompts to limit the variables in the outcome
Experiment with new prompt versions as we learn more
See if extracting structured information from the transcripts will help us to fine-tune summaries and make them more effective
It’s a big step toward knowing more about the sales process.
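For the third item, extracting structured information, one common pattern is to ask the model to return JSON against a small schema and validate it before trusting it. The field names below are hypothetical; the validation step is the point, since generative output is not guaranteed to be well-formed.

```python
import json

# Validate a model response against a small, hypothetical schema for a
# sales-call record. Generative output can be malformed or incomplete,
# so we parse defensively and reject anything missing expected fields.
EXPECTED_FIELDS = {"headline", "objections", "next_steps"}

def parse_call_record(raw):
    """Return the parsed record dict, or None if the JSON is malformed
    or missing any expected field."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not EXPECTED_FIELDS.issubset(record):
        return None
    return record

good = ('{"headline": "Pilot likely", "objections": ["price"], '
        '"next_steps": ["send quote"]}')
bad = '{"headline": "Pilot likely"}'
print(parse_call_record(good))  # parsed dict with all three fields
print(parse_call_record(bad))   # None: missing objections and next_steps
```

Records that pass validation can then feed the fine-tuning and prompt-iteration steps above; records that fail tell you where the prompt still needs work.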
What’s the takeaway? AI prompting of unstructured data like call transcripts is a potential gold mine of insights for the sales process. Yet you should be cautious about letting generative features over-summarize; focus instead on identifying the outlier data presented by each prospect. In the end, you’ll still need to listen to calls, but it will be a lot easier to identify which ones need attention.
Links for Reading and Sharing
These are links that caught my 👀
1/ When to hire RevOps? - Adam Schoenfeld released research on the prevalence of RevOps team members at different stages of company development. It might not surprise you to know that faster-growing teams that are earlier in their development (and also teams over 100 people) tend to have a higher need for RevOps.
2/ Debit and credits - Mateus Portela shares an interesting story about transaction (and data) design by explaining a simple concept: double-entry accounting. This is a great article because it demonstrates the lineage of data in transactions and shows how to be a data detective when you encounter related data.
3/ Want to know how much software costs? - If this co-operative price sharing of information succeeds, Price Level will be a very interesting service for startups who want to know what their peers are paying for apps. When there’s no pricing page, there’s novel information to be gathered.
What to do next
Hit reply if you’ve got links to share, data stories, or want to say hello.
The next big thing always starts out being dismissed as a “toy.” - Chris Dixon