Building a Data Operations Practice in RevOps
Fixing known problems in a structured way gives you a structure for fixing unknown ones
Data is everywhere. And when you start looking at the data in your go-to-market systems, you’ve probably noticed some issues. It might be a question of “why did that lead route to that account?”; it might be a simple question of “when was the last time we heard from that account?”; or it might be something like “how much do we trust this set of data from this source?” This can feel daunting until you think about the process of getting from a theory of fixing your data to the actual steps to fix it. It’s a repeatable process: identify the problems you see, define the concrete changes you want to make to the data, and validate that each problem has been resolved (or not).
In part 1 of this series of posts on Data Operations, we discussed the basic idea of a data operations process: that there is important information at the intersection of systems that needs to be managed as an independent process. Because the marketing department and the sales department both have a concept of shared ideas like a “prospect” and a “lead” and an “account”, yet may not have shared definitions for those concepts, there is room for misinterpretation. Data operations as a function exists to provide governance, highlight differences in practice and procedure, and remediate those differences with solutions that work for the whole organization.
When you go beyond this basic definition, there are some higher-order problems to solve:
Moving from theory to practice - it’s easy to say “I’d like fewer duplicates” and believe that changing the creation process will minimize duplicate creation; it’s better to observe the causes of bad output and build systems that minimize those problems
Operationalizing your vision for data operations - what are you going to do every day (hopefully in repeatable chunks, hopefully possible to do by multiple people on the team) to improve the data in your system?
Defining key measures to use to manage the business - you’ve got plenty of data to review. Which data matters, and how do you know which data has immediate impact on the salespeople you are supporting?
No, we don’t want to boil the ocean with changes. But we do want to establish a repeatable process to identify needed changes, test those ideas, and then implement in a sustainable way.
What does this look like from 30,000 feet?
Fixing your data sounds like a great idea. Most businesses take a break-fix approach: they identify problems as they crop up and fix only those problems. A system that enforces a series of rules is closer to an antifragile process - one specifically designed to respond to and neutralize problems - rather than a just-in-time patch for a single symptom.
Building Data Operations into your business means building the structure and systems a data-centric business requires. By structure I mean a system design that defines a set of rules, a method for handling data, and an account of the motivations of the human actors who work in that system. System design does not need to be complicated; it ought to be at least a single-page definition detailing how the system works and what information it changes. Adding a space in your document for open questions to answer later allows you to move forward without a “perfect” definition.
It’s all well and good to design a system that works for bots - and then you find out that salespeople in your CRM are changing the name of an account because it makes the account easier to find among similarly-named (but unrelated) accounts. People are the true chaos agents here, so any system you design has to take their motivations into account and offer a user experience they actually want to use.
A great STRUCTURE for your data-centric business includes:
Clear definitions of data (what is it, where can I find it, and which systems are the systems of record)
Rules for handling that data when you have a conflict (tie breaking rules)
An understanding of human factors and a people-centered design (to help the people want to follow the rules)
Systems that enforce these rules and automate (enough) but not everything
So which team is going to follow the rules, improve the data, and be the data heroes you’re looking for? Most companies don’t have the benefit of a specific data team, or the ability to outsource problems to a dedicated resource (automated or human) who can fix issues before they become a problem. The team that is going to improve the data in your organization is probably the team that you already have - so you’re going to need to make it easy for them to do things right!
“Doing it right” does not always mean automated rules in your CRM that block salespeople from doing their jobs, or a nagging report you send to people asking them to fix their inputs after the fact.
Doing it right is a balance: find the minimum viable information your business needs to run well, then continuously improve that information so you can move the needle over time. In a data-driven business focused on quality, change happens gradually, and you’ll see the difference by looking backwards.
Fixing your data is a team effort. If you try to fix everything yourself you will not scale. Identifying every problem is well within the purview of a data quality manager, but fixing them all when they happen might not be. If you use the eyes and ears of the business - the people working in the system all day - you’ll get advance warning of a problem. The team to solve these issues will be cross-functional, led by the data quality professionals.
A great TEAM for your data-centric business includes:
A data quality manager (a person who is responsible for data in the organization)
A cross-functional team made up of the people in the team who care most about quality (often someone from marketing, the SDR team, and a Business Intelligence team if you have one)
Quality feedback from the teams that are entering the data (what are you actually hearing from sellers when they try to do their work?)
If you noticed above that I didn’t mention a giant team made up of outsourcers or a magical CRM system that could fix all the problems, that’s because it’s often difficult to find an optimal solution or even any solution when you’re just getting started. Understanding exactly where and how often the problems happen with data in your business is an important first step to building a data-driven business.
The system you use to surface issues - and the way you resolve them - is the single most important facet of your data transformation. The solution to some of these issues might be automated; others might require a specific person on your team. As these problems accumulate, you’ll be able to start targeting the kinds of problems that are good candidates for automation.
Automating a system doesn’t mean fixing everything about it. It means finding the smallest repeatable actions that are broadly similar, have existing rules, and are currently done by hand. When you make those repeated actions easier, you free your team up to do other things, like watching for new patterns. This process is, by definition, repeatable.
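As an illustration of a “smallest repeatable action”, here is a hypothetical sketch that turns an informal rule - lowercase, trim, strip common company suffixes - into an automated duplicate-detection key. The field choices and suffix list are assumptions, not a standard:

```python
def dedup_key(email: str, company: str) -> str:
    """Build a crude duplicate-detection key from fields sellers already enter.

    Encodes one small, existing rule: normalize case and whitespace,
    and drop common legal suffixes so "Acme" and "Acme Inc" collide.
    """
    e = email.strip().lower()
    c = company.strip().lower()
    for suffix in (" inc.", " inc", " llc", " ltd"):
        if c.endswith(suffix):
            c = c[: -len(suffix)].strip()
            break
    return f"{e}|{c}"
```

Two records producing the same key become a candidate pair for your merge process; the human judgment stays where it’s needed, at the merge decision, not at the scrubbing step.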
A great PROCESS for your data-centric business includes:
An easy method to identify problems (in Salesforce, you can use the Case object or make your own custom object to track issues, using Chatter to maintain a case context and history)
A series of reports and dashboards to identify the key KPIs valued by the organization (measure what you manage, but don’t manage everything, or you won’t be able to focus)
An intake system to define what you’re going to automate, and a method of reporting progress (you might use an Agile methodology to track items over two-week sprints, or you might take a more project-based approach)
Transparency and humility (you won’t get this right the first time, but you can continue moving in the right direction).
Identify a few candidates to automate, keep these processes small, and go ahead and fix them. Then see how you did. You’ll make mistakes.
Use mistakes as a method of identifying breaks in the system, and as an opportunity to build a more resilient operation. If you encounter a known problem, you should have a Standard Operating Procedure (SOP) for handling it. An SOP lets different members of your team respond consistently to the same problem, even when it appears in an unexpected place.
However, there are unknown unknowns. Sometimes a piece of your system will fail and you will have no idea why, even after an initial investigation. What should you do when this happens? One strategy for dealing with the unknown is “Commander’s Intent”, an approach used by the armed forces to drive soldiers toward a goal without prescribing every step of the process. Pairing a concept like this with a “5 Whys” technique will help you find the root cause of your problem.
What’s the takeaway? Successful data operations practices enable you to catch mistakes that happen more than once, and to prevent them from recurring.
Enabling your team with a great structure and process gives you the tools to build a model for thinking about interconnected data operations in a typical business. Our model is a concept called a “Data Factory”. Like a physical factory, a Data Factory takes an input (information) and produces an output (improved and enriched information).
A typical factory produces only a few products. Since ours is digital, we gain the advantage of being able to define prototype microservices - each a promise to take information of a known format and type, and to return either diagnostic information about it or a recombination of it into a new package.
Here are a few examples you might build into your data factory, but this is by no means an exhaustive list:
The ability to format a raw phone number into the E.164 standard
A service that takes a contact, finds duplicates, and combines them according to tie-breaking rules
An enrichment service that takes a website and finds information about an account (like the revenue estimates for that business or the technologies used in the business) and applies it to the account in your CRM
A method of enforcing data governance by starting an approval process when the ownership of an opportunity changes
A service to quarantine lead records until they reach a data quality standard
An effective Data Factory is the engine for data in your organization. It will allow you to share information more effectively between teams, enable you to map the information flows between them, and provide the scaffolding for you to manage data in your business. In our Data Factory, we’ll take raw information and process it into highly valuable, usable data to drive your business forward. By identifying a series of microservices for information, we’ll create a blueprint for a higher-quality Go To Market (GTM) motion and better analysis of your business.
If you found this useful, consider sharing with a friend.