Does this sound familiar?
- Why are we so bad at metrics?
- Why are we so easily fooled by shiny numbers that don’t mean anything?
- Why is it so hard for us to understand what metrics are telling us?
The answer might surprise you. In the world of statistics, there’s a brainbender called the Fundamental Problem of Causal Inference, which states:
There’s no way to directly observe cause, only correlation.
That is to say, any number of analyses can tell you whether things in a dataset are related, but there’s no known way to look at a pile of data and see cause. Cause cannot be observed just by looking at numbers. Statisticians and metrics experts are fond of saying that correlation is not causation, citing examples such as the relationship between piracy (real pirates) and global warming. Over the centuries, piracy has declined while the world’s climate has warmed. If you blindly assume that correlation means causation, you’d draw the seemingly logical conclusion that the decline in piracy caused global warming, and that bringing back pirates would cool the planet.
It may seem obvious that correlation is not causation when you look at a pointedly absurd comparison like pirates and global warming, but things get much muddier when you start looking at data like marketing metrics. Why? Most often, the questions asked of marketing are causation questions, questions that begin with “Why?”:
- Why is our lead volume up/down?
- Why didn’t this campaign deliver the results we wanted?
- Why isn’t social media working for us?
Marketers facing these sorts of questions have to turn to what’s on hand, which more often than not is… a pile of data that at best provides strong correlation but gives no indication of causation. We can answer what, but not why. One of the most common and problematic forms this problem takes is attribution. Marketing software records an activity or series of activities for a lead, but none of that information captures causation. For example:
- You attended a webinar.
- You downloaded a white paper.
- You replied to us on Twitter.
- You liked us on Facebook.
- You attended one of our conferences.
- You visited a blog post, filled out a form, and became a customer.
When we respond to the why questions, the data can mislead us! What gets “credit” for the conversion to a lead? The blog post, in most marketing automation systems. What actually caused the conversion? You might have actually made up your mind when you read the white paper. You might have been waiting until you had the budget to buy, but your mind was made up a year ago.
How do you solve the problem of causation? How do you get the real answers you’re after? There are only two known valid solutions that don’t involve quantum mechanics and manipulation of the space-time continuum: testing and surveying.
Testing is fairly straightforward. If you know what effect you’re looking for (conversion to a lead, for example), you can set up a variety of causes and see which cause generates the effect you’re aiming for. Testing has its limits, of course; it’s impossible to rewind time and do an A/B content split on surprise television coverage about your company, but for most forms of digital marketing, some testing is possible.
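To make the testing idea concrete, here is a minimal sketch of how a digital marketer might check whether two versions of a page produce genuinely different conversion rates. The function, the variant names, and all the numbers are hypothetical illustrations, not data from any real campaign; the statistical tool (a two-proportion z-test) is a standard choice, not one the article prescribes.

```python
import math

def two_proportion_z_test(conv_a, visits_a, conv_b, visits_b):
    """Two-sided z-test for a difference in conversion rates
    between variants A and B of an A/B test (hypothetical data)."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    # pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Variant A: 120 conversions out of 2,400 visits (5.0%)
# Variant B: 168 conversions out of 2,400 visits (7.0%)
z, p = two_proportion_z_test(120, 2400, 168, 2400)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```

A small p-value here says only that variant B’s cause-and-effect relationship with conversion is unlikely to be chance; it is the deliberate manipulation of the cause, not the arithmetic, that earns the causal claim.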
The second method is surveying. Nothing is simpler yet done less frequently than asking a customer how they heard of you or what made them decide to buy your product or service. Make this an integral part of your sales and marketing processes and you’ll gain a much clearer picture of what’s actually influencing the minds of your customers.
What good is all of the data if it can’t prove causation? Throw the data out, right? Wrong. All of the metrics, all of the numbers, all of the correlations are good at telling you where to ask questions. They’re good at helping refine opportunities for testing, opportunities for surveying, opportunities to investigate previously unknown or unimagined connections between different marketing methods. If you see Klout scores and email open rates correlating, you now have a new question to ask of your customers. If you see visits to a landing page and a URL you don’t recognize as the referring source, you now have a new opportunity to investigate. Data has its place, its role in marketing, and data is very good at hinting at new lines of inquiry.
Use your data to understand what. Use your testing and surveying to understand why. If you can do both well, you’ll have a much more complete picture of your marketing. Oh, and if some social media expert, marketer, guru, or scientist makes the bold claim that their metrics product answers causation, demand that they show you how they’ve overcome a centuries-old statistical problem that dates back to David Hume in the 1700s.
Great stuff, Penn. I think the last part on “asking the customer” could be expanded on a bit. It’s not as simple as sending out a survey. As Tom Webster says, you have to *do the work.* In this case, the work (I think) would be hiring someone to do a proper survey.
Ah, what a great way to put things in perspective!
You know, for all the benefits that marketing automation technology brings, nothing beats talking to customers. No system (yet) can give you the insight you get when you ask someone a question and discover that the white paper wasn’t responsible for the conversion; rather, a conversation with another person at a recent event was what actually made up the customer’s mind!
I see what DJ Waldow is saying about the “proper survey,” but I’d rather not overcomplicate it; just do it. Hey, what is the sales rep doing today?