Every morning, my first few tasks when I get to the office are to make a cup of coffee, fire up my RSS reader, and start digging into the day’s news. Nearly every morning, I see the same thing that makes me emote a /facepalm: a headline that reads, “New data shows…” or “New study shows…” followed by an infographic or a witty blog post.
In almost none of these articles do you ever read about the data itself, just the hasty conclusions. This is a major problem, because marketers who don’t do their own research and rely on other organizations’ data can be led badly astray, damaging their businesses. Relying on bad data and bad research is like sailing a ship with a faulty map: sooner or later you’re going to hit an iceberg, a sandbar, or a reef, and your ship will sink.
There are three fundamental ways that data collection can go wrong. I’m going to vastly oversimplify here – I recommend reading up on Tom Webster’s blog along with the AAPOR best practices guide if you want to seriously dig in.
Selection bias occurs when your sample is not representative of the population as a whole. For example, if I ask for volunteers to take a survey about, say, Jay Baer’s popularity, only people who have a strong opinion about Jay are likely to respond to the volunteer survey. Thus, my data is skewed.
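To make selection bias concrete, here’s a tiny simulation (all the numbers are invented for illustration): the population is mostly neutral about a topic, but people with strong opinions — especially negative ones — are far more likely to volunteer for the survey, so the survey average drifts well away from the true average.

```python
import random

random.seed(42)

# Invented population: opinion scores from 1 (strongly dislike)
# to 5 (strongly like). Most people are neutral.
population = [1] * 100 + [2] * 150 + [3] * 500 + [4] * 150 + [5] * 100

# Volunteer survey: people with strong (especially negative) opinions
# respond at much higher rates than neutral people.
RESPONSE_RATE = {1: 0.70, 2: 0.30, 3: 0.05, 4: 0.10, 5: 0.20}

respondents = [s for s in population if random.random() < RESPONSE_RATE[s]]

def mean(xs):
    return sum(xs) / len(xs)

print(f"True population average:  {mean(population):.2f}")
print(f"Volunteer survey average: {mean(respondents):.2f}")
```

The survey average comes out noticeably lower than the true average even though every individual answer is honest — the bias is entirely in who chose to respond.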
Measurement bias is when you have a problem in how you measure your data. There are countless ways that measurement can go wrong. For example, if you opt out of Klout, your Twitter handle returns the same error code as a handle that was never part of the Klout database. That’s an important distinction. Doing a quick scan of the Klout API and then rushing an infographic out the door about how many people have opted out of Klout creates bad conclusions because you have a measurement bias problem.
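Here’s a hypothetical sketch of that Klout-style measurement problem (the API behavior is as described above; the counts are invented): two very different states — opted out, and never indexed at all — collapse into one error code, so counting errors as opt-outs wildly inflates the number.

```python
# Invented ground truth for illustration.
users = (
    ["opted_out"] * 50          # actually opted out
    + ["never_indexed"] * 450   # never in the database at all
    + ["active"] * 500
)

def api_lookup(status):
    # Both missing states return the same error code,
    # which is the measurement problem.
    return "ERROR_404" if status in ("opted_out", "never_indexed") else "OK"

# Naive measurement: every error counted as an opt-out.
naive_opt_outs = sum(api_lookup(s) == "ERROR_404" for s in users)
true_opt_outs = sum(s == "opted_out" for s in users)

print(f"Naive opt-out count: {naive_opt_outs}")  # 500, 10x too high
print(f"True opt-out count:  {true_opt_outs}")   # 50
```

The instrument answered every query correctly; the error is in what the analyst assumed the error code meant.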
Finally, intervention bias gives you bad data when you’re trying to compare datasets, such as results before and after a change. You see this most often with companies offering some kind of paid service and pushing a study to back up their claims. I ran into this with an SEO firm that claimed its method for boosting SEO was incredibly powerful and rushed to attribute all of our SEO improvements to its work. What the firm failed to account for were all of the other marketing activities occurring at the same time that were interfering with the data. Amusingly, after I stopped working with the firm, I looked at our SEO data and saw that we were getting the same (or better) results without them.
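A small simulation shows how that kind of before/after attribution goes wrong (the traffic numbers, growth rates, and timing are all invented): organic traffic already has a steady trend, plus a one-time boost from an unrelated launch, yet a naive comparison credits everything in the vendor’s engagement window to the vendor.

```python
# Invented monthly organic-traffic series: a 2%/month trend that
# predates the vendor, plus a one-time 10% lift from an unrelated
# product launch in month 6.
baseline_growth = 0.02
launch_boost = 0.10

traffic = [10_000.0]
for month in range(1, 13):
    lift = baseline_growth + (launch_boost if month == 6 else 0.0)
    traffic.append(traffic[-1] * (1 + lift))

# Vendor engaged months 4-12; a naive before/after comparison credits
# the vendor with ALL growth in that window, launch included.
naive_vendor_lift = traffic[12] / traffic[3] - 1
trend_only_lift = (1 + baseline_growth) ** 9 - 1

print(f"Growth credited to vendor (naive): {naive_vendor_lift:.1%}")
print(f"Growth expected from trend alone:  {trend_only_lift:.1%}")
```

Most of the “vendor lift” here was already baked into the trend and the launch; without controlling for those other activities, the comparison proves nothing.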
Whenever you read marketing material disguised as content from a company trying to sell you something and offering research or data to validate it, see if their marketing material offers answers to these three questions:
1. Who did you get your data from (and how did you pick those people)?
2. How did you measure your data?
3. If you drew a conclusion from your data, how did you account for other activities messing with your results?
If the marketing material doesn’t include solid answers to these questions, then do not bet your business, your job, or your marketing budget on the conclusions being offered, because there’s every chance that either a self-serving conclusion was drawn or the data is faulty. Accept it as a nice fluffy piece of content to mark as read and move on to the next thing in your blog reader or inbox.
Good article and I agree. I’ve been subject to many marketing efforts that cite data with no source. It’s important to use facts and figures that have a reputable source. More marketing efforts should include content generated from a market research firm. I would also like to see the type of study cited more often. Thanks for the post!
Well said, Chris.
Taking a measurement course was a requirement of my Masters in Education program. This course was my least favorite on paper — esp. because it was to be taught by a *math* teacher (the horror!).
It ended up being my favorite course of my program. The critical analysis of data we did in that course I still apply today — 20 years later.
Thank you, Chris, for pointing this out.
My first boss out of college challenged me to look at data critically, even anecdotal, qualitative information, and to dig deeper to better understand what it really meant. Much of my career has been spent gathering data and presenting it to decision makers, and I’ve been conscious, almost to a fault, of what assumptions have gone into the data analyses. When presented with findings from research, I still find myself looking at them critically.
May there be more of us!