Naomi asks, “What is the biggest pain point for you in marketing data preparation?”
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
- Got a question for You Ask, I’ll Answer? Submit it here!
- Subscribe to my weekly newsletter for more useful marketing tips.
- Find older episodes of You Ask, I Answer on my YouTube channel.
- Need help with your company’s data and analytics? Let me know!
- Join my free Slack group for marketers interested in analytics!
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.
Christopher Penn 0:29
In today’s episode, Naomi asks, “What is the biggest pain point for you in data preparation?” Well, gosh, there’s so many things that can go wrong in data preparation.
The biggest pain point, though, is that the data you have is not clean or complete.
So very often what happens, particularly with marketing data, even with services like Google Analytics, is the data isn’t either complete, or is improperly collected, or, in some cases is just wrong.
So for example, let’s say you have Google Analytics on your website, and you change themes.
And you forget to put your tracking codes in the new theme.
And you don’t notice this until the end of the month, when you go to do your reporting.
The unfortunate truth here is that you’re out of luck, right? There’s no way to get that data back; it’s permanently gone.
And so you’ve got a pretty big problem at that point: you can report on the data you do have, but you’re missing a bunch, right? It’s sort of like a shortage of data.
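A gap like that is easy to miss because the missing days simply don’t appear in your export. As a minimal sketch with hypothetical session numbers, you can reindex a daily export against the full calendar to surface the silent gap:

```python
import pandas as pd

# Hypothetical daily sessions export; days where the tracking code
# never fired simply don't appear in the export at all.
sessions = pd.Series(
    [1200, 1150, 1300, 1250],
    index=pd.to_datetime(["2021-10-01", "2021-10-02", "2021-10-05", "2021-10-06"]),
)

# Reindex against the full date range to make the missing days visible.
full_range = pd.date_range(sessions.index.min(), sessions.index.max(), freq="D")
sessions = sessions.reindex(full_range)
missing_days = sessions[sessions.isna()].index

print(list(missing_days.strftime("%Y-%m-%d")))  # ['2021-10-03', '2021-10-04']
```

The point is simply to know the gap exists before you report on the month; the numbers and dates here are invented for illustration.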
And there are techniques to help mitigate something like that, especially if you are only missing a little bit of data and you have an overwhelming amount of other data to work with. There are techniques called imputation methods that can essentially make a best guess at what happened on those days.
But as we all know, there are marketing anomalies all the time: you may have had a tweet take off that day, you may have had an ad do really well, someone may have dropped an email.
And imputation is going to guess based on things like predictive mean matching, essentially trying to average out all of your other data and make a best guess as to what should have been in that spot.
If you had a successful anomaly that day, it’s not going to be picked up, right?
And so things like your attribution analysis, as well as just basic reporting, are not going to be correct.
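To see why imputation misses anomalies, here’s a minimal sketch with made-up numbers, using simple linear interpolation as a stand-in for fancier imputation methods:

```python
import numpy as np
import pandas as pd

# Hypothetical daily sessions. Suppose Oct 3 actually had a viral tweet
# (~5,000 sessions), but tracking was broken that day, so it's NaN.
sessions = pd.Series(
    [1200, 1150, np.nan, 1250, 1300],
    index=pd.date_range("2021-10-01", periods=5, freq="D"),
)

# A simple interpolation fills the gap from the surrounding days...
imputed = sessions.interpolate(method="linear")
print(imputed["2021-10-03"])  # 1200.0 -- nowhere near the real spike
```

The imputed value is a smooth average of its neighbors, so any real anomaly on the missing day vanishes from the record, exactly as described above.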
So that’s one of the things that’s a big pain point.
Another one is we have incorrect data.
So again, let’s say you’ve got a website, and you’re running Google Analytics.
Actually, no, let’s go with this one: you’re doing your email marketing, right? And you’re trying to measure the open rate of your email marketing.
But what you realize is that Apple’s Mail Privacy Protection is auto-opening every email that you send to anybody who uses the Mail app on the iPhone or the Mac, and things like that.
And so your data is there, but it’s not correct, right? It is functionally incorrect, changed by this technology.
And you can no longer rely on that information, because it’s not real.
It’s not what you’re trying to gauge. You’re trying to decide: the emails that we’re sending out to people, are they being opened?
And if a machine is opening every single email, then you don’t know whether a person ever put eyes on that email or whether it was just opened by a machine.
And so that’s an example of where you have corrupted data, right.
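One common response, sketched here with invented campaign numbers, is to stop trusting the inflated open rate and lean on click rate, which still requires a human action:

```python
# Hypothetical campaign stats after Apple Mail Privacy Protection rollout.
delivered = 10_000
recorded_opens = 7_800   # inflated: MPP pre-fetches tracking pixels
clicks = 240

open_rate = recorded_opens / delivered   # no longer reflects human behavior
click_rate = clicks / delivered          # still requires a real person to act

print(f"open rate:  {open_rate:.1%}")   # 78.0%
print(f"click rate: {click_rate:.1%}")  # 2.4%
```

The 78% figure looks great on a report, but per the discussion above, a chunk of it is machines opening mail, so the click rate is the more honest signal here.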
And you can have the same thing with Google Analytics, too, right? You have bots and spam traffic showing up in your Google Analytics.
It looks like you got 10,000 visitors to your website yesterday, but 9,900 of them were automated traffic.
Again, that’s not something that you can easily repair.
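You can knock down some of the obvious automated traffic after the fact, though. As a rough sketch with hypothetical hit-level data (real bot detection is far harder than this), a crude filter on user agents and zero-length sessions looks like:

```python
import pandas as pd

# Hypothetical hit-level export; columns and values are invented.
hits = pd.DataFrame({
    "user_agent": ["Mozilla/5.0", "python-requests/2.26", "Mozilla/5.0",
                   "AhrefsBot/7.0", "Mozilla/5.0"],
    "session_seconds": [45, 0, 120, 0, 90],
})

# Crude heuristic: drop known-bot user agents and zero-length sessions.
bot_pattern = r"(bot|crawler|spider|python-requests)"
is_bot = (hits["user_agent"].str.contains(bot_pattern, case=False)
          | (hits["session_seconds"] == 0))
humans = hits[~is_bot]

print(len(hits), "->", len(humans))  # 5 -> 3
```

A heuristic like this only catches bots that identify themselves; sophisticated automated traffic mimicking real browsers slips right through, which is why the data still can’t be fully repaired.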
The challenge with all these different ways our data goes wrong is that in many cases, they’re not repairable.
And they’re not something we can go back and re-collect, right?
We can’t go back in time.
We can only collect data and process data from right now moving forward.
So if you’ve got bad data wherever it is you store your data, and you don’t know that it’s bad, you could be making really bad reports and really bad forecasts from it.
So that’s the biggest pain point in data prep: knowing whether your data is any good or not.
Because if it is good, then you can work with it.
You can do statistics and data science and machine learning and artificial intelligence, and all the fun stuff.
But if your data is bad, you can’t do any of that.
It’s like cooking, right? No matter what cool appliances you own, no matter how skilled you are as a chef, if your ingredients are bad,
Christopher Penn 5:28
there’s not much you’re cooking, right? If you had meant to buy flour and instead you got sand, I don’t care how good a cook you are, you’re not making anything edible.
Right? So that really is the biggest pain point in data preparation.
And a lot of companies do, you know, data preparation services: IBM has it built into Watson Studio, there’s Tableau Prep, and things like that.
There are all these different tools that make the processing and the preparing of data better and easier.
But none of them can address bad data, you know, poor data quality. None of them ever will be able to, no matter what a vendor promises. There is no tool that will ever be invented that will go back in time and get you clean data from the past.
I mean, if you do have a time machine, I can think of better things to do with it than fixing your marketing data.
But good question.
Thanks for asking.
You might also enjoy:
- You Ask, I Answer: Google Tag Manager and Google Analytics Integration?
- B2B Email Marketers: Stop Blocking Personal Emails
- How to Measure the Marketing Impact of Public Speaking
- Almost Timely News, 17 October 2021: Content Creation Hacks, Vanity Metrics, NFTs
- What Content Marketing Analytics Really Measures
Want to read more like this from Christopher Penn? Get updates here:
Get your copy of AI For Marketers