
Magdalena asks, “Can you share two or three good practices of using data in tracking our efforts?”

Great and important question. Many marketers don’t have, for one reason or another, a solid understanding or past experience in statistics. Let’s look at a few of the most basic rules that apply, especially when we’re digging into data.

• Correlation is not causation
• Never manipulate the data to prove a point of view; always start with the scientific method
• Understand how representative your data is or isn’t
• Represent your data faithfully and accurately
• Understand the p-values, margins of error, and statistical significance in your tools and data

Watch the video for full details and explanations.

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

## Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Magdalena asks, “Can you share two or three good practices of using data in tracking our efforts?”

I want to pivot on this question, because there’s an important issue underneath it: some of the best practices in using our data come down to understanding basic statistical and mathematical principles that, for one reason or another, many marketers may not have a solid grounding in or past experience with. Yet it’s important, because we make a lot of claims from our data, and if we can’t back up those claims, we won’t be able to present them in a way that inspires confidence in what we’re reporting. So let’s look at a few of the most basic rules. Number one, by far the most commonly broken: correlation is not causation. When we look at our data, we have to understand that a correlation, an association between two variables, does not mean that one variable causes the other. The most famous textbook example of this is

the number of deaths due to drowning goes up in the summer, and so does the amount of ice cream eaten in the summer. So, of course, ice cream causes drowning. We know intuitively, and can prove out in the data, that the confounding variable, the interfering variable, is summertime: the warm weather is what causes both numbers to go up. In a marketing sense, just because our social media traffic or engagement goes up at the same time our Google Analytics web traffic goes up does not necessarily mean that one follows from the other; we have to prove that using the scientific method.

Which brings me to my second principle: never, ever manipulate the data to prove a point of view. This is something only the worst marketers do, and the reasons for doing it are usually not malicious; most of the time it’s to cover yourself in front of executives and stakeholders. But don’t do it, because it always comes back to bite you. Instead, use the scientific method: ask a question, gather the data, create a hypothesis that you can prove true or false, test and analyze, and then refine and deploy your observations, or refine your hypothesis, based on the test results.
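To make the confounding-variable idea concrete, here’s a small sketch using entirely made-up numbers: temperature drives both ice cream sales and drownings, so the two series correlate strongly with each other even though neither causes the other. Controlling for the confounder, by correlating what’s left after removing temperature’s effect from each series, makes the apparent relationship vanish.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical monthly figures: temperature is the confounder that
# drives BOTH ice cream sales and drownings.
temperature = rng.uniform(0, 35, size=120)
ice_cream_sales = 50 + 8 * temperature + rng.normal(0, 20, size=120)
drownings = 2 + 0.3 * temperature + rng.normal(0, 1.5, size=120)

# The raw correlation looks strong, even though neither causes the other.
r_raw = np.corrcoef(ice_cream_sales, drownings)[0, 1]

def residuals(y, x):
    """Remove x's linear effect from y (what's left over after a fit)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Partial correlation: correlate what's left after controlling for
# temperature -- the apparent relationship disappears.
r_partial = np.corrcoef(residuals(ice_cream_sales, temperature),
                        residuals(drownings, temperature))[0, 1]

print(f"raw correlation:     {r_raw:.2f}")      # strong
print(f"partial correlation: {r_partial:.2f}")  # near zero
```

The residual trick is the same reasoning you’d apply to the social-media example: before claiming one channel drives another, check whether a third factor (a campaign, a season, a news event) explains both.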

On yesterday’s episode, I talked about some testing I’m doing on my newsletter to see which version performs better, which algorithm to use to put the content together. This is something I want to test: I have a hypothesis that focusing on click-through rate for the content I curate will lead to the best performance in email. But I’m not going to manipulate the data in any way to try to show that; I’m going to use the scientific method’s testing. So that’s number two.

Number three is understanding how representative our data is or is not. This is really important when it comes to any kind of sampling, surveying, or qualitative data analysis where we are extracting data, because there’s no way we can extract all the data on many topics. I was doing a piece of work recently on some Yelp reviews; there’s no way I can extract every Yelp review. It’s not realistic, and more are being created all the time. So I have to create a sample, and to make that sample valid, I have to make it representative of the population as a whole. I couldn’t just sample only Chinese restaurants in Boston and then extrapolate to all restaurants everywhere; that would be extremely foolish. I’d need to make that sample much more representative. Many times when we’re doing marketing, particularly when we’re working with social media data, we are intentionally or unintentionally taking samples, and we need to understand how representative of the whole population our data is. If we don’t understand that, or what biases are in our data, we probably shouldn’t use it, or at the very least we should provide great big flashing warnings about how

biased our data may or may not be, based on our best understanding of it. This is really important: any tool or software vendor you’re working with needs to disclose any sampling limits or representation limits in the data. If they don’t, you could be making really bad decisions based on highly biased data. One of the most common biases here is social media tools that purport to measure influence but use only one network. Most tools, particularly the more primitive ones, rely only on Twitter data, because Twitter’s API has traditionally been very open and accessible. Well, if all of your influencers are on Instagram and you try to use Twitter data to calibrate, you’re going to get a bad result. So, again: understand how representative that data is or is not.
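As a sketch of what “representative” means in practice, here’s a hypothetical stratified-sampling calculation. The restaurant categories and counts are invented for illustration; the point is that a valid sample keeps each category’s share of the full population instead of over-weighting one slice.

```python
from collections import Counter

# Hypothetical population of restaurant reviews by category.
population = (["chinese"] * 1_200 + ["italian"] * 3_000 +
              ["mexican"] * 1_800 + ["seafood"] * 4_000)

def stratified_sample_sizes(items, n):
    """How many records to draw per category so a sample of n keeps
    each category's share of the full population."""
    counts = Counter(items)
    total = len(items)
    return {category: round(n * count / total)
            for category, count in counts.items()}

sizes = stratified_sample_sizes(population, 500)
# Each category keeps its population share: 12%, 30%, 18%, 40%.
print(sizes)
```

Sampling only the “chinese” stratum and extrapolating would be exactly the Boston-restaurants mistake described above; proportional strata are the minimum bar for calling a sample representative.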

The fourth is to represent your data faithfully and accurately. This is important when you’re doing charts and graphs: everyone has the ability to make their charts say whatever they want, but there are best practices, such as always starting the horizontal and vertical axes at zero in bar charts, for example, so you get a true sense of what is in the data, and always providing both the absolute numbers and the percentage values, so that you can understand the proportions but also how big the numbers actually are.
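One lightweight way to follow the absolute-plus-percentage practice is a small reporting helper like this sketch (the figures below are hypothetical, chosen to echo the follower-loss story that follows):

```python
def report(value, total):
    """Show a metric as both an absolute number and a share of the
    whole, so readers can judge scale and proportion together."""
    return f"{value:,} ({value / total:.1%} of {total:,})"

# Hypothetical figures: a headline-grabbing follower loss that is
# still a tiny fraction of a very large audience.
print(report(300_000, 50_000_000))  # -> 300,000 (0.6% of 50,000,000)
```

A headline number and its percentage tell two different stories; reporting both keeps either one from misleading on its own.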

In our recent post on Twitter bot losses and politicians, we looked at one politician who lost 300,000 followers to huge headlines, but it was 0.6% of that politician’s audience, a minuscule percentage. Providing that perspective lets people make a judgment about how important the event actually was, or was not.

And finally, being able to test for margin of error is, I think, so important. To understand this, let’s take a look at some data. I’m running an A/B test on my newsletter, and one of the tests has already been crowned the winner. The test compares clicks versus page authority for social sharing, versus a fourth variant I forgot to rename. Looking at the results, I see the parent and the three tests after it, and this third test has been crowned the winner. But is this a statistically significant result? 197 sent versus 248 sent; 26

clicks here, 30 clicks here. If we use software to test the p-value, the likelihood of error, we see that this is a very high p-value. A p-value should be 0.05 or less most of the time, and the smaller the p-value, the better. A p-value of 0.3 indicates there is potentially a significant issue here, but the software I’m using, and this is true of so much marketing software, is already crowning a winner even though the result is not statistically significant. So anytime you’re working with software that claims something works better than something else, it needs to provide a p-value, a margin of error, the statistical back end, so that you can look at it and say: yes, that result is valid, or no, it is not. And if the result is not valid, you need to know that before you go and make decisions.
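You don’t have to trust the tool’s verdict: a difference in click-through rates can be sanity-checked by hand. The exact test the newsletter software runs isn’t stated, so this sketch uses a standard two-proportion z-test, with send and click counts approximated from the figures read off in the episode.

```python
import math

def two_proportion_pvalue(clicks_a, sends_a, clicks_b, sends_b):
    """Two-sided p-value for the difference between two click-through
    rates (a standard two-proportion z-test)."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - CDF(|z|))

# Counts approximated from the episode: 197 vs. 248 sends, 26 vs. 30 clicks.
p = two_proportion_pvalue(26, 197, 30, 248)
print(f"p-value: {p:.2f}")  # well above 0.05 -> no statistically significant winner
```

With samples this small and rates this close, the p-value lands far above the 0.05 threshold, which is exactly why crowning a “winner” here would be premature.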

Getting this wrong could cost you potentially millions of dollars in revenue and marketing performance. If you don’t have statistics in your marketing software, push your vendor to build them in, or change vendors and find one that does, because otherwise you could be making really terrible decisions. Again, if I were to say, “okay, this is clearly the algorithm I should be using for all my newsletters from now on”: well, no, I don’t know that. I don’t know that at all. I need to understand exactly what’s involved in the statistics of the software so that I can make an informed choice. That would be my last tip: understand your margins of error and your statistical significance any time you’re working with analytics in marketing.

So, great question, Magdalena. I gave you five instead of two or three, but these are important principles for any kind of marketing software you’re using that involves data and analytics. As always, please subscribe to the YouTube channel and the newsletter. I’ll talk to you soon.

If you want help with your company’s data and analytics, visit TrustInsights.com today.


Want to read more like this from Christopher Penn? Get updates here:
