Elizabeth asks, “What’s the most common type of bias you see that we as marketers should be aware of?”
There are so many to choose from, but I’ll start with two: confirmation bias, and selection bias. Confirmation bias corrupts the entire process by looking for a result that fits a predetermined conclusion. Selection bias corrupts the entire process by failing to represent the population you’ve said you’re studying. Watch the video for more details.
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
- Got a question for You Ask, I’ll Answer? Submit it here!
- Subscribe to my weekly newsletter for more useful marketing tips.
- Find older episodes of You Ask, I Answer on my YouTube channel.
- Need help with your company’s data and analytics? Let me know!
- Join my free Slack group for marketers interested in analytics!
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.
In today’s episode, Elizabeth asks, “What’s the most common type of bias that you see that we as marketers should be aware of?” Gosh, there are so many to choose from.
I’ll start with two of these, because these are the two that are probably the most common, and the ones people don’t think about.
The first is confirmation bias.
So confirmation bias, pretty simply, is when you start any kind of data science or AI project intending to look for a result that fits a predetermined conclusion.
And you see this a ton in research studies especially.
But it can obviously extend to building models, which are nothing but software based on data. A simple example of this: let’s say you’re a SaaS-based company, and you sell, I don’t know, photo editing services.
And you want to promote your services.
So you build a predictive model that has the marketing goal of telling people they should edit their photos more.
Right? You do some market research and all this stuff.
And of course, you come up with the conclusion that yes, people should edit their photos a lot more, better-edited photos perform better on social media, and you create an algorithm that predicts, “If only you had edited this photo, it would have earned this much more engagement.”
That’s kind of a silly example.
But you can see how that particular bias corrupts everything.
Because you are working toward a predetermined conclusion rather than building a model that is trained with a truly open mind, one that can yield results that frankly might not support your business.
Right? Again, you see this most often in research.
There used to be a company not too far from here in the Boston area that would sell this as a service.
They were sort of the antithesis of good research.
They’d say, hey, you want to prove whatever thing? Hire our services, and we’ll come up with a statistically validated way that proves your thing.
Once that was known, anytime you saw that company’s name in the methodology, if it was even listed, you knew: yep, that survey is completely invalid. You knew the company that sponsored it bought the result they wanted. It ruined the research company’s credibility, and it ruins the credibility of any company that uses that research company.
So that’s probably the first one. It’s relatively easy to spot inside your own organization, because your VP of marketing or CEO or somebody says, “Hey, I need something that does this.”
That’s not how research works.
That’s not how good models work either.
Because one of the things that will happen is that if you are willing to tolerate conscious bias like that in your model, and you are willing to tolerate biases that are discriminatory, that’s going to get your butt kicked over time.
Second, is selection bias.
And this corrupts the process because it is a bias in your data set: the data does not represent the population as a whole.
And so you’re using data that is inherently skewed in some way, consciously or unconsciously.
And it does not represent whatever population you’re trying to serve.
A simple example: a number of years ago, HubSpot did a research study making some claims about inbound marketing, which is their thing. When you looked at the methodology, it was a survey, and it said, “We emailed our list of however many people.” You can see the problem here.
You can’t make a survey that claims to represent all marketers based on your own email list.
Because people who are on your email list inherently have a greater affinity towards your products and services.
Right? HubSpot was all about, and still is about, inbound marketing.
Naturally, somebody who subscribed to their email list probably has at least some passing interest in inbound marketing.
And as a result, you have a survey that is biased toward people who like inbound marketing. To claim that this represents the views of all marketers is incorrect, because there’s this whole group of people out there, not on their email list, who probably don’t care much about inbound marketing. They don’t care. It’s not their thing.
And so their views are not represented in the survey, because the methodology selected them out. They were not part of that population.
What could that survey have represented? You could certainly have said, “All marketers who are in the HubSpot audience believe this,” and that would be valid.
Right? When you’re collecting data for data science and AI projects, the same thing applies if you are collecting data from a specific focus group that you selected.
Did you select a group that is representative of your audience as a whole? In B2C, that means balancing against things like age, gender, physical location, and other demographics. This is one of those rare cases where you should be using protected classes like race, gender identity, etc., to make sure that you are balanced against the actual population and don’t have a non-representative sample. In B2B, you’d be looking at things like a balance of companies by size, by revenue, by number of employees, and by industry, and it should look like the balance of the economy as a whole.
So if you do a survey and your initial data pool is 61% technology companies, ask: do technology companies actually make up 61% of the global economy, or even one country’s economy? They typically represent a much smaller slice, depending on the metric you choose.
So you would want to be sure that you are balancing your dataset to represent the population that you’re claiming to study.
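As a sketch of what that balancing can look like in practice, here is a minimal Python example with entirely made-up sample counts and population shares: it compares each segment’s share of the survey sample to its share of the population and computes a per-respondent weight that corrects the skew.

```python
# Hypothetical B2B survey sample, skewed toward technology companies.
# All counts and population shares below are invented for illustration.
sample_counts = {"technology": 610, "retail": 150, "manufacturing": 140, "other": 100}
population_share = {"technology": 0.12, "retail": 0.30, "manufacturing": 0.25, "other": 0.33}

total = sum(sample_counts.values())
weights = {}
for segment, count in sample_counts.items():
    sample_share = count / total
    # Weight each respondent so the weighted sample mirrors the population.
    weights[segment] = population_share[segment] / sample_share
    print(f"{segment}: sample {sample_share:.0%}, population "
          f"{population_share[segment]:.0%}, weight {weights[segment]:.2f}")
```

With these made-up numbers, a technology respondent counts for roughly a fifth of a response and a retail respondent for two, pulling the weighted results back toward the population you claimed to be studying.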
Selection bias tends not to be as intentional as confirmation bias. Confirmation bias is, “I want to put my thumb on the scale and I want it to say this,” which is just a horrendous thing to do.
Selection bias happens more often than not because of naivety. A naive researcher, marketer, or data scientist will just do something and not think, “Am I balanced?” That’s something you can solve with things like checklists: Did you balance this dataset? Did you balance the input collection methods? Are you truly representative? Have you checked? Have you validated that your data is representative? If you can solve for both of these biases, your data will inherently get better across the spectrum of the ways you use data, from market research to data science and analysis projects to AI models.
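That checklist question, “have you validated that your data is representative?”, can even be partially automated. Here is a minimal, hypothetical Python sketch: a chi-square goodness-of-fit statistic, computed by hand with no libraries, flags a sample whose age mix drifts from known population shares. The survey counts and population shares are invented; the 5.99 cutoff is the standard critical value for two degrees of freedom at a 0.05 significance level.

```python
def chi_square_stat(observed, expected_share):
    """Chi-square goodness-of-fit: how far observed counts sit from the
    counts you'd expect if the sample mirrored the population."""
    total = sum(observed.values())
    stat = 0.0
    for segment, obs in observed.items():
        expected = expected_share[segment] * total
        stat += (obs - expected) ** 2 / expected
    return stat

# Invented survey counts and population shares for illustration.
observed = {"18-34": 520, "35-54": 310, "55+": 170}
population = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

stat = chi_square_stat(observed, population)
# Critical value for df = 2 at alpha = 0.05 is about 5.99; a larger
# statistic suggests the sample does not match the population.
print(f"chi-square = {stat:.1f}, looks representative: {stat < 5.99}")
```

A check like this belongs on the checklist before any analysis starts: if the statistic blows past the cutoff, rebalance or reweight before drawing conclusions.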
It may not deliver the results you’re expecting.
Particularly if you have a preordained conclusion in mind.
Even if you keep your processes clean.
But it will be better.
It’ll be objectively better data.
And if you’re using that data to make decisions which you should be, you will make better decisions.
So really good question.
There’s a whole bunch of other biases.
And there’s, there’s a difference between a statistical bias and a human bias that we’ll leave for another time.
But these are, I would say, the two most common statistical biases that we see in data science and AI. If you have follow-up questions, leave them in the comments box below. Subscribe to the YouTube channel and the newsletter. I’ll talk to you soon. Take care.
Want help solving your company’s data analytics and digital marketing problems? Visit TrustInsights.ai today and let us know how we can help you.
You might also enjoy:
- The Basic Truth of Mental Health
- Retiring Old Email Marketing Strategies
- What Is The Difference Between Analysis and Insight?
- Almost Timely News, 17 October 2021: Content Creation Hacks, Vanity Metrics, NFTs
- Is Social Listening Useful?
Want to read more like this from Christopher Penn? Get updates here:
Get your copy of AI For Marketers