Phil asks, “How do you determine a large enough sample size for things like our survey? I always thought 10% sample would be enough, but you seemed to think that’s not true?”
It depends on the size of the overall population. The smaller the population, the larger the sample you need. It also depends on the level of accuracy you need: how repeatable, and what margin of error you're comfortable with. Many surveys are done at a 95% confidence level (meaning if you repeated the survey 100 times, 95 times it would come back the same) and anywhere from a 2-3% margin of error (meaning that if 49% of people said no to a question and 51% said yes, statistically there is no difference, but at 48/52, there is). Watch the video for a full explanation and examples.
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
- Got a question for You Ask, I’ll Answer? Submit it here!
- Subscribe to my weekly newsletter for more useful marketing tips.
- Find older episodes of You Ask, I Answer on my YouTube channel.
- Need help with your company’s data and analytics? Let me know!
- Join my free Slack group for marketers interested in analytics!
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.
In today’s episode, Phil asks, “How do you determine a large enough sample size for things like a survey? I always thought a 10% sample would be enough, but you seem to think that’s not true.”
It is not true.
Surveys and statistical validity depend on the size of the population you’re surveying.
The smaller the population, the larger the sample you’re going to need to deal with outliers and discrepancies.
It’s tough to explain in the abstract, so let’s do this.
I’m going to take... I have five colored blocks here, right? Three blue, two yellow. I’ll put them in this hat.
Now I’m gonna pull one block out of this hat.
Remember: three blue, two yellow.
This is a 20% sample of a population of five. If I conclude, based on that sample, that every block in this hat is blue, we know that’s not true, right? There are two yellows and three blues in here.
And so from a very small sample size, the luck of the draw matters. I pull out two: still all blue, right? I pull out three: okay, now we’re starting to get somewhere, there’s a yellow in there. Pull out four for an 80% sample: three blue and one yellow. And then all five for a 100% sample.
So if you have a very small population, one outlier can really skew the survey results, right? (And yes, I do keep blocks and other creative things at my desk.)
If I have a box full of these, and I start pulling out a handful (this is probably about 10%), you’re going to see that because there are so many more blocks, as long as they are properly mixed, when I pull out samples I start to get a more representative sample of the population as a whole.
Now, if this box were 300 million bricks, we wouldn’t be doing this video, because my basement would be full.
But if I had 300 million, I could pull out 1,000 of these.
And again, as long as it was well mixed, I would have a pretty good idea of what the entire population would look like, based on that sample of 1,000.
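The block demonstration can be sketched as a quick simulation (my own illustration, not part of the episode; the 60/40 blue-to-yellow mix mirrors the hat). It shows how often a tiny sample badly misreads a tiny population, versus a modest sample of a big, well-mixed one:

```python
import random

random.seed(42)  # reproducible draws

def misread_rate(population, sample_size, trials=10_000):
    """Fraction of trials where the sample's blue share is off from the
    true blue share by more than 10 percentage points."""
    true_share = population.count("blue") / len(population)
    bad = 0
    for _ in range(trials):
        sample = random.sample(population, sample_size)
        share = sample.count("blue") / sample_size
        if abs(share - true_share) > 0.10:
            bad += 1
    return bad / trials

hat = ["blue"] * 3 + ["yellow"] * 2        # the 5-block hat (60% blue)
box = ["blue"] * 600 + ["yellow"] * 400    # a 1,000-block box, same mix

print(misread_rate(hat, 1))    # one block, a 20% sample: always misleading
print(misread_rate(box, 100))  # 100 blocks, a 10% sample: rarely far off
```

With a single block you learn one color and nothing about the mix, so the estimate is always off; with 100 blocks drawn from 1,000, the sampled share lands near 60% blue almost every time.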
Because there are so many, as long as it’s stirred, I’m getting a representation. That’s what we’re trying to figure out: can we get a group, a cluster, that is representative of the whole, so that we can extrapolate to the whole? When you have a small group, you can’t do that, because there’s a much greater chance of variability, and you could end up drawing some really wrong conclusions. Even something as simple as, say, I’m at a conference and I get speaker reviews back: there are 500 people in the room, and 10 people left reviews. Five of them said I was a great speaker, five said I was a terrible speaker.
Is that representative? No, not even close.
Because there’s a self-selection bias even there: those 10 people felt strongly enough to leave comments.
And the other 490 people didn’t.
And there’s a very good chance that those 490 people felt differently than the 10 people who did decide to respond.
So there are a whole bunch of different considerations when you tackle surveys in particular, and I would refer you to a few reading sources. A great one is Edison Research, where my friend Tom Webster works, so go to edisonresearch.com; brandsavant.com is also a good place to go. And then there are organizations: the American Association for Public Opinion Research, aapor.org, and CASRO, the Council of American Survey Research Organizations. Both of those are great organizations that have detailed best practices about public opinion research and surveys, which will give you some really good starting points for understanding how to do surveys well and how to avoid many of the biases and traps that you run into.
Non response bias, meaning that the people who don’t respond are different than the people who do respond is a big one.
If you’re doing a survey of, say, your email newsletter list, and you only send it to people who have opened emails in the past, well, what about all those people who don’t open your emails? Do they feel differently about your brand or your company? You bet they do.
So you have to keep in mind all these different things that can go wrong. Your best bet for determining sample size is to use one of the many, many sample size calculators out there on the web.
SurveyMonkey has one, SurveyGizmo has one; pretty much every surveying company has one.
And they’re going to ask you for two major numbers.
They want to know your confidence level and your confidence interval.
Confidence level means: if you repeat a process 100 times, how many times do you get the same results?
So with this five-blocks-in-the-hat business: if I repeat this draw 100 times in a row, how many times am I going to get the same result? That is your confidence level.
Most surveys operate at a 95% confidence level. That’s the general best practice: if you repeated the survey 100 times, 95 of those times you’d get the same result.
That is what will help you determine the sample size: how much of the population do you need to survey in order to get that reliability, the same results 95 times out of 100?
The second is confidence interval or margin of error.
This is how granular you need the results to be in order to judge that they’re accurate. So let’s say there’s a yes or no question.
Right? And 49% of people said no, and 51% of people said yes.
If you have a margin of error of 3%, meaning any answer could go either way, plus or minus 3%.
Then, with 49% of people saying no and 51% saying yes, the margin of error is large enough that you can’t tell which answer is correct, because the 49% could be as low as 46% or as high as 52%.
And the 51% could be as low as 48% or as high as 54%.
And they overlap. That means your confidence interval is too wide. The catch is, the narrower you make the confidence interval, the larger your sample has to be in order for it to be representative.
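The overlap check just described can be written down directly (a minimal sketch of my own; the percentages are the ones from the example above):

```python
def distinguishable(share_a, share_b, margin):
    """True only if the two intervals (share +/- margin) do not overlap."""
    return (share_a + margin) < (share_b - margin) or \
           (share_b + margin) < (share_a - margin)

# 49% no vs. 51% yes with a +/-3% margin of error: the intervals
# [46%, 52%] and [48%, 54%] overlap, so there is no real difference.
print(distinguishable(0.49, 0.51, 0.03))  # False

# 40% vs. 60% at the same margin: [37%, 43%] vs. [57%, 63%], a clear gap.
print(distinguishable(0.40, 0.60, 0.03))  # True
```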
The same is true of confidence level: the higher your confidence level (90%, 95%, 99%), the larger your sample has to be.
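For a sense of what those online calculators compute, here is a rough sketch using Cochran’s formula with a finite population correction (my own illustration; the z-scores and the worst-case p = 0.5 are standard textbook assumptions, not something from the episode):

```python
import math

Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}  # z-score per confidence level

def sample_size(population, confidence=0.95, margin=0.03, p=0.5):
    """Respondents needed for a given confidence level and margin of error."""
    z = Z[confidence]
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(sample_size(1_000_000, 0.95, 0.03))  # big population: 1066 people
print(sample_size(500, 0.95, 0.03))        # small population: 341 of 500!
print(sample_size(1_000_000, 0.99, 0.03))  # tighter confidence costs more
```

Note how the required sample barely grows past a thousand respondents even for a population of a million, while a population of 500 needs more than two thirds of itself surveyed, which is the point of the block demonstration.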
If you incur a cost for sending out a survey, then you have to strike a balance between how much you want to spend and how accurate you need your survey to be. It is a balancing game to make that determination, especially if you ever want to ask questions where you have to drill down to a subset of your population; then it’s going to get really expensive.
So keep that in mind.
These are good questions to ask before you do a survey because they dictate the type of survey you’re going to do.
They dictate the cost of it.
They dictate what you can and can’t do with the information.
So it’s a really good question.
Again, use one of those calculators, and spend some time learning about surveys, in particular the biases that go into them, because that is what will ruin them more than anything else: doing a survey, saying it’s representative, and then having it not be.
Because if you make a decision based on a sample that’s too small and therefore skewed, you could really throw off every decision you make from that. Do you spend money on this? Is this a focus of yours? Is this something that people care about? If the answers are skewed because you didn’t get a good enough sample, you could spend, and waste, a lot of time and money on something that’s not going to work.
So get the survey basics down first, before you run the survey.
Because the other thing that’s tricky about surveys is that there’s no going back.
There’s no rewinding.
You can’t fix the survey data after you’ve collected it.
Great question! Leave your follow-up questions here in the comments box, subscribe to the YouTube channel and the newsletter, and I’ll talk to you soon.
Want help solving your company’s data analytics and digital marketing problems? Visit TrustInsights.ai today and let us know how we can help you.
You might also enjoy:
- Google Analytics: A Content Marketing Engagement Test
- Solicited Review: Content Inc. Second Edition
- Six Types of Marketing Demand Generation
- Four Requirements of Great Marketing Data Visualization
- Google Analytics 4 or Bust: Lessons from Google Marketing Live 2021
Want to read more like this from Christopher Penn? Get updates here:
Get your copy of AI For Marketers