### How Many Survey Responses Do You Really Need?

You don’t need thousands of survey responses to be confident in your research. Even so, nearly every association I have worked with invites all members (and often past members and non-members as well) to participate in its surveys. On one recent project a client invited all 100,000-plus members to complete the member survey.

The reasoning for most association executives is that they need all of those members, or at least a large percentage of them, for the survey to be valid.

But in fact, in most cases we don’t need thousands of survey responses, because as the number of responses grows, the variability in the results shrinks. That’s the power of statistics.

So if you don’t need thousands, how many responses do you need? By using a random sample we can measure the accuracy of the survey.

Accuracy is described by the **confidence level** and the **margin of error**.

The **confidence level** describes the level of uncertainty in the sampling method. Most researchers use the 95% confidence level, which means you can be 95% certain that the true population value of a response falls within the confidence interval.

So what is the confidence interval? We usually hear it referred to in news stories as the margin of error. At a 95% confidence level, a margin of error of five percentage points means the response may be as much as five points higher or lower than if the whole population had answered the question. For example, if the margin of error is 5 points and 30% of your survey participants say that blue is their favorite color, you can be confident that between 25% and 35% of the population would say their favorite color is blue. The larger the margin of error, the less confidence you should have in the data.
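The interval in the example above can be computed directly. Here is a minimal sketch in Python (the function names are illustrative; it uses the standard normal-approximation formula for a proportion, with z = 1.96 for a 95% confidence level):

```python
import math

Z_95 = 1.96  # z-score for a 95% confidence level

def margin_of_error(p: float, n: int, z: float = Z_95) -> float:
    """Margin of error for an observed proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

def confidence_interval(p: float, n: int, z: float = Z_95) -> tuple[float, float]:
    """(low, high) bounds for the true population proportion."""
    moe = margin_of_error(p, n, z)
    return (p - moe, p + moe)

# 30% of 383 respondents favor blue: the true share is roughly 25%-35%
low, high = confidence_interval(0.30, 383)
```

With 383 responses the margin of error works out to about 4.6 percentage points, consistent with the roughly five-point margin in the example.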

All of this is to say that you don’t usually need thousands and thousands of responses to be confident in your data.

The following table shows the number of responses needed for a variety of example population sizes, calculated at the 95% confidence level and with a +/- 5% margin of error.

You can see from the table that the association with 100,000 members only needed 383 responses from a random sample, at least for the topline survey findings. It is important to note that most survey data is analyzed for subgroups, such as responses from men versus women or particular age groupings. In that case, each subgroup's sample will need to be correspondingly larger for its results to be meaningful.
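The sample sizes in the table follow from the standard formula for estimating a proportion, with a finite-population correction. A sketch in Python (the function name and defaults are illustrative; p = 0.5 is the conservative choice that maximizes the required sample, and 1.96 is the z-score for 95% confidence):

```python
import math

def sample_size(population: int, moe: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Responses needed at the given margin of error and confidence level."""
    # Required sample for an effectively infinite population
    n0 = (z ** 2) * p * (1 - p) / (moe ** 2)
    # Finite-population correction shrinks the requirement for smaller groups
    n = n0 / (1 + n0 / population)
    return math.ceil(n)

# An association of 100,000 members needs only 383 random responses
needed = sample_size(100_000)
```

Note how weakly the result depends on population size: once the membership reaches the tens of thousands, the required sample barely grows, which is why a few hundred responses suffice even for very large associations.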

There are several sample size calculators on the Internet. The Raosoft Sample Size Calculator is a good source for determining sample sizes. It includes easy-to-understand explanations of terms such as margin of error, confidence level, and population size.

To be sure, many association executives consider surveys a member benefit of sorts. Their reasoning is that the survey is a way for members to register their opinions with the association on issues of importance, so all members should have the opportunity to give their opinions. But if your association has a robust research agenda and fields numerous smaller surveys during the course of a year, you risk survey fatigue.

The bottom line is that it’s always good to ask a simple question: how many survey responses do we really need?

Think and share with me: rwedewer@tecker.com.