“Why aren’t people responding?”

This is the perpetual question asked by anyone doing survey research, and it's one I am no stranger to. There are common strategies for combating low survey participation, but what happens when they fail?

Last year, I was co-principal investigator on a small Advanced Technological Education (ATE) grant to conduct a nationwide survey of high school biology teachers. This was a follow-up to a 1998 survey done as part of an earlier ATE grant my institution had received. In 1998, the survey was done entirely by mail and had a 35 percent response rate. In 2018, we administered an updated version of this survey to nearly 13,000 teachers. However, this time, there was one big difference: we used email.

After a series of four messages over two months (a pre-notice, an invitation, and two reminders), an incentivized survey, and intentional targeting of high school biology teachers, our response rate was only 10 percent. We anticipated that teachers would be busy and that a 15-minute survey might be too much for many of them to complete during the school day. However, there appeared to be a bigger problem: nearly two-thirds of our messages were never opened and perhaps never even seen.

To boost our numbers, we decided to return to what had worked previously: the mail. Rather than send more emails, we mailed an invitation to individuals who had not completed the survey, followed by postcard reminders. Individuals were reminded of the incentive and directed to a web address where they could complete the survey online. The end result was a 14 percent response rate.

I noticed that, particularly when we emailed teachers at their school-provided email addresses, many messages never reached the intended recipients. Although a return to a mail-only design is unlikely, an alternative is to heed the advice of Millar and Dillman (2011): use a mixed-mode, web-then-mail contact strategy so that spam filters don't prevent would-be participants from ever seeing your survey. Asking the following questions can help guide your method-of-contact decisions and help you avoid troubleshooting a low response rate mid-survey.

  1. Have I had low response rates from a similar population before?
  2. Do I have the ability to contact individuals via multiple methods?
  3. Is using the mail cost- or time-prohibitive for this particular project?
  4. What sample size do I need to reasonably represent the target population?
  5. Have I already made successful contact with these individuals over email?
  6. Does the survey tool I'm using (SurveyMonkey, Qualtrics, etc.) tend to get snagged by spam filters when I use its built-in invitation management features?
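For question 4, a standard calculation can put a number on the sample you need before you choose contact methods. As an illustration (the formula is not from the original post), here is a minimal sketch using Cochran's sample-size formula with a finite-population correction; the population figure of 13,000 echoes the survey described above, and the 10 percent response rate shows how a needed sample translates into invitations to send:

```python
import math

def required_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's formula with a finite-population correction.

    population: size of the target population (e.g., ~13,000 teachers)
    margin: desired margin of error (0.05 = +/- 5 percentage points)
    z: z-score for the confidence level (1.96 is roughly 95%)
    p: assumed proportion; 0.5 is the most conservative choice
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

# Completes needed for +/- 5 points at 95% confidence:
needed = required_sample_size(13000)

# Dividing by the response rate you expect gives the number of
# invitations to send; at 10%, the list must be ten times larger.
invitations = math.ceil(needed / 0.10)
```

Running this kind of calculation up front makes question 3 concrete as well: once you know how many invitations a low response rate implies, you can judge whether mailing that many contacts is cost- or time-prohibitive.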

These are just some of the considerations that may help you avoid major spam filter issues in your forthcoming project. Spam filters may not be the only reason for a low response rate, but anything that can be done to mitigate their impact is a step toward a better response rate for your surveys.


Reference

Millar, M. M., & Dillman, D. A. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75(2), 249–269.

About the Author

Lindsay Barone

Program Evaluator, Cold Spring Harbor Laboratory, DNA Learning Center

Dr. Lindsay Barone is an anthropologist at Cold Spring Harbor Laboratory’s DNA Learning Center. Trained in both biological and cultural anthropology, she focuses her research on informal science education, science literacy, and biology education for grades K-16. She is responsible for the evaluation of a number of biology and biotechnology-focused programs, including the recently ended National Science Foundation-Advanced Technological Education program Genomic Approaches in Biosciences.

Creative Commons

Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.