Every week new email invitations arrive asking me to participate in an online survey about some product or service I recently used. And each time, as I read the stated reasons why I should comply with the request, I find myself taking a mental inventory: what I know or don’t know about the subject matter, what I can or cannot recall about my user experience, how positive or negative that experience was, and whether this product or service matters enough in my life to be worthy of my time answering survey questions.
Last week I was asked by one of my trade organizations to participate in an online survey about their quarterly magazine. Or is it a monthly magazine? Maybe every two months? I am not sure, but I do know that I receive it and I read it. I stared at the email invitation, taking the usual inventory and sifting through my usual battery of qualifying questions, pondering whether or not to complete this survey. Yes, I told myself, I remember receiving this magazine and I know that I read it when it arrives, but do I really have anything to say about it? My opinion falls in neutral territory: I cannot recall anything singular that I dislike about the magazine, yet nothing stands out as exceptional. Among all the research-related material I read, this magazine just doesn’t jump out at me. So, after much contemplation, and anticipating the inevitable lengthy grid questions about attributes I have little opinion about (forcing non-substantive responses), I elected to delete the email invitation and move on with my day.
At about the same time, I received an invitation to participate in an online survey for a financial institution. Once again, I weighed the arguments for and against participation until I finally decided that yes indeed, I had something to say to this company; in fact, I had been waiting for this opportunity to voice my concerns about a few issues I have had with their product and their service. At last, my chance to vent! And vent I did, even to the point of desperately conjuring responses to questions so poorly worded, with response scales so poorly constructed, that a neutral (middle-of-the-scale) answer was the only reasonable choice when in fact “not applicable” (an answer option not offered) was the honest response. Skipping questions, of course, was not permissible.
Where is telephone research when you need it? Many research firms that built their business on telephone interviewing are looking for other sources of revenue now that clients have switched from telephone to online. And yet telephone interviewing continues to represent a highly credible research method. While there are clearly sampling and coverage issues, particularly with landline frames, the method itself, an individual speaking directly with another individual, is a classically valid and important research mode.
Self-selection bias, as I demonstrated in my ultimate refusal to complete the online survey about a trade magazine, is a problem in all research. Yet telephone research designs, with their typical protocol of repeated callbacks to gain cooperation from the sample list, are among the best at mitigating this potential biasing effect. By repeatedly calling back to reach the person who will hopefully become a survey respondent, the researcher strengthens the study’s credibility by incorporating the opinions not just of people who are ready to vent but of individuals who have more neutral yet meaningful contributions to make to the research. By lessening the potential for self-selection bias, this more inclusive approach also reduces error due to nonresponse while increasing precision. Indeed, research on callback protocols (e.g., Groves et al. 2001) indicates that increasing callback attempts reduces nonresponse bias while improving the accuracy of survey data.
Telephone research improves accuracy not only by maximizing the inclusion of the intended sample but also through the interviewer, who serves as the channel by which questions are administered. Because of the interviewer, questions can be repeated and clarified, and unexpected responses can be noted and reviewed during the field period for possible adjustments to the survey design. So, when a telephone respondent is asked about something he or she knows little about, the interviewer, unlike an unforgiving online screen that demands a response before the respondent can move forward in the survey, can accept a “don’t know” or “not applicable” answer.
A more representative sample, more accurate data, and a more user-friendly respondent experience are just a few of the reasons why telephone research continues to play an important role in research design.
Groves, Robert, Douglas Wissoker, Liberty Greene, Molly McNeeley, and Darlene Montemarano. 2001. “Common Influences on Noncontact Nonresponse across Household Surveys: Theory and Data.” Unpublished manuscript.