A few years ago I teamed up with Linelle Blais (my client at the American Cancer Society) to conduct a test of online versus traditional paper survey mode effects. At issue was whether the Volunteer Satisfaction Study, which we had been conducting nationwide for many years, could be converted from a self-administered paper questionnaire delivered via USPS to an online format. By shifting to online we hoped to save the Society money as well as provide a faster turnaround of the findings. The experiment set out to determine: the rate at which volunteers would respond to a Web survey; the rate at which subgroups would respond; the completeness of volunteers’ responses; and the degree to which item responses vary as a result of Web vs. paper completion.
A fairly complete account of this research was published in Quirk’s last year (“A Volunteered Response”), and much of the following discussion is drawn from that article.
Among other things, this research confirmed what researchers elsewhere had found: a lower response rate on the Web, but also a lower rate of item nonresponse. Interestingly, respondents to the Web survey not only answered more questions but were also significantly more likely to respond to the open-ended question asking for their suggestions to improve volunteer satisfaction. Sixty-five percent of the volunteers in the Web mode answered this question, compared to 52% of the volunteers in the paper mode. The sentiment of these comments (i.e., positive vs. negative vs. neutral remarks) did not differ greatly across modes; the length of the comments, however, varied dramatically by mode. The average word count of comments made by volunteers responding online was 13 times higher than among volunteers responding on paper – 268 words per comment vs. 20 words per comment, respectively.
There is also some indication that the quality of the open-ended comments in the Web mode may be superior to that in the paper mode. A cursory analysis of comments in both modes suggests that comments from Web respondents are more detailed (e.g., references to specific examples or names) and more constructive (i.e., offer suggestions for improvement) than comments from the paper questionnaire.
The readability of these comments, however, appears to be a different matter. Based on Flesch-Kincaid Grade Level analysis, the comments from the paper survey read at an 8th-grade level, while comments from the online respondents read at a 7th-grade level. Whether this is a function of the younger age of Web respondents, the informal (even sloppy) writing style many email users have adopted, or something else is a question left for further research.
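For readers curious how such readability scores are produced: the Flesch-Kincaid Grade Level is a standard formula based on average sentence length and average syllables per word. The sketch below is my own minimal illustration, not the tool used in the study; in particular, the vowel-group syllable counter is a crude heuristic (dedicated readability software uses more careful syllabification).

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count runs of vowels, with a
    simple adjustment for a silent trailing 'e'."""
    word = word.lower()
    vowel_groups = re.findall(r"[aeiouy]+", word)
    count = len(vowel_groups)
    # Drop a silent final 'e' ("necessitate" -> 4, not 5), but keep
    # words like "apple" ("-le") and "free" ("-ee") intact.
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    A score of ~7.0 corresponds to a 7th-grade reading level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short, plain sentences score low on this scale, while long sentences full of polysyllabic words score high, which is why informal, clipped email-style writing tends to register at a lower grade level.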
Open-ended questions – in both qualitative and quantitative research – are typically added to interview and questionnaire designs to enrich the data, providing meaningful insight that might otherwise be lost in a completely close-ended approach. The fact that the quality of these responses can vary across modes should be an important consideration in constructing our research designs, as well as in our analyses and interpretation.