An article posted back in 2011 in Research Design Review — “13 Factors Impacting the Quality of Qualitative Research” — delineated three broad areas and 13 specific components of qualitative research design that can influence the quality of research outcomes. One factor, under the broad category of “The Environment,” is the “presence of observers/interviewers as well as other participants.” In other words, how does the inclusion of other people — whether it be client observers, interviewers, fellow participants, videographers, or note takers — affect the attitudes, behaviors, and responses we gain from our research efforts? Does research, almost by definition, create an artificial social context where participants/respondents seek others’ approval leading to a false understanding of their realities?
Social desirability bias is not a new concern in research design, and its influence on the ultimate usefulness of our qualitative and quantitative research has been the focus of attention for quite some time. Tourangeau, Rips, and Rasinski (2000) discuss social desirability in the context of sensitive questions:
“[The] notion of sensitive questions presupposes that respondents believe there are norms defining desirable attitudes and behaviors, and that they are concerned enough about these norms to distort their answers to avoid presenting themselves in an unfavorable light.”
Nancarrow and Brace — in their article “Saying the ‘right thing’: Coping with social desirability bias in marketing research” (2000) — address the under- and over-reporting associated with social desirability bias and outline numerous techniques that have been used to deal with the problem — e.g., emphasizing the need for honesty, promising confidentiality, and softening question wording to reduce the suggestion that the respondent should know the answer to a particular question or behave in a certain way.
Online technology and the ever-growing online research designs that are emerging — within social media, mobile, bulletin boards, communities, and survey research — are believed by some to have allayed social-desirability concerns. Some researchers hold that one of the beauties of the virtual world is that its inhabitants basically live in solitude, and they claim that a key advantage to online qualitative research, for instance, is the obliteration of social desirability bias and hence the heightened validity of online vs. offline designs*.
The idea that researchers who design online studies can ignore potential bias due to social desirability seems misguided. In fact, a good case can be made that the Internet and online technology have unleashed a dynamic capacity for posturing and the need for approval. Popularity and even celebrity — so elusive to the everyday person in earlier times — have become preoccupations. You only need to witness the apparent race for Facebook friends, LinkedIn connections, Twitter followers, and YouTube or blog views — as well as the “vanity” and online self-publishing craze — to gain some insight into the potential competitiveness, i.e., the pursuit of social stature, fueled by the online realm. In this way, the virtual social environment has encouraged a look-at-me way of thinking and behaving.
So, how real are those at-the-moment snippets transmitted by mobile research participants (which may be meant to impress the researcher more than inform)? How honest are those product reviews or blog comments? What is the extent of bravado being exhibited in our online communities, bulletin boards, and social network exchanges? The answer is we do not know, and yet it doesn’t take a great leap of faith to acknowledge that the individual attitudes and behavior we capture online are potentially distorted by an underlying need for social approval.
To paraphrase Mark Twain, the reports of the death of social desirability bias in online research are greatly exaggerated; and, to the contrary, social needs have blossomed in the online world. More than ever, people are asking, “Do you like me?” and, in doing so, presenting the researcher with a critical design issue that impacts the quality of our outcomes.
Nancarrow, C., & Brace, I. (2000). Saying the “right thing”: Coping with social desirability bias in marketing research. Bristol Business School Teaching and Research Review, 3(11).
Tourangeau, R., Rips, L., & Rasinski, K. (2000). The Psychology of Survey Response. Cambridge University Press.
Given that one of the most advantageous aspects of conducting in-person focus groups is the ability to read participants’ body language, I think online focus groups may carry the critical limitation of losing that information. I see the benefit of using online venues if there really is a phenomenon in which participants tend to be more honest in their responses. I wonder whether conducting an initial in-person focus group followed by an online focus group with the same participants might be beneficial.
Thank you, Jeanette. I absolutely believe in mixed methods and believe that the future of experimentation in research design lies in a better understanding of how different methods can work together to gain a truer picture of the target segment on a given issue.
While I agree with many excellent points here, everything is relative.
1. Paul from Accelerant is correct–I know from having done 100s of focus groups (in-person and online) that you get much more direct responses in online FGs. In OLFGs, people seem more comfortable giving negative feedback and sharing opinions that differ from other participants’.
2. My personal hypothesis–which I can’t prove with data–is that the desire for social status is less of an issue in one-off online research efforts. A survey participant, a focus group participant, a FB poll taker–those are very different than participating in ongoing research exercises/communities where stature can develop over time. Not all online research is, IMO, subject to the same level of risk.
Of course, the desire for social approval impacts research–both offline and online. The onus is on the researcher to minimize it as much as possible and to report the results very carefully. We must always remind our clients–the people who receive research results–that there can be a big difference between what people say they think/do and what they actually think/do. This doesn’t mean the research is “bad”–just that we need to understand how it can be used.
Thanks, Kathryn. Great comments, and I think you make my point. The need for social approval is there, but getting a handle on it is difficult. We need to acknowledge it and consider it in our online designs, just as we have in the offline arena.