Shared Constructs in Research Design: Part 2 — Bias

Part 1 of this discussion of shared constructs — “Shared Constructs in Research Design: Part 1 – Sampling” — acknowledges the distinction between quantitative and qualitative research while highlighting the notion that certain fundamental constructs are common to a quality approach to research design regardless of method or, in the case of qualitative research, paradigm orientation. Three such constructs are sampling, bias, and validity. Part 1 of this discussion focused on sampling (prefaced by a consideration of paradigms in qualitative research and the importance of quality research design regardless of orientation). This article (Part 2) discusses bias.

Bias in qualitative research design has been the topic of a number of articles in Research Design Review over the years. One of these articles is a broad discussion of paying attention to bias in qualitative research, and another explores social desirability bias in online research. An article written in 2014 examines the role of empathy in qualitative research and its potential for enhancing clarity while reducing bias in qualitative data, and another article in RDR discusses the importance of visual cues in mitigating sources of bias in qualitative research. Other RDR articles concerning bias are specific to methods. For example, a couple of articles discuss mitigating interviewer bias in the in-depth interview method — “In-depth Interviewer Effects: Mitigating Interviewer Bias” and “Interviewer Bias & Reflexivity in Qualitative Research” — while another article focuses on ethnography and mitigating observer bias, and a fourth considers potential bias in mobile (smartphone) qualitative research.

Others in the field of psychology have discussed various aspects of bias in qualitative research. For example, Linda Finlay (2002) discusses the value of reflexivity as a tool to, among other things, “open up unconscious motivations and implicit biases in the researcher’s approach” (p. 225). Ponterotto (2005) looks at the varying role and understanding of bias across paradigm orientations in qualitative research among postpositivist, constructivist–interpretivist, and critical–ideological researchers. In psychiatry, Whitley & Crawford (2005) suggest ways to mitigate investigator bias and thereby increase the rigor of qualitative studies. Morrow (2005) asserts that “all research is subject to researcher bias,” highlights the subjectivity inherent in qualitative research, and explores bracketing and reflexivity as means of “making one’s implicit assumptions and biases overt to self and others” (p. 254). And researcher bias is central to the Credibility component of the Total Quality Framework (Roller & Lavrakas, 2015).

Social scientists such as Williams & Heikes (1993) examine the impact of interviewer gender on social desirability bias in qualitative research, while Armour, Rivaux, and Bell (2009) discuss researcher bias within the context of analysis and interpretation in two phenomenological studies. In a recent paper, Howlett (2021) reflects on the transition to online research and the associated methodological considerations, such as the negative impact of selection bias due to weak recruitment and engagement strategies.

Among healthcare researchers, Arcury & Quandt (1999) discuss recruitment with a focus on sampling and the use of gatekeepers, with an emphasis on the potential for selection bias which they monitored by way of reviewing “the type of clients being referred to us, relative to the composition of the site clientele” (p. 131). Whittemore, Chase, & Mandle (2001) define quality in qualitative research by way of validity standards, including investigator bias — “…a phenomenological investigation will need to address investigator bias (explicitness) and an emic perspective (vividness) as well as explicate a very specific phenomenon in depth (thoroughness)” (p. 529). And Morse (2015), who is a pioneer in qualitative health research and has written extensively on issues of quality in qualitative research design, highlights the mitigation of researcher bias as central to the validity of qualitative design, offering “the correction of researcher bias” as one recommended strategy for “establishing rigor in qualitative inquiry” (p. 33).

Another shared and much discussed construct among qualitative researchers — validity — is the focus of Part 3 in this discussion.

Arcury, T. A., & Quandt, S. A. (1999). Participant recruitment for qualitative research: A site-based approach to community research in complex societies. Human Organization, 58(2), 128–133.

Armour, M., Rivaux, S. L., & Bell, H. (2009). Using context to build rigor: Application to two hermeneutic phenomenological studies. Qualitative Social Work, 8(1), 101–122.

Finlay, L. (2002). Negotiating the swamp: The opportunity and challenge of reflexivity in research practice. Qualitative Research, 2(2), 209–230.

Howlett, M. (2021). Looking at the ‘field’ through a Zoom lens: Methodological reflections on conducting online research during a global pandemic. Qualitative Research.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52(2), 250–260.

Morse, J. M. (2015). Critical analysis of strategies for determining rigor in qualitative inquiry. Qualitative Health Research, 25(9), 1212–1222.

Ponterotto, J. G. (2005). Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. Journal of Counseling Psychology, 52(2), 126–136.

Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.

Whitley, R., & Crawford, M. (2005). Qualitative research in psychiatry. Canadian Journal of Psychiatry, 50(2), 108–114.

Whittemore, R., Chase, S. K., & Mandle, C. L. (2001). Validity in qualitative research. Qualitative Health Research, 11(4), 522–537.

Williams, C. L., & Heikes, E. J. (1993). The importance of researcher’s gender in the in-depth interview: Evidence from two case studies of male nurses. Gender and Society, 7(2), 280–291.

Finding Meaning: 4 Reasons Why Qualitative Researchers Miss Meaning

Research of any kind that is interested in the human subject is interested in finding meaning. It is typically not enough to know that a behavior has occurred without knowing the significance of that behavior for the individual. Even survey research, with its reliance on mostly preconceived closed-ended questions, is designed with some hope that sense (i.e., meaning) can be derived by cross-tabulating data from one question with another, factor analyzing, t-testing, z-testing, regressing, correlating, and applying any number of other statistical techniques.

Yet it is qualitative research that is usually charged with finding meaning. It is not good enough to know who does what, for how long, or in what manner. Qualitative researchers are not so …

Knowing What We Don’t Know: Social Desirability & Time-use Diaries

Back in February 2012, Research Design Review posted a discussion titled “Accounting for Social Desirability Bias in Online Research,” questioning the idea that social desirability is less of a factor in the online mode (compared to more traditional research methods) and arguing that, to the contrary, “individual attitudes and behavior we capture online are potentially distorted by an underlying need for social approval.” Social desirability is an interesting and important source of error in research regardless of mode and is worthy of consideration in all of our design strategies.

Social desirability as a factor in measurement error is particularly relevant when people assume there are socially acceptable behaviors or attitudes connected with the research topic: behavior and attitudes, for instance, associated with healthy eating, exercise, religious activities, and (in the old days) visits to the library. Because direct questioning about issues potentially laden with social (or personal-worth) status can distort a true picture of behavior, researchers have used time-diary data based on a person’s methodical recording of daily activities. Concerning library visits, Iiris Niemi’s 1993 paper compared interview survey data with actual behavior recorded in time-use diaries, showing that “the interview method produced 70% more library visits than the diary method.” Similarly, Niemi reports that a much higher percentage of people claimed to engage in daily physical exercise when asked directly than was documented by the diary data on actual frequency of exercise.

Overreporting of religious activities, and specifically religious service attendance, has been the focus of Philip Brenner and other researchers. Looking at religious service attendance among the U.S. population, researchers have concluded that there is a positive relationship between direct measures (i.e., direct questioning) and socially desirable responding. For example, 40%–50% of people report attending church every week when asked directly in a survey question, yet time-use diary studies (which do not tell participants what behaviors are of particular interest) indicate that only 20%–30% actually attend church weekly. Some have linked this disparity to the idea that people tend to interpret the survey question ‘How often do you attend church?’ as ‘Are you a good, church-going person?’ Interestingly, the gap between direct and indirect data on this issue, as well as the tendency to reinterpret the church-attendance question as one about personal identity or worth, appears to be a U.S. phenomenon, with people in other countries reporting very similar frequencies of attendance regardless of research method.

Until social stigmas disappear and people no longer feel threatened by reporting their true nature, research design will always struggle with the problem of social desirability. Even though time-use diaries – as well as other more modern, in-the-moment approaches, such as mobile research – may be just as susceptible to self-censorship as typical survey responses, it seems right that unobtrusive yet personal modes of data collection may move us closer to knowing what we don’t know about consumer behavior.