Shared Constructs in Research Design: Part 3 — Validity

Not unlike Part 1 (concerning sampling) and Part 2 (concerning bias) of this discussion, the shared construct of validity in research design has also been an area of focus in several articles posted in Research Design Review. Most notable is "Quality Frameworks in Qualitative Research," posted in February 2021, in which validity is discussed within the context of the parameters or strategies various researchers use to define and think about the dimensions of rigor in qualitative research design. That article uses the Total Quality Framework (Roller & Lavrakas, 2015) and the criteria of Lincoln and Guba (1985) to underscore the idea that a quality approach to design cuts across paradigm orientations, leading to robust and valid interpretations of the data.

Many other qualitative researchers, across disciplines, believe in the critical role that the shared construct of validity plays in research design. Joseph Maxwell, for example, discusses validity in association with his realism approach to causal explanation in qualitative research (Maxwell, 2004) and details five distinct dimensions of validity, including descriptive, interpretive, and theoretical validity (Maxwell, 1992). And, of course, Miles & Huberman were promoting greater rigor by way of validity more than three decades ago (Miles & Huberman, 1984).

More recently, Koro-Ljungberg (2010) takes an in-depth look at validity in qualitative research and, with extensive literature as the backdrop, makes the case that "validity is in doing, as well as its (un)making, and it exhibits itself in the present paradox of knowing and unknowing, indecision, and border crossing" (p. 609). Matteson & Lincoln (2008) remind educational researchers that validity does not solely concern the analysis phase of research design but that "the data collection method must also address validity" (p. 672). Creswell & Miller (2000) discuss different approaches to determining validity across three paradigm orientations (postpositivist, constructivist, and critical) and through the "lens" of the researcher, the participants, and researchers external to the study.

Among qualitative health researchers, Morse (2020) emphasizes the potential weakness in validity when confusing the analysis of interpretative inquiry with that associated with “hard, descriptive data” (p. 4), and Morse et al. (2002) present five verification strategies and argue that validity (as well as reliability) is an “overarching” construct that “can be appropriately used in all scientific paradigms” (p. 19).

These researchers, and those discussed in Part 1 – Sampling and Part 2 – Bias, are admittedly a small share of those who have devoted a great deal of thought and writing to these shared constructs. The reader is encouraged to use these references to build on their understanding of these constructs in qualitative research and to grow their own library of knowledge.


Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39(3), 124–130.

Koro-Ljungberg, M. (2010). Validity, responsibility, and aporia. Qualitative Inquiry, 16(8), 603–610.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications.

Matteson, S. M., & Lincoln, Y. S. (2008). Using multiple interviewers in qualitative research studies: The influence of ethic of care behaviors in research interview settings. Qualitative Inquiry, 15(4), 659–674.

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62(3), 279–300.

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11.

Miles, M. B., & Huberman, A. M. (1984). Drawing valid meaning from qualitative data: Toward a shared craft. Educational Researcher, 13(5), 20–30.

Morse, J. (2020). The changing face of qualitative inquiry. International Journal for Qualitative Methods, 19, 1–7.

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 13–22.

Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.

Beyond the Behavior-plus-“why” Approach: Personal Meaning as Insight

Researchers are desperate to understand behavior. Health researchers want to know what leads to a lifetime of smoking and how the daily smoking routine affects quality of life. Education researchers examine the behavior of model teaching environments and contemplate best practices. Psychologists look for signs of social exclusion among victims of brain injuries. Marketing researchers chase an elusive explanation for consumer behavior, wanting to know product and service preferences in every conceivable category. And, if that were not enough, researchers of every ilk, to a lesser or greater extent, grapple with an often ill-fated attempt to predict (and shape) behaviors to come.

But researchers have come to appreciate that behavior is not enough. It is not enough to simply ask about past behavior, observe current behavior, or capture in-the-moment experiences via mobile. Behavior only tells part of a person's story and, so, researchers passionately beef up their research designs to include "why," focusing not just on what people do but on why they do it. "Why," of course, is often phrased as a "what," "how," or "when" question ("What was going on at the time you picked up your first cigarette?") but, whatever the format, the goal is the same: to get at the "why" behind the behavior.

Satisfaction Research & Other Conundrums

Greg Allenby, marketing chair at Ohio State's business school, published an article in the May/June 2014 issue of Marketing Insights on heterogeneity or, more specifically, on the idea that (1) accounting for individual differences is essential to understanding the "why" and "how" that lurk within research data and (2) research designs often mask these differences by neglecting the relative nature of the constructs under investigation. For instance, research concerning preference or satisfaction is useful to the extent it helps explain why and how people think differently about their preferences or levels of satisfaction. Yet these are inherently relative constructs that only hold meaning if the researcher understands the standard (the "point of reference") by which the current question of preference or satisfaction is being weighed: my preference (or satisfaction) compared to...what? Since the survey researcher is rarely if ever clued in on respondents' points of reference, it would be inaccurate to make direct comparisons, such as stating that one person's product preference is two times greater than another's.

The embedded "relativeness" associated with responding to constructs such as preference and satisfaction is just one of the pesky problems inherent in designing this type of research. A related but different problem revolves around the personal interpretation each respondent gives to these constructs.