
Research Quality & the Impact of Monetary Incentives

The following is adapted from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 78-79).

Gaining cooperation from research participants and respondents is important to the quality of qualitative and survey research. A focus on gaining cooperation helps to mitigate potentially weakened data due to the possibility that the individuals who do not cooperate (i.e., do not participate in the research) differ in meaningful ways from those who do cooperate. As mentioned in an article posted earlier in Research Design Review, an effective component of the researcher’s strategy for gaining cooperation among participants is the offer of material incentives (e.g., cash, a gift card, tickets to a sporting event, a donation to a favorite charity).

Although monetary incentives are routinely given to qualitative research participants to boost cooperation, the researcher needs to keep in mind that the offer of a cash (or equivalent) incentive may also jeopardize the quality of the actual focus group discussion, in-depth interview, or observation. The following is one example of how monetary incentives may have the unwanted effect of skewing participants’ responses in an in-depth interview (IDI) study.

Cook and Nunkoosing (2008) conducted an in-person IDI study with 12 “impoverished elders” in Melbourne, Australia to investigate community services for the poor among those “who are excluded or at risk of exclusion from their communities.” Research participants could participate in up to two interviews and were given $20 for each interview.

In reviewing the key findings, the researchers observed many “interview interactions that were atypical.” At least some of these irregularities were attributed to the monetary incentive which, according to Cook and Nunkoosing, helped to create an interview environment where interviewees were motivated “to manage the presentation of self, retain control over the exchange of information, and reduce the stigma of poverty by limiting disclosure and resisting researcher questioning” (p. 421).

The importance of the incentive in the interview process became clear when interviewees volunteered comments such as “I need the $20 . . . ” and critically compared the $20 to better (i.e., higher) cash incentives offered by other research studies. In this way interviewees were in effect “selling” their stories to the interviewer (and, some would say, at a bargain price) which, based on the researchers’ analyses, tainted interviewees’ responses with “stylized accounts” (or “rehearsed narratives”) as well as “minimal disclosure,” as seen in this excerpt from the transcripts (p. 424):
Participant: What did you want to know?
Interviewer: All about you.
Participant: That’s about it, like, there’s not too much.
Interviewer: Do you want to tell me a bit more? I don’t really know who you are yet.
Participant: You do.
Interviewer: Tell me a bit about who you are, what you like, what you don’t like.
Participant: I don’t like him [Gesturing toward the other agency client].

This dialog came towards the end of a 30-minute interview and helps to illustrate “the researcher’s frustration at [their] inability to engage the participant in in-depth discussion” (p. 423).

Research design is always a balancing act involving various trade-offs among the key objectives, the method(s) and strategy for engaging the target population(s), and the efficient use of available resources. An important researcher skill is understanding the implications of these trade-offs for the integrity of the final data and the overall quality of the research investigation. A monetary incentive may be highly effective in securing participation in our research, but what is its ultimate impact on data quality? This is the concern of a skilled researcher.

Cook, K., & Nunkoosing, K. (2008). Maintaining dignity and managing stigma in the interview encounter: The challenge of paid-for participation. Qualitative Health Research, 18(3), 418–427. https://doi.org/10.1177/1049732307311343

Shared Constructs in Research Design: Part 3 — Validity

Not unlike Part 1 (concerning sampling) and Part 2 (concerning bias) of the discussion that began earlier, the shared construct of validity in research design has also been an area of focus in several articles posted in Research Design Review. Most notable is “Quality Frameworks in Qualitative Research,” posted in February 2021, in which validity is discussed within the context of the parameters or strategies various researchers use to define and think about the dimensions of rigor in qualitative research design. That article uses the Total Quality Framework (Roller & Lavrakas, 2015) and the criteria of Lincoln and Guba (1985) to underscore the idea that quality approaches to design cut across paradigm orientation, leading to robust and valid interpretations of the data.

Many other qualitative researchers, across disciplines, believe in the critical role that the shared construct of validity plays in research design. Joseph Maxwell, for example, discusses validity in association with his realism approach to causal explanation in qualitative research (Maxwell, 2004), and discusses in detail five dimensions of validity: descriptive validity, interpretive validity, theoretical validity, evaluative validity, and generalizability (Maxwell, 1992). And of course, Miles and Huberman were promoting greater rigor by way of validity more than three decades ago (Miles & Huberman, 1984).

More recently, Koro-Ljungberg (2010) takes an in-depth look at validity in qualitative research and, with extensive literature as the backdrop, makes the case that “validity is in doing, as well as its (un)making, and it exhibits itself in the present paradox of knowing and unknowing, indecision, and border crossing” (p. 609). Matteson and Lincoln (2008) remind educational researchers that validity does not solely concern the analysis phase of research design but that “the data collection method must also address validity” (p. 672). Creswell and Miller (2000) discuss different approaches to determining validity across three paradigm orientations (postpositivist, constructivist, and critical) and three “lenses” (those of the researcher, the participants, and researchers external to the study).

Among qualitative health researchers, Morse (2020) emphasizes the potential weakness in validity when confusing the analysis of interpretative inquiry with that associated with “hard, descriptive data” (p. 4), and Morse et al. (2002) present five verification strategies and argue that validity (as well as reliability) is an “overarching” construct that “can be appropriately used in all scientific paradigms” (p. 19).

These researchers, and those discussed in Part 1 – Sampling and Part 2 – Bias, are admittedly only a small share of those who have devoted a great deal of thought and writing to these shared constructs. The reader is encouraged to use these references to build on their understanding of these constructs in qualitative research and to grow their own library of knowledge.

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39(3), 124–130.

Koro-Ljungberg, M. (2010). Validity, responsibility, and aporia. Qualitative Inquiry, 16(8), 603–610. https://doi.org/10.1177/1077800410374034

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications.

Matteson, S. M., & Lincoln, Y. S. (2008). Using multiple interviewers in qualitative research studies: The influence of ethic of care behaviors in research interview settings. Qualitative Inquiry, 15(4), 659–674. https://doi.org/10.1177/1077800408330233

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62(3), 279–300.

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11.

Miles, M. B., & Huberman, A. M. (1984). Drawing valid meaning from qualitative data: Toward a shared craft. Educational Researcher, 13(5), 20–30. https://doi.org/10.3102/0013189X013005020

Morse, J. (2020). The changing face of qualitative inquiry. International Journal for Qualitative Methods, 19, 1–7. https://doi.org/10.1177/1609406920909938

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 13–22.

Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.

Quality Frameworks in Qualitative Research

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 20-21).

Many researchers have advanced strategies, criteria, or frameworks for thinking about and promoting the importance of “the quality” of qualitative research at some stage in the research design. There are those who focus on quality as it relates to specific aspects—such as various validation and verification strategies or “checklists” (Barbour, 2001; Creswell, 2013; Brinkmann & Kvale, 2015; Maxwell, 2013; Morse et al., 2002), validity related to researcher decision making (Koro-Ljungberg, 2010) and subjectivity (Bradbury-Jones, 2007), or the specific role of transparency in assessing the quality of outcomes (Miles, Huberman, & Saldaña, 2014). There are others who prescribe particular approaches in the research process—such as consensual qualitative research (Hill et al., 2005), the use of triangulation (Tobin & Begley, 2004), or an audit procedure (Akkerman, Admiraal, Brekelmans, & Oost, 2006). And there are still others who take a broader, more general view that emphasizes the importance of “paying attention to the qualitative rigor and model of trustworthiness from the moment of conceptualization of the research” (Thomas & Magilvy, 2011, p. 154; see also, Bergman & Coxon, 2005; Whittemore et al., 2001).

The strategies or ways of thinking about quality in qualitative research that are most relevant to the Total Quality Framework (TQF) are those that are (a) paradigm neutral, (b) flexible (i.e., do not adhere to a defined method), and (c) applicable to all phases of the research process. Among these, the work of Lincoln and Guba (e.g., 1981, 1985, 1986, and 1995) is the most noteworthy. Although they profess a paradigm orientation “of the constructionist camp, loosely defined” (Lincoln et al., 2011, p. 116), the quality criteria Lincoln and Guba set forth more than 35 years ago are . . .