
Qualitative Research in APA Style

For the first time ever, the Publication Manual of the American Psychological Association (American Psychological Association [APA], 2020) includes substantive discussions pertaining to qualitative research. In the 7th (most recent) edition, the APA manual integrates definitions and explanations of qualitative articles alongside quantitative articles, describes and outlines approaches to qualitative meta-analyses in addition to quantitative meta-analyses, offers the reader data-sharing considerations unique to qualitative research, and presents a lengthy, detailed section on the “Reporting Standards for Qualitative Research.”

Even if you are not a devotee of the APA referencing style, as a qualitative researcher you will benefit from reviewing the considerations found in the manual. For instance, the APA reporting standards stipulate five main areas for the Method section of a qualitative research article: 1) overview of the research design; 2) research participants and/or other data sources; 3) participant recruitment; 4) data collection; and 5) data analysis. It is noteworthy that, in the Method area pertaining to research participants, APA recommends that the author go beyond discussing the number and demographic or cultural characteristics of the study participants to include “personal history factors” (e.g., trauma exposure, family history) “that are relevant to the specific contexts and topics of their research” (p. 100). With this emphasis on specific contexts, APA cites Morse (2008) and her discussion of the importance of reporting relevant details about the participants, which may or may not include demographic information. As she puts it, “Some demographic information may be pertinent: If it is, keep it; if not, do not report it” (p. 300). Morse goes on to remind researchers that “in qualitative inquiry, the description of the context is often as important as the description of the participants” (p. 300).

In addition to these characteristics, the APA manual also states that, in the spirit of transparency, authors of a qualitative research article should discuss the researcher-participant relationship. Specifically, the manual asks authors to “describe the relationships and interactions between researchers and participants that are relevant to the research process and any impact on the research process (e.g., any relationships prior to the study, any ethical considerations relevant to prior relationships)” (p. 100).

These and other discussions on reporting standards (e.g., pertaining to the participant recruitment process and sampling, data collection strategy, and data analysis, along with a discussion of methodological integrity) are useful reading not only for the researcher who hopes to publish their work but also for qualitative researchers looking for a condensed treatment of qualitative research design considerations.

It has been a long time coming, but hats off to APA for acknowledging qualitative methods and for giving careful thought to the unique attributes associated with qualitative designs in adapting their style standards.

American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.). Washington, DC: Author.

Morse, J. M. (2008). “What’s your favorite color?” Reporting irrelevant demographics in qualitative research. Qualitative Health Research, 18(3), 299–300. https://doi.org/10.1177/1049732307310995

Research Quality & the Impact of Monetary Incentives

The following is adapted from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 78-79).

Gaining cooperation from research participants and respondents is important to the quality of qualitative and survey research. A focus on gaining cooperation helps to mitigate the potential weakening of the data that arises when the individuals who do not cooperate (i.e., do not participate in the research) differ in meaningful ways from those who do. As mentioned in an article posted earlier in Research Design Review, an effective component of the researcher’s strategy for gaining cooperation among participants is the offer of material incentives (e.g., cash, a gift card, prized tickets to a sporting event, a donation to a favorite charity).

Although monetary incentives are routinely given to qualitative research participants to boost cooperation, the researcher needs to keep in mind that the offer of a cash (or equivalent) incentive may also jeopardize the quality of the actual focus group discussion, in-depth interview, or observation. The following is one example of how monetary incentives may have the unwanted effect of skewing participants’ responses in an in-depth interview (IDI) study.

Cook and Nunkoosing (2008) conducted an in-person IDI study with 12 “impoverished elders” in Melbourne, Australia to investigate community services for the poor among those “who are excluded or at risk of exclusion from their communities.” Research participants could participate in up to two interviews and were given $20 for each interview.

In reviewing the key findings, the researchers observed many “interview interactions that were atypical.” At least some of these irregularities were attributed to the monetary incentive which, according to Cook and Nunkoosing, helped to create an interview environment in which interviewees were motivated “to manage the presentation of self, retain control over the exchange of information, and reduce the stigma of poverty by limiting disclosure and resisting researcher questioning” (p. 421).

The importance of the incentive in the interview process became clear when interviewees volunteered comments such as “I need the $20 . . . ” and critically compared the $20 to better (i.e., higher) cash incentives offered by other research studies. In this way, interviewees were in effect “selling” their stories to the interviewer (and, some would say, at a bargain price) which, based on the researchers’ analyses, tainted their responses with “stylized accounts” (or “rehearsed narratives”) as well as “minimal disclosure,” as seen in this excerpt from the transcripts (p. 424):
Participant: What did you want to know?
Interviewer: All about you.
Participant: That’s about it, like, there’s not too much.
Interviewer: Do you want to tell me a bit more? I don’t really know who you are yet.
Participant: You do.
Interviewer: Tell me a bit about who you are, what you like, what you don’t like.
Participant: I don’t like him [Gesturing toward the other agency client].

This dialog came towards the end of a 30-minute interview and helps to illustrate “the researcher’s frustration at [their] inability to engage the participant in in-depth discussion” (p. 423).

Research design is always a balancing act involving various trade-offs among meeting the key objectives, choosing the method(s) and strategy for engaging the target population(s), and making efficient use of available resources. An important researcher skill is understanding the implications of these trade-offs for the integrity of the final data and the overall quality of the research investigation. A monetary incentive may be highly effective in securing participation in our research, but what is its ultimate impact on data quality? This is the concern of a skilled researcher.

Cook, K., & Nunkoosing, K. (2008). Maintaining dignity and managing stigma in the interview encounter: The challenge of paid-for participation. Qualitative Health Research, 18(3), 418–427. https://doi.org/10.1177/1049732307311343

Shared Constructs in Research Design: Part 3 — Validity

Not unlike Part 1 (concerning sampling) and Part 2 (concerning bias) of the discussion that began earlier, the shared construct of validity in research design has also been an area of focus in several articles posted in Research Design Review. Most notable is “Quality Frameworks in Qualitative Research,” posted in February 2021, in which validity is discussed within the context of the parameters or strategies various researchers use to define and think about the dimensions of rigor in qualitative research design. That article uses the Total Quality Framework (Roller & Lavrakas, 2015) and the criteria of Lincoln and Guba (1985) to underscore the idea that quality approaches to design cut across paradigm orientations, leading to robust and valid interpretations of the data.

Many other qualitative researchers, across disciplines, believe in the critical role that the shared construct of validity plays in research design. Joseph Maxwell, for example, discusses validity in association with his realist approach to causal explanation in qualitative research (Maxwell, 2004) and details five dimensions of validity, including descriptive, interpretive, and theoretical validity (Maxwell, 1992). And of course, Miles and Huberman were promoting greater rigor by way of validity more than three decades ago (Miles & Huberman, 1984).

More recently, Koro-Ljungberg (2010) takes an in-depth look at validity in qualitative research and, with an extensive literature as the backdrop, makes the case that “validity is in doing, as well as its (un)making, and it exhibits itself in the present paradox of knowing and unknowing, indecision, and border crossing” (p. 609). Matteson and Lincoln (2008) remind educational researchers that validity does not solely concern the analysis phase of research design but that “the data collection method must also address validity” (p. 672). Creswell and Miller (2000) discuss different approaches to determining validity across three paradigm orientations (postpositivist, constructivist, and critical) and the “lens” of the researcher, participants, and researchers external to the study.

Among qualitative health researchers, Morse (2020) emphasizes the potential weakness in validity that arises when the analysis of interpretive inquiry is confused with that associated with “hard, descriptive data” (p. 4), and Morse et al. (2002) present five verification strategies and argue that validity (as well as reliability) is an “overarching” construct that “can be appropriately used in all scientific paradigms” (p. 19).

These researchers, and those discussed in Part 1 – Sampling and Part 2 – Bias, are admittedly only a small share of those who have devoted a great deal of thought and writing to these shared constructs. The reader is encouraged to use these references to build on their understanding of these constructs in qualitative research and to grow their own library of knowledge.

 

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39(3), 124–130.

Koro-Ljungberg, M. (2010). Validity, responsibility, and aporia. Qualitative Inquiry, 16(8), 603–610. https://doi.org/10.1177/1077800410374034

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications.

Matteson, S. M., & Lincoln, Y. S. (2008). Using multiple interviewers in qualitative research studies: The influence of ethic of care behaviors in research interview settings. Qualitative Inquiry, 15(4), 659–674. https://doi.org/10.1177/1077800408330233

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62(3), 279–300.

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11.

Miles, M. B., & Huberman, A. M. (1984). Drawing valid meaning from qualitative data: Toward a shared craft. Educational Researcher, 13(5), 20–30. https://doi.org/10.3102/0013189X013005020

Morse, J. (2020). The changing face of qualitative inquiry. International Journal of Qualitative Methods, 19, 1–7. https://doi.org/10.1177/1609406920909938

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 13–22.

Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.