
Qualitative Tech Solutions: Coverage & Validity Considerations

Back in 2018, Research Design Review posted an article titled “Five Tech Solutions to Qualitative Data Collection: What Strengthens or Weakens Data Quality?” The focus of this article is a presentation given in May 2018 concerning technological alternatives to qualitative research data collection. Importantly, the aim of the presentation was not simply to identify different approaches to data collection beyond the in-person and telephone modes but rather to examine the strengths and limitations of these technological solutions from a data quality – specifically, Credibility – standpoint.

Broadly speaking, technological approaches to qualitative research data gathering offer clear advantages over in-person methods, particularly in the areas of:

  • Representation, e.g., geographic coverage, potential access to hard-to-reach population segments;
  • Cooperation, e.g., convenience and flexibility of time and place for participants, appropriateness for certain demographic segments (18- to 49-year-olds*);
  • Validity associated with data accuracy, e.g., research capturing in-the-moment experiences does not rely on memory recall;
  • Validity associated with the depth of data, e.g., capturing multiple contextual dimensions through text, video, and images;
  • Validity associated with data accuracy and depth, allowing for the triangulation of data;
  • Researcher effects, e.g., mitigated by the opportunity for greater reflection and consistency across research events;
  • Participant effects, e.g., mitigated by the multiple ways to express thoughts, willingness to discuss sensitive issues, and (possibly) a lower tendency for social desirability responding; and
  • Efficient use of resources (i.e., time, money, and staff).

There are also potential drawbacks to any technological solution, including those associated with:

  • Uneven Internet access and comfort with technology among certain demographic groups, as well as among hard-to-reach and marginalized segments of the population (e.g., sampling favors “tech-savvy” individuals);
  • Difficulty in managing engagement, including the unique researcher skills and allocation of time required;
  • Potential participant burnout from the researcher’s requests for multiple input activities and/or days of engagement – a type of participant effect that negatively impacts validity;
  • Nonresponse due to mode, e.g., unwillingness or inability to participate in a mostly text-based discussion;
  • Data accuracy, e.g., participant alters behavior in a study observing in-home meal preparation;
  • Missing important visual and/or verbal cues, which may interfere with rapport building and an in-depth exploration of responses;
  • Difficulty managing analysis due to the sheer volume and variety of data;
  • Fraud, misrepresentation – “Identity is fluid and potentially multiple on the Internet” (James and Busher, 2009, p. 35) – and people may not share certain images or video that reveal something “embarrassing” about themselves**; and
  • Security, confidentiality, anonymity (e.g., data storage, de-identification).


* https://www.pewresearch.org/internet/fact-sheet/internet-broadband/

** https://www.businesswire.com/news/home/20180409006050/en/Minute-Maid-Debuts-New-Campaign-Celebrates-Good

James, N., & Busher, H. (2009). Online interviewing. London: Sage Publications.

The Stanford Prison Experiment: A Case for Sharing Data

The October 2019 issue of American Psychologist included two articles on the famed Stanford Prison Experiment (SPE) conducted by Philip Zimbardo in 1971. The first, “Rethinking the Nature of Cruelty: The Role of Identity Leadership in the Stanford Prison Experiment” (Haslam, Reicher, & Van Bavel, 2019), discusses the outcomes of the SPE within the context of social identity and, specifically, identity leadership theories espousing, among other things, the idea that “when group identity becomes salient, individuals seek to ascertain and to conform to those understandings which define what it means to be a member of the relevant group” (p. 812) and “leadership is not just about how leaders act but also about their capacity to shape the actions of followers” (p. 813). It is within this context that the authors conclude from their examination of the SPE archival material that the “totality of evidence indicates that, far from slipping naturally into their assigned roles, some of Zimbardo’s guards actively resisted [and] were consequently subjected to intense interventions from the experimenters” (p. 820), resulting in behavior “more consistent with an identity leadership account than…the standard role account” (p. 819).

In the second article, “Debunking the Stanford Prison Experiment” (Le Texier, 2019), the author discusses his content analysis study of the documents and audio/video recordings retrieved from the SPE archives located at Stanford University and the Archives of the History of American Psychology at the University of Akron, including a triangulation phase by way of in-depth interviews with SPE participants and a comparative analysis utilizing various publications and texts referring to the SPE. The purpose of this research was to learn whether the SPE archives, participants, and comparative analysis would reveal “any important information about the SPE that had not been included in and, more importantly, was in conflict with that reported in Zimbardo’s published accounts of the study” (p. 825). Le Texier derives a number of key findings from his study that shed doubt on the integrity of the SPE, including the fact that the prison guards were aware of the results the experiment was expected to produce.

“Did I Do Okay?”: The Case for the Participant Reflexive Journal

It is not unusual for an in-depth interview (IDI) or focus group participant to wonder at some point in an interview or discussion if they “did okay”; that is, whether they responded to the researcher’s questions in the manner in which the researcher intended. For instance, an interviewer investigating parents’ healthy food purchases for their children might ask a mother to describe a typical shopping trip to the grocery store. In response, the mother might talk about the day of the week, the time of day, where she shops, and whether she is alone or with her children or someone else. She might then ask the interviewer, Is that the kind of thing you were looking for? Is that what you mean? Did I do okay in answering your question? The interviewer’s follow-up might be, Tell me something about the in-store experience, such as the sections of the store you visit and the kinds of food items you typically buy.

It is one thing to misinterpret the intention of a researcher’s question – e.g., detailing the logistics of food purchasing rather than the actual food purchase experience – but another thing to adjust responses based on any number of factors influenced by the researcher-participant interaction. These interaction effects stem, in part, from the participant’s attempt to “do okay” in their role in the research process. Dr. Kathryn Roulston at the University of Georgia has written extensively about interaction in research interviews, including the edited volume Interactional Studies of Qualitative Research Interviews.

The dynamics that come into play in an IDI or focus group study – and, in varying degrees, ethnographic research – are of great interest to qualitative researchers and important considerations in the overall quality of the research. This is the reason that much has been written about the researcher’s reflexive journal and its importance to the quality of qualitative research.