The October 2019 issue of American Psychologist included two articles on the famed Stanford Prison Experiment (SPE) conducted by Philip Zimbardo in 1971. The first, “Rethinking the Nature of Cruelty: The Role of Identity Leadership in the Stanford Prison Experiment” (Haslam, Reicher, & Van Bavel, 2019), discusses the outcomes of the SPE within the context of social identity and, specifically, identity leadership theories espousing, among other things, the idea that “when group identity becomes salient, individuals seek to ascertain and to conform to those understandings which define what it means to be a member of the relevant group” (p. 812) and “leadership is not just about how leaders act but also about their capacity to shape the actions of followers” (p. 813). It is within this context that the authors conclude from their examination of the SPE archival material that the “totality of evidence indicates that, far from slipping naturally into their assigned roles, some of Zimbardo’s guards actively resisted [and] were consequently subjected to intense interventions from the experimenters” (p. 820), resulting in behavior “more consistent with an identity leadership account than…the standard role account” (p. 819).
In the second article, “Debunking the Stanford Prison Experiment” (Le Texier, 2019), the author discusses his content analysis study of the documents and audio/video recordings retrieved from the SPE archives located at Stanford University and the Archives of the History of American Psychology at the University of Akron, including a triangulation phase by way of in-depth interviews with SPE participants and a comparative analysis utilizing various publications and texts referring to the SPE. The purpose of this research was to learn whether the SPE archives, participants, and comparative analysis would reveal “any important information about the SPE that had not been included in and, more importantly, was in conflict with that reported in Zimbardo’s published accounts of the study” (p. 825). Le Texier derives a number of key findings from his study that shed doubt on the integrity of the SPE, including the fact that the prison guards were aware of the results…
It is not unusual for an in-depth interview (IDI) or focus group participant to wonder at some point in an interview or discussion whether they “did okay”; that is, whether they responded to the researcher’s questions in the manner in which the researcher intended. For instance, an interviewer investigating parents’ healthy food purchases for their children might ask a mother to describe a typical shopping trip to the grocery store. In response, the mother might talk about the day of the week, the time of day, where she shops, and whether she is alone or with her children or someone else. She might then ask the interviewer, “Is that the kind of thing you were looking for? Is that what you mean? Did I do okay in answering your question?” The interviewer’s follow-up might be, “Tell me something about the in-store experience, such as the sections of the store you visit and the kinds of food items you typically buy.”
It is one thing to misinterpret the intention of a researcher’s question – e.g., detailing the logistics of food purchasing rather than the actual food purchase experience – but another thing to adjust responses based on any number of factors influenced by the researcher-participant interaction. These interaction effects stem, in part, from the participant’s attempt to “do okay” in their role in the research process. Dr. Kathryn Roulston at the University of Georgia has written extensively about interaction in research interviews, including an edited volume, Interactional Studies of Qualitative Research Interviews.
The dynamics that come into play in an IDI or focus group study – and, in varying degrees, ethnographic research – are of great interest to qualitative researchers and important considerations in the overall quality of the research. This is the reason that a lot has been written about the researcher’s reflexive journal and its importance in…
The Total Quality Framework (TQF) offers researchers a way to think about basic research principles at each stage of the qualitative research process – data collection, analysis, reporting – with the goal of doing something of value with the outcomes (i.e., the usefulness of the research). The first of the four components of the TQF is Credibility which pertains to the data collection phase of a qualitative study. A detailed discussion of Credibility can be found in this 2017 Research Design Review article.
This article – in similar fashion to the companion articles associated with the other three components of the TQF – explains the chief elements that define Credibility, stating that “credible qualitative research is the result of effectively managing data collection, paying particular attention to the two specific areas of Scope and Data Gathering.” Although a great deal of the discussion thus far has centered on traditional qualitative methods, the increasingly important role of technological solutions in qualitative research makes it imperative that the discussion of Credibility (and the other TQF components) expand to the digital world.
The online asynchronous focus group (“bulletin board”) method has been around for a long time. It is an approach that offers qualitative researchers many advantages over the face-to-face mode while also presenting challenges to the integrity of research design. The following presents a snapshot of the online bulletin board focus group method through the lens of the two main ingredients of the TQF Credibility component – Scope and Data Gathering. This snapshot is not an attempt to name all the strengths and limitations associated with the Credibility of the online asynchronous focus group method but rather to highlight a few key considerations.