Is all qualitative research of equal value? Are the findings derived from one focus group study just as useful as those obtained from another focus group study? Are the outcomes from observational research or in-depth interviews (IDIs) valuable regardless of the design peculiarities (i.e., how the research was conducted)?
More specifically, what are the strengths and limitations of the design elements that inform the usefulness of research outcomes? Were the research objective and approach well-conceived and realistic? What was the sampling method? How was recruitment conducted? What procedures were in place to maximize cooperation and rapport and to minimize nonresponse? Was the moderator/observer/interviewer guide carefully thought out and designed to achieve the research objective (e.g., using a funnel approach to develop a moderator’s outline)? Is it clear how the researcher conducted the analysis? Were the analytical processing and verification techniques appropriate, thorough, and inclusive of researcher reflexivity? And are the final interpretations and implications drawn from the research warranted given the strengths and limitations of the design elements, i.e., how the research was conducted?
These are the kinds of questions that all users of qualitative research – e.g., the research sponsors, the people who ultimately implement the findings, other researchers who hope to utilize the research design in other contexts – should be asking. The answers to these questions are important, not to unequivocally “accept” or “reject” the research but rather to establish some level of confidence in the outcomes. In this way, the value of the research can be weighed, allowing the user of the research to determine how much importance to place on the findings.
The ability to make this determination is something that should be granted to anyone who has reason to engage with a qualitative research study. This is why it behooves researchers to take the initiative and provide the details that users need to gauge the value of research outcomes. Researchers can do this by including, in the final research document, a discussion of the strengths and limitations of the design elements. This discussion can be facilitated by employing a series of criteria by which each design element is considered and the reliability and validity of the research can be evaluated. For instance, not unlike the “design display” that helps to examine research found in the literature, the researcher can create a “quality display” that dissects different aspects of a study’s design by the four components of the Total Quality Framework*, i.e., Credibility, Analyzability, Transparency, and Usefulness. A quality display for an IDI study with recent college graduates might look something like the following:
With the quality display, researchers empower the users of their studies to decide for themselves how much confidence to place in the outcomes and to weigh the value of the research for their own purposes.
Image captured from: http://www.wpclipart.com/holiday/election_Day/scales/scales_4.png.html