Quality Standards

Qualitative Tech Solutions: Coverage & Validity Considerations

Back in 2018, Research Design Review posted an article titled “Five Tech Solutions to Qualitative Data Collection: What Strengthens or Weakens Data Quality?” The focus of that article is a presentation given in May 2018 concerning technological alternatives to qualitative research data collection. Importantly, the aim of the presentation was not simply to identify different approaches to data collection beyond the in-person and telephone modes but rather to examine the strengths and limitations of these technological solutions from a data quality – specifically, Credibility – standpoint.

Broadly speaking, technological approaches to qualitative research data gathering offer clear advantages over in-person methods, particularly in the areas of:

  • Representation, e.g., geographic coverage, potential access to hard-to-reach population segments;
  • Cooperation, e.g., convenience and flexibility of time and place for participants, appropriateness for certain demographic segments (18- to 49-year-olds*);
  • Validity associated with data accuracy, e.g., research capturing in-the-moment experiences does not rely on memory recall;
  • Validity associated with the depth of data, e.g., capturing multiple contextual dimensions through text, video, and images;
  • Validity associated with data accuracy and depth allowing for the triangulation of data;
  • Researcher effects, e.g., mitigated by the opportunity for greater reflection and consistency across research events;
  • Participant effects, e.g., mitigated by the multiple ways to express thoughts, willingness to discuss sensitive issues, and (possibly) a lower tendency for social desirability responding; and
  • Efficient use of resources (i.e., time, money, and staff).

There are also potential drawbacks to any technological solution, including those associated with:

  • Uneven Internet access and comfort with technology among certain demographic groups (e.g., sampling favors “tech savvy” individuals) as well as hard-to-reach and marginalized segments of the population;
  • Difficulty in managing engagement, including the unique researcher skills and allocation of time required;
  • Potential participant burnout from researcher’s requests for multiple input activities and/or days of engagement. This is a type of participant effect that negatively impacts validity;
  • Nonresponse due to mode, e.g., unwillingness or inability to participate in a mostly text-based discussion;
  • Data accuracy, e.g., participant alters behavior in a study observing in-home meal preparation;
  • Missing important visual and/or verbal cues, which may interfere with rapport building and an in-depth exploration of responses;
  • Difficulty managing analysis due to the sheer volume and variety of formats of the data;
  • Fraud, misrepresentation – “Identity is fluid and potentially multiple on the Internet” (James & Busher, 2009, p. 35) – and people may not share certain images or video that reveal something “embarrassing” about themselves**; and
  • Security, confidentiality, anonymity (e.g., data storage, de-identification).

* https://www.pewresearch.org/internet/fact-sheet/internet-broadband/

** https://www.businesswire.com/news/home/20180409006050/en/Minute-Maid-Debuts-New-Campaign-Celebrates-Good

James, N., & Busher, H. (2009). Online interviewing. London: Sage Publications.

Supporting Observational Research

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 217-219) which is a qualitative methods text covering in-depth interviews, focus group discussions, ethnography, qualitative content analysis, case study, and narrative research.

An important element in the Total Quality Framework Analyzability component is Verification, i.e., taking steps to establish some level of support for the data gathered in order to move the researcher closer to achieving high quality outcomes. The verification tools at the ethnographer’s disposal go beyond those identified for the in-depth interview (IDI) and group discussion methods in that they include the technique of expanded observation. For example, Lincoln and Guba (1985) stated that it is “more likely that credible findings and interpretations” will come from ethnographic data with “prolonged engagement” in the field and “persistent observation” (p. 301). The former refers to spending adequate time at an observation site to experience the breadth of stimuli and activities relevant to the research, and the purpose of the latter (i.e., persistent observation) is “to identify those characteristics and elements in the situation that are most relevant to the problem or issue” (p. 304)—that is, to provide a depth of understanding of the “salient factors.” Both prolonged engagement and persistent observation speak to the idea of expanding observation in terms of time as well as diligence in exploring variables as they emerge in the observation. Although expanding observations in this way may be unrealistic due to the realities of deadlines and research funding, it is an important verification approach unique to ethnography. When practicable, it is recommended that researchers maximize the time allotted for observation and train observers to look for the unexpected or examine more closely seemingly minor occurrences or variables that may ultimately support (or contradict) the observer’s dominant understanding.

The ultimate usefulness of expanded observation is not unlike deviant or negative case analysis (see earlier link). In both instances, the goal is to identify and investigate observational events (or particular variables in these events) that defy explanation or otherwise contradict the general patterns or themes that appear to be emerging from the data. For example, a researcher conducting in-home nonparticipant observations of young mothers… Read Full Text

Cognitive Interviewing: A Few Best Practices

Cognitive interviewing is a method used by survey researchers to investigate the integrity of their questionnaire designs prior to launching the field portion of the study. In the edited volume Cognitive Interviewing Methodology, Kristen Miller (2014) describes cognitive interviewing as “a qualitative method that examines the question-response process, specifically the processes and considerations used by respondents as they form answers to survey questions,” further explaining that “through the interviewing process, various types of question-response problems that would not normally be identified in a traditional survey interview, such as interpretive errors and recall accuracy, are uncovered” (p. 2). In this way, survey researchers identify the users’ (i.e., survey respondents’) possible meaning and interpretation of survey questions – having to do with question structure or format and terminology – that may or may not deviate from the researcher’s intent. Importantly, the objective of the cognitive interview is not to simply determine whether a questionnaire item “makes sense” to an individual but to go beyond that to explore the individual’s lived experience (personal context, attitudes, perceptions, behavior) in relationship to their interpretation of and/or ability to answer a particular question.

Although not typically included under the “qualitative research” umbrella (with in-depth interviewing, focus group discussions, and observation), four of the 10 unique attributes associated with qualitative research are notably relevant to the cognitive interviewing method. They are: the importance of meaning, the flexibility of design, the participant-researcher relationship, and the researcher skill set. These distinctive qualities of the cognitive interviewing method, and qualitative methods generally, define why researchers opt for… Read Full Text