Qualitative Tech Solutions: Coverage & Validity Considerations

Back in 2018, Research Design Review posted an article titled “Five Tech Solutions to Qualitative Data Collection: What Strengthens or Weakens Data Quality?” The focus of that article is a presentation given in May 2018 concerning technological alternatives to qualitative research data collection. Importantly, the aim of the presentation was not simply to identify different approaches to data collection beyond the in-person and telephone modes but rather to examine the strengths and limitations of these technological solutions from a data quality – specifically, Credibility – standpoint.

Broadly speaking, technological approaches to qualitative research data gathering offer clear advantages over in-person methods, particularly in the areas of:

  • Representation, e.g., geographic coverage, potential access to hard-to-reach population segments;
  • Cooperation, e.g., convenience and flexibility of time and place for participants, appropriateness for certain demographic segments (e.g., 18- to 49-year-olds*);
  • Validity associated with data accuracy, e.g., research capturing in-the-moment experiences does not rely on memory recall;
  • Validity associated with the depth of data, e.g., capturing multiple contextual dimensions through text, video, and images;
  • Validity associated with data accuracy and depth allowing for the triangulation of data;
  • Researcher effects, e.g., mitigated by the opportunity for greater reflection and consistency across research events;
  • Participant effects, e.g., mitigated by the multiple ways to express thoughts, willingness to discuss sensitive issues, and (possibly) a lower tendency for social desirability responding; and
  • Efficient use of resources (i.e., time, money, and staff).

There are also potential drawbacks to any technological solution, including those associated with:

  • Uneven Internet access and comfort with technology among certain demographic groups, as well as among hard-to-reach and marginalized segments of the population (e.g., sampling favors “tech-savvy” individuals);
  • Difficulty in managing engagement, including the unique researcher skills and allocation of time required;
  • Potential participant burnout from the researcher’s requests for multiple input activities and/or days of engagement – a type of participant effect that negatively impacts validity;
  • Nonresponse due to mode, e.g., unwillingness or inability to participate in a mostly text-based discussion;
  • Data accuracy, e.g., a participant alters behavior in a study observing in-home meal preparation;
  • Missing important visual and/or verbal cues, which may interfere with rapport building and an in-depth exploration of responses;
  • Difficulty managing the analysis due to the sheer volume and variety of data formats;
  • Fraud and misrepresentation – “Identity is fluid and potentially multiple on the Internet” (James & Busher, 2009, p. 35) – and the possibility that people may not share certain images or video that reveal something “embarrassing” about themselves**; and
  • Security, confidentiality, anonymity (e.g., data storage, de-identification).

* https://www.pewresearch.org/internet/fact-sheet/internet-broadband/

** https://www.businesswire.com/news/home/20180409006050/en/Minute-Maid-Debuts-New-Campaign-Celebrates-Good

James, N., & Busher, H. (2009). Online interviewing. London: Sage Publications.
