An important aspect related to Scope within the Credibility component of the Total Quality Framework (TQF) for qualitative research design is the extent to which the researcher is successful in gaining cooperation from the participants. In an in-depth interview (IDI) study, the researcher is concerned with the impact that the proportion of selected interviewees not interviewed or only partially interviewed has on the integrity of the data. This is the domain of research that is often termed “nonresponse.” If this proportion is large and/or if the group that is selected but not interviewed differs in meaningful ways from those who are interviewed, bias can infiltrate the final data of an IDI study and compromise the credibility of the research.
To avoid this, qualitative researchers need to give serious a priori thought to how they will gain high and representative levels of cooperation from the persons they have selected to interview, and to how individuals who do not cooperate may differ from interviewees in past experiences, attitudes, behaviors, and knowledge. The researcher must keep in mind that bias may enter into the outcomes, thereby weakening the credibility of the study’s findings and interpretations, if the characteristics of those in the sample who do not cooperate with an IDI study are correlated with the key topics the study is investigating. Likewise, qualitative researchers using the IDI method should constantly monitor the representativeness of the group of selected participants that does cooperate and watch whether the characteristics of that group deviate from those of the target population. This can be difficult in the email IDI (or other asynchronous text-based mode), where the interviewer must stay alert to the consistency of participants’ responses and recognize when the identity of the interviewee may have changed (i.e., someone other than the recruited research participant is now responding).
Data Gathering is one of two broad areas of the Total Quality Framework Credibility component that affects all qualitative research, including ethnographic research. There are three primary aspects concerning the gathering of data in ethnography that require serious consideration by the researcher in the development of the study design. To optimize the measurement of ethnographic data, and hence the quality of the outcomes, researchers need to pay attention to:
How well the observers have identified and recorded all the information (e.g., verbal and nonverbal behavior, attitudes, context, sensory cues) pertinent to the research objectives and constructs of interest. A well-developed observation guide and observation grid can assist greatly in this effort. Not unlike the development of an in-depth interview or discussion guide, the ethnographer seeks to identify those observable events—including the specific individuals (or types of individuals), the verbal and nonverbal behaviors, attitudes, sensory and other environmental cues—that will further the researcher’s understanding of the issues. During the design development phase, the researcher might isolate the observations of interest by:
Looking at earlier ethnographic research on the subject matter and/or with similar study populations.
Interviewing the clients or those who have requested the research to learn everything they know about the topic and their past work in the area.
Consulting the literature or other experts concerning the behaviors and other occurrences associated with particular constructs.
“Shagging around” (LeCompte & Goetz, 1982) the observation site(s) to casually assess the environment and begin to learn about the participants.
Observer effects, specifically—
Observer bias, that is, behavioral and other characteristics (e.g., personal attitudes, values, traits) of the observer that may alter the observed event or bias their observations. For example, an observer as a complete participant would bias the observational data if there was an attempt to “educate” participants on a subject matter for which the observer had personal expertise or knowledge.
Observer inconsistency, that is, an inconsistent manner in which the observer conducts the observations that creates unwarranted and unrepresentative variation in the data. For example, an on-site nonparticipant observer conducting in-home observations of the use of media and technology would be introducing inaccuracies in the data by observing and recording the use of television and gaming in some households but not in others where television and gaming activities took place.
Participant effects, specifically, the extent to which observed participants alter a naturally occurring event, leading to biased outcomes. This is often called the Hawthorne effect, whereby the people being observed, either consciously or unconsciously, change what is being measured in the observation because they are aware of the observer. For example, an ethnographer conducting an overt, on-site passive observation of teaching practices in a school district would come away with misleading data if one or more school teachers deviated from their usual teaching styles during the observations in order to more closely conform with district policies.
LeCompte, M. D., & Goetz, J. P. (1982). Ethnographic data collection in evaluation research. Educational Evaluation and Policy Analysis, 4(3), 387–400.
Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.
Back in 2018, Research Design Review posted an article titled “Five Tech Solutions to Qualitative Data Collection: What Strengthens or Weakens Data Quality?” The article focuses on a presentation given in May 2018 concerning technological alternatives for qualitative research data collection. Importantly, the aim of the presentation was not simply to identify approaches to data collection beyond the in-person and telephone modes but rather to examine the strengths and limitations of these technological solutions from a data quality (specifically, Credibility) standpoint.
Broadly speaking, technological approaches to qualitative research data gathering offer clear advantages over in-person methods, particularly in the areas of:
Representation, e.g., geographic coverage, potential access to hard-to-reach population segments;
Cooperation, e.g., convenience and flexibility of time and place for participants, appropriateness for certain demographic segments (18-49 year olds*);
Validity associated with data accuracy, e.g., research capturing in-the-moment experiences does not rely on memory recall;
Validity associated with the depth of data, e.g., capturing multiple contextual dimensions through text, video, and images;
Validity associated with data accuracy and depth, allowing for the triangulation of data;
Researcher effects, e.g., mitigated by the opportunity for greater reflection and consistency across research events;
Participant effects, e.g., mitigated by the multiple ways to express thoughts, willingness to discuss sensitive issues, and (possibly) a lower tendency for social desirability responding; and
Efficient use of resources (i.e., time, money, and staff).
There are also potential drawbacks to any technological solution, including those associated with:
Uneven Internet access and comfort with technology among certain demographic groups, as well as hard-to-reach and marginalized segments of the population (e.g., sampling favors “tech-savvy” individuals);
Difficulty in managing engagement, including the unique researcher skills and allocation of time required;
Potential participant burnout from the researcher’s requests for multiple input activities and/or days of engagement, a type of participant effect that negatively impacts validity;
Nonresponse due to mode, e.g., unwillingness or inability to participate in a mostly text-based discussion;
Data accuracy, e.g., participant alters behavior in a study observing in-home meal preparation;
Missing important visual and/or verbal cues, which may interfere with rapport building and an in-depth exploration of responses;
Difficulty managing the analysis due to the sheer volume and variety of the data;
Fraud and misrepresentation, e.g., “Identity is fluid and potentially multiple on the Internet” (James & Busher, 2009, p. 35), and participants may not share certain images or video that reveal something “embarrassing” about themselves**; and
Security, confidentiality, anonymity (e.g., data storage, de-identification).