A TQF Approach to Sample Size

Sample size and sampling in qualitative research design have been discussed elsewhere in Research Design Review; see "Sample Size in Qualitative Research & the Risk of Relying on Saturation" and "Shared Constructs in Research Design: Part 1 — Sampling." In June 2022, "A TQF Approach to Choosing a Sample Design" was posted to RDR; it considers ways to ensure that research participants are representative (share defining characteristics) of the population being studied.

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 26-27) that briefly examines a Total Quality Framework (TQF) approach to another facet of sample design, i.e., sample size.


How large a sample to use is a decision that qualitative researchers need to make explicitly and carefully in order to increase the likelihood that their studies will generate credible data by well representing their population of interest. Unlike quantitative researchers, who most often rely on statistical formulae to determine the sample sizes for their studies, qualitative researchers must rely on (a) past experience and knowledge of the subject matter, and (b) ongoing monitoring during the data-gathering period, which includes applying a set of decision rules, such as those listed in "Designing a Quality In-depth Interview Study: How Many Interviews Are Enough?" These decision rules consider (a) the complexity of the phenomena being studied, (b) the heterogeneity or homogeneity of the population being studied, (c) the level of analysis and interpretation that will be carried out, and (d) the finite resources available to support the study. These types of decision guidelines, along with past experience, should provide qualitative researchers with the considerations they need to carefully judge the amount of data necessary to meet their research objectives. (Of note, if a researcher does not have sufficient personal experience, a literature review or direct conversations with other researchers who do have such experience should serve well.)

As importantly, during the period when data are being gathered, researchers should also closely monitor the amount of variability in the data, compared to the variability that was expected, for the key measures of the study. Based on this monitoring, researchers are responsible for making a “Goldilocks decision” about whether the sample size they originally decided was needed is too large, too small, or just about right. In making a decision to cut back on the amount of data to be gathered, because there is less variability in what is being measured than anticipated, the researcher needs to make certain that those cases that originally were sampled, but would be dropped, are not systematically different from the cases from which data will be gathered. In making a decision to increase the size of the sample, because there is more variability in what is being measured than anticipated, the researcher needs to make certain that the cases added to the sample are chosen in a way that is representative of the entire population (e.g., using the same orderly approach that was used to create the initial sample).

In all instances, and if the necessary resources (staff, time, budget) are available, it is prudent for a researcher to err on the side of having more rather than less data. Gathering too much data does no harm to the quality of the study’s findings and interpretations, but having too little data leaves the researcher in the untenable position of harming the quality of the study because the complexity of what was being studied will not be adequately represented in the available data. For example, case study research to investigate new public school policies related to the core science curriculum might include in-depth interviews with school principals and science teachers, observations of science classes in session, and a review of students’ test papers; however, given the complexity of the subject matter, the research may be weakened by not including discussions with the students and their parents as well as by a failure to include all schools (or a representative sample of schools) in the research design.

Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.

The TQF Qualitative Research Proposal: Credibility of Design

A Total Quality Framework (TQF) approach to the qualitative research proposal has been discussed in articles posted elsewhere in Research Design Review, notably “A Quality Approach to the Qualitative Research Proposal” (2015) and “Writing Ethics Into Your Qualitative Proposal” (2018). The article presented here focuses on the Research Design section of the TQF proposal and, specifically, the Credibility component of the TQF. The Credibility component has to do with Scope and Data Gathering. This is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 339-340).

Scope

A TQF research proposal clearly defines the target population for the proposed research, the target sample (if the researcher is interested in a particular subgroup of the target population, e.g., only African American and Hispanic high school seniors in the district who anticipate graduating in the coming spring), how participants will be selected for the study, what they will be asked to do (e.g., set aside school time for an in-depth interview [IDI]), and the general types of questions to which they will be asked to respond (i.e., the content areas of the interview). In discussing Scope, the researcher proposing an IDI study with African American and Hispanic high school students would identify the list that will be used to select participants (e.g., the district’s roster of seniors who are expected to graduate); the advantages and drawbacks to using this list (e.g., not everyone on the roster may consider themselves to be African American or Hispanic); the systematic (preferably random) procedure that will be used to select the sample; and the number of students that will be selected as participants, including the rationale for that number and the steps that will be taken to gain cooperation from the students and thereby ideally ensure that everyone selected actually completes an interview (e.g., gaining permission from the school principal to allow students to take school time to participate in the IDI, and from parents/guardians for students under 18 years of age who cannot give informed consent on their own behalf).

Data Gathering

The data-gathering portion of the Research Design section of the proposal highlights the constructs and issues that will be examined in the proposed research. This discussion should provide details of the types of questions that will be asked, observations that will be recorded, or areas of interest.

Qualitative Tech Solutions: Coverage & Validity Considerations

Back in 2018, Research Design Review posted an article titled “Five Tech Solutions to Qualitative Data Collection: What Strengthens or Weakens Data Quality?” The focus of this article is on a presentation given in May 2018 concerning technological alternatives to qualitative research data collection. Importantly, the aim of the presentation was not simply to identify different approaches to data collection beyond the in-person and telephone modes but rather to examine the strengths and limitations of these technological solutions from a data quality (specifically, Credibility) standpoint.

Broadly speaking, technological approaches to qualitative research data gathering offer clear advantages over in-person methods, particularly in the areas of:

  • Representation, e.g., geographic coverage, potential access to hard-to-reach population segments;
  • Cooperation, e.g., convenience and flexibility of time and place for participants, appropriateness for certain demographic segments (18-49 year olds*);
  • Validity associated with data accuracy, e.g., research capturing in-the-moment experiences does not rely on memory recall;
  • Validity associated with the depth of data, e.g., capturing multiple contextual dimensions through text, video, and images;
  • Validity associated with data accuracy and depth allowing for the triangulation of data;
  • Researcher effects, e.g., mitigated by the opportunity for greater reflection and consistency across research events;
  • Participant effects, e.g., mitigated by the multiple ways to express thoughts, willingness to discuss sensitive issues, and (possibly) a lower tendency for social desirability responding; and
  • Efficient use of resources (i.e., time, money, and staff).

There are also potential drawbacks to any technological solution, including those associated with:

  • Uneven Internet access and comfort with technology among certain demographic groups as well as among hard-to-reach and marginalized segments of the population (e.g., sampling favors “tech savvy” individuals);
  • Difficulty in managing engagement, including the unique researcher skills and allocation of time required;
  • Potential participant burnout from researcher’s requests for multiple input activities and/or days of engagement. This is a type of participant effect that negatively impacts validity;
  • Nonresponse due to mode, e.g., unwillingness or inability to participate in a mostly text-based discussion;
  • Data accuracy, e.g., participant alters behavior in a study observing in-home meal preparation;
  • Missing important visual &/or verbal cues which may interfere with rapport building and an in-depth exploration of responses;
  • Difficulty managing analysis due to the sheer volume and variety of formats of the data;
  • Fraud, misrepresentation – “Identity is fluid and potentially multiple on the Internet” (James & Busher, 2009, p. 35), and people may not share certain images or video that reveal something “embarrassing” about themselves**; and
  • Security, confidentiality, anonymity (e.g., data storage, de-identification).

James, N., & Busher, H. (2009). Online interviewing. London: Sage Publications.