Don’t get too married to your research data, because it may just be an illusion. That is the premise of Jonah Lehrer’s captivating article in The New Yorker magazine (“The Truth Wears Off: Is there something wrong with the scientific method?” December 13, 2010). Lehrer makes the point that the repeatability – which is to say, the integrity – of scientific data is fleeting. Using examples from experimental research in psychology, zoology, and biology (biomedical and neuroscience), Lehrer concludes that “Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true.”
Central to the article is an attempt to explain what Joseph Banks Rhine (a psychologist at Duke University from 1927 to the early 1960s) called the “decline effect” – that is, the demonstrably reduced significance of research data over time (i.e., the vanishing of significant outcomes). In his effort to uncover why the decline effect exists, Lehrer cites fascinating examples of this phenomenon and ultimately leads us to the notion that it is our human frailty, and the bias it introduces into our research designs, that makes research findings suspect. We alone potentially bias our research by the fact that:
“We hate to be wrong.”
We harbor “strong a-priori beliefs.”
We cling to data that “makes sense.”
These human frailties are not necessarily conscious human conditions – that is, we don’t consciously use survey data to promote our own truths, analyze survey data with the intent of molding it to conform to our preconceived beliefs, or refrain from reporting survey data that intuitively makes no sense – but rather subtle, unconscious barriers that potentially blind researchers to the truth in their data.
While marketing research professionals typically lack the resources, time, or inclination to experiment with many of the design issues discussed here in RDR and elsewhere, it is no less important to consider how we collect our survey data and on what basis we come to our conclusions. Are we designing studies that simply replicate earlier research known to be fraught with sampling and/or non-sampling error? Do we become adherents to a particular survey mode to the exclusion of all others? Do we ‘see’ more in our survey data than is actually there? Do we overreach in our interpretations of data from designs that are qualitative or limited in scope, such as social media research? Do we ignore or discount research findings that refute our own or our clients’ expectations?
Jonathan Schooler, a psychologist at the University of California, Santa Barbara, who has witnessed the decline effect first-hand, calls for a “more transparent” approach to research design – one that clearly specifies upfront the objectives, sampling, and “level of proof” required of the data to warrant significance. And, much like the AAPOR Transparency Initiative, Schooler advocates for an open system in which research design details are available to all researchers. The idea of openness and transparency in research design (which was also discussed in a November 2009 RDR post) addresses the robustness of our efforts and the quality of our research insofar as they provide a realistic measure of truth.
Researchers, on some level, need to talk about the unintended, imperceptible blindness that permeates our designs, belies our data, and feeds the illusion of our findings.