The ongoing discussions concerning “DIY research” – its inevitability, its merits (clients’ “innovative” research solutions), and the opportunity it gives professional researchers to build closer relationships with clients by supporting their DIY efforts – are difficult to disagree with. As someone who works with clients to maximize research effectiveness, including their forays into DIY research, I find it easy to see the inevitability, the (sometimes) unique solutions that emerge, and even a new kind of closeness that results from shifting roles from research provider to tutor.
It is curious that discussions on assisting DIY clients tend to focus on the mechanics. There is a difference, however, between 1) the mechanics of data collection (in whatever form) and rudimentary analysis, and 2) the ability to actually see what the research will tell you (to guide research design) as well as what it does tell you once data collection is completed. The mechanic knows how to take pieces and parts and assemble an engine, but it is the engineer who has applied scientific principles to design the model from which each piece is developed and by which the system as a whole functions.
As supporters of DIY research, we are bolstering the population of research mechanics who may learn with robotic precision how to select the appropriate mode, design a questionnaire, generate a representative sample, and understand the difference between a 60% and a 30% response rate, yet who remain ill-equipped for the task of analysis. Research analysis is much more than a summing of parts. It is a process requiring the skilled ability to look for and appreciate the underlying patterns in responses, revealing a cohesive whole that leads to conclusions that mean something. It is the gestalt of our outcomes – not the pieces and parts – that matters. It is the ability to see how certain aspects of the data complement each other and how others don’t. It is the ability to conclude that some research results are truths while others are simply diversions.
We owe our reality of research results to the analysts. Potentially different realities emerge depending on who conducts the analysis and what is analyzed. This is a basic theme running through Malcolm Gladwell’s work, including Outliers and his recent article in The New Yorker (“The Order of Things”). The article discusses how rankings result from “arbitrary judgments” concerning which variables (data) to look at and how much weight each deserves. To illustrate his point, Gladwell invites us to visit Jeffrey Stake’s “Ranking Game” site and watch the rankings change as the weights are manipulated.
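To make that point concrete, here is a minimal sketch in Python – the schools, variables, scores, and weights below are invented for illustration and are not taken from Gladwell’s article or Stake’s site – showing how the same raw scores produce different rankings once the weights change.

# Hypothetical illustration: the same raw scores yield different rankings
# depending on the (arbitrary) weights assigned to each variable.
# The names, variables, and numbers are made up for demonstration only.

scores = {
    "School A": {"reputation": 9.0, "selectivity": 6.0, "job_placement": 7.0},
    "School B": {"reputation": 7.0, "selectivity": 9.0, "job_placement": 6.5},
    "School C": {"reputation": 6.5, "selectivity": 7.5, "job_placement": 9.0},
}

def rank(weights):
    """Order items by the weighted sum of their variable scores."""
    totals = {
        name: sum(weights[var] * value for var, value in vars_.items())
        for name, vars_ in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Weight reputation heavily, then weight job placement heavily
print(rank({"reputation": 0.6, "selectivity": 0.2, "job_placement": 0.2}))
print(rank({"reputation": 0.2, "selectivity": 0.2, "job_placement": 0.6}))

With the first set of weights, School A comes out on top; shift the emphasis to job placement and School C takes the lead. Nothing about the underlying data changed – only the analyst’s judgment about what matters.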
But subjective bias is only one of the pitfalls awaiting research mechanics. Just as importantly, analysis is about the ability to identify nuances, make connections, and describe the qualities of the whole. Anyone who has turned over a dataset, transcripts, audio/video recordings, and other raw results to the uninitiated has experienced the anguish of seeing outcomes reduced to bits of data or isolated comments that bear little relation to the reality lurking in the whole of the responses and the spaces that lie between them.
We may be able to train end users as research mechanics – technicians who learn how one piece connects to another – but so what? Let’s avoid “so what” research by taking the lead on analytical teams with our clients.