Seemingly every week, a new report or policy analysis from a think tank or research-based advocacy group is released and covered by the mainstream media. Yet my study, published earlier this year, suggests that such reports may hit a brick wall in government when they reach skeptical policy professionals, many of whom appear to believe that most think tank research lacks credibility.
Policy professionals in bureaucracies around the world encounter research from diverse sources, including academia, think tanks and advocacy groups, as part of their work conducting policy analysis and preparing briefings for senior officials and elected decision makers. Governments have never had more access to policy-relevant research than they do today. Indeed, we are seeing the emergence of what some have called “report wars” among opposing studies, amid the widespread perception in government circles that “you can’t really play the policy game unless you have a study.” So we should care about how that research is consumed within government and what biases, if any, shape its reception.
Rather than simply ask policy analysts in government how they view various sources of policy research, I had them reveal it, without their knowing, through a randomized controlled experiment in the British Columbia provincial government public service in Canada. In two separate experiments, respondents read excerpts of policy research on minimum wage and income-splitting policy drawn from academia, think tanks and research-based advocacy groups, and were then asked to assess the credibility of the articles. The experimental design was simple: for half of the respondents, the authorship of the studies was randomly switched while the content remained the same, allowing systematic comparison of the average credibility assessments (very credible, somewhat credible, somewhat not credible, not credible at all) between the control and experimental groups.
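The logic of the design can be sketched as a toy simulation. Everything here is invented for illustration, assuming only the four-point scale reported above: the same excerpt is shown to both groups, only the attributed source differs, so any gap in average ratings is attributable to attribution alone.

```python
import random

random.seed(0)

# Hypothetical four-point credibility scale from the study:
# 1 = not credible at all, 2 = somewhat not credible,
# 3 = somewhat credible, 4 = very credible

def rate(content_quality, source_bias):
    """Toy rating model: a respondent's score reflects the excerpt
    itself plus a bias driven purely by the attributed source,
    with some random noise, clipped to the 1-4 scale."""
    score = content_quality + source_bias + random.choice([-1, 0, 0, 1])
    return min(4, max(1, score))

# The same excerpt is shown to both groups; only attribution differs.
control = [rate(2, 0) for _ in range(200)]    # true think tank attribution
treatment = [rate(2, 1) for _ in range(200)]  # reattributed to a university

print("control mean:", sum(control) / len(control))
print("treatment mean:", sum(treatment) / len(treatment))
```

Because assignment to the two groups is random and the content is held fixed, comparing the two group means isolates the effect of the source label, which is exactly the comparison the randomized design licenses.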
The result? There were systematic and at times extraordinarily large differences in credibility assessments between the control and experimental groups, both of which read precisely the same research excerpts. That is, merely the name of the source that conducted the research powerfully shaped how policy professionals in government viewed its credibility.
Take, for example, a recent study from the Fraser Institute, one of the most prolific think tanks on the right in Canada, titled The Economic Effects of Increasing BC’s Minimum Wage, which was one of the think tank research articles tested in this experiment. The control group read this excerpt, accurately attributed to the Fraser Institute, and assigned it the lowest average credibility of all the minimum wage research articles. Yet in the experimental group, where authorship of the same excerpt was randomly reassigned to University of Toronto academics, its average credibility score jumped dramatically. How much? In statistical terms, the Fraser Institute article saw a nearly 300% increase in the odds of a respondent selecting a higher credibility category under the treatment condition (UofT attribution) compared with the control condition (correct Fraser Institute attribution). Put simply, the Fraser Institute affiliation was a significant drag on the perceived credibility of its report and analysis.
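For readers less familiar with odds-ratio language: a “nearly 300% increase in the odds” corresponds to an odds ratio of about 4. The probabilities below are invented purely to illustrate the arithmetic of that conversion; they are not the study’s actual figures.

```python
def odds(p):
    """Convert a probability into odds (p against 1 - p)."""
    return p / (1 - p)

# Suppose, hypothetically, 40% of control-group respondents rate an
# excerpt in a higher credibility category, versus 72.7% when the same
# excerpt carries a university attribution.
p_control, p_treatment = 0.40, 0.727

ratio = odds(p_treatment) / odds(p_control)
print(f"odds ratio: {ratio:.2f}")
```

An odds ratio near 4 is a roughly 300% increase in the odds, even though the underlying probability has not quadrupled; odds ratios overstate effects when read as probability changes, which is worth keeping in mind with figures like these.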
Think tanks on the left didn’t fare much better. For example, a recent article from the University of Toronto by economists Campolieti et al. (2012) received very high average credibility assessments in the control group, but when authorship was randomly reassigned to the Canadian Centre for Policy Alternatives (CCPA), a left-of-centre think tank, there was a 68% decrease in the odds of respondents selecting a higher credibility category in the treatment condition relative to the control condition. That is, the same research excerpt, actually written by UofT economists, saw its credibility plummet when presented as the work of the CCPA.
All in all, what emerged from this experiment is that academic research was afforded a privileged position of credibility among policy professionals in government, followed by think tanks, and then by research-based advocacy groups (e.g. the Canadian Federation of Independent Business, the Canadian Labour Congress). These results have since been replicated with policy analysts in the Ontario, Saskatchewan and Newfoundland provincial governments in a forthcoming paper.
Is this a good or a bad story? On one hand, it may be comforting to learn that policy analysts in government are skeptical of reports from organizations whose analysis and conclusions (always seem to) align with their stated ideological proclivities, save for the handful of think tanks with no obvious ideological bent. On the other hand, policy analysts appear to use the source as a shortcut to credibility, and may be giving academic research a free pass, falsely assuming that we are able to fully separate our own biases from the construction of our research questions and the reporting of our findings (for example, leaving contradictory or null results unpublished).
As someone who has published work in both academic and think tank venues, I see the bigger threat as treating think tank reports and analysis as primary research when most are fundamentally advocacy, a distinction that policy professionals in Canada appear to make in practice, judging by the results of this experiment. In the end, our policy professionals are sorting through the “report wars” prudently based on source, but perhaps too strongly, to the exclusion of the actual content of the evidence and argument.