Communicating uncertainty is one of the biggest challenges journalists face, said Laura Helmuth, national editor of health, science, and environment at The Washington Post, who moderated the session at the colloquium on uncertainty in science communication. Uncertainty is hard to explain and understand. Journalists typically have so much to explain in their stories that they can be tempted to leave uncertainty out. “You have to pick your explanatory battles, and this is a battle that we often put off,” said Helmuth.
But journalists are getting better at it, she added. They are becoming more aware of how uncertainty can be misused, as was the case when the tobacco industry argued that the health effects of smoking were uncertain. They have learned to avoid the trap of false balance, so as not to overstate the uncertainty that exists. “We’re getting better at covering uncertainty as a subject in an interesting way,” said Helmuth.
Scientific evaluations of public policies should explicitly express the limits to knowledge. However, policy analysis “with what I call incredible certitude has been common,” said Charles Manski, Board of Trustees Professor of economics in the Department of Economics at Northwestern University. The predictions that researchers make are often fragile, resting on unsupported assumptions and limited data (Manski, 2013), but economists and other social scientists tend to make exact predictions of policy outcomes while rarely expressing uncertainty. “It’s not that they’re fraudulent,” said Manski. “It’s that you assume more than you have the basis to assume using the data that you have.”
Good examples, he observed, are the predictions, known as scores, made by the Congressional Budget Office (CBO) of the budgetary implications of pending federal legislation. The impacts of new legislation are difficult to foresee, yet the CBO makes 10-year point predictions, with no quantitative measure of uncertainty. Similarly, official statistics from federal agencies, such as the gross domestic product growth rate or the poverty rate, suffer from various kinds of errors, yet agencies typically report only point estimates.
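Manski’s contrast between a bare point estimate and an estimate that acknowledges sampling uncertainty can be illustrated with a toy sketch. The data below are entirely hypothetical (a simulated survey with a 12 percent “true” poverty rate), and the bootstrap interval shown is one simple way to quantify sampling error; it does not represent the CBO’s or any agency’s actual methodology.

```python
# Illustrative sketch (hypothetical data): reporting an official
# statistic as a point estimate alone vs. with an interval.
import random

random.seed(42)

# Hypothetical survey: 1 = household below the poverty line, 0 = not.
sample = [1 if random.random() < 0.12 else 0 for _ in range(5000)]

# The usual headline number: a single point estimate.
point_estimate = sum(sample) / len(sample)

# Bootstrap resampling gives a simple measure of sampling uncertainty:
# recompute the statistic on many resamples drawn with replacement.
boot = []
for _ in range(1000):
    resample = [random.choice(sample) for _ in range(len(sample))]
    boot.append(sum(resample) / len(resample))
boot.sort()
lo, hi = boot[24], boot[974]  # approximate 95% interval

print(f"Poverty rate (point estimate): {point_estimate:.1%}")
print(f"Approximate 95% interval:      {lo:.1%} to {hi:.1%}")
```

Reporting the second line alongside the first is the kind of transparent communication Manski advocates: the point estimate is unchanged, but the reader sees how much it could move under sampling variation alone (and sampling error is only one of the error sources he has in mind).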
Some agencies do aim to communicate uncertainty transparently. A notable case is the National Weather Service, which in a tweet issued on August 27, 2017, as rainfall from Hurricane Harvey was beginning to inundate Southeast Texas, said, “This event is unprecedented & all impacts are unknown & beyond anything experienced.”
Manski listed several manifestations of incredible certitude. One is conventional certitude: statistics or estimates that society accepts as true but that may not be. Dueling certitudes are contradictory predictions made under alternative assumptions, as when analysts draw opposite conclusions about issues such as illegal drug policies. Conflating science and advocacy occurs when analysis aims to generate a predetermined conclusion, as with the practice of “model shopping,” in which advocates for a particular position seek out a model that supports their views. Wishful extrapolation uses untenable assumptions to extend a conclusion in a desired direction, as when limited studies of drug outcomes are used to predict what will happen in clinical practice. Illogical certitude draws unfounded conclusions from deductive errors, as with research that misinterprets the heritability of personal traits. Finally, media overreach occurs when journalists report on policy analysis prematurely or in exaggerated terms.
Why do researchers express certainty when they should be expressing uncertainty? Manski pointed to two reasons. The first is that the scientific community tends to reward strong and novel findings. The second is that the public wants unequivocal policy recommendations. Analysts at the CBO, for instance, know that their point estimates should be accompanied by ranges of uncertainty. But they may believe, Manski speculated, that members of the U.S. Congress are psychologically or cognitively unable to deal with uncertainty. (He noted that comparable estimates in the United Kingdom do include uncertainty.) Or they may believe, because the CBO has established an admirable reputation for impartiality, that it is best to leave well enough alone and have the CBO express certitude when it scores legislation, even if that certitude is conventional rather than credible.
The problem with this approach, said Manski, is that the existing social contract to take CBO scores at face value could eventually break down. Maintaining trust may require expressing uncertainty. “Once you accept incredible certitude and take numbers at face value when they shouldn’t be, there may be a slippery slope from incredible certitude to utter disregard for truth. I do not think this is a second-order issue. In fact, it may be even more important to face up to uncertainty today than in the past.”