another 1918, and an available technology, but also high-yield eggs, one dose per person, high efficacy, unparalleled acceptance, favorable publicity, sustained congressional support, wide private involvement, adequate state operations, three months to complete vaccinations, no useful stockpiling, no liability legislation, few (if any) opportunity costs, etcetera. In short, we advocate a comprehensive definition and review of assumptions that everyone can see and weigh before a decision and remember afterward. The review should thus be public. This seems to us a proper base for formal reevaluation.


Without it, we doubt reevaluators will be any better off than Ford was in July and early August 1976. Having publicly expressed in March no “ifs” except uncertainty about the coming of a pandemic (an uncertainty that did not distinguish likelihoods in spring from those in summer, or differentiate spread from severity), he had no grounds to think about a change. As long as Cooper told him “it” remained a possibility with probability “unknown,” Ford was stuck. Anyone would be.


We can see two ways to derive the details and distinctions for a useful analysis of the decision. One is to get the issue posed according to its component parts and argued in probabilistic terms. The other is to hunt for answers to the question Alexander once put to Sencer, in effect:

What evidence on which things, when and why, would make us change the course we now propose, and to what?

We do not see these two as mutually exclusive, and we think both are of use. Either would allow for reassessment of earlier decisions. The first may be best but will be hard indeed to get from public health officials. If so, the second becomes the Secretary’s recourse. It is the nearest substitute we can suggest for probability analysis.


For purposes of sharpening assumptions and distinguishing them, nothing beats an exercise in probability. Deciding on a swine flu program is like placing a bet without knowing the odds. A serious stake in the outcome ought to concentrate the mind on breaking down the issue and scrounging for anything that might inform judgment. If one has “scientific” evidence from laboratory tests, one need not scrounge, but swine flu decisions are not like that. Expertise counts for a lot, but only by way of informing subjective judgment. To assign a number to the likelihood that something will occur is to expose one’s judgment for comparison with that of others. This leads to explicitness about everyone’s reasons. If two people assign different numbers, the question becomes, why? That starts them digging into the detail of their own—and each other’s—reasoning.
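

To make the point concrete, consider a purely illustrative calculation; the symbols and numbers that follow are our own invention, not drawn from the record. Let C be the cost of the program, let L be the loss a pandemic would impose that vaccination could avert, and let p be one’s subjective probability of pandemic. The expected-value rule is then simply:

Undertake the program if p × L exceeds C, that is, if p exceeds C/L.

If L were, say, two hundred times C, an adviser who puts p at 0.10 finds the program worth twenty times its cost in expected terms, while one who puts p at 0.002 finds it worth less than half its cost. Both cannot be right, and the comparison forces each to dig into the reasoning behind his number.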


But doctors, at least of the older generation, rarely think in probabilistic terms and, if asked, dislike doing so. Some of the scientists involved with the swine flu decision did participate in an exercise to estimate the probabilities of an epidemic and its severity. This was not done as part of any decision-making deliberation, but as an academic exercise, a favor for a colleague writing a paper.35 As scientists accustomed to thinking about experiments and “truth,” they were uncomfortable expressing subjective estimates, even when based on expert knowledge and experience. They resented having to quantify their judgments.


