ATTACHMENT C
Comments on the Report “Expert Consultation on Infectiousness of Organisms Studied in the NEIDL Risk Assessment”

The committee was informed that NIH elected to use a “modified Delphi method” to generate dose-response estimates because human data for predicting infections are lacking. This process involved soliciting opinions on human infective doses (HIDs) from an expert panel of biodefense specialists and laboratory researchers via questionnaires. For 13 pathogens, opinions were sought on values of HID10, HID50, and HID90, the levels of inhalation exposure at which 10 percent, 50 percent, and 90 percent of an exposed human population might become infected with the aerosolized pathogenic agent. Although the report on the Delphi process was not presented to the committee at the September 22 meeting with the BRP, the committee subsequently asked for and was given a copy of the draft process report (July 9, 2010, draft by Sam Bozzette). This appendix elaborates on the committee’s concerns with this process.
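For context, an elicited HID10/HID50/HID90 triple implicitly pins down a dose-response curve. The sketch below is a minimal illustration under two assumptions that are ours, not the report’s: a log-probit dose-response model (one common choice; the risk assessment’s actual model is not stated here) and purely hypothetical HID values.

```python
import math
from statistics import NormalDist

def logprobit_from_hids(hid10, hid50):
    """Fit a log-probit dose-response curve through two elicited points.

    Model: P(infection | dose d) = Phi((log10 d - mu) / sigma),
    where mu = log10(HID50).  HID values here are hypothetical.
    """
    mu = math.log10(hid50)
    z10 = NormalDist().inv_cdf(0.10)          # ~ -1.2816
    sigma = (math.log10(hid10) - mu) / z10    # positive when hid10 < hid50
    return mu, sigma

def p_infection(dose, mu, sigma):
    """Probability of infection at an inhaled dose (organisms)."""
    return NormalDist(mu, sigma).cdf(math.log10(dose))

# Hypothetical elicitation: HID10 = 50, HID50 = 1,000 organisms.
mu, sigma = logprobit_from_hids(50, 1_000)

# A two-parameter curve then *implies* HID90 by log-scale symmetry:
# HID90 = HID50**2 / HID10 (= 20,000 here).
hid90 = 10 ** (mu + sigma * NormalDist().inv_cdf(0.90))
```

Note that under such a two-parameter model the three elicited quantiles are not independent: once HID10 and HID50 are fixed, HID90 is determined, so three separately elicited “votes” that are mutually inconsistent cannot all be honored by one curve.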

One major concern is the lack of a cohesive scientific rationale in the report for the “votes” on parameter values, especially the human infectivity point estimates, but also the other elicited parameters. The report opens with the presumption that “extrapolation from animal experiments is risky because of interspecies differences” and concludes that the elicited opinions of experts converged and “differed from fragmentary human, animal, and laboratory data in reasonable ways.” No scientific support is presented for this conclusion.

The NRC committee was told that there were “three rounds of voting” by the experts for all 13 pathogens and that “individual expert curves” were used for the uncertainty analysis. The presentation contained essentially no further information on this “modified Delphi method” or on how the values from the different experts were used in the analyses.
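One plausible reading of “individual expert curves” is that each expert’s elicited parameters define a separate dose-response curve, and the spread across those curves represents uncertainty. The following is a hedged sketch of what such pooling might look like; the log-probit form and the three (mu, sigma) pairs are our assumptions for illustration, not the BRP’s actual method or values.

```python
import math
from statistics import NormalDist, median

def p_inf(dose, mu, sigma):
    """Log-probit curve: mu = log10(HID50), sigma = log10-scale slope."""
    return NormalDist(mu, sigma).cdf(math.log10(dose))

# Hypothetical elicited (mu, sigma) pairs for three experts.
experts = [(3.0, 1.0), (3.5, 0.8), (2.5, 1.2)]

def uncertainty_band(dose):
    """Spread of infection probability across expert curves at one dose."""
    ps = sorted(p_inf(dose, m, s) for m, s in experts)
    return ps[0], median(ps), ps[-1]   # (low, central, high)

low, central, high = uncertainty_band(1_000.0)
```

Whether the analysis pooled curves this way, weighted experts equally, or did something else entirely is exactly the kind of detail the presentation did not supply.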

The copy of the report and its appendices obtained by the committee included the questionnaire used to elicit the “informed iterative confidential voting” by the members of the panel. Rounds of voting were repeated until an unspecified definition of “consensus” was met or until a specified number of cycles (three) had been completed. Most of the work was done independently by the experts, who were provided instructions, background materials (presumably the appendix tables prepared by Tetra Tech and the lists of abstracts and references from Tetra Tech’s literature search), and an electronic questionnaire form to record first-round votes on human infectivity, half-life, and percentage increases in vulnerability for more susceptible human populations. On May 18, 2010, 6 of the 8 experts convened for a day to discuss the results of the first-round votes and participated in the second round of voting, but no transcript or summary of the discussions among the experts was provided. Apparently, consensus was not reached at this stage of the modified Delphi process, and a third round of voting was conducted later. The only references cited in the report are a RAND study on the Delphi process (Dalkey and Helmer, 1962) and two studies on consensus methods (Fink et al., 1984; Jones and Hunter, 1995).
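As a generic illustration of such an iterative scheme (not the BRP’s actual procedure, whose consensus rule is unspecified), each round might pull confidential votes partway toward the group median and stop when the spread falls below a threshold, or after three rounds. All numbers, including the convergence tolerance and the “pull” factor, are hypothetical.

```python
from statistics import median, pstdev

def modified_delphi(votes, max_rounds=3, tol=0.5, pull=0.5):
    """Toy modified-Delphi iteration on log10(HID50) votes.

    Each round, every expert revises a confidential vote partway toward
    the group median; voting stops when the spread (population standard
    deviation) drops below tol, or after max_rounds.
    """
    for rounds in range(1, max_rounds + 1):
        med = median(votes)
        votes = [v + pull * (med - v) for v in votes]
        if pstdev(votes) < tol:
            break
    return rounds, votes

# Four hypothetical first-round votes on log10(HID50):
rounds, final = modified_delphi([1.0, 2.0, 3.0, 6.0])
```

The sketch makes the committee’s underlying concern concrete: a stopping rule of this kind rewards convergence of opinion, not accumulation of evidence.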

The central idea of a Delphi process is to solicit information (data, evidence, judgment, opinion) from experts separately and then to provide anonymous feedback from the other experts, which each expert uses to revise the information initially provided. Regardless of the details of the approach, a Delphi method is necessarily focused on expert judgments about complex and uncertain quantities. Kaplan (1992) recommends eliciting the “evidence” from experts, not their opinions of point estimates for unknown parameters, to build a consensus body of evidence for use in risk analysis. In addition, Morgan and Henrion (1990) discuss combining judgments from experts and the Delphi process in this context (pages 164-169). Two points raised by Morgan and Henrion (1990) merit consideration by NIH and its consultants: 1) elicited scientific judgment is not a substitute for proper scientific research; and 2) strict quality control of the process is needed.

Appendix Table 1.A of the report provides a summary of a small portion of the human and animal literature. However, the author does not distinguish clinical data from opinions or simulation results (judgments at best) in this table. For example, no inhaled doses are known for any human inhalation anthrax cases to date, yet three papers are cited for opinions or judgments about possible human infective doses. The most relevant studies for risk assessment in non-human primates and rabbits (recent USAMRIID work) are not included in the table. Further, only four studies, two each from humans and non-human primates, were included in Table 1.A for Francisella tularensis, rather than citing the extensive knowledge base on tularemia dose-response relationships for human and non-human primates (8 human infectivity studies; 11 non-human studies of dose dependencies, including asymptomatic, mild, moderate, severe, and fatal tularemia). The report does not cite historical and recent evidence for laboratory-associated tularemia infections or natural outbreaks of pulmonary tularemia that merit consideration for risk assessment.

The value of the elicited results for predicting human effects is highly uncertain. The metric for eliciting human infectious doses for aerosolized particles containing pathogens appears purely hypothetical, not based on valid scientific studies that measured this parameter. A recent illustrative study with norovirus reported an average number of virions per particle of nearly 400 (Teunis et al., 2008), but the nature and impact of clumping on variability and uncertainty in dose-response relationships was not addressed in the Delphi process. Further, the value of eliciting human infectivity for airborne infectious particles is questionable for pathogens with arthropod vectors as the predominant route of infection. Similarly, the elicited parameter for potentially more vulnerable populations appears purely hypothetical rather than arising from a valid scientific study that corrects for dose. The phrasing of the elicited parameter (median increase in vulnerability of 5 human groups (young [undefined], older [undefined], diabetes, HIV, pregnancy) that might be more susceptible, in general, to infections of unspecified bacteria and unspecified viruses) is too vague to merit inclusion in the analysis for these 13 biothreat agents. Existing scientific data for normal and more susceptible animals are inconsistent with the magnitude of susceptibility elicited by the expert panel. The maximum elicited parameter for increased vulnerability (30 percent) is dwarfed by actual variability in ID50 measured in murine populations, which shifted 5 orders of magnitude for salmonellosis infection (Bohnhoff et al., 1964).

Thus, it appears that the modified Delphi process elicited “opinions” that are quite hypothetical, rather than judgments based on data. It is unclear how the experts, individually and collectively, used the background information provided, or expanded the body of evidence, to form opinions about human infectivity and other unknown parameters in the process. It is also unclear how the panel used background information that covered multiple host species and multiple routes of infection, including arthropod-borne vectors. The entry for Rift Valley Fever (Table 1.A of the report) lists two experiments involving inoculation in rhesus monkeys and in rats, and states that some rats become infected asymptomatically. This disease may be spread by insects such as mosquitoes, as well as by direct contact, but precise data are scarce. How did the experts on the panel use the available data? How was a dose-response relationship for aerosol exposure to droplets containing RVF virus particles developed? How did the experts extrapolate from data in monkeys and rats to the probability of infection in a human? What assumptions were made about the number of virions in an aerosol droplet, and how many inhaled droplets were needed for infection, especially at a low concentration of such droplets in the air? Did each expert make an assessment for each number solicited in the questionnaire? Were opinions from panel members with specific expertise weighted differently than those from panel members with less expertise? For example, it is unclear whether the feedback and discussion session considered the specific expertise of the panel member whose study of the Rift Valley Fever outbreak in Kenya was recently published, or whether this expert’s elicited parameters were weighted differently than those from experts without direct experience.

Additional References for Appendix

Bohnhoff, M., Miller, C.P., and W.R. Martin. (1964). Resistance of the mouse’s intestinal tract to experimental Salmonella infection. I. Factors which interfere with the initiation of infection by oral inoculation. J Exp Med 120:805-816.

Dalkey, N.C., and O. Helmer. (1962). An Experimental Application of the Delphi Method to the Use of Experts (RM-727/1-ABR). Santa Monica, CA: The RAND Corporation.

Fink, A., Kosecoff, J., Chassin, M., and R. Brook. (1984). Consensus methods: Characteristics and guidelines for use. Am J Public Health 74(9):979-983.

Jones, J., and D. Hunter. (1995). Qualitative research: Consensus methods for medical and health services research. BMJ 311:376-380.

Kaplan, S. (1992). “Expert information” versus “expert opinions”: Another approach to the problem of eliciting/combining/using expert knowledge in probabilistic risk analysis. Reliability Engineering and System Safety 35:61-72.

Morgan, M.G., and M. Henrion. (1990). Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: Cambridge University Press.

National Research Council. (2010). Evaluation of the Health and Safety Risks of the New USAMRIID High-Containment Facilities at Fort Detrick, Maryland. Washington, DC: National Academies Press.

Teunis, P.F., Moe, C.L., et al. (2008). Norwalk virus: How infectious is it? J Med Virol 80(8):1468-1476.

Copyright © National Academy of Sciences. All rights reserved.