Harvey Fineberg, chair of the Symposium Planning Committee, explained the plan for the final session. First, he would ask the moderators from the various sessions, all members of the planning committee, to offer their perspectives, summarizing and perhaps adding their own personal comments about key points that were raised in each of the sessions. He then wanted to allow an opportunity for those from the National Institutes of Health (NIH) and the National Science Advisory Board for Biosecurity (NSABB) to raise any issues or topics or questions they would like to address, and to invite further comment from those on the stage and in the audience. Finally, the microphones would be opened for additional comments, suggestions, and ideas that anyone present or listening on the Web would like to include in the record.
Dr. Fineberg invited Charles Haas (“Informing the Policy Framework: The Risk and Benefit Assessment”) to give the first summary comments. Dr. Haas began with one editorial comment. Data gaps, particularly on laboratory safety, were thought to limit the ability to do an absolute risk assessment, and there had been a good set of questions from the floor about the need to develop scholarship and support for those studies. His comment was that such data are not totally absent, and it might have been informative to use whatever data were available, even though they were poor, as part of the effort to bound the potential risks that could occur.
Dr. Haas also commented that, if a pure "acceptable risk" rule is to be used as a basis for decision making, it should be recognized that information is lacking on what the level of acceptability should be. Rocco
Casagrande had presented an updated analysis using new data on seasonal versus 1918 influenza, which raised the broader point that risk assessments in general need to be living and to be adaptable to new information as it comes along. Dr. Haas also cited Adam Finkel’s statement that leaving uncertainty out is a violation of first principles.
He quoted Dr. Finkel's observation that "Is it safe?" is a vapid question because it is meaningless without a reference level. A hierarchy of potential judgment rules exists; both Tony Cox and Dr. Finkel made that clear, and also made clear that explicit judgments need to be made about which rule is to be used. Kara Morgan called this "deciding how to decide," and she noted that there is rich scholarship from the decision analysis community that needs to be brought to bear. Stakeholder input also needs to be included in developing the decision rules.
Dr. Cox had cited the need to avoid the "fallacy of coherence": just because a risk has been accepted in the past does not mean that an informed judgment going forward would find that same numerical risk acceptable. A useful task would be to assess whether the collection of more information would make a decision better. There is a rich literature on the concept of the value of information in this regard.
Dr. Haas concluded by citing a number of miscellaneous problems that had come up in the discussion. For example, Dr. Casagrande had expressed the concern that bench researchers may not be familiar enough with epidemiological parameters to assess transmissibility. Next, risk–benefit analysis could be used to improve the risk profile of proposed experiments—in other words, envisioning an iterative process of some sort. Dr. Finkel had argued that risk and benefit analyses should be balanced, humble, and explicit about value judgments. And finally, there had been comments from the audience that long-term benefits, in particular, may be difficult to value and highly uncertain. His editorial comment in response was that, while this may very well be true, it should not mean that one walks away from the effort to quantify them using whatever information is available.
Barry Bloom shared reflections from two sessions, first on behalf of Michelle Mello (“The Policy Landscape: United States”) and then from the panel he had moderated (“The Policy Landscape: International Dimensions of Gain-of-Function [GOF] Research”). Dr. Mello’s comments included:
- There is no set of policies that targets the specific group of pathogens defined by the NSABB. Instead, the federal policy framework consists of a series of partially overlapping statutes and regulations that are largely tied to specific pathogens and to federal research funding. None of the panelists pointed to major gaps in this framework other than noting that the Department of
Health and Human Services targets a very narrow set of experiments and the dual use research of concern (DURC) policy covers only 15 pathogens. However, their comments did reinforce the NSABB's observation that policy oversight is stronger for some pathogens than for others.
- Existing law does not really reach research that is not conducted with federal funding (i.e., industry-sponsored research). This raises the question, should it? And if so, through what mechanism?
- The time to regulate is at the time the research is conceived. The point of publication is far too late. Having a strong review process up front avoids a lot of problems down the line—and also establishes that institutions have acted with due care (which may come up in litigation). Funding agencies and institutions can engage principal investigators (PIs) at the point of designing their protocols to think through the risk issues. This is especially useful because many PIs do not understand dual use risk issues.
- Regulators, including both institutions and federal agencies, can benefit from greater use of consultation. Talking with each other and with external experts can boost the quality of review and the dissemination of knowledge and best practices.
- Epistemological question: How do we know if a regulatory approach is working? Beyond the absence of rare, catastrophic events, what should we use as performance measures? The panelists suggested public trust, but in her view, this is both hard to assess and a narrow measure. The NSABB may wish to think (in relation to its Key Finding 2) about what it means to say the policy frameworks are “effective.”
- One tension in oversight is between the desire for transparency and the risk that public disclosure of sensitive information will elevate the very dual use risks that oversight is aiming to minimize.
- The criteria that the NSABB set forth for reviewing GOF research are reasonable, but not very specific; they rely on subjective judgments such as "likely" and "highly." Yet there is a tension between pursuing greater specificity in regulations and providing enough flexibility to make case-by-case judgments. Also, it is not clear how to get more specific about some of these standards.
- How much variation should be tolerated in how institutional review committees evaluate research? On the one hand, one would like to have common standards applied in a reliable fashion. On the other hand, institutions have different capacities, and there might be something one can learn from their individual innovations in practices. The panelists did not see a major problem with having a "patchwork of institution-dependent rules"; this is something the NSABB may wish to consider.
Dr. Bloom then turned to comments on the session he had moderated ("The Policy Landscape: International Dimensions of GOF Research"). It was clear from the very beginning of the sessions on the first day that everyone involved in the meeting recognizes that science, and its risks and benefits, have global implications, and GOF research has clearly raised global concerns. The session included major presentations on the groundbreaking progress made by the European Union (EU), which showed that it was possible to have discussions and bring policies from 28 countries to a common focus, and to bring scientific academies in almost all of those countries to a consensus on the scientific policies that would govern this research. The discussions emphasized the need to expand and extend the discussion among countries in Europe. The panelists would be very interested in further discussions after the U.S. policies are formulated, and they were eager to find ways in which discussion and consultation can be expanded to include all countries.
In this context, the session heard a very important discussion of the InterAcademy Partnership, a global network of science and medical academies that now links academies in 128 countries and four regions. That could serve as a useful focus for extending the discussions of GOF research in a coherent way to responsible scientific bodies that already exist and perhaps should be considered in moving forward.
A suggestion that emerged from the session was that the best place to start is probably with discussion within the scientific community rather than going directly to policy makers one at a time, one country at a time, until there is some general understanding and agreement within the scientific community. Then the complexities of those dialogues and discussions could be simplified to a level that could gain understanding and support from political leaders.
The session also heard about the value of not just pontificating but having important partnerships and collaborations that enable transparency, technology transfer, and training to occur. These can also be a way of maintaining standards and identifying low standards that need to be addressed.
He offered several personal reflections about what he had learned during the meeting.
- He had come to the view that process is probably as important as principles. Given the technicalities of the science, it is not clear that the lay public, or even government officials, are going to understand them. But if the processes at every level are transparent, maybe that is the best way to gain trust within the scientific community and within the public at large. And that means the processes, as he was conceiving them and as the NSABB conceives them, are a set of tiered processes that occur at multiple levels, from the investigator, the Institutional Biosafety Committee (IBC), the institutions, and study sections, all the way up to the higher levels of policy.
- His second reflection was that, whatever one does, it has to be recognized that science is changing so dramatically that policies cannot be fixed in time; no one can predict what possibilities, opportunities, technologies, and threats will be coming in the future. The policies need to be flexible enough to accommodate new knowledge and adapt to new opportunities and possibilities, and yet provide a clear-cut framework that people can work with.
- Finally, he supported Gabriel Leung's comment about why the Biological Weapons Convention (BWC), as far as we know, largely works, and why the Helsinki principles actually govern how human experimentation is done. He would say the answer has less to do with legal liability and lawsuits than with the principal constraints on scientists, which he believed in general have to do with reputation, credibility, integrity, and respect in the scientific community. Matthew Meselson, for example, when asked how one could possibly encourage more action to enforce the BWC, raised the interesting possibility of making it impossible for scientists who violated international law to travel overseas, another constraint that would carry considerable weight with scientists. So he believed enforcement at a moral level is highly possible.
Baruch Fischhoff offered his comments on the session devoted to "Informing Policy Design: Insights from the Science of Safety and the Science of Public Consultation." He began with some nomenclature for those not familiar with that part of the world, using the term "social science" to include the social, behavioral, and decision sciences. Behavioral science is the study of individuals: psychology, microeconomics, neuroscience, and related fields. The study of larger groupings is sociology, anthropology, and political science. And decision science is management science: cost, risk, and benefit analysis, that form of applied mathematics that takes human behavior into consideration. For problems of this complexity and subtlety, he argued that insights from all these fields are needed.
The framing of the human dimensions that he believed came out of the session is that reducing the risks and realizing the benefits of these technologies depends on people at the level of individuals, organizations,
and policies. Second, relying on intuition in designing and evaluating the systems that deal with these technologies is natural, but it is unfortunate because those intuitions are often wrong or imprecise. Third, the biological research community faces the challenge of not having what some economists call the absorptive capacity for social science. That is, there is nobody on the inside who can tell when the community has a social science problem, define it in terms that would be recognizable to a social scientist, and find somebody who will help work on the problem. That is on the demand side. On the supply side, the social science community may lack the incentives for addressing biological science issues because its incentive scheme is to publish on relatively narrow topics. He thought the symposium was fortunate to have speakers in his session who bridge that divide, which requires them to draw on different social sciences as well as to see the value, for the basic science, of engaging in applied problems.
He then asked what kinds of issues one would find if one brought the social sciences to bear. One is to identify the places in which scientific judgment affects the prediction of outcomes. Many of the statements heard during the symposium had to do with scientists anticipating how transmissible something would be. Given that this is a discovery process, there are likely to be surprises, so it is smart to recognize that these are scientific judgments and to elicit them in the best, most accountable way possible. Second are the ethical judgments embedded in these analyses: for example, how one defines the risks, with whom one shares them, and where various publics are engaged in the process. Third is the communication to and from stakeholders, so that one can develop the technologies in the ways that are most sensitive to their needs and keep them properly apprised of developments.
A fourth problem, more from the social sciences, is the normalization of pathology and the normalization of virtue. One can become accustomed to practices that are terrible by any absolute standard. But as Ruthanne Huising's talk and Dr. Bloom's comments illustrated, there is also the possibility of the normalization of virtue. There are things that one just does not do, and this is part of the kind of bottom-up process of acculturation and socialization that Dr. Huising discussed.
Fifth, there can be a mismatch between the technology and the regulatory mechanisms, in terms not just of government regulation but also of the societal controls that one has over technologies. Regulatory control mechanisms developed for a different environment may not have the requisite variety for a technology that is moving very quickly. Another problem one runs into is the neglect of opportunity costs: a good deal is known about the technologies in which one has invested and much less about the ones in which one has not.
Dr. Fischhoff concluded, in the spirit of Dr. Bloom’s two personal comments, with two recommendations.
- Given the difficulty of bridging the basic and social sciences, there would be value in creating centers that would serve as a kind of clearinghouse, helping interested biologists find social scientists who could help them work on their problems, and helping social scientists find the people with whom they are willing to work. Such centers could also help make the case to department heads that this is a worthy pursuit: spending as much time as all three of the session's speakers have spent working with clients, applying the social science that is available, and creating the evidence needed for what some people call adaptive management.
- The second is to develop shadow alternative evaluation processes. That is, if current mechanisms are not up to it, alternative mechanisms are needed. Monica Schoch-Spana’s talk illustrated the potential to bound the set of deliberative mechanisms whereby this might work. But one will not really know how they would work until people with the different kinds of expertise and cultural experiences come together and explore them. And one might hope that if there were some worked examples—maybe like some of the conventions that people have talked about—they would eventually become the normal thing that people do. It is very hard to get people to repeal regulations that promise safety, but sometimes they just atrophy. And maybe they will go away if we have something better.
Philip Dormitzer offered his reflections on the ideas raised by what Dr. Fineberg called the "interested parties" in his session ("Best Practices to Inform National Policy Design and Implementation: Perspectives of Key Stakeholders in the Biomedical and Public Health Communities"). He began with Michael Callahan, who pointed out that the European Union and the United States are not the future epicenter, and may not even be the present epicenter, of GOF research. Similarly, government funding may not necessarily be the dominant mode of funding for this research, so it is necessary to expand the thinking about how one might influence these processes. Another very interesting point came from the case studies he offered, in which mechanisms of control of infectious agents of concern were lost not through any malicious intent but through the necessities facing people operating under difficult circumstances. There are circumstances where consultative mechanisms might help, where forms of assistance might help, and also where incentives need to be created to encourage people to limit risks when there is no capacity to regulate their behavior.
Robert Fisher had discussed the inherent conflict between the need for evidence-based decision making at the regulatory level, which is necessarily time consuming and expensive, and the frequent need to act quickly, particularly in emerging or outbreak situations. This conflict has to be reconciled, and the considerations around policy for GOF studies of concern play into that. This also raised the earlier point that risk can really only be judged in the context of expected benefit: without benefit, why would one take any risk? These things play into the sorts of mechanisms that one might pursue to try to control the risks of GOF studies of concern.
Dr. Dormitzer commended Jonathan Moreno for trying to identify areas of consensus regarding policy for GOF research. He did not know if everyone agreed on those areas of consensus, but he thought they were close enough to be worth mentioning. There is consensus that there are times when it is necessary to move quickly, but also that some regulation is needed. There is consensus that biocontainment is imperfect and that risk mitigation heavily involves human factors, especially as the mechanical and environmental factors come under better control. He thought there was also consensus that it would be desirable to have alternatives to risky experiments, and that while GOF experiments are not fully predictable, the capacity to predict their outcomes is probably improving.
Dr. Moreno also had a very interesting proposal for what he called R-BATs, or Risk–Benefit Assessment Teams. The idea is that there would be real-time, ongoing, interactive evaluation of experiments of concern or experiments that may not yet be of concern but could venture into that area so that there was not simply a checkpoint—for example, at the time of funding and another at the time of publication—but an ongoing process of interaction. Dr. Dormitzer thought that might not take care of the whole issue, but it could make a very solid contribution.
Finally, Ethan Settembre had discussed some of the lessons of the 2009 H1N1 pandemic and then the H7N9 outbreak response in 2013, making the point that GOF research is an inherent part of the routine business of vaccine production. Unintended consequences of GOF policy choices therefore need to be considered.
Dr. Dormitzer noted that sequence analysis is today a part of risk analysis and vaccine virus selection, but it is at this point secondary to phenotypic, clinical, and epidemiologic characterizations. He thought, however, that this will start to shift over time. It is certainly never going to be the case that sequence analysis alone can replace current approaches, but the volume of relevant sequence data is likely to increase dramatically. It is now possible to sequence flu strains directly from harvested secretions; there is no need to grow the virus. The ability to do that sequencing is becoming increasingly widespread, and it is quite conceivable that it will be done on some sort of handheld device in the coming decade.
Dr. Dormitzer closed with some personal observations. One was an increasing need to consider integration of the multiple biosafety and biosecurity regimes. The other was a concern about unintended consequences: for example, "blowback" onto vaccine production from the controversies over GOF studies of concern, or GOF research more generally, in academia.
Ronald Atlas began the discussion of the session he had moderated ("International Governance: Opportunities for Harmonizing GOF Research Policy and Practice") by remarking that he had learned that the international dimensions of the debate about GOF research, risks, and benefits cannot be ignored. A number of possible ways of approaching the issue on an international scale had been suggested. One was to adopt a non-regulatory framework: take ethics or other sorts of systems that have gained traction and are accepted across the biomedical field, build on those, and essentially build a culture of responsibility within the community that would assure the public that everyone was taking the appropriate mitigation steps. Another was to simply accept that nations carrying out GOF research would develop their own regulatory frameworks. Another was to allow the ongoing efforts in places like the United States and the European Union to begin to cross-fertilize each other and to bring together groups that would allow for voluntary harmonization without going to an international organization like the World Health Organization (WHO). And finally, the highest level would be to go to a United Nations agency such as WHO and attempt the perhaps impossible task of coming up with a global regulatory scheme.
Dr. Atlas thought that another important point from the session came from Keiji Fukuda: the need to find a compelling and readily understood reason to come together at the international level to take action. What would that reason be for GOF research? Dr. Atlas suggested that it could be “preventing a global pandemic.” That could mean that the research is absolutely necessary because it will provide the vaccines, the surveillance, or whatever to prevent the pandemic. Or to take the opposite side, the research itself is a risk because something could get out and cause a pandemic. That is the dilemma underlying the entire debate over GOF research, and he was still not sure there would ever be an answer that was satisfactory to everyone.
Dr. Fineberg then asked if any NSABB members had comments or questions. Joseph Kanabrocki from the University of Chicago, co-chair of the NSABB Working Group (WG), began with some observations. He was heartened that the comments and discussion suggested that the NSABB had not made any major missteps. He was also pleased that there was movement away from a list-based system toward the phenotype-based system that the NSABB had been recommending for a number of years. That had not been explicitly stated, but he thought it was implicit in the discussions.
Dr. Kanabrocki said that, speaking personally, he had heard a number of things on which the NSABB WG had not yet deliberated that he would like to see added to the NSABB report. These included incident reporting mechanisms that could address the lack of data highlighted by the risk and benefit assessment, as well as the need for harmonization at both the national and international levels. He thought harmonization was something the final report should call for more explicitly, along with some of the ideas about how it could be accomplished. He also hoped that the NSABB would recommend a code of conduct for scientists engaged in this type of research.
Dr. Kanabrocki then returned to the three phenotypes recommended in the draft report as the criteria for identifying GOF studies of concern. The original version of the NSABB WG's Draft Working Paper included resistance to countermeasures as an example. He stressed that it was intended only as an example, but, unfortunately, people seemed to have seized on it as the sole aspect of the third criterion. So he wanted to remind everyone that for him, and he thought for most of the NSABB as well, the third phenotype is what makes this an issue of pandemic potential. The first and second traits go to the animal-pathogen interface, while the third trait is where one addresses human public health, the societal aspects of a pandemic. He thought that the third trait remains critical, though it might be possible to revise the language in a way that is more palatable.
Susan Wolf, an NSABB member from the University of Minnesota, raised the issue of oversight design and said she wanted to try out two ideas, one at the institutional level and one at the federal level. The issue is crystallized by the flow chart introduced at the symposium (see Figure 2-2), which the NSABB developed to communicate visually the oversight process it is planning. At the institutional level, who decides that an experiment is a potential GOF study of concern? At the moment, the NSABB is envisioning that the initial determination would be made by the PI and the local oversight authorities, presumably the IBC. Her concern was how to avoid recapitulating the history of Institutional Review Boards (IRBs), which she characterized as having been very slow to design, much less put in place, the sort of "learning" oversight system in which there is a systematic effort to gather experience, share lessons learned, and identify unjustified variations in how the rules are applied. There is a substantial amount of research on this problem, and she hoped it would be applied to ensure that the GOF system would be state of the art.
Her other concern was at the federal level and what would happen if a GOF study of concern is identified at the local level. Who would review
it and apply the several principles the NSABB was proposing? Could one answer be a new Federal Advisory Committee Act (FACA) committee charged with this task?
Marie-Louise Hammarskjöld, an NSABB member from the University of Virginia, asked about the issue of how to capture research done without federal funding, citing increased interest from industry in university research. She thought that, given that the concern was potential pandemic risk, the board might not be doing its job if it did not deal with that part of the research enterprise.
Jim LeDuc, an NSABB member and Director of the Galveston National Laboratory at the University of Texas Medical Branch, was particularly interested in risk mitigation. His question to the panel was how to create a foundation upon which a policy can be built that clearly articulates the requirements for biosafety and biosecurity and, importantly, a culture of responsibility that spans from the individual scientist all the way through to the institutional leadership.
Dr. Atlas reacted to the question of the IBC versus the national level and suggested that a great deal was learned during the early days of the Recombinant DNA Advisory Committee (RAC). He commented that the IBCs sent cases to the full national board until the RAC was able to demonstrate to the local IBCs what was and was not of greater concern. The RAC refined the principles, and he thought the same approach should be taken for GOF studies of concern. What is needed is to create a learning process, an iterative process, where there is appropriate consultation from the national back to the local and eventually the local learns how to handle the cases and the burden on the national board diminishes.
The RAC had also dealt with the question of federal funding. It turned out that the first cases that came to the RAC were from industry, which wanted the national approval. Industry did not want to go around the system; it wanted to become part of the system even though it was not mandated to do so. He had no reason to think the same thing would not happen here.
Dr. Dormitzer said he could certainly speak from experience in companies: when there are national, accepted standards, companies in general want to follow them, even when not required to do so. In fact, the most distressing situations are those where there is a lack of clarity about what the expectations are. That is why the ideas about advisory boards and groups to which companies can turn to ascertain what those standards are, even if compliance is voluntary, are useful. He thought there would be a widespread desire to meet the standards.
Dr. Fineberg added a comment about the discussions of the importance of the scientific community building and reinforcing a culture of safety, and of the importance and practicality of public engagement with the various types of publics. It seemed to him that, in its thinking going forward, the NSABB would find it useful to consider a model that incorporates, at an appropriate level, a FACA-like entity and relevant public participation as a way of building the kind of larger trust, and, frankly, reinforcing the culture of safety, both within and around the scientific community, on which success ultimately will depend.
Dr. Fischhoff commented that he had been involved with the Food and Drug Administration (FDA) over the previous few years as the Center for Drug Evaluation and Research developed a benefit–risk framework (FDA, 2013). The framework was developed jointly with its staff and resembles Kara Morgan's model of deliberative, criteria-based frameworks (see Box 2-3). It was designed to help people tell their story in a way that made the logic visible; one could compare across decisions and find the decisions that were, as someone had mentioned, anomalous, which gave industry a clearer sense of the kinds of things that the FDA was approving.
Dr. Fineberg made another observation on the first and fundamental question of phenotypic inclusiveness or exclusiveness. One of the things he heard repeatedly in the course of the discussion was the importance of circumscribing the domain of concern so that neither the scientific community, nor the regulatory authority, nor, frankly, the interested publics were needlessly burdened with a wide variety of questions that truly do not rise to a level of concern. At the same time, there was a lot of discussion as to whether the current formulation, in which a given experiment must affect all of the elements, is a sufficient degree of circumscription. He thought the real challenge for the NSABB was to reflect its actual intent in its description, and to do so in a way that is clear and understandable over time. For example, one could become overly fixed on models that depend on familiarity with influenza as the case in point. The policy that will ultimately be promulgated needs to be capable of dealing with GOF research and, increasingly, with experiments intended to develop entirely novel organisms with capacities and capabilities that are not currently expressed in any existing microorganism. Thinking that broadly, if one defines a phenotypic space whose dimensions are virulence, transmissibility, and resistance to treatment (if that is how one wishes to characterize it), one could imagine placing any organism at a point in that space. Thought of that way, there is a region of this space where one would not want research to go at all; there is a region where one would not want to require further review; and there is a region where, depending on the starting point and the direction of the experiment, to make the organism worse or to make it better (and this is where vaccine development comes in so importantly), it may be a topic that requires consideration as a GOF study of concern. He hoped that the NSABB would be able to mull over this question and to think about ways to characterize and describe exactly what it believes should determine consideration as a GOF study of concern; perhaps to be explicit about excluding vaccine development research, which is so fundamental to protection and actually contrary to the concerns; and to be able to apply the principles more generally as new ideas with different organisms naturally arise in the creative minds of scientists.
Dr. Kanabrocki agreed and said that he wanted to clarify again that, as his NSABB WG co-chair Ken Berns had said on the first day, the NSABB was not really worried about what goes in, but about what comes out. The NSABB WG was not saying that the experiments of concern are only those that would result in the three phenotypes. What they were saying is that the experiments of concern are those that result in an organism displaying those three phenotypes, and there is a difference, because one could begin with two of the three and add the third, and that would be an experiment of concern.
Dr. Fineberg then opened the floor to questions and comments from the participants. Wendy Hall from the Department of Homeland Security asked two questions about precedent. First, how important is it to have full awareness of the GOF experiments being proposed across the variety of different labs in the United States? She was not sure there is clarity across the academic community at any one point in time about who is planning and doing what. Her second question related to the experience with the Select Agent rules, which were implemented in some 300 labs with a substantial range in the quality of performance. In GOF research, is there any precedent, assuming the academic community had full visibility, peer to peer and institution to institution, for institutions to exert corrective pressure on one another to redirect or help labs that are not performing well? Her hope was to avoid the need for the government to come down with tough, restrictive language across the board, affecting everyone, because one or two labs made an error that reached the mainstream press.
Dr. Fineberg responded that her question reinforced the importance of the scientific community itself coming together in a coherent way on this and related issues of safety and security. From a personal point of view, he did not think the government alone could accomplish this, nor could the community, acting without the guidance of shared standards. So he thought the efforts would be mutually reinforcing.
Dr. Schoch-Spana from the Center for Health Security of the University of Pittsburgh Medical Center picked up a point that Marc Lipsitch from
Harvard University had made about the capacity for innovation, not just prevention. Are there things, such as special research funds, that could incentivize scientists to try alternative approaches to GOF studies of concern? If systems are put in place and data are gathered about the kinds of experiments that are not funded, those data could be synthesized to identify lines of work that need to be replaced with safer alternatives, and research to develop those alternatives could be eligible for special funding.
Nicolas Evans from the University of Pennsylvania offered two comments. The first concerned the Declaration of Helsinki, a landmark early effort in establishing norms in human subjects research and biomedical ethics. But he thought that the FDA’s removal of the Declaration of Helsinki from its regulations indicated that, when using it as a model for governing the life sciences, one should be especially careful about the way international collaboration is sought. If the United States sets up or initiates other arrangements for governing GOF research, only to pull out of them because it does not want them referenced in its own legislation, that would pose a major problem. He also built on Dr. Lipsitch’s and Susan Wolf’s comments about the critique that IRBs and biomedical ethics chill biomedical research, commenting that it had been made many times and citing two recent works (Klitzman, 2015; Schneider, 2015).
Dr. Evans also offered three other comments.
- He thought it was very important conceptually to make a clear distinction between GOF research in general, which is accepted as a valuable and commonly used technique, and the specific GOF experiments resulting in the creation of novel pandemic pathogens, only a portion of which are beneficial. For example, the Gryphon Scientific benefits assessment had concluded that a portion of the studies it assessed provided unique benefits.
- Dr. Evans noted that health care workers, the people who bear the disproportionate burden of risk in the event of an infectious disease outbreak, had been entirely absent from the discussions.
- Regarding innovation, he commented that because some $820 million had been invested in synthetic biology research over the past half decade, it seemed prudent also to spend a small amount of money on innovation in applied biosafety, such as materials science to improve personal protective equipment.
Jenna Ogilvie from the National Academies of Sciences, Engineering, and Medicine’s staff brought two questions from the Web. The first was from Grigory Khimulya from Harvard College. Do current oversight frameworks provide adequate treatment of novel pathogens that
were never seen before and are not on the pathogen lists mentioned in the NSABB’s draft recommendations? For example, if a new potentially pandemic pathogen like Middle East respiratory syndrome (MERS) is identified, would GOF studies of concern with this pathogen fall under the proposed regulation? The second question, to Dr. Casagrande from Gryphon Scientific, came from John Kadvany from Policy & Decision Science in Menlo Park, California, prompted by publications suggesting that GOF research has characteristics of so-called “normal accidents,” in which a technology combines highly negative outcomes (e.g., a nuclear plant meltdown) with unquantified and perhaps unquantifiable scenarios falling outside even the most complete probabilistic risk analysis. Gryphon Scientific’s work suggests that such scenarios may be relevant, with the extreme negative outcome being pandemic risk. Did Dr. Casagrande have an opinion on this characterization of GOF studies of concern? Is it correct in some respects, as it may be for some contemporary technologies? Or is such a characterization itself fueling clashing perceptions of GOF risk?
Dr. Casagrande, commenting from outside his role as PI of the risk and benefit assessment, pushed back a little on several comments he had heard about what could be learned from the successes of the BWC. He thought that the 1925 Geneva Protocol, which banned the first use of bacteriological warfare, was a better exemplar. In contrast, several members of the BWC had violated its provisions, leading him to conclude that one ought to learn from the BWC’s failures, such as the lack of a verification and inspection regime and the lack of an enforcement capability that is relevant internationally.
Dr. Lipsitch from Harvard University commented that there had been considerable discussion about whether there is consensus that there are any experiments that everyone would agree would never be acceptable and any experiments everyone would agree should never be impeded. He said he could certainly think of experiments and developments one would never want to impede and suggested that there should be a green line as well as a red line. He thought that whatever regulatory framework or oversight framework is developed, it would be incredibly helpful to have at least those two kinds of cases spelled out by some examples in order to build our intuition for the next time something comes up that is not envisioned yet. He also thought some more contestable case studies, where there would not be an easy consensus, would be useful.
Dr. Kanabrocki responded to Dr. Lipsitch. The NSABB WG had tried on a number of occasions to think of experiments that absolutely should not be done. And every single example that came up was of an experiment that lacked scientific merit. So he suggested that, in his personal view, it would be a struggle to think of experiments that have scientific merit that should not be done.
Gerald Epstein from the Department of Homeland Security suggested that it would be useful to go back to the Department of Health and Human Services (HHS) framework that Larry Kerr had described on the first day, and the tests that a proposed project would have to satisfy before it was deemed acceptable for funding. One was that the pathogen to be constructed might occur by a natural process, so that there was a reasonable expectation nature might get there first. If it is not something nature might do on its own, one could not argue that the work was needed to defend against a potential natural development. This might be an example of something on the other side of the line, at least from the precedent of the existing HHS framework.
Dr. Fineberg closed the session by expressing the Academies’ deep appreciation to everyone who had taken part, in person or via the Web. He commended the work being done in Europe and commented that, in his view, a policy about GOF research that applies only to one country is not a policy that will work for the safety of the world, something of which one needed to be very mindful. He also commented that it was evident from all the discussion that whatever conclusions and recommendations emerge from the NSABB in its next iteration will really be one step in a process that is likely to continue. It will require continued refinement, the engagement of the scientific community, and creative ways for the public that is interested in and affected by GOF research to be involved in the process of decision making going forward.