
5

Using Science-Based Risk Assessment to Develop Food Safety Policy

The purpose of this session was to explore how science-based risk assessments are used to develop food safety policy. The session began with an overview of the risk assessment process, followed by the promises and pitfalls of risk assessment, the recently completed federal government microbiological risk assessment of Salmonella Enteritidis in eggs, and risk communication. The session ended with a report on a series of World Health Organization consultations on microbiological risk assessments.

HISTORICAL PERSPECTIVE OF RISK ASSESSMENT AND REVIEW OF STEPS IN THE PROCESS

Presented by Joseph V. Rodricks, Ph.D.

Managing Director, The Life Sciences Consultancy

Risk assessment is the process through which information on risks is identified, organized, and analyzed in a systematic way to yield a clear, consistent presentation of the data available for practical decision-making. It is not a formula but an analytical framework that defines the types of data and methodologies to be used in analyzing a risk, explains why they are used, and details the uncertainties and problems associated with particular assessments. The results of the risk assessment process are then the basis for the risk management process, by which solutions for controlling risks are obtained. The purpose of risk management is public health protection.

The first attempts to deal with hazardous agents began in the 1940s and 1950s, when toxicologists examined data on hazardous chemicals, such as pesticides and food additives, and derived limits on exposure in order to protect human health. In 1954, two Food and Drug Administration (FDA) toxicologists, Lehman and Fitzhugh, published a paper that defined the basis for what is now referred to as the acceptable daily intake (ADI), a level thought to be a threshold intake of a chemical for a very large population, below which there should be no significant toxicity risks. In this paper, the toxicologists not only characterized a procedure for defining the ADI, they also described the use and application of safety factors and how animal data would be used, so that interested individuals could understand how ADIs were derived. The development of the ADI was based on the notion that hazardous chemicals will not be a



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




problem unless a threshold dose is exceeded. All substances express toxicity at sufficiently high doses, but under the Lehman-Fitzhugh model, all such substances would be safe (i.e., pose no significant risks) unless the threshold dose was exceeded. The problem they attempted to solve was to identify the threshold dose for a large and variable human population.

This threshold model was not applied to carcinogens. Exposure to carcinogens at any level above zero was thought to increase the probability of a carcinogenic process moving along toward completion. This gave rise to the phrase “no safe level” and the Delaney Clause, which required zero tolerance for any intentionally introduced food additive demonstrated to cause cancer in laboratory animals or humans. For this reason, regulatory agencies often avoided dealing with carcinogens and either banned them where a ban was straightforward, ignored them, or resorted to criteria unrelated to health for decision-making. In 1973, FDA developed a model for the relationship between exposure and carcinogenic risk that assumed the absence of a threshold and a direct proportionality between dose and risk. Through the use of this model, FDA determined that human health could still be protected at a very small predetermined level of risk and that scientific uncertainties would be addressed through conservative, health-protective assumptions.

In 1983, in response to a Congressional request to set up a separate, nonfederal institution to conduct risk assessments and keep them “untainted” by the regulatory process, the National Academy of Sciences published a report titled Risk Assessment in the Federal Government. This report, for the first time, clearly elucidated a framework for both the risk assessment and risk management processes.
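The two dose-response views described above can be sketched in a few lines of code. This is purely illustrative: the NOAEL value, the 10-fold safety factors, and the slope factor below are hypothetical numbers chosen for demonstration, not values from any regulatory assessment.

```python
# Illustrative sketch only; all numbers are hypothetical assumptions.

def adi_mg_per_kg_day(noael, interspecies_factor=10.0, intraspecies_factor=10.0):
    """Threshold (Lehman-Fitzhugh) view: acceptable daily intake derived from
    an animal no-observed-adverse-effect level divided by safety factors
    (traditionally 10x for animal-to-human and 10x for human variability)."""
    return noael / (interspecies_factor * intraspecies_factor)

def linear_no_threshold_risk(dose, slope_factor):
    """Linear, no-threshold view (as in FDA's 1973 model): excess risk is
    directly proportional to dose, with no safe level above zero."""
    return slope_factor * dose

# Hypothetical NOAEL of 5 mg/kg/day from a rodent study:
adi = adi_mg_per_kg_day(5.0)                               # 0.05 mg/kg/day
excess_risk = linear_no_threshold_risk(0.001, slope_factor=0.02)
```

The contrast is visible in the structure: the first function produces a dose below which exposure is considered safe, while the second assigns a nonzero risk to every dose above zero.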
An updated version of the report, published by the Academy in 1994, further promoted the rise of explicit regulatory guidelines for risk assessments to ensure that risk assessments would not be manipulated, on a case-by-case basis, to achieve predetermined regulatory outcomes.

The risk assessment and management processes were developed for two major reasons. One of the most important is that, in almost all cases, it is beyond current technological capabilities to directly measure risks to large populations from chemical agents, pathogens, and other hazards; without going through the risk assessment process, there is no scientific basis for regulatory decision-making. Another reason is that statutes require premarket determinations of safety, so that the level of risk a substance poses to human health can be evaluated prior to exposure.

Initial risk evaluation of an agent involves defining its characteristics, specifically its inherent hazardous properties. This includes describing the kind of toxicity or the type of illness it causes, as well as whether the information is derived from human, animal, or other studies. Further evaluation frames the dose-response assessment, an analysis that defines how the severity or incidence (or both) of adverse effects change with exposure conditions. The final stage in the evaluation of an agent is the risk characterization process, which estimates the risks involved and describes the potential uncertainties for the population being evaluated. This step defines the distribution of a population around a predetermined threshold or estimates the probability of an effect in the population over a period of time. It answers the question of how many people might be affected by the agent and to what degree.
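The risk characterization step just described, locating a population's distribution relative to a threshold, can be sketched with a toy simulation. The lognormal threshold distribution and the exposure levels below are assumptions made for illustration, not data from any actual assessment.

```python
# Hedged sketch of risk characterization: given an assumed distribution of
# individual thresholds in a population and a fixed exposure level, estimate
# the fraction of people whose personal threshold is exceeded.
import random

random.seed(7)

# Assumed variability: individual thresholds (mg/kg/day), lognormally
# distributed around a median of 1.0 (illustrative numbers only).
thresholds = [random.lognormvariate(0.0, 0.6) for _ in range(100_000)]

def fraction_affected(exposure, thresholds):
    """Share of the simulated population whose threshold lies below the exposure."""
    return sum(t < exposure for t in thresholds) / len(thresholds)

low_dose = fraction_affected(0.1, thresholds)
high_dose = fraction_affected(2.0, thresholds)
```

The output answers the question posed in the text: at a given exposure, roughly how many people might be affected, and how that number grows as exposure rises.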
From this information, risk management decisions can be made about exposure levels that pose insignificant risks for large populations, taking into account not just the data, but its limitations and applicability to large populations. In analyzing and working with the data, two areas requiring special consideration are accounting for variability and identifying exceptions. Currently, adequate research is not available to provide data on distributions for either thresholds or effects in populations or to

specify the effects of particular levels of exposure. Thus, variability is dealt with through the use of uncertainty factors, typically factors of 10. Additionally, risk is still described as a function of dose for a range of exposures above zero through the use of linear models. A more common way that food safety decisions are currently made is to define risk goals and apply regulatory measures at specific hazard and dose-response levels. Regardless of the method used, controversies arise in both of these decision-making models over the amount of data needed to make such decisions.

Risk assessment and management processes continue to be scrutinized and improved upon in order to create effective processes for incorporation into public health or regulatory decision-making. Current improvement efforts focus on issues such as variability and risk distributions in a population rather than reliance on point estimates. Despite some weaknesses in these methodologies, both risk assessment and risk management will continue to be valuable analytical techniques for organizing data on hazardous agents in order to make practical decisions.

PROMISE AND PITFALLS OF RISK ASSESSMENT

Presented by George M. Gray, Ph.D.

Deputy Director, Harvard Center for Risk Analysis

The role of risk assessment in food safety is growing. Evaluations of food-borne pathogens, pesticide residues, and genetically modified organisms inform and influence important policy decisions. Risk assessment has great promise for guiding food safety policy, but several pitfalls must be avoided. If these shortcomings are addressed, we can be confident that risk assessment will help us make the best use of scientific information in food safety decisions. Years of risk assessment experience in engineering, environmental evaluation, and food safety have highlighted four pitfalls:

1. Ignoring Variability.
Variability is important because not everyone in a population is at the same risk. The public understands sources of variability like differences in food consumption or water intake and expects risk assessors to reflect these facts. Quantification of variability can aid risk management in identifying high-risk groups or new mitigation strategies. Reporting risks as population averages hides too much information.

2. Ignoring Uncertainty. Risk assessments often must proceed in the face of incomplete data and knowledge. We may not know the true range of consumption of a particular food, for example. The presence of this uncertainty means that single point estimates of risk are insufficient. Risk assessors must quantify uncertainty to help risk managers and the public understand how well a risk is known and to guide future research and data gathering efforts.

3. Favoring Consistency Over Science. There are often concerns that risks are assessed on a case-by-case basis and that a lack of standards will allow evaluations to be manipulated. On the other hand, science shows us that hazards are rarely similar, and standard methods cannot reflect the diversity of sources of risk. An example comes from the world of environmental risk assessment. The standard and consistently applied methods of cancer risk assessment used by the Environmental Protection Agency (EPA) assume a dose-response function that is linear in the low-dose region and has no threshold. There is evidence that some agents, like certain types of radiation and directly mutagenic chemicals, may indeed have this type of dose-response relationship. However, many

scientists believe the linear, no-threshold approach to risk estimation is inappropriate for many other chemicals, such as some that are not direct mutagens. This means that when EPA applies standard procedures to all chemicals, regardless of how appropriate they might be for a given substance, the amount of conservatism in a risk estimate varies greatly. A risk estimate for a powerful direct mutagen may be quite close to the calculated “plausible upper bound,” while for a nonmutagenic compound the estimate may be an extreme overestimate of plausible risk. Two risk estimates that are generated through consistent procedures may have very different levels of scientific plausibility. Risk assessment guidelines should be sufficiently flexible to reflect the science.

4. Not Evaluating the Influence of Assumptions. Risk assessors must choose specific data and models when undertaking an analysis. Often there are other scientifically plausible data or models that would have large effects on the results of an assessment. To avoid misleading risk managers and the public, risk assessors must present risk estimates characterized by alternative assumptions and methods. If possible, the choices with the greatest scientific support should be identified.

Managing these pitfalls will require the development of strong connections and lines of communication between the scientific and risk assessment communities. Risk assessment is often dismissed in the scientific community when it is perceived to ignore relevant science. At the same time, many in the scientific community are not aware of the methodological developments of state-of-the-art risk assessment. Risk assessors must reach out to scientists to aid in the characterization of hazards and consequences and in constructing and interpreting models. Peer review of both the science and the methods of a risk assessment will improve the analyses and increase the credibility of the results.
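The first two pitfalls, ignoring variability and ignoring uncertainty, are commonly addressed with two-dimensional Monte Carlo simulation, which can be sketched roughly as follows. Every distribution and parameter here is an invented assumption used only to show the structure of the technique.

```python
# Toy two-dimensional Monte Carlo: the inner loop captures variability
# (differences among consumers), the outer loop captures uncertainty
# (imperfect knowledge of the dose-response slope). Numbers are illustrative.
import random

random.seed(1)

def population_risks(slope, n_people=1_000):
    """Variability loop: each simulated person draws an individual daily
    intake (arbitrary units) from an assumed lognormal distribution."""
    return [slope * random.lognormvariate(0.0, 0.5) for _ in range(n_people)]

# Uncertainty loop: the slope itself is imperfectly known, so sample it
# from an assumed range rather than fixing a single value.
mean_risks = []
for _ in range(100):
    slope = random.uniform(1e-4, 5e-4)
    risks = population_risks(slope)
    mean_risks.append(sum(risks) / len(risks))

# Report a range across the uncertainty dimension, not a single point estimate.
low, high = min(mean_risks), max(mean_risks)
```

Reporting the interval from `low` to `high`, rather than one number, is exactly the move the text recommends: it tells risk managers how well the risk is known as well as how large it is.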
Transparency in the process is necessary for building trust with all stakeholders. Anyone should be able to recreate a risk assessment based upon the documentation of the study. As risk managers, scientists, and risk assessors begin to address these pitfalls, risk assessment will become a more useful and effective tool for food safety.

USING RISK ASSESSMENT TO ESTABLISH FOOD SAFETY POLICY: SALMONELLA ENTERITIDIS

Presented by Robert L. Buchanan, Ph.D.

Senior Scientist, Center for Food Safety and Applied Nutrition, Food and Drug Administration

Risk assessment techniques are increasingly being applied to microbiological food safety hazards. These techniques are powerful tools for incorporating science, identifying priorities, reducing complexity, and evaluating strategies in the regulatory process. Their purpose is to provide the information necessary for decision-making. This information can include, but is not limited to, known and unknown factors, the level of uncertainty and variability, the amount of bias or constraints present, and methods for making the entire process transparent.

One of the first quantitative microbial risk assessments undertaken in direct support of a regulatory decision-making process concerned the human pathogen Salmonella Enteritidis. Salmonella Enteritidis is one of the most common serotypes associated with food-borne illness and can cause primary gastroenteritis, a potentially life-threatening illness in high-risk populations. From 1976 to 1995, there was an approximately eightfold increase in Salmonella

Enteritidis cases, with outbreaks appearing regionally in both the United States and Europe. Due to the increased number of outbreaks and the rate at which Salmonella Enteritidis was being isolated in the environment, the Centers for Disease Control and Prevention and the Food and Drug Administration conducted detailed studies of this emerging food safety concern. From these studies, the agencies concluded that outbreaks were almost always associated with the consumption of undercooked, otherwise clean shell eggs and that the source of the contamination was primarily an increased incidence of transovarian infection within chickens before the egg is formed. The cause of the infection, which is associated with infections of either the ovaries or the oviduct of the chicken, is under active investigation.

In 1996, in response both to a risk assessment clause requiring the U.S. Department of Agriculture (USDA) to conduct a risk assessment prior to undertaking any new major regulatory action and to the increased incidence of illnesses related to Salmonella Enteritidis, the USDA's Food Safety and Inspection Service (FSIS), in conjunction with several other USDA and U.S. Department of Health and Human Services agencies, initiated a microbial risk assessment. The assessment was also undertaken to evaluate the public health impact of a 1991 Congressional amendment mandating that shell eggs packed in containers destined for consumers be stored and transported at an ambient temperature not to exceed 45° F and that the containers be labeled to state that refrigeration is required.

As a preface to the steps taken by the Salmonella Enteritidis risk assessment team, it is important to highlight several unique characteristics of the microbial risk assessment process.
Unlike a chemical risk assessment, a microbial risk assessment deals with a single cell or one unit of infection, and the primary interest is in finding or developing appropriate mitigations to minimize risk rather than setting exposure limits or ranges. Microbial risk assessments are usually categorized as either risk ranking exercises or product pathogen pathway analyses. Risk ranking exercises prioritize multiple risks for resource allocation. Product pathogen pathway analyses assess the entire process from beginning to end and then elucidate the risk of an adverse reaction within a given population. This type of analysis is used to model specific combinations of pathogens and products in order to identify the risks and contributing factors associated with a particular hazard.

In the case study on Salmonella Enteritidis, the core team employed a product pathogen pathway analysis to assess the pathogen. The team divided the analysis into five modules to track the movement of contaminated eggs through 16 different pathways. Hazard identification and dose response were known, and risk characterization was defined using multiple endpoints. The product pathogen pathway analysis proved to be an extremely powerful tool, not only for identifying data gaps and areas of research need, but because it gave risk managers the ability to evaluate a variety of mitigation or risk reduction strategies quantitatively. For example, the refrigeration temperature mandated under the 1991 amendment was evaluated as a risk reduction strategy. Using the model, it was determined that there would be an approximately 8 percent decrease in human illness if shell eggs were handled at or below a 45° F ambient temperature during distribution, reflecting the fact that under certain conditions Salmonella Enteritidis multiplies in eggs at ambient temperatures of 50° F and above.
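A product pathogen pathway analysis of this kind chains per-stage probabilities from farm to table and then compares mitigation scenarios against a baseline. The toy model below illustrates only the idea: the module probabilities, serving count, and assumed mitigation effect are invented for demonstration and are not taken from the FSIS Salmonella Enteritidis model.

```python
# Toy product pathogen pathway model; all numbers are invented assumptions.

def expected_illnesses(p_contaminated, p_growth, p_undercooked,
                       p_ill_given_exposure, servings):
    """Chain the per-serving probabilities along a simple farm-to-table
    pathway and scale by the number of servings consumed."""
    p_ill = p_contaminated * p_growth * p_undercooked * p_ill_given_exposure
    return p_ill * servings

SERVINGS = 50_000_000  # hypothetical annual servings

# Baseline pathway: some eggs experience temperature abuse allowing growth.
baseline = expected_illnesses(1e-4, 0.30, 0.20, 0.5, SERVINGS)

# Mitigation scenario: refrigerated distribution lowers the chance that the
# pathogen multiplies in transit (assumed effect size, for illustration only).
refrigerated = expected_illnesses(1e-4, 0.27, 0.20, 0.5, SERVINGS)

relative_reduction = 1.0 - refrigerated / baseline
```

Because each stage is an explicit parameter, a risk manager can vary any one module, as FSIS did with refrigeration temperature, and read off the predicted change in illness, which is what makes the pathway structure useful for comparing mitigations.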
Therefore, FSIS made the decision to use the ambient temperature of 45° F as the requirement for the distribution, display, and storage of shell eggs. FSIS is currently expanding the egg product model to develop scientifically sound guidance for egg product producers. In summary, product pathogen pathway analysis is a powerful new tool for organizing information, evaluating potential risk reduction strategies, and identifying and prioritizing future

research needs. Since models can be readily updated and expanded as new data become available, the approach is also an effective method for quantitatively linking regulatory programs to actual public health consequences and for forecasting. However, it is not an answer unto itself, and the weaknesses and possible pitfalls inherent in the technique need to be recognized. It should be stressed that the process must be as transparent and understandable as possible in order to be effective. Scientists, risk managers, policy makers, and the public need to work together to develop a food safety system that meets the level of tolerable risk and then continue to improve upon it.

RISK COMMUNICATION: DEFINING A TOLERABLE LEVEL OF RISK

Presented by Susan L. Santos, Ph.D.

Founder, Focus Group

The National Research Council (NRC), in its 1996 report Understanding Risk, defined risk communication: “Risk communication is an interactive process or exchange of information and opinions among individuals, groups, and institutions. It involves multiple messages about the nature of risk, and other messages, not strictly about risk, that express concerns, opinions, or reactions to risk messages or to legal and institutional arrangements for risk management.” Risk communication is a process by which all stakeholders are given the access and information they need to understand and participate in an issue. To be effective, risk communication must be an interactive process involving not just the scientific aspects of a risk. The public needs access to information to gain more knowledge about the issues involved.

Risk communication is often an emotional and value-laden process, and the dilemma facing many risk managers is developing ways to incorporate and balance the weight of social and scientific factors. The problem, more simply stated, is that technical experts tend to focus primarily on the science rather than on societal concerns. How can this dilemma be avoided or solved?
First, it is important to highlight and further elucidate the complex construct associated with risks. The risk assessment process involves both variability and uncertainty, and risk characteristics are both objective and subjective, a duality often reflected in the public's response. Risks are not one-dimensional and must be viewed in their cultural, social, and political dimensions. Problems arise when known scientific facts, estimations and assumptions, and legitimate social and political factors are poorly differentiated for both scientists and stakeholders, or when legitimate factors beyond science are not included in the assessment. The definition and assessment of risk must be both a scientific and a social process and should include what the 1996 NRC report referred to as a broader “analytic-deliberative” process to fully define and characterize risks. It is the dichotomy between the scientific and social issues that sometimes confuses the risk assessment and communication processes and how they are translated for decision-making.

Barriers to effective risk communication include not only a lack of understanding by the public of the technical issues involved in risk assessment, but also the public's lack of trust in science, particularly in the government and industry. Scientists are not trained in communicating with lay audiences, and thus complex and confusing messages are often produced in an effort to transmit information about an issue. Media distortion is also an obstacle to effective communication, and the media often focus on one side of an issue at the expense of another. How can the gaps between how the public and media view and discuss risk and the way

scientists and decision-makers talk about it be understood and bridged? And how do scientists gain the trust of the public?

One of the most useful techniques for overcoming these barriers to communication is ensuring that the target audience or stakeholder group understands and is involved in defining the scope of issue analysis as well as the deliberation. To do this, risk communicators must first determine who the target audience is, that is, who will see the issue as relevant and salient, and then tailor a clear message to that particular audience using the most efficient delivery channels available. Risk communicators must also be transparent and open about the risk assessment and risk management decision-making process and explore opportunities to make risks less involuntary, to create a climate of trust, and to allow a better exchange of information with stakeholders. Risk managers must translate scientific findings into understandable terms that can be both given to the media and appropriately communicated to the public. Risk assessors and communicators need to adapt to and comprehend how stakeholders frame risk issues, and risk managers must recognize the role of those qualitative dimensions in risk management. They must also acknowledge that the questions and concerns of the public and of experts are likely to be different and that both are valid.

In closing, risk communicators and managers need to analyze and better understand the ways the public receives information and improve their understanding of the public's concerns and information needs. Risk managers need to find ways of incorporating qualitative values and information into decision-making. A framework must be developed, via a coordinated effort between the public and scientists, that helps to outline what messages, what channels, and what spokespersons are used to communicate risk. The issue of trust and credibility must continue to be addressed, along with transparency in decision-making.
Defining a “tolerable level of risk” clearly requires good science and risk managers who are willing to open up to and value all stakeholder contributions to the process.

JOINT FAO/WHO CONSULTATION ON RISK ASSESSMENT OF MICROBIOLOGICAL HAZARDS IN FOOD

Presented by Lester M. Crawford, D.V.M.

Director, Georgetown Center for Food and Nutrition Policy

The fourth and final report in a series of World Health Organization (WHO) and Food and Agriculture Organization (FAO) consultations on microbial risk assessment was recently released. The consultations were designed to institutionalize microbial risk assessment as a tool for international food safety. The first consultation was held in 1995 to examine whether microbial risk assessment was internationally feasible as a way to solve the microbiological problems that beset international food trade and international public health. Since there are many conflicts around the world with respect to food safety issues, it was thought that microbial risk assessment could be used as an international instrument to bring nations closer together.

The task of the final consultation was to determine how microbial risk assessment should be done. It was determined that WHO and FAO, under the rubric of the Codex Alimentarius, would be the providers of expert advice. In addition to providing expert advice, WHO and FAO would serve as a clearinghouse for individuals to work with individual nations. They would review and interpret the microbial risk assessments and provide advice on how to use these at the national

level. One of the first steps will be to develop a model for a microbial risk assessment. Nations will then need guidance on how to use the microbial risk assessment and on its advantages. If microbial risk assessment becomes the national and international instrument, it will also identify risk managers and train them in how to deal with risk assessments. The interface between risk assessors and risk managers will be the most difficult part of the process. The report calls for WHO and FAO to be the stimulus for regional and international risk training.

One of the products of WHO and FAO activities will be a communications network that flows from a central focus to countries that have and have not made progress in microbial risk assessment. It is hoped that WHO and FAO will also build a body of literature at the international level with cooperation from national institutions. In addition, regional offices such as the Pan American Health Organization and others should promote microbial risk assessment. Most importantly, WHO and FAO should develop a decision support tool or tools and offer technical cooperation. The report suggests that resources for these activities should come from national governments. In addition, bilateral agencies should be involved, and the case studies should be collaborations between developed and developing nations.

A multilateral body such as the World Trade Organization (WTO) has been very important in dealing with conflicts between nations, such as the European ban on meat from animals treated with anabolic steroids or hormones. When the WTO overturned the European ban on hormone-treated meat, Europe demanded 15 months to do a risk assessment, because this had not been done before Europe took action. Had the European Union analyzed the risks earlier, much embarrassment could have been avoided.