4. DATA EVALUATION
Pages 55-80



From page 55...
... As the evaluation of the toxicity information progressed, additional documentation was added to each dossier until, in addition to the synopsis, it contained:
· A summary of adequacy ratings for tests that were required for a substance's intended uses according to the standards adopted by the Committee on Toxicity Data Elements, as well as the tests required for occupational and environmental exposure, indicating which tests had been performed, the documents in which they were reported, and judgments of adequacy of the test protocols.
· A summary of the amount and quality of all information in the dossier for assessment of the substance's potential hazard to human health.
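A dossier organized along those lines might be represented as in the following minimal sketch; the dataclass and field names are assumptions for illustration, not a format described in the report.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestAdequacyRating:
    """One entry in the summary of adequacy ratings (hypothetical structure)."""
    test_type: str              # a test required for the substance's intended uses or exposures
    performed: bool             # whether the test had been performed
    source_documents: List[str] # documents in which the test was reported
    protocol_adequate: bool     # judgment of adequacy of the test protocol

@dataclass
class Dossier:
    """Dossier contents after evaluation, following the description in the text (names assumed)."""
    synopsis: str
    adequacy_ratings: List[TestAdequacyRating] = field(default_factory=list)
    health_hazard_information_summary: str = ""  # amount and quality of information for hazard assessment
```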
From page 56...
... Results were used to estimate testing needs for the select universe. Answers to three fundamental questions describe the adequacy of the toxicity data base on a substance:
· What toxicity tests are needed for the substance?
From page 57...
... To estimate quality, the report of each test was compared with a set of reference protocol guidelines. Finally, the information was judged to be sufficient to assess the health hazard, in which case further testing would not be needed, or insufficient, in which case further testing would be needed.
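Rendered schematically, that two-step judgment might look like the sketch below; the function and argument names are placeholders, since the actual comparisons were expert judgments rather than computations.

```python
def further_testing_needed(test_reports, meets_guidelines, sufficient_for_assessment):
    """Schematic of the decision described in the text (names assumed).

    test_reports: the available test reports for one substance.
    meets_guidelines(report): stands in for the expert comparison of a report
        with the reference protocol guidelines.
    sufficient_for_assessment(quality_by_report): stands in for the judgment of
        whether the pooled information suffices to assess the health hazard.
    """
    quality_by_report = {report: meets_guidelines(report) for report in test_reports}
    # Sufficient information -> no further testing; insufficient -> further testing needed.
    return not sufficient_for_assessment(quality_by_report)

# Illustrative use with stand-in judgments:
reports = ["acute oral study", "90-day feeding study"]
print(further_testing_needed(reports,
                             meets_guidelines=lambda r: True,
                             sufficient_for_assessment=lambda q: all(q.values())))
```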
From page 58...
... [Flowchart boxes from the figure read: "Document and evaluate adequacy of information for specific use or exposure situations and types of tests" and "Further testing needed. Document and evaluate specific inadequacies of information for specific use or exposure situations and types of tests."] FIGURE 3 Outline of procedure for decision-making in evaluating adequacy of toxicity information on a specific substance.
From page 59...
... [Table fragment from the dossier-contents listing: "..., rate of absorption"; "Toxicity: Summary of all available toxicity information (see Appendixes B through G)"]
From page 60...
... To the extent feasible, the committee selected tests with routes of exposure similar to routes of exposure of humans under various circumstances. The Committee on Toxicity Data Elements recognizes that duration of exposure, as well as route, is intrinsically important in the manifestation and intensity of toxicity in test species and in the prediction of hazards to humans.
From page 61...
... GUIDELINES FOR ASSESSING THE QUALITY OF INDIVIDUAL STUDIES
BASIC CRITERIA FOR SCIENTIFIC METHODS
The Committee on Toxicity Data Elements believes that it is not appropriate to judge the adequacy of past and future studies solely by matching them against protocols that are considered acceptable today. The committee suggests that a study be considered adequate for use in a health-hazard assessment if it meets the following basic criteria:
· All elements of exposure are clearly described, including characteristics of the substance's purity and stability, and dose, route, and duration of administration.
From page 62...
... · Evidence of adherence to good laboratory practices improves confidence in the results.
SELECTION OF REFERENCE PROTOCOLS
The quality of individual toxicity tests may be assessed by answering the question: Does the quality of the information permit a
From page 63...
... A comparison of available tests with reference protocols, combined with the judgment of the committee relative to the basic criteria of scientific methods, enabled the categorization of substances with respect to the quality of toxicity-testing protocols. In selecting reference protocols for judging the quality of individual studies, the committee used various resource documents on short-term and long-term toxicity testing, with emphasis on those constructed through national and international collaborative efforts.
From page 64...
... There was a consensus in the OECD group that neurotoxicity testing should include behavioral assessment outside the laboratory holding facility and neuropathologic examination of various neural tissues after in situ perfusion. The Committee on Toxicity Data Elements agrees.
From page 65...
... The second set of criteria was established by the committee in the expectation that the data bases of only a few substances would meet all the requirements of the reference protocol guidelines, partly because much toxicity information was generated before the guidelines were developed. The committee expected that sufficient data might often be available for evaluation, even though some toxicity information would be missing and some data would be derived from experimental designs other than those prescribed in the reference protocols.
From page 66...
... If natural products were examined in a rote, rigid fashion, they might appear to be inadequately tested; however, a long history of widespread use without reported toxicity might suggest that no additional testing is needed, even though most recommended tests had not been conducted or had not been conducted according to the reference protocol guidelines. Alternatively, it is not always appropriate to assume that toxicity data are adequate and of satisfactory quality just because a substance is a natural product or has a long history of apparently safe use.
From page 67...
... For example, if the number of animals used in a study was short of that specified by the appropriate reference protocol guidelines, but the results were so definitive that the addition of more test animals would almost certainly not have affected the conclusion, the study was considered to be adequate for conducting a health-hazard assessment. This example illustrates that adequacy is necessarily judged in the context of results; the fact that many tests were judged to be adequate despite deviations from reference protocols does not mean that the reference protocols are unnecessarily rigid.
From page 68...
... Such a study was considered to be adequately conducted if it was a "limit" test -- showing a lack of developmental toxicity at a dose of at least 1,000 mg/kg of body weight -- so that no further testing of developmental effects may have been necessary. However, in the absence of a "limit" test, the study was considered to be inadequate for determining potential hazard to the conceptus if the highest dose reported did not produce an adverse effect on the dam or if the lowest dose could not be reasonably interpreted as a no-observed-effect level (NOEL)
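The branching in this passage can be sketched as below; the argument names are assumed, and the 1,000 mg/kg threshold is the one quoted in the text.

```python
def developmental_study_adequate(limit_test_dose_mg_per_kg,
                                 developmental_toxicity_seen,
                                 highest_dose_adversely_affected_dam,
                                 lowest_dose_is_noel):
    """Sketch of the adequacy judgment described for developmental-toxicity studies.

    Adequate if a "limit" test at >= 1,000 mg/kg of body weight showed no
    developmental toxicity; otherwise inadequate if the highest dose produced
    no adverse effect on the dam, or the lowest dose could not reasonably be
    read as a no-observed-effect level (NOEL).
    """
    ran_limit_test = limit_test_dose_mg_per_kg is not None
    if ran_limit_test and limit_test_dose_mg_per_kg >= 1000 and not developmental_toxicity_seen:
        return True
    if not highest_dose_adversely_affected_dam or not lowest_dose_is_noel:
        return False
    return True
```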
From page 70...
... Recognizing that this was a large and important task, the committee established five working panels, each with a designated leader and two or three other committee members. Assignments were not considered effectively random, because several substances were selected by panel leaders who were familiar with the substances' toxicity data bases.
From page 71...
... Variations in the consistency of decisions were reduced first by judging a study's adequacy against the uniformly applied set of reference protocol guidelines. These standards were used for studies of the same test type across all substances and by all persons making the judgments.
From page 72...
... The tabulations and interpretations of the evaluated data bases were used as a bridge for applying statistical inferences derived from the subsample to the select universe from which it was drawn.
LIMITATIONS OF THE DATA-GATHERING PROCESS
The approach developed to collect data on each substance included searches of the open literature through automated, on-line data files, such secondary sources as reference manuals and textbooks, government technical reports, files of the regulatory agencies (where available)
From page 73...
... Epidemiologic studies involve factors that are different from animal toxicologic protocols. Investigators must know not only the chemical and physical properties of substances and the quantitative and qualitative toxicity data from studies in animals, but also the extent of human exposure, its intensity, and other qualities of exposure that are needed to conduct an adequate study.
From page 74...
... Therefore, there was little information on many chemicals in commerce, and it was often impossible to determine whether a substance had minimal toxicity information. Substances are often selected for toxicity tests because there is a particular interest in them (e.g., because some toxic effect has been observed)
From page 75...
... CHARACTERIZING THE SAMPLE AND OPTIONS FOR DRAWING INFERENCES TO THE SELECT UNIVERSE In the second year of operation, the Committee on Toxicity Data Elements worked with the Committee on Sampling Strategies to identify encoded data in the dossiers that were critical to analysis of the sample and to extrapolation of the analysis to the select universe. Some of the basic critical data in the dossiers described the toxicity tests conducted and their quality.
From page 76...
... Some of these data, such as the frequency with which a given toxicity-test type could be found for each of the 100 substances in the subsample, were available from machine-readable files. Other information, such as the quality of reporting of toxicity tests or the ways in which the Committee on Toxicity Data Elements determined the adequacy of a given toxicity test, was not suitable for statistical analysis, but is presented in a qualitative form in the conclusions and recommendations.
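A tabulation of that first kind of datum might be computed as in this small sketch; the record layout is assumed, since the machine-readable files themselves are not described in the excerpt.

```python
from collections import Counter

# Hypothetical (substance, test type) records drawn from the evaluated dossiers.
records = [
    ("substance 1", "acute oral toxicity"),
    ("substance 1", "subchronic oral toxicity"),
    ("substance 2", "acute oral toxicity"),
]

# Frequency with which each toxicity-test type is found among substances in the subsample.
test_type_frequency = Counter(test_type for _, test_type in records)
print(test_type_frequency)  # Counter({'acute oral toxicity': 2, 'subchronic oral toxicity': 1})
```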
From page 77...
... For example, an analysis of all substances on the list of drugs and excipients in drug formulations could include as many as 32 subcategories defined by being on that list but on or off the lists of pesticides and inert ingredients of pesticide formulations, food additives, cosmetic ingredients, or chemicals in commerce. Similarly, an analysis of the entire select universe from which the sample was drawn would include up to 63 subcategories.
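The counts are consistent with a simple presence-or-absence calculation over the six source lists; the arithmetic below is a reading of the passage rather than a formula stated in it.

```latex
% Fixed on the drugs-and-excipients list, on or off each of the five other lists:
2^{5} = 32 \text{ subcategories}
% Any combination of membership on the six lists, excluding membership on none:
2^{6} - 1 = 63 \text{ subcategories}
```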
From page 78...
... Thus, the computing formulas that the Committee on Sampling Strategies used for estimates based on the sample are the stratified estimate $\hat{p} = \frac{1}{N}\sum_{h=1}^{H} N_h \hat{p}_h$, where $N_h$ is the size of stratum $h$ and $\hat{p}_h$ the proportion observed in the sample from that stratum, and a corresponding estimate $s^2$ of its variance. The sample sizes for any category are sufficiently large, with the estimated proportions not too near 0 or 1, that the sampling distribution for a category is approximately normal.
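Under that stratified-sampling reading, the estimate and a large-sample variance could be computed as in the sketch below; the strata shown are illustrative only, and the finite-population correction is omitted.

```python
def stratified_proportion(strata):
    """Stratified estimate of a proportion and a large-sample estimate of its variance.

    `strata` is a list of (N_h, n_h, x_h) tuples: stratum size in the select
    universe, sample size drawn from the stratum, and number of sampled
    substances in that stratum having the characteristic of interest.
    This is the textbook estimator, offered as a reading of the scanned
    formulas; the finite-population correction is omitted.
    """
    N = sum(N_h for N_h, _, _ in strata)
    p_hat = sum(N_h * (x_h / n_h) for N_h, n_h, x_h in strata) / N
    s2 = sum(N_h**2 * (x_h / n_h) * (1 - x_h / n_h) / n_h
             for N_h, n_h, x_h in strata) / N**2
    return p_hat, s2

# Illustrative strata: (N_h, n_h, x_h)
print(stratified_proportion([(10000, 30, 12), (25000, 40, 8), (18000, 30, 20)]))
```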
From page 79...
... The second factor is the estimated proportion, among substances that satisfy the screen, that also have the specified characteristic; it has a Bernoulli distribution, and lower and upper confidence limits have been calculated. Confidence limits for the product of the two factors were approximated as follows.
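For the Bernoulli factor alone, normal-approximation confidence limits look like the sketch below; the committee's specific method, and its way of combining the two factors' limits, are not reproduced in this excerpt.

```python
import math

def proportion_confidence_limits(x, n, z=1.96):
    """Normal-approximation (Wald) confidence limits for a proportion x/n,
    offered only as an illustration of limits for the second factor described
    in the text; this is not necessarily the committee's method."""
    p = x / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

print(proportion_confidence_limits(12, 30))  # approximately (0.22, 0.58)
```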
From page 80...
... A more complete roster of test types was available, and the test protocols for each test type deemed necessary for a substance's designated subcategories of intended use were evaluated for adequacy. Chemical and physical properties of each substance were sought, as well as its manufacturing process or processes, production volume, potential for exposure, and environmental fate.

