5
Possible Ways Forward

This chapter focuses on the discussions from Sessions 4 and 5 of the workshop, where participants offered their thoughts on how the field of environmental health might best move forward in the sharing of data and reflections on the workshop. The final section of this chapter, "The Bigger Picture," is a summary of the remarks that one presenter provided during Session 5.

WAYS TO INCREASE DATA SHARING

Several attendees provided some principles and lessons that should be kept in mind when thinking about the topic of sharing environmental health data. Unless otherwise noted, all comments summarized in this section were made during the discussion after Session 4 of the workshop.

Ensuring Quality Data Sharing Practices

John Howard, director of the National Institute for Occupational Safety and Health (NIOSH), stated that there should be a balance between reanalyzing the old, existing data and looking at new data. "From the science perspective, we want people to continue to look at data with new data in mind and come to different conclusions so that it is a living organic piece of knowledge," he said during the discussion after Session 3. "That is why we promote and are trying to figure out a way to enable reanalysis. Hopefully, studies will not be looking at the data from 20 years ago without taking into consideration new data sets that have enriched the original data set."

Lynn Goldman, dean of the Milken Institute School of Public Health at George Washington University, noted that when raw data are released, they are generally not completely raw. Instead, they have been cleaned up to a certain degree. "Data cleaning is ... very difficult to do even when
it is your own work," she said, so few researchers would want to have to clean other researchers' data. "I am pretty sure National Center for Health Statistics does not release data that [have] not been cleaned," she said. "The quality control is done before it is made available to the public. But I do not think that that has been clearly articulated."

George Daston, Victor Mills Society Research Fellow at Procter & Gamble, expanded on that by noting the importance of quality control in both the collection of the data and the processing of the data. A major factor in whether researchers and others believe in data that have been collected, he said, is how much faith they have in "what sorts of quality control have been applied to both the data collection and the processing." Thus, it would make sense to think about standardizing such quality control.

Francesca Dominici, a professor of biostatistics and senior associate dean for research at the Harvard University School of Public Health, agreed with Daston that the field of environmental health needs tremendous quantities of data if it is to keep advancing. "In environmental health, we do not have low-hanging fruit anymore," she said, mentioning the early work on the connection between smoking and lung cancer as an example of such low-hanging fruit, since the effect is so large that it is relatively easy to find. Now, she said, environmental health scientists are trying to assess whether or not various environmental contaminants are still harmful to human health at very low concentrations. "To do that," she said, "it requires terabytes and terabytes of very complex data. I think everybody will agree with that." Daston replied that he agreed with her comment that the low-hanging fruit is gone and that the field is past the time of finding either one gene or one environmental agent that causes a disease.
"If we really want to make headway in public health," he said, "it is going to be through sharing large data sets."

The way in which the data are collected is a crucial part of data sharing, said Edward Sondik, former director of the National Center for Health Statistics at the Centers for Disease Control and Prevention. He described hearing a debate about some Canadian breast cancer data that had recently come out. "The issue between somebody who felt very strongly that the data were flawed and the Canadian investigator really had nothing to do with the data per se," he said. "It had to do with the way the data were actually collected. It went back to ... the actual structure of the study." Thus, it is crucial that a study's metadata be shared because they contain information about how the data were
collected, that is, about the structure of the study. "If you agree on the structure, then you are dealing over the analytic methods," Sondik said. "But in many cases that I have seen, the issue is not the methods per se. It really is how the data were actually collected."

Ellen Silbergeld, a professor at the Johns Hopkins Bloomberg School of Public Health and editor-in-chief of Environmental Research, argued that while the sharing of data for the purpose of reanalysis may be necessary at times to increase trust in scientific results and the regulations that follow from them, reanalysis of data is not a particularly useful part of science. "I do not think science is advanced by reanalyzing data sets," she said. "Science is advanced by people replicating studies in different situations with different populations that are completely independent."

"I think we are kidding ourselves," she continued. "This has very little to do with the quality of science. I do not think anything was advanced by extracting all the data from the Needleman studies,1 reanalyzing them 15 times, accusing him [Needleman] of fraud, and then moving on. What was advanced by having 20 more studies?" Science is not advanced when researchers go back to the Harvard Six Cities Study and look at their data again, she said. Instead, science advances through the accumulation of better analytic methods and statistical methods that allow one to provide a better answer to the same question that was being asked. "Could we stop saying that this is a matter of advancing science? ... I would challenge anybody to show me where an analysis of existing data was as earth shaking as doing a really fantastic study that looked at the same hypothesis and either replicated it or moved it forward into a different area."

1 Herbert Needleman and colleagues conducted a study in 1979 that investigated the effects of lead exposure and toxicity in children.
Children with elevated lead levels were found to be significantly impaired on intelligence quotient (IQ) tests and exhibited negative classroom behavior (Needleman et al., 1979). In the 1980s, they employed larger samples and more sophisticated statistical analyses to continue studying the issue, and three meta-analyses found that low-level lead exposure was associated with IQ deficits (Needleman and Gatsonis, 1990; Pocock et al., 1994; Schwartz, 1993). These studies played a critical role in the elimination of lead from gasoline and the lowering of the Centers for Disease Control and Prevention's acceptable blood lead standard for children.
Developing Common Language and Standards

To move forward effectively on sharing environmental health data, it will be helpful if everyone is speaking the same language and adhering to the same standards, said Linda Birnbaum, director of the National Institute of Environmental Health Sciences of the National Institutes of Health (NIH), during her presentation in Session 4 of the workshop. "This is a huge issue," she said. "We actually held a data ontology workshop a couple of months ago to begin to develop a common understanding of what we mean.... What do you mean by data? What do you mean by reanalysis or replication or reproducible? I think that a common language for environmental health would foster the interoperability of databases and promote the sharing, reuse, and reanalysis of data and therefore, hopefully, accelerate the pace of discovery."

Several workshop participants commented that the definitions that Goldman had offered earlier for "reanalysis," "replication," and "reproduction" (see Chapter 2) were good ones and should be promulgated. "You were very close in your definition," said Daniel Greenbaum, president of the Health Effects Institute. "I think it is important, and I do hope that the Roundtable ... could try and clarify that and make it pretty important." In response, Goldman clarified that the definitions were from the Oxford English Dictionary and were not hers. Greenbaum did suggest a slight modification for one of the definitions: "In order to reanalyze, i.e., to do all kinds of sensitivity analyses and other things, not just check the math, the first thing you have to do is replicate the original results," he said. "You have to ask: Can I do exactly what they did in this data set and come up with the same thing so I know I am working on the same data set? ...
Reanalysis goes much further than just checking their math."

On a related topic, a few workshop participants suggested that the development of standards for how data are submitted and represented could make it much easier for multiple researchers to work with the same data. "Should we begin to work with the publishing community about ... standards for submitting studies for journal publication?" Birnbaum asked during her presentation in Session 4 of the workshop. "Could there be standardized formatting, for example, for the methods and the key findings, that would help in reporting quality and make automatic curation more feasible?" Plenty of available text-mining software makes it possible to search thousands of different texts for information on a particular subject, and attaching a list of standardized key words to journal articles, or using the standardized key words inside the
articles, could greatly improve the efficiency and accuracy of the text-mining software. "This could allow for the automated curation of published findings into the databases," she said, "and when you have that, that could become a research tool that could be used again."

As part of those standards for data, some workshop participants suggested that it would make sense for a study's metadata to include information about conflicts of interest among the study's authors. "For every single paper that is published in a peer-reviewed journal, you can ask how much the investigator put in his pocket by producing this study," Dominici said. "I think most of the academic world would have no problem in disclosing how much money they make when writing a single paper." Birnbaum suggested that each author of a study should report how much he or she was paid for carrying out and writing up the research and where those funds came from, whether from a research grant; salary from a university, company, or government agency; consulting fees from some advocacy organization; or somewhere else. "I think it is absolutely essential that we begin to have that information fully available," she said. "If anyone says that who puts the food on your table does not have some kind of impact on your gestalt, I am going to find that hard to believe."

Planning and Time Limits for Data Availability

Birnbaum pointed out that it is important to think about when data should be made available for others to work with. "I know our investigators have huge concerns about premature release of data or premature demand for release of data before [they are] fully analyzed," she said, adding that by "fully analyzed" she did not mean that the investigators had extracted every implication from the data that they could but, rather, that the investigators had examined the data thoroughly enough to be comfortable that they were reasonably error free and ready for analysis by other parties.
But while it is important to make sure that the data are cleaned up and vetted before they are released, it is also important that the cleaning-up process not unreasonably delay the release of the data. It is a balance that needs to be thought about, Birnbaum said.

Greenbaum suggested that a basic principle should be that "early on in the process, right as you go to fund a study, [you should be] expecting to build into the grant and into the proposal the plans for making data available at the other end and the dollars that are going to be necessary to do that." That is not always done today, he said, but it should be done by
anyone who is funding research in either the private sector or the public sector.

Even when data are made available to other researchers, there are limits on how long the data will be or can be shared. "We are almost going to have to declare some form of statute of limitations on how far back we can reach in reaching old data," Greenbaum said. There are two issues, he said. The first is the practical issue of being able to read data that were stored in certain formats, say, on floppy disks or magnetic tapes, many years ago. At a certain point it becomes so difficult and expensive to resurrect the ability to read data in such formats that it is simply not feasible. Thus, unless early data sets have been transferred into more recent forms of storage, researchers might not be able to access them. The second issue is that federal policies do not require data from scientific studies to be stored for a particularly long time. Greenbaum said that he believed that it is generally only 7 years. "I am not talking just about internal data retention policies," he said. "I am talking about grantees getting money." Thus, data that the federal government paid for may not be available for researchers who come along a decade later and need them for some new analysis.

Thus, it will be important, Greenbaum said, to develop principles for dealing with such situations when the results of the studies are available but the data underlying them are not. "We are just not going to get some of the data," he said. "It does not mean the study is invalid or cannot be used. That is an important thing." Given this situation, he said, it will also be important going forward to think about how long data should be made available.

Reducing Tensions Associated with Data Sharing

Greenbaum discussed the roots of the tensions between researchers and some policy advocates and possible ways to lessen those tensions.
"The reason there is so much attention to certain studies is usually because there are incredible policy and economic stakes involved in decisions that are based on those studies," he said. In a democratic and often adversarial society, there should be ways for people who have concerns about government decisions to raise those concerns. Indeed, our system of government is structured to allow individuals and organizations with concerns to have their voices heard. "EPA's [the U.S. Environmental Protection Agency's] rules get challenged all the time on a variety of factors, not just on the health basis," he said.
The challenges to agency decisions are often not based on whether the underlying science was done correctly, Greenbaum said, but rather on whether the science was considered in a thoughtful and rational manner by the agency making the decision. "Sometimes EPA loses, and sometimes they do not. Sometimes OSHA [the Occupational Safety and Health Administration] loses, and sometimes they do not."

While acknowledging that he did not know whether tensions could truly be eased, Greenbaum suggested that one way of trying would be to "construct rules of engagement that promoted a level of civil discourse that enabled people to actually produce quality science, have it challenged by scientists no matter who they worked for, but in a scientific manner, have dialogue and opportunity for dialogue, and then in the end know something more than we did before as a result of that." Greenbaum said that the reanalyses of the Six Cities Study data that his group did had some of those characteristics, so he recognized the value of such scientifically based challenges, but, at the same time, he also recognized that it is not feasible to carry out such reanalyses for every single study that a government agency supports. "There just aren't the resources to do that."

Greenbaum also suggested that the National Academies of Sciences, Engineering, and Medicine is the right kind of place to hold a discussion on how to develop such rules of engagement. Any such discussion would require a core group of people who understand what is acceptable and what is not acceptable practice, he said. For example, bringing a scientific challenge against a legitimate investigator just to intimidate that investigator crosses the line, he said.
"On the other hand, asking to see the underlying data to do reanalyses of it to understand what it is and to go through getting it published in the peer-review literature, even if you are advancing one of the adversarial sides, has the potential to be very positive if it is done with the right rules of engagement."

Greenbaum added that he would not trust the current Congress to determine what those rules would be, emphasizing that he was not talking about one party or the other. It would be better to have groups of scientists carry out such discussions and propose appropriate rules. "There was a group [organized by the Bipartisan Policy Center] that Lynn [Goldman] and I were on a few years ago that wrote a report on advancing science for policy purposes that tried to lay out some of the principles," he noted. "There may be others."

Ultimately, Greenbaum said, science at its best is a highly adversarial undertaking "not for the reasons we are talking about, but
because different scientists have different views of what is the right answer and what is the wrong answer." The goal should be to move the country's political adversarial process in a direction where it would be more like a scientific adversarial process. "We have rabid debates among scientists over whether somebody's study is correct or not correct, and that goes on all the time," he said. Could the disagreements over scientifically based policies be carried out in a similar manner? It is worth finding out.

Providing Incentives to Share Data

Hal Zenick of EPA stated the need to develop metrics to measure data sharing. "If you look at the baby boomer cohort," he said, data sharing "is a foreign concept. During our careers it was a very competitive type of atmosphere. Your publishing was your metric. Publications, presentations, funding were all based on competing. If we fast forward about 20 to 30 years, we have a whole new cohort. That cohort lives social media. All they do is data share back and forth. But the incentives for that have not emerged." Most of the metrics used today to assess researchers (publications, funding, and number of presentations) are rooted in the era of competition, he said, and there are few metrics to measure data sharing or the results of data sharing.

It would be useful, Zenick said, to begin having discussions about the best way to measure data sharing by researchers and what that data sharing led to. That will be a necessary first step toward creating real incentives for researchers to share their data.

Another sort of incentive to share data, one that would be particularly aimed at policy makers, would be to objectively demonstrate the benefits of data sharing, said Daston.
"There aren't any 100 percent positive aspects of data sharing," he said, "but what we have to do is get to the point where we can do the calculus that shows that the benefits of the data sharing far outweigh whatever the costs are." Furthermore, he suggested, it would make sense to pay attention to who bears the costs of data sharing and, "if the costs are disproportionately borne by one individual or one sector, find ways to compensate them." But it will be clear demonstrations of the benefits of data sharing that will carry the most weight in convincing people to push for more.
Utilizing Secure Enclaves

Goldman suggested that since there are already data enclaves for data collected by federal agencies, it would make sense to establish similar enclaves for data from investigators who are funded by the federal government. "Is it possible," she asked, "that that kind of an enclave could be a home where, when you submit your paper, you could submit your data and then you wouldn't have to worry about looking after this generation of technology 30 years from now? Somebody else would be responsible for keeping it as long as it ought to be kept, retaining it, keeping it up to date, keeping it available. And might that perhaps put that process at arm's length between whoever is trying to access the data and the investigator?" If feasible, it would be worth looking into.

Greenbaum replied that it was an interesting idea. "One of the issues that that raises is that if a federal agency itself has conducted a study, constructed the data sets, et cetera, and then chooses to share its data either through a data set or some other mechanism, there are a series of laws that say people have to sign data user agreements" to have access to the data. There are also rules for NIH data sharing more broadly, he said. "I do not know that that is so clear-cut for a federally funded study at Harvard or Stanford or for somebody else who has a data set. When I signed an agreement with Harvard, I did not have a federal law" that specified what the data user could and could not do with the data. Thus, Greenbaum suggested that if the federal government provided data enclaves for the sharing of data from federally funded studies, it could lay out the same sort of user agreements, and if a user broke the agreement and breached the confidentiality of people whose data were in the data set, the same sort of federal penalties, which are up to $250,000 or 5 years in prison, or both, could apply.
"That would help protect privacy."

OVERALL REFLECTIONS

During the Session 5 discussion, workshop attendees provided reflections on the workshop as a whole.
Take Advantage of What Is Already Out There

Joseph Rodricks, principal of ENVIRON, pointed out that a number of groups, both in environmental health and in other areas, have spent time grappling with issues surrounding the sharing of data. "One of my colleagues ... pointed out a very interesting document from the Oak Ridge National Laboratory on best practices for preparing environmental data sets to share and archive,"2 he said. "It is a big document that goes into exquisite detail on what investigators should be doing to prepare to publicly release the data as they develop [them]. It is excellent guidance, I think." It would make sense to pay attention to these sorts of documents to avoid reinventing the wheel, he suggested.

On the technical side, Jerry Blancato, director of the Office of Science and Information Management at EPA's Office of Research and Development, noted that a variety of technical approaches to sharing data without breaching confidentiality have already been developed. "There may be ways that data can be shared publicly, but you still protect the pieces of data," he said. "I do not think it is our business as researchers to reinvent the wheel. There are experts out there, industry experts who not only have the capacity to store tremendous amounts of data and do it reasonably cheaply but have the wherewithal to do it.... We have to be able to take advantage of those partnerships to get the data there."

Effects of the Coming Data Tsunami

Latanya Sweeney, professor of government and technology in residence at Harvard University, suggested that the rapidly growing availability of personal data from a large number of sources is changing the equation about personal privacy and confidentiality and said that much of the discussion that she heard at the workshop seemed outdated because it was not taking these changes into account.
One example, she said, is the Personal Genome Project, the goal of which is to collect and make public genome data and the detailed medical records of 100,000 volunteers to accelerate research into personalized medicine and personal genomes. Another example is the Fitbit, a device that keeps track of a wealth of activity on the people who choose to wear it, such as daily activity patterns and levels, calories burned, sleep patterns, and weight.

2 "Best Practices for Preparing Environmental Data Sets to Share and Archive" is available at http://daac.ornl.gov/PI/BestPractices-2010.pdf (accessed February 22, 2016).
While taking part in the Personal Genome Project and wearing a Fitbit are personal choices, the world is moving toward gathering data about people in all sorts of ways, many of which they will not be aware of, said Sweeney. "This will definitely change the way you will conduct research and grab data, who you will get the data from, and also privacy and other kinds of issues." Realistic discussions about privacy and confidentiality will have to take into account the coming changes in the ways in which data are collected, the types of data that are collected, and the attitudes that people have about their data being collected.

Organizing the Scientific Community to Take Action

Frank Loy noted that the scientific community should be proactive in the debates between Congress and EPA around data sharing to achieve a reasonable balance between transparency and protection of data. "The scientific, medical, and university communities have status in this country that, quite frankly, EPA does not. Therefore, I would like to see this treated as a science issue rather than as an EPA regulation issue."

Goldman agreed with this point and noted that as a scientific community, "We have choices about what we can do.... We have been doing nothing actually to affirmatively address this issue. We have been placing ourselves in a position where therefore Congress or others are trying to impose things to take care of this." She noted that it can be particularly challenging to engage the environmental health community in issues like this. "Quite frankly, we [environmental health researchers] are not a community.
We are in many different professions, different professional organizations, and are not exactly organized in any way, shape, or fashion in the way that AAAS [the American Association for the Advancement of Science] and other large science organizations are organized." Goldman stated that rather than being passive and watching these events occur, it would be better if there were a way to organize the environmental health community around data sharing. "I would like for us all to be thinking about ideas that could be moved forward to bring people together to try to address this. In other words, can we be in charge of our own destiny?"

THE BIGGER PICTURE

During her presentation in Session 5, Silbergeld offered a cautionary take on what she saw as basic underlying assumptions at the workshop.
To begin, she noted that this workshop topic has potential costs to the scientific community, the excellence of science, and the creativity of science in the United States and is something that should be given much more thought than it has been given to date. She found it extraordinary that at the beginning of the workshop nobody defined "what do you mean by 'data,'" making it difficult to understand what is going to be shared. Before moving forward, it would be good to define the term with some specificity or at least give a range of what it could mean, she said.

Silbergeld offered some sobering comments that indicated what is really at stake if more thought is not given to this topic. "I have to say I come away from this meeting thinking that I probably will never do another environmental epidemiological study," she said. "I cannot stand behind the guarantee that I have always felt to be important for research integrity [that I am able] to protect the confidentiality of the people who are brave and caring and patient enough to get involved in the studies that we do in environmental epidemiology." Furthermore, she suggested, U.S. researchers may find it difficult to attract international collaborators if confidentiality cannot be ensured. "I will think twice about engaging my colleagues in other countries in joint ventures in which their data could become accessible to adversarial proceedings in the United States," she said.

There is a "bigger picture" that should be remembered. "It seems to me," Silbergeld said, "that in my career of being involved in environmental sciences, it has always been about ensuring the highest-quality data [are] going into making of decisions that have both economic and public health impact. I am not sure that ensuring excessive and complete access to data sets is the way forward to reach that goal most consistently and expeditiously.
I think there are paths forward that we could think about and certainly ones where we can learn from other disciplines." In particular, she suggested, the model that should be kept in mind is evidence-based medicine. "This has been an experiment of some 60 years in the making ... [and] was also meant to deal with a highly contentious topic, which was the advent of national health care in the United Kingdom." As the United Kingdom was instituting its national health care program, an economist, Sir Stafford Cripps, posed the question of how one should decide which treatments should be paid for. Different doctors had different answers, and none were able to offer any objective evidence supporting their answers. Cripps set out to find a
better way, and "that was really the birth of evidence-based medicine," Silbergeld said.

The development of evidence-based medicine led to a particular methodology that can be applied to environmental health research, she noted. First and foremost, Silbergeld said, that methodology is transparent. "I understand we have a great deal of transparency," she said, referring to people in the environmental health field. "Although sometimes we have different places where transparency begins and ends, ... it is very much based on the insights of several stakeholders. One is the scientists and the generators of knowledge, and the others are the people who use the knowledge (for example, in medicine) and the patient community (as well as their values and concerns)." Silbergeld stated, "[This] is where I come back to this issue about the respect that we owe our subjects in environmental epidemiology, a respect that is in danger of being taken away."

Silbergeld then discussed how one should go about carrying out a systematic review in environmental health. "The goal of a systematic review is to get as much information as possible, not just set up barriers towards the admission of information but at the outset to have a fair amount of confidence that we have surveyed the entire body of available information," she said. Researchers will establish certain boundaries, of course, "but within those definitional boundaries, we have a pretty good degree of confidence that we know what that landscape looks like." To that landscape the researcher applies a set of analysis criteria that are fixed ahead of time to the information that has been developed. "We do not make it up as we go along," she said. "Now I ask my question: What do you mean by data, and what do you mean by information? The whole goal of the systematic review is to translate information into evidence," with "evidence" being "information in which we have confidence," Silbergeld said.
Such confidence arises by reducing bias to the greatest extent possible, by understanding the strengths and the limitations of the information, and by applying certain criteria, such as those set forth for good laboratory practices or the most extensive sets of criteria that have been developed for both clinical trials and observational epidemiology. "We can then use those in an extremely transparent way to say, 'This is the information that we are going to consider converting into evidence.'" Sometimes the process requires going to the researchers who have published the papers in question and asking them for some additional information to help clarify their use of their particular bits of information.
"We do that with a great deal of trust among the people who have generated that information and those of us who want to use it," she said. Silbergeld continued, "And from there, we can move forward to something that I think would improve not only decision making but also gives the yardstick as to how much information you need. I do not think you need the raw data tables." There may be particular cases in which there are reasons that the raw data are needed, she said, "but in the general evaluation of both toxicologic and epidemiologic data, I do not think we need them."

Silbergeld stated that the larger goal of maintaining a steady supply of data through new innovative studies should be kept in mind when individuals choose to gather data from researchers. She worries that some of the actions that she heard talked about in the workshop could make it less likely that the supply of important new data will continue unabated. "To suggest that scientists should start a study figuring out how they are going to make all their data available at the end will have a very interesting effect on how we train the next generation of scientists. I can tell you that. I think there is another path forward," she said.

REFERENCES

Needleman, H. L., and C. Gatsonis. 1990. Low level lead exposure and the IQ of children. Journal of the American Medical Association 263(5):673-678.
Needleman, H. L., C. Gunnoe, A. Leviton, R. Reed, H. Peresie, C. Maher, and P. Barrett. 1979. Deficits in psychological and classroom performance of children with elevated dentine lead levels. New England Journal of Medicine 300:689-695.
Pocock, S. J., M. Smith, and P. Baghurst. 1994. Environmental lead and children's intelligence: A systematic review of the epidemiological evidence. BMJ 309:1189-1197.
Schwartz, J. 1993. Beyond LOEL's, p values and vote counting: Methods for looking at the shapes and strengths of associations. Neurotoxicology 14:237-246.