
January 8, 2009

Dr. Jerome Kassirer
Judge Gladys Kessler
Co-chairs
Committee on the Development of the Third Edition of the Reference Manual on Scientific Evidence
National Research Council
500 Fifth Street, NW
Washington, D.C. 20001

Dear Dr. Kassirer and Judge Kessler:

With this letter report, the National Research Council’s Committee on the Evaluation of the Reference Manual on Scientific Evidence (Appendix A) seeks to provide information to your Committee as you develop the third edition of the Reference Manual on Scientific Evidence (Manual). This report is based on meetings and discussions with representatives of the Federal Judicial Center (developer of the first two editions), users of the Manual (litigators, law professors, and state and federal judges), evaluations from the Science for Judges program, and a small group of federal judges who shared their perspectives and experiences with the committee. Presentations and discussions at our committee’s meetings made it clear that the Manual has been an extremely effective tool for judges, practitioners, and scholars interested in gaining a better understanding of the scientific underpinnings of various scientific disciplines. The scope of this letter report is limited to an examination of the earlier versions of the Manual (particularly the second edition). The report offers recommendations for materials to be included in the third edition.

Background on the First and Second Editions of the Reference Manual on Scientific Evidence

In the latter half of the twentieth century, it became increasingly evident that the judiciary would be called on to address unique and complex questions involving scientific and engineering issues. It also became clear that, unlike the executive and legislative branches of government, the judiciary did not have institutions to aid it when wrestling with these concerns.1 The Carnegie Commission on Science, Technology, and Government expressed concern about this lack of expert advice. It stated that:

    The courts’ ability to handle complex science-rich cases has recently been called into question, with widespread allegations that the judicial system is increasingly unable to manage and adjudicate science and technology (S&T) issues. Critics have objected that judges cannot make appropriate decisions because they lack technical training, that jurors do not comprehend the complexity of the evidence they are supposed to analyze, and that the expert witnesses on whom the system relies are mercenaries whose biased testimony frequently produces erroneous and inconsistent determinations. If these claims go unanswered, or are not dealt with, confidence in the judiciary will be undermined as the public become convinced that the courts as now constituted are incapable of correctly resolving some of the most pressing legal issues of our day. There may be calls to replace the current system with new institutions and procedures that appear to be more suited to the demands of science and technology.2

Recognizing these concerns, in the early 1990s the Judicial Conference of the United States called upon the Federal Judicial Center to undertake a study of how courts handle matters involving scientific and technological issues. These efforts led to the creation of the Science and Technology Resource Center (STRC) at the Federal Judicial Center and the initiation of a systematic approach to the examination of judicial management of science and technology.3 The STRC was charged with developing educational programs for judges on science and technology, identifying the research and planning needed to improve the judiciary’s ability to handle scientific and technical information, and engaging the scientific and technical communities in these activities. In addition, the STRC was to develop a manual on science and technology for judges.

The need for the STRC was made all the more urgent in 1993 when the Supreme Court issued a landmark ruling in Daubert v. Merrell Dow Pharmaceuticals, Inc.4 that instructed the trial judge to act as a “gatekeeper” who must screen scientific evidence to determine both its relevance and reliability. The Court suggested that in order for proffered evidence to pass the reliability requirement, the judge must consider whether the evidence is the product of scientific reasoning and methodology. The Court provided a list of factors that a judge might consider when making a determination about scientific evidence. These included: 1) whether it could be tested and falsified; 2) whether it had been subject to peer review and publication; 3) whether there existed known or potential error rates and standards controlling the technique’s operation; and 4) whether it was generally accepted within the scientific community. As the Federal Judicial Center recognized, “Such a stand demands an understanding by judges of the principles and methods that underlie scientific studies and the reasoning on which expert evidence is based. … a task for which few judges are adequately prepared when they arrive on the bench.”5

The first edition of the Manual was published in 1994 with the expressed intention of providing “judges with quick access to information on specific areas of science in a form that will be useful in dealing with disputes among experts.”6 The first edition was divided into three parts: 1) management and admissibility of expert evidence; 2) reference guides on seven areas of expert testimony (epidemiology, toxicology, survey research, forensic analysis of DNA, statistical inference, multiple regression analysis, and estimation of economic loss); and 3) use of court-appointed experts and the use of special masters. The reference guides, individually authored by subject experts, did not instruct judges on the admissibility of evidence or establish minimum standards for acceptable scientific testimony; rather, they provided a primer on the methods and reasoning of certain areas of scientific evidence and supplied a list of questions likely to be disputed among experts.

The first edition was welcomed, although some complaints were expressed. Interestingly, the complaints were not targeted at the science chapters’ “reference guides” but rather at the explanation of the Supreme Court’s ruling in Daubert. Plaintiffs’ lawyers felt the section put a “pro-defendant” gloss on the Court’s decision.7 In 2000, the FJC issued a second edition of the Manual that revised and updated extant chapters, expanded the edition to include an introduction by Associate Justice Stephen Breyer, and added three new science chapters: one on how science works, a second on medical testimony, and a third on civil engineering. The second edition was released following two relevant Supreme Court decisions, General Electric v. Joiner (1997) and Kumho Tire Co., Ltd. v. Carmichael (1999). These two decisions, together with the Daubert decision, make up the Supreme Court’s “trilogy” on scientific evidence. The latter decisions re-affirmed a trial court judge’s authority to exclude scientific evidence that appeared to be too speculative and expanded Daubert’s reach to include engineering and other fields of expertise. The second edition of the Manual was well received, and over 100,000 copies have been sold.

After several years of discussion, the Federal Judicial Center approached the National Academies’ Committee on Science, Technology, and Law about entering into a collaborative arrangement to issue the third edition of the Manual. With financial support from the Carnegie Corporation and the Starr Foundation, the National Academies agreed to undertake this project. As the first step in this collaborative exercise, the Hewlett Foundation provided support for the Academies to undertake an assessment of previous editions and to gather information that would facilitate the development of the third edition.

_____
1 Carnegie Commission on Science, Technology and Government. 1993. Science and Technology in Judicial Decision Making: Creating Opportunities and Meeting Challenges. New York: Carnegie Commission on Science, Technology, and Government, p. 6.
2 Ibid, p. 11.
3 Ibid, p. 8.
4 509 U.S. 579 (1993).
5 Federal Judicial Center. 1994. Reference Manual on Scientific Evidence, p. 3. Available at: http://air.fjc.gov/public/fjcweb.nsf/pages/16.
6 Ibid, p. 3.
7 Junda Woo. 1999. New Guide for Judges Tries to Clarify Scientific Issues, Wall Street Journal, Dec. 14, B1.

Description of the Assessment Process

Under the auspices of the Committee on Science, Technology, and Law, the National Academies established the ad hoc Committee on the Evaluation of the Reference Manual on Scientific Evidence to provide input in developing the third edition of the Manual. Specifically, our committee sought information on: 1) which chapters were effective or ineffective in communicating scientific information; 2) which chapters should be kept, revised, or eliminated; and 3) what topics should be added to the third edition. Our committee membership included a law professor, engineering professor, epidemiology professor, federal judge, and an expert on the use of science in the judiciary.8 We held two meetings in Washington, D.C., and heard from law professors who have used the Manual in their classes; state and federal judges familiar with the Manual; litigators who use the Manual; individuals knowledgeable about public and juror understanding of science; expert witnesses; and scientists (see Appendix A for a list of meeting participants). In addition, our committee was briefed on the Brooklyn Law School Science for Judges program and received comments from administrators of that program about issues most relevant to the Manual. We also solicited online comments from federal judges. Expertise from our committee members supplemented the Federal Judicial Center’s insights about its experience with the Manual and provided additional perspectives on the effectiveness of science education programs for judges. This report does not reflect a systematic evaluation of the earlier editions of the Manual using quantitative data, but rather the gathering of qualitative information from a variety of sources, supplemented by our committee members’ judgment and expertise.

_____
8 One member of the committee, Margaret Berger, authored a chapter on the Daubert Trilogy in previous editions of the Manual. None of the committee members wrote any of the scientific/technical chapters.

Experience of Federal Judicial Center

The Federal Judicial Center introduced the Manual to federal judges through a series of workshops focusing on emerging issues in scientific evidence. The workshops included presentations by leading scientific and legal scholars. In later discussions, judges indicated the manner in which they used the Manual and offered suggestions for future editions. Most judges appeared to rely on the parties to frame the issues and encouraged them to use the information in the Manual as a source for highlighting the strengths and weaknesses in proffered expert testimony. Some judges mentioned that they rely directly on Manual material in assessing assertions regarding scientific testimony in briefs and expert reports. During the workshops, several judges offered suggestions for strengthening the chapters on statistics and suggested new topics for inclusion in the Manual, some of which were added to the next edition. Over the next several years the Federal Judicial Center also prepared a series of video programs on scientific evidence that expanded on the materials in the Manual and explored evolving legal doctrines. These programs were broadcast to U.S. courthouses over the Federal Judicial Television Network and then made available on CDs.

Experience of Brooklyn Law School’s Science for Judges Program

Nine Science for Judges programs were held at the Brooklyn Law School between March 2001 and April 2007. These conferences, funded by the Benefit Trust established as a result of the Silicone Breast Implant Products Liability Litigation, were attended by approximately 165 federal judges from every circuit and over 375 state court judges from more than 36 states and the District of Columbia. Many judges attended more than one program. The Benefit Trust paid for the judges’ and speakers’ transportation expenses and most meals, and provided honoraria for speakers who expanded their remarks for publication in the Brooklyn Law School’s Journal of Law and Policy. The Benefit Trust also paid for reprints of the Science for Judges articles and for the costs associated with distributing them to federal district court judges, as well as for most of the expenses associated with producing the extensive briefing books distributed to participants. The Hon. Barbara J. Rothstein, Director of the Federal Judicial Center, attended most Science for Judges Programs with members of the Federal Judicial Center staff, including Dr. Joe Cecil, who served on the Advisory Board that planned the programs. The Federal Judicial Center selected the federal judicial invitees and designed, collected, and analyzed the questionnaires used to assess the programs.

Many presentations in the programs dealt with issues directly related to the Supreme Court’s opinion in Daubert. In response to Daubert’s mandate that trial judges screen scientific expert testimony, many speakers focused on the issues that often arise in Daubert hearings. For example, presentations were made that described the principles of epidemiology and toxicology, and the statistical issues related to proving causation in toxic tort cases.9 In light of time constraints and the complexity of these subjects, no attempt was made to teach the equivalent of academic courses. Rather, the programs’ objective was to provide judges with an overview of the field in question while equipping them with both the vocabulary to understand expert scientific testimony and the tools needed to meet the challenges of evaluating and understanding expert scientific testimony in legal settings, such as different types of epidemiological studies, randomization, new developments in toxicology, and scientific updates on subjects that brought science into the courtroom (dioxin and asbestos).

_____
9 The agendas of the programs and videos of some of the presentations are available at: http://www.brooklaw.edu/centers/scienceforjudges/events.php.

The programs also brought attention to other topics that relate to science in the courtroom. These included conflicts of interest in academia, preemption, the availability of data to juries, the comprehension of scientific evidence, and scientific publications. Other programs examined the consequences of Daubert in alternate legal and regulatory contexts, such as the activities of the Food and Drug Administration and the Environmental Protection Agency, and the judicial handling of expert testimony in criminal cases.

The programs and the speakers received excellent evaluations. The written comments, conversations with participants, and the extensive questions addressed to session speakers following their presentations elicited a number of concerns. Several concerns that were raised repeatedly are relevant to the planning of the third edition of the Manual, including statistical issues, use of interactive programs, and the different interests of state and federal judges. Each is described below.

1. Statistical issues. These were the most troublesome for the attendees. Judges often have difficulty understanding statistical terminology and analyses, especially in presentations by epidemiologists.

2. Interactive programs. Judges expressed interest in educational programs that would allow them to work through material encountered at Daubert hearings. A Science for Judges Program on Evidence-Based Medicine attempted to accommodate this desire by placing participants in small break-out groups to analyze studies from speaker presentations. Judges liked this format. The advantages of this model might be harnessed using interactive computer exercises dealing with, for example, statistical issues (a hypothetical sketch of such an exercise appears after this list).

3. Federal vs. state judges’ interests. Federal and state judges had different opinions about desired program topics. Federal judges were primarily interested in Daubert and causation issues. A number of state court judges were interested in learning about topics relevant to issues they are called upon to address under their criminal laws, such as whether scientific techniques can be used to predict whether an individual charged with a crime will be dangerous if released pre-trial. State judges also were more interested in forensic issues in criminal cases, as these are much more likely to arise in the state courts where most criminal cases are handled. State judges suggested that the new edition of the Manual might be expanded to encompass scientific issues more commonly found in state courts.
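By way of illustration only, the following is a minimal sketch, in Python, of the kind of statistical exercise such an interactive module might walk a judge through: computing a relative risk and its 95 percent confidence interval from a small cohort study, a quantity that recurs in epidemiological testimony on causation. Nothing in it comes from the report or from any actual program materials; the study numbers, the function name, and the significance framing are all invented for the example.

    # Hypothetical exercise: is the exposed group's risk of disease elevated,
    # and can chance plausibly explain the difference?  All numbers are invented.
    import math

    def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
        """Relative risk from a 2x2 cohort table, with a 95% confidence interval."""
        rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)
        # Large-sample confidence interval computed on the log scale.
        se = math.sqrt(1 / exposed_cases - 1 / exposed_total
                       + 1 / unexposed_cases - 1 / unexposed_total)
        lower = math.exp(math.log(rr) - 1.96 * se)
        upper = math.exp(math.log(rr) + 1.96 * se)
        return rr, lower, upper

    if __name__ == "__main__":
        # Invented cohort: 30 of 1,000 exposed people and 12 of 1,000 unexposed
        # people developed the disease during follow-up.
        rr, lower, upper = relative_risk(30, 1000, 12, 1000)
        print(f"Relative risk: {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
        if lower > 1.0:
            print("The interval excludes 1.0, so chance alone is an unlikely explanation.")
        else:
            print("The interval includes 1.0, so chance alone cannot be ruled out.")

An actual exercise would, of course, be interactive and would pair a calculation like this with plain-language discussion of study design, bias, and confounding.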

Online Comments from Judges

As an aid to the assessment process, the National Academies and the Federal Judicial Center solicited online comments in 2007 from federal district court judges and magistrate judges using a list of questions developed by Professor Shari Diamond of Northwestern University School of Law. The judges were invited to provide their reactions to the chapters in the second edition of the Manual and to discuss how a new edition might improve upon earlier editions. Sixty anonymous federal judges provided comments that expanded on the critiques and suggestions offered by many of the speakers who made presentations to the committee. The respondents provided information on how they use the Manual, the chapters that present the most challenges to them, and subject areas that might be considered in a new edition of the Manual.

Findings

On the basis of the testimony presented at our committee meetings, the evaluations from the Science for Judges Program, the online comments, and the experience and background of individual committee members, we found that:

1. The Manual remains a leading resource for judges and others.
2. The Manual has a reputation as a very effective tool.
3. The primary audience for the Manual should remain federal judges, but it should be expanded to include state judges as well.
4. The Manual should not be reoriented as a text for law students.
5. A number of chapters should be revised and updated.
6. A number of individuals were of the opinion that the engineering chapter should either be deleted or significantly expanded.
7. A number of new topics ought to be included as reference guides.

The Manual remains a leading resource for judges and others. The Manual is one of the best-selling reports of the Federal Judicial Center, with over 100,000 copies sold to date. Repeatedly, the committee heard from both judges and litigators that the Manual is consulted in cases involving complex scientific or technical information. In addition, a number of law professors told the committee that the text is used in evidence classes in law schools throughout the country.

The Manual has a reputation as a very effective tool. Judges reported using the Manual when seeking general background information on a particular topic. They also stated that they use the Manual to explain scientific terminology and to assist in determining the admissibility of particular expert testimony. Judges also reported that they selectively read chapters of the Manual in response to specific cases, and that they consult the Manual to answer a particular question.

Litigators told the committee that if a trial judge uses the Manual, they are more likely to consult it as well so that they can shape their arguments to respond to the types of questions raised in the Manual.

The primary audience for the Manual should remain federal judges, but it should be expanded to include state judges as well. The consensus of individuals who presented their perspectives to the committee led to the conclusion that the primary audience for the Manual should continue to be judges rather than other legal professionals, but that the Manual should be expanded to include state judges. Although the committee discussed revising the Manual to make it more accessible to other groups (e.g., litigators, law professors), the overwhelming feeling was that judges should continue to be the target audience of the Manual as they tend to be its primary users. This sentiment seems to be justified, as there are few resources geared toward science in the courtroom, and the Manual originally was intended as a tool for the judiciary.

The Manual should not be reoriented as a text for law students. Although the committee heard from a number of law professors who use the Manual in class, none felt that it should be restructured as a law school textbook. There was, however, interest in revising some of the chapters to aid comprehension, but no suggestion of a complete reorientation.

A number of chapters should be revised and updated. There was general consensus that all the chapters should be updated with newer, more relevant case citations. In addition, it was felt that if a field of science has made substantial gains since the publication of the second edition (e.g., DNA, genetics), individual chapters should either be revised or new chapters added. Many judges complained about the difficulty of the statistics chapter and asked that it be revised to make it clearer and more comprehensible.

A number of individuals were of the opinion that the engineering chapter should either be deleted or significantly expanded. Some commentators suggested that the chapter on Engineering Practice and Methods should either be expanded to include information on fields outside of civil engineering or excised from the new edition. In lieu of the existing chapter, it was suggested that a new chapter be added to address the broader fields of engineering and engineering design.

A number of new topics ought to be included as reference guides. There were numerous suggestions for new topics, such as: mental health and dangerousness, the forensic sciences, pharmacology, computer science, genetics, research design, neurology and brain function, specific and general causation, and exposure assessment.

Recommendations

Based upon our findings, we offer the following specific recommendations:

1. Retain the Current Format. There did not appear to be a desire for changes to the format of the Manual and chapters. The inclusion of relevant case citations and case studies is viewed as extremely helpful.

2. Retain Judges as the Primary Audience. There was no apparent support for broadening the primary audience beyond judges. There was genuine interest in making the Manual more relevant to state as well as federal judges. Including references to state cases was seen as one way to achieve this goal.

3. Revise and Update the Second Edition Chapters. Chapters should be updated to include more recent case citations. The statistics chapter is viewed as least accessible and should be revised significantly.

4. Keep Most of the Second Edition Chapters. With the exception of an engineering chapter that would be broader than just civil engineering, there was little interest in removing any of the chapters. Some expressed the opinion that the engineering chapter could either be deleted or significantly expanded to include a much broader look at engineering. A new engineering chapter that expanded its coverage of the field would be of benefit to judges.

5. Add New Chapters. There was a call for the inclusion of new topics in the third edition. It was suggested that the new edition include chapters on the forensic sciences, genetics, pharmacology, and computer science. Additionally, it is clear from the experiences at the Brooklyn Law School’s Science for Judges programs that judges would like more information on both general and specific causation and on exposure science and its use in regulation and in post-exposure assessments of harm. Although our committee does not have enough information to recommend which specific chapters should be added to the third edition, all of these topics would benefit from further discussion by your Committee as it develops the third edition. We encourage your Committee to add new chapters as it deems appropriate, using the suggestions garnered from our examination as a guide.

The Manual continues to be an extremely important resource for judges handling cases in which scientific evidence plays a role. While no manual for general distribution can anticipate and address the particular questions a judge or jury may need to decide, the Manual appears to provide a general introduction that can help judges dealing with scientific issues pertaining to the subject of a dispute. The Manual has gained a reputation for providing judges with a frame of reference to approach such disputes with confidence and with a sufficient level of comfort to listen, learn, and ultimately make a decision on a matter involving scientific content.

We look forward to the third edition and are happy to assist your Committee in any way possible.

Most Sincerely,

Margaret Berger
Channing Robertson
Co-chairs
