information, the error and bias inherent in individual perspectives can be minimized. In this way, the frontiers of understanding continually advance through the collective evaluation of new data and hypotheses.

Data producers, providers, and users are all involved in the collective scrutiny of research data and results. Data producers need to make data available to others so that the data’s quality can be judged. (Chapter 3 discusses the accessibility of research data.) Data providers need to make data widely available in a form that allows the data to be not only used but also evaluated, which requires that data be accompanied by sufficient metadata for their content and value to be ascertained. (Chapter 4 discusses the importance of metadata.) Finally, data users need to critically examine the data generated by themselves and by others. The critical evaluation of data is a fundamental obligation of all researchers.

Completely and accurately describing the conditions under which data are collected, characterizing the equipment used and its response, and recording anything that was done to the data thereafter are critical to ensuring data integrity. In this report we refer to the techniques, procedures, and tools used to collect or generate data simply as methods, where a “method” is understood to encompass everything from research protocols to the computers and software (including models, code, and input data) used to gather information, process and analyze data, or perform simulations. The validity of the methods used to conduct research is judged collectively by the community involved in that research. For example, a community may decide that double-blind trials, independent verification, or particular instrumental calibrations are necessary for a body of data to be accepted as having high quality. Scientific methods include a core of widely accepted techniques as well as a periphery that is less widely accepted. Thus, discussions of data integrity inevitably involve scrutiny of the methods used to derive those data.

The procedures used to ensure the integrity of data can vary greatly from field to field. The methods high-energy physicists use to ensure the integrity of data are quite different from those of clinical psychologists. The cultures of the fields of research are enormously varied, and there are no universal procedures for achieving technical accuracy. Some practices may be employed only within specific fields, such as the use of double-blind trials. Some of these field-specific methods may be embodied in technical manuals, institutional policies, journal guidelines, or publications of professional societies. Other methods are part of the collective but tacit knowledge held in common by researchers in that field and passed down to beginning researchers through instruction and mentoring.

In contrast to field-specific methods, some methods used to ensure data integrity extend across most fields of research. Examples include the review of data within research groups, replication of previous observations and experiments, peer review, the sharing of data and research results, and the retention of raw data for possible future use.



Copyright © National Academy of Sciences. All rights reserved.