
Sharing Research Data (1985)

Chapter: Definitions, Products, Distinctions in Data Sharing

Suggested Citation: "Definitions, Products, Distinctions in Data Sharing." National Research Council. 1985. Sharing Research Data. Washington, DC: The National Academies Press. doi: 10.17226/2033.

Definitions, Products, and Distinctions in Data Sharing

Robert F. Boruch

Robert F. Boruch is a professor in the Department of Psychology and the School of Education and codirector of the Center for Statistics and Probability, Northwestern University. Background research for this paper was supported by a stipend from the National Research Council and a grant from the National Science Foundation to Northwestern University, Center for Statistics and Probability.

For simplicity's sake, data sharing here is defined as the voluntary provision of information from one individual or institution to another for purposes of legitimate scientific research. In practice there are, of course, a great many variations on this theme. Some of the variations are suggested by the factors that influence data sharing and its products.

THE PURPOSES AND PRODUCTS OF DATA SHARING

The products of data sharing can serve a variety of beneficial purposes, including:

· verifying, failing to verify, or examining the conclusions of earlier analyses, as in public program evaluation or economic research on subsidy programs;
· facilitating education and training through active examples;
· testing new hypotheses or theories using existing data, as in a good deal of economic research;
· facilitating tests of new methods of analysis when the original data are well understood, as in attempts to better estimate cognitive ability using Rasch models or mortality using dual-system estimates;
· using the data collected in one study or series to design other studies and programs, for example, in social programs or for physical or chemical constructions in engineering;
· combining several sets of data to facilitate syntheses of knowledge, decision making, and establishing limits or bounds on generalization, as in psychological and other research (a minimal sketch of one such synthesis follows this list).
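As one concrete illustration of the last purpose, the sketch below combines effect estimates from several data sets by inverse-variance (fixed-effect) pooling, one common synthesis method among many. The study estimates and variances are hypothetical, not drawn from any work cited in this chapter.

# A hypothetical fixed-effect synthesis: combine per-study estimates x_i
# with variances v_i using weights w_i = 1/v_i, so that
# pooled = sum(w_i * x_i) / sum(w_i) and var(pooled) = 1 / sum(w_i).
def pool_fixed_effect(estimates, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# three made-up studies measuring the same effect
estimate, variance = pool_fixed_effect([0.30, 0.45, 0.38], [0.010, 0.020, 0.015])
print(f"pooled effect {estimate:.3f}, variance {variance:.4f}")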

The expected products of data sharing will not always appear, of course, and may not fulfill their purposes when they do. For example, poor research can often be identified from reports or tables, reducing the need for access to raw data. Replicating a study independently is often far more important than reanalyzing the results of the original effort, and this approach also reduces the need for access to raw records. Even when the information is pertinent to a scholar and is of good quality, the stress on sharing can be dysfunctional, for several reasons. The products may be pedestrian, for example, because it can be hard to reason ably and in original ways from data that have already been well analyzed. The process of sharing may lead to self-interested or inept assaults on adequate work, as it has in the past.

Perhaps more important, the stress on repeated analysis of observational data, from surveys for instance, can divert resources from the collection of better data, say, from field experiments, that could yield less ambiguous conclusions. Data may be analyzed because they are available rather than because they are interpretable and clearly material to a problem at hand, producing work that is pedestrian or wrong repeatedly. And so on.

In summary, it is reasonable to expect a variety of outcomes, positive and negative, from data sharing. The position taken here is that sharing in principle is warranted simply because it is part of a durable scientific ethic to communicate results in a way that permits confirmation. In practice, its appropriateness, feasibility, and utility depend on other factors.

Voluntary Versus Involuntary Sharing

There are cases of forced data sharing, in response to demands for disclosure of information by a court, in the interest of assessing a scientist's claim.

Time and resources are not sufficient to examine such sharing in detail here, but a couple of cases do deserve brief attention: the Longs' efforts to obtain data from the Internal Revenue Service for research purposes and Forsham v. Harris.

Dr. Susan Long, a sociologist at Princeton, and Mr. Philip Long, head of a business, have for the past 10 years been involved in efforts to secure statistical and other data from the U.S. Internal Revenue Service (IRS) for research purposes. Susan Long's professional interest lies partly in examining consolidated administrative data and IRS procedures manuals to determine how administrative discretion is used in applying tax law, i.e., how the rate of audits varies by geographic region, income level of the taxpayer, etc. (see Long, 1979, 1980a, 1980b). The Longs maintain that the information they request falls within the coverage of the Freedom of Information Act (FOIA); moreover, since the records on individuals that they need are anonymous, acceding to their request violates no privacy statutes. The IRS has disagreed, refusing, for example, to disclose counts of audits by income category and internal documents on operating procedures for audits. In different court cases dealing with the requests, the IRS has maintained that disclosure of the data tapes or procedures would help taxpayers avoid audits, that the FOIA is not relevant, and that the privacy law would be violated, so the information should not be disclosed. The Longs have also attempted to obtain information on the sampling frame and results of studies generated in IRS probability sample audits; this information has also been refused. They have brought a number of such cases to the courts, winning access to some data under the FOIA in the lower courts. In particular, federal circuit courts have ruled that data tapes were disclosable under the FOIA and not subject to laws governing disclosure of IRS records (26 U.S. Code §6103) when identifiers are deleted and the risk of deductive disclosure cannot be shown to be appreciable. The Longs have testified before Congress on the need to make such information more accessible (P. Long and S. Long, 1974a, 1974b, 1976; S. Long and P. Long, 1973).

In Forsham v. Harris, which was heard by the Supreme Court in 1979-1980, researchers were trying to obtain and reanalyze data generated in the University Group Diabetes Project (UGDP). The project, supported by the National Institutes of Health (NIH), was designed as a randomized field test of alternative methods of treating diabetes and resulted in the conclusion that one of the drugs tested, a popular one, appeared to have had strong negative effects. The study generated a good deal of controversy, and the results were debated by the companies that produce the drug, physicians using it for treatment, interest groups, and statisticians. The original investigator refused requests by independent investigators for the data.

The requesters filed suit under the Freedom of Information Act, maintaining that the data were collected under a federal contract for the National Institutes of Health and so must be regarded as public, except for identification of individual participants in the study. In a 7-2 decision, the court ruled against disclosure. Writing for the majority, Justice William Rehnquist maintained that the law applies to records actually in the government's hands. Because NIH had not asked for the data (at that time), the agency could not be used as a vehicle for getting the data. See Cecil and Griffin (in this volume) for details.

Research Versus Administrative Functions of Data

The emphasis in this paper is on sharing information for scientific research purposes. There is much less stress on sharing for commercial purposes, and no attention is given to data shared in the interest of making specific administrative or judicial decisions about an individual. The distinction between the research function and the administrative function of information is important here. It parallels one drawn by the Privacy Protection Study Commission (1977) and adopted in some recently proposed bills on privacy in research and statistics.

The distinction is important since the rules that govern access to records for purposes of making a decision about an individual must differ from those governing access for research. For instance, access for administrative purposes can carry major consequences for an individual, as in credit reporting and criminal records. Access by researchers generally carries no such direct consequences. To judge from evidence obtained by the Privacy Protection Study Commission, abuses are more likely in the administrative use of data; thus, the focus of government rules and professional codes needs to differ depending on who collected the records, who has access to them, and what the purpose of access is.

Despite differences in function, administrative records can often be used for research purposes: see, for example, Chelimsky (1977) on the use of criminal justice records in evaluating crime control programs; Conger et al. (1976) and Peng et al. (1977) on using records to assay accuracy of response in educational surveys; Del Bene and Scheuren (1979) on statistical uses of administrative records from the Social Security Administration and other government files for studies of disability; and Bishop (1980) on energy consumption experiments. The uses of administrative records in public health research are sufficient to justify an annual conference on records and statistics that is sponsored by the National Center for Health Statistics and other agencies.

Access to administrative records for research purposes can be at least as important as sharing data originally collected for research purposes, but it raises different problems. The laws or rules governing confidentiality of administrative records on individuals or institutions, for example, can impede researcher access unless special exemptions are created.

Such exemptions do appear for certain kinds of research in the Privacy Act of 1974, governing federal records, and similar exemptions appear in the laws of other countries (Mochmann and Muller, 1979; Flaherty, 1980). However, the opportunity for access to addresses of taxpayers maintained by the IRS virtually disappeared with the Tax Reform Act. Rules in the commercial sector vary considerably, and decisions about permitting access appear to be mostly ad hoc, systematic only for the larger companies. Because the situation for private companies is so poorly explored (very little data on access practices exist for administrative records), most of the material here focuses on public administrative or research records.

Contract and Grant Requirements

Two common funding mechanisms for publicly supported research are contracts and grants. Contracts can be and often are written to ensure that the products, a report and the information on which it is based, are provided to government at the contract's end. The idea of data sharing emerges most often in contract work, where the data belong, at least in principle, to the government agency that asked for them. In practice, the accessibility may be explicit in contract provisions (Garner, 1981), but it may be debated in the courts regardless of such provisions.

Research grants also result in data that can be shared, but there has been little stress on routine sharing of such data, partly because the data have been treated as property of the principal investigator. Another reason for less attention to data sharing in grants is that most grants are for the support of laboratory research in which replication of the research, rather than reanalysis of individual records, is paramount.

Precedents for contract requirements to share data are easy to find. The data used in the Coleman et al. (1981) analysis of the relative effectiveness of private and public schools are part of a national longitudinal study of high school students conducted for the National Center for Education Statistics (NCES) by the National Opinion Research Center (NORC). The contract between NCES and NORC specifies that data would be turned over to NCES for storage and distribution. However, although the data were available to other analysts when controversy erupted over the Coleman et al. work, few critics had actually reanalyzed the raw data. Since then, other analysts have worked with the data (Antonoplos, 1981). Of course, access alone will not resolve some policy arguments about the work. For example, measurement of family income was based on children's responses to multiple-choice questions, a process that warrants special attention and defense.

Analogous provisions were incorporated into Department of Energy requirements for 16 recent public utility demonstration projects on peak-load pricing. The data produced and their documentation must be furnished to the department (Federal Energy Administration, 1976) for synthesis and reanalysis. Provisions to ensure that information will be made available to the research community have also been incorporated into contracts by the National Institute of Education for the National Assessment of Educational Progress, an annual survey of student performance conducted for the Education Commission of the States, and by the National Center for Health Services Research for Michigan State University's Data Center on Long Term Health Care (Katz et al., 1979), and others.

THE NATURE OF SHARED INFORMATION AND VEHICLES FOR SHARING

The nature of the information that is made accessible varies a great deal. Alloy phase diagram data are consolidated and made available to scientists and engineers through the National Bureau of Standards and the American Society for Metals. The Materials Properties Data Center stores and disseminates machine-readable data on tests on metals and ceramics to government, commercial, and academic users through a facility at Battelle Laboratories, and analogous on-line facilities are under development by the Copper Development Association, the Materials Properties Council, and others (see National Research Council, 1980). The National Bureau of Standards has a major brokerage role in these and in the Fundamental Particle Data Center, the Diffusion in Metals Center, the Data Center for Atomic Transition Probabilities, and the Crystal Data Center, to which physical scientists and engineers contribute.

Videotapes of selected commercial and public television broadcasts are accessible to communications researchers, historians, and others at the National Archives and in specialized libraries at George Washington University, Vanderbilt University, and elsewhere (Adams and Schreibman, 1978). Oral history tapes are maintained at Columbia University and elsewhere. Results of acoustic tests are shared, too. One of the dramatic recent examples of the latter involves Bell Telephone Laboratories' audio recordings, generated as part of research under Arthur C. Keller during the 1930s, which recorded, among others, the Philadelphia Orchestra under Leopold Stokowski. The audio products of these technical tests on stereophonic recording methods, amplification, processing, and the like are maintained at the Rodgers and Hammerstein Archives at the New York Public Library.

Educational data from large-scale surveys are often made available through a variety of private and public agencies, as are health statistics and welfare data from surveys and social experiments (see below).

Such data have been used in basic sociological research to test theoretical models, but they are probably used more often in applied research to anticipate or estimate the effects of changes in tax law, Social Security and welfare rules, and the like. The administrative vehicles for distribution of these data include general government facilities, such as the National Archives (see Dollar and Ambacher, 1978); specialized ones, such as the Bureau of Labor Statistics, the National Center for Health Statistics, and others (see the review by Duncan and Shelton, 1978); academic data banks at the University of Michigan, the University of California, the University of North Carolina, and elsewhere; and private distributors such as DUALabs.

The U.S. National Oceanic and Atmospheric Administration (NOAA) operates a variety of agencies that facilitate or serve as a vehicle for sharing numerical information internationally. At the National Oceanographic Data Center, for instance, routine observation data from private and public sources are continually pooled and updated. The National Geophysical and Solar Terrestrial Data Center archives and distributes data relating to solid earth physics, e.g., volcanoes and earthquakes, geothermics, etc. The National Geodetic Survey Information Center distributes mapping information in machine-readable and other forms to federal, state, and local agencies and scientists.

Whatever the administrative vehicles for sharing data and the nature of the shared data, the process can be remarkably interdisciplinary. For example, economists Cain and Watts (1970) have reanalyzed data produced by educational researchers Coleman et al. (1966) to reach conclusions about the effectiveness of compensatory education. Criminal sociologists Bowers and Pierce (1975) have rebutted Ehrlich's (1975) econometric analyses of the effect of capital punishment on homicide rates, based on publicly available data. Anthropologists have used satellite photos that were initially archived for agricultural and geophysical research to understand herd migration and the effect of new wells in North Africa. The productivity of cross-discipline conversations is also reflected in reanalyses of meteorological experiments (e.g., Braham, 1979, and his critics).

Of course, the feasibility of storing and distributing data depends on the information's character. It seems fair to say that machine-readable numerical data tapes are more suitable for routine sharing, and that more is understood about efficiency in their production and distribution than for some other kinds of information, such as videotapes, partly because experience with the latter is more recent. The problem of ensuring individual privacy and confidentiality has received more attention and appears to be more tractable for statistical research data than for other information. For example, blocking out faces is possible in videotape research on the behavior of children or adults in classrooms, but it is difficult. Voiceprint analysis and other methods may make identification possible in analysis of videotapes and audio-taped oral histories.

Because of the diversity of the kinds of data that researchers share for scientific purposes, one has to recognize major differences in the nature of the information that is shared.

Source Lists

There is of course no universal list of the information that is routinely made available for scientific analysis, although archives that handle machine-readable data often issue regular reports on the data maintained. For instance, the National Technical Information Service (NTIS) and the Office of Federal Statistical Policy and Standards (OFSPS) have regularly issued a Directory of Federal Statistical Data Files to assist users in locating what they need. Similar lists are issued by operating agencies for special user groups, e.g., the Directory of Federal Agency Education Data Tapes (Mooney, 1979). The problem of maintaining useful inventories of data tapes that can be shared is complicated and severe enough to have received the attention of President Carter's Reorganization Project on the Federal Statistical System. At least one commercial directory is available, the Encyclopedia of Information Systems and Services (Kruzas and Sullivan, 1978), which covers bibliographic as well as numerical machine-readable data, but it is not as thorough in coverage as the government listings noted above.

Such lists pertain to data that are stored and distributed by standing archives rather than by individual scientists. To identify new data that may eventually be shared, formally or informally, by scientists or institutes, the annual reports of research supported by private foundations or public agencies can be helpful. Catalogs of applied research and evaluation projects are issued regularly by the U.S. Department of Education and the U.S. Department of Health and Human Services (for example, 1983), the NTIS, and others. The U.S. General Accounting Office issues Federal Information Sources and Systems (for example, 1976, 1980b), describing about 1,000 federal systems bearing on fiscal, budgetary, and program-related data, and Federal Evaluations (for example, 1980a), covering over 1,700 reports on specific programs. Either of these reports can be used to guide searches for numerical data that can be reanalyzed by independent researchers.

The final broad class of sources includes statistical reports issued by the government or commercial vendors. The federal government, for instance, serves as a broker in consolidating statistics from disparate sources in such periodicals as Copper: Quarterly Report, Forest Products Review, Printing and Publishing Quarterly Report, Condition of Education, and others. Some of the statistics in these publications are based on microrecords that are available from government agencies, such as the Social Security Administration, and from commercial sources, such as Dun and Bradstreet and McGraw-Hill Information Systems Company.

No formal research appears to have been published on the utility of directories such as these, nor have there been any published critiques of the documents.

International Aspects

Data sharing is not confined to researchers in the United States, of course. Danish and German data archives, for instance, serve European social scientists with an interest in accessing and storing social data from field studies (see, e.g., Kaase et al., 1980). New professional organizations such as the American Society of Access Professionals, the International Association for Social Science Information Service and Technology, and the International Federation of Data Organizations have helped to consolidate social scientists' interests in analyzing machine-readable data (Mochmann and Muller, 1979). The International Federation of Television Archives was created by representatives of the broadcasting companies' television archives, and membership is extended to university-based and other TV archives (Schreibman, 1978). International organizations such as the Organization for Economic Cooperation and Development and UNESCO have begun to try to establish guidelines on data sharing. International exchanges are not uncommon in engineering, to judge from the American Society for Metals/National Bureau of Standards joint effort on data sharing for construction of alloy phase diagrams. A collaborative 12-country effort to assay academic achievement of students, the International Educational Assessment (Jaeger, 1978; Postlethwaite and Lewy, 1979), illustrates a similar cooperative effort in educational research.

At the level of the individual researcher, examples of sharing across national boundaries are not difficult to find. The randomized field experiments on nutrition and educational enrichment in Colombia are, for instance, something of a milestone in demonstrating effects of such programs (McKay et al., 1978), and a small group of Colombian and U.S. researchers continue to reanalyze machine-readable results. Exchanges and cooperative projects are not as frequent as they ought to be in engineering, according to the National Research Council (1980), because of problems in nonuniform nomenclature, testing and reporting methods, quality of input, and language. Similar problems doubtless affect sharing in the social and behavioral sciences. Aside from single projects such as the International Educational Assessment and sporadic individual sharing, the stress in social, behavioral, and educational research is on one's own country's data. Rules governing international information flows are generally designed for commercial record systems, but they may also apply to scientific data (see Boruch and Cordray, below).

Consolidation Level of Statistical Data

The level of consolidation of the data that are shared also varies. In education, for instance, some archives store individual (and anonymous) student responses to items in achievement tests and make the data available for reanalysis along with other information: for example, Gomez (1981) on tests of ability measures for children of Colombian barrios. More commonly, however, test data on individuals are consolidated to produce a total score. Such totals, for achievement tests, indices of functional or social mobility, or indices of income, have typically been available for reanalysis in educational, psychological, and sociological research.

In the archives that make institutional data available, on banks for example, the data may be aggregated in such a way as to prevent analysis of individual banks, since disclosure of confidential information on the institution may be illegal or unethical. Rather, the independent analyst has access only to summary data on a sample of small clusters of banks, as in the Wisconsin Income and Assets File (Bauman et al., 1970), or on data aggregated to the regional or state level, as in most published reports of the U.S. Census Bureau. In still other cases, the data may be made available as summary statistics, obtained from a facility that analyzes the raw data according to prescription of the data requester, e.g., some research on Social Security Administration files (Alexander and Jabine, 1978) and on Internal Revenue Service files under the Tax Reform Act of 1976 (Alexander, 1981). A minimal sketch of this style of cluster-level release appears below.
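The sketch that follows is only an illustration of the idea, not any archive's actual procedure: institution-level records are reduced to cluster counts and means, and clusters too small to hide any single member are withheld. The field names, the threshold, and the figures are all hypothetical.

from collections import defaultdict
from statistics import mean

MIN_CLUSTER_SIZE = 3  # illustrative suppression threshold

def cluster_summaries(records, key, value):
    # Group institution-level records into clusters and release only the
    # count and mean for clusters large enough to hide any one member;
    # smaller clusters are suppressed entirely.
    clusters = defaultdict(list)
    for record in records:
        clusters[record[key]].append(record[value])
    return {
        k: {"n": len(v), "mean": mean(v)}
        for k, v in clusters.items()
        if len(v) >= MIN_CLUSTER_SIZE
    }

banks = [
    {"region": "A", "assets": 120}, {"region": "A", "assets": 95},
    {"region": "A", "assets": 140}, {"region": "B", "assets": 300},
]
print(cluster_summaries(banks, "region", "assets"))  # region B is suppressed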

Much less fine-grained data are customarily available as the summary statistics published in research reports or journal articles, and a good deal can be learned from these. Indeed, what is learned may eliminate or reduce the need for access to the raw data. To the extent that tabulated statistics are designed to exploit all the information in a sample and one is willing to trust that the analysis is appropriate and carried out as described, there may be no need for the raw data from a particular study. That journal publication of even crude details of analysis can be useful in detecting errors in analysis, and that some errors will be important and warrant obtaining original data, is clear, however: see, for example, Good (1978) and Wolins (1982) for lessons, based on journal articles, about mistakes in analysis and inference.

There are no generally accepted guidelines on what to publish and, partly as a consequence, practice is not uniform. In the interest of ensuring that readers can understand the original analysis and can verify it or not, at least superficially, suggestions on what to publish have been developed by Kruskal (1978) for science indicators, Mosteller et al. (1980) and Chalmers et al. (1981) for journal editors, and the U.S. General Accounting Office (1978) for federal evaluation reports. Such guidelines stress including information about the nature of samples and randomization, statistical power and significance levels for tests, confidence intervals, the model underlying analysis, and so on.

PRIVACY AND CONFIDENTIALITY AND PROPRIETARY INTERESTS

Two issues in data sharing are debated often. They bear on confidentiality of information and privacy of individuals on whom records are shared, and on proprietary interests in capitalizing on data. The value of the data themselves, less often debated, is at least as important as are other matters treated in the remainder of the report.

Privacy and Confidentiality

If the information shared for scientific purposes bears on individuals or institutions, then privacy may be a critical issue. Partly as a consequence, a good deal of work has been done on understanding when information on identifiable individuals should remain confidential and how to ensure confidentiality. The work is international, having been undertaken in the United States, Canada, Germany, Sweden, and elsewhere. It spans disciplines, solutions to related problems being developed by statisticians, lawyers, social and behavioral scientists, and others. The following sketch of some developments is based on Boruch and Cecil (1979).

General strategies for ensuring confidentiality can be classified into three broad categories: statistical, procedural, and legal. Statistical strategies include those used in initial data collection, e.g., randomized response, contamination, and response aggregation methods, and so ameliorate problems of later data distribution. They also include methods used in the data distribution process to protect against deductive disclosure of information about identifiable individuals based on nominally anonymous records. Deductive disclosure here refers to the possibility of deducing that a particular record, stripped of identifiers, belongs to a particular known individual, or deducing that identified individuals have certain characteristics from published statistical tables (or public-use tapes) and collateral information on the individual. Staff of census bureaus in the United States, the United Kingdom, and Sweden, for instance, have developed algorithms to determine if deductive disclosure is possible in releases of series of statistical tables. The strategies developed to reduce the likelihood of such disclosure include special numerical rounding techniques, error inoculation, and repeated subsampling.
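To make the first of these collection-stage strategies concrete, the sketch below simulates the classic randomized-response technique due to Warner (1965). The probability p, the true prevalence, and the sample size are illustrative; the estimator follows the standard identity lam = p*pi + (1-p)*(1-pi).

import random

def warner_response(is_member, p):
    # With probability p the respondent answers the sensitive statement
    # directly; otherwise its complement. The interviewer never learns
    # which statement was answered, so no single "yes" is incriminating.
    if random.random() < p:
        return is_member
    return not is_member

def estimate_prevalence(answers, p):
    # Observed "yes" rate: lam = p*pi + (1 - p)*(1 - pi), hence
    # pi_hat = (lam - (1 - p)) / (2*p - 1), valid for p != 0.5.
    lam = sum(answers) / len(answers)
    return (lam - (1 - p)) / (2 * p - 1)

random.seed(1)
true_pi, p, n = 0.20, 0.75, 10_000
population = [random.random() < true_pi for _ in range(n)]
answers = [warner_response(member, p) for member in population]
print(f"estimated prevalence: {estimate_prevalence(answers, p):.3f}")

The researcher recovers a consistent estimate of the population prevalence without ever being able to say what any one respondent admitted, which is what makes later sharing of such records less hazardous.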

Procedural strategies generally involve nontechnical approaches to reducing privacy or confidentiality problems. The simplest include not obtaining identification at all at the data collection stage or eliminating identifiers from records at the data distribution stage. More elaborate strategies have been developed to permit linking records (of the same individuals or institutions) from different archives without violating promises of confidentiality made to the individuals on whom records are maintained or laws or rules governing access. Such strategies have been used in small and large linkages, in the private and public sectors, to produce linked records that are more useful for research than the individual files. Applications have been made in marketing, law and sociology, psychology, education, welfare, criminal justice, and other research.

Legal strategies generally focus on the problem of ensuring that research data on identifiable individuals are used only for research purposes. They include so-called testimonial privilege statutes that prevent the courts and administrative agencies from appropriating research records on individuals for the purpose of legal prosecution, and some court decisions are oriented in the same way. Most existing statutes apply to records on individuals, not to records on institutions, though protection of institutional records is in fact older in census law. The new bills in this genre (Privacy of Research Records Act, Confidentiality of Statistical Records Act) would be helpful to a researcher with an interest in analyzing another researcher's data, permitting access to records on identifiable individuals under specified conditions.

There are major gaps in the existing legal protection for privacy and in the associated provisions for data sharing. As noted above, most laws apply to individuals, not institutions. Consequently, a researcher working on police departments could offer no statutory assurance that research data on individual departments would be used only for research purposes. Similarly, hospitals that cooperate in epidemiological research have no general protection against the problem of an outsider suing the government to obtain data for nonresearch purposes. An exception involves work covered by the Public Health Service Act.

More important perhaps, the current laws are fragmented, covering special areas such as criminal justice or mental health research; the Census Bureau, the Social Security Administration, and a few other agencies have different specialized statutes. Bills to ensure individual privacy and researcher access, such as the proposed Privacy of Research Records Act, would help to make the law more uniform, but not much work is being done on them. A third major limitation in existing laws is that they usually apply only to federally supported research.

Proprietary Interests

Two kinds of proprietary interests are important. The first concerns individual scientists and the "right" to analyze data, especially data collected by oneself. The second concerns institutional interests in a particular data set and the "right" to control who analyzes it.

In some research, individual interests are often negligible. For instance, individual proprietary interest is now unimportant in a good deal of economic research because the work often relies on public-use tapes or published statistics. The reanalysis of National Bureau of Economic Research studies by Feldstein (discussed below) illustrates the type. One might argue, however, that publication has become routine because proprietary interests have in the past prevented access to individual records on competitive institutions. And exceptions do occur. For instance, it was not possible for some analysts to reanalyze Ehrlich's work on the impact of capital punishment on homicides because consolidated files of the public data he actually used were unavailable; the file had to be reconstructed by Bowers and Pierce (below).

Individual interests are less important when government requirements, represented in contracts or in statements of regulations about grants, specify that the government is entitled to the data. This broad class does not apply to all government agencies, but it is material to some important ones. Both the National Institute of Justice (NIJ) and the National Science Foundation (NSF) maintain provisions that require grantees to make data available to other scientists, though conditions of disclosure differ a bit. A number of agencies regularly include provision for construction of public-use data tapes in contracts for surveys, e.g., the National Center for Education Statistics (see below). And other agencies have similar contract provisions for irregular special research, e.g., the graduated work incentive experiments of the U.S. Department of Health and Human Services and the energy consumption experiments of the U.S. Department of Energy.

Individual interests are also less material for research areas in which knowledge is advanced better through replication of experiments or reanalysis of summary statistics (e.g., covariance matrices) than through reanalysis of raw records. Much laboratory research in psychology falls into this category (see the Journal of Experimental Psychology); the same is true, though not to the same degree, for X-ray crystallography in chemistry.

Individual proprietary interests emerge more often in research supported by public or private sponsors that have neither a policy on access nor consistent practice. More important perhaps, the ability of a researcher to analyze data he or she collects before anyone else does so is regarded as a privilege or right by scientific custom.

This tradition has been reiterated explicitly by, among others, the Committee on Scientific Freedom and Responsibility of the American Association for the Advancement of Science and its chair, John Edsall (Dickson, 1980). Indeed, a tradition of not sharing, or of sharing very selectively, seems to be not uncommon in the history of science, and secrecy has not always served only selfish interests. During the seventeenth century, for instance, John Graunt wondered in Bills of Mortality whether it is wise to make statistical data on health known generally, though the interest in advancing a new quantitative political science is clear. Earlier, of course, Copernicus and Galileo were catapulted out of a "Pythagorean privacy of research" (DeSantillana, 1955), a privacy that had some implications for self-advancement and self-preservation as well as for the advancement of science.

This earlier custom has changed, partly because of research sponsors' interests in the products of their investment, as the NSF and NIJ policies suggest. The occasional but dramatic episodes of fraud may also be pertinent. The change too may stem from a gradual enlargement of what scientists view as an adequate level of communication in science, an ethical matter for Pigman and Carmichael (1950), among others. Jeremy Bernstein (1978), for example, appears to be astonished that Rosalind Franklin and her assistant, Gosling, did not publish their work on DNA structure: "They simply treated it as private and personal data" (p. 154). The idea of making more information available, including raw data, is reflected as well in recent editorial policies for some professional journals and in some codes of conduct.

There is in the professional literature a demarcation between individual proprietary interests before and after a report is issued. That is, the privilege of first analysis ends with publication of findings in a scholarly journal. After publication, it is argued, scientists have an obligation to submit results to confirmation, openness to criticism being implicit in publication of a scientific article. Some forms of confirmation or criticism are simply not possible if based on the published material alone. It is partly for this reason that some professional codes and journal policies that encourage data sharing hinge on publication.

Data sharing is acceptable, even encouraged, for some research supported by nongovernment organizations. The American Chemical Society journals, which include many articles by authors in the private sector, make data supplements available to permit independent appraisal of conclusions in published articles. Contributions by commercial laboratories to cooperative efforts, such as the American Society for Metals/National Bureau of Standards alloy phase diagram project, reflect the same spirit. There is not enough evidence on sharing of scientific data in the commercial sector to make any generalization about its frequency.

In social science research, private foundations such as the Russell Sage Foundation have supported secondary analyses of data (e.g., from evaluations of "Sesame Street") and so encourage data sharing in some measure. But most private foundations appear to ignore the matter entirely.

Whether data are shared, or indeed can be shared, when publication is based on business-supported research varies a good deal. As noted above, some data from independent laboratories, including commercial ones, are pooled for common use in the alloy phase diagram project of the American Society for Metals/National Bureau of Standards, in the Materials Properties Data Center, in some American Chemical Society journals, and others. On the other hand, evidence on toxic chemicals, radiation, pollution risks, and other sensitive topics has often been difficult to obtain. Even reports containing only summaries of evidence are at times impossible to extract (see von Hippel and Primack, 1972; Pigman and Carmichael, 1950). Some of the difficulty in getting data concerns unpublished work or administrative data. But this does not make them any less useful for research, especially when such data are labeled as scientific evidence in public hearings (National Research Council, 1977). Other difficulties involve potential disclosure of institutional imperfections or of what could be exploited in commercial competition. For instance, at least a few contributors to the Materials Properties Data Center are wary about subsequent disclosure of the fact that they supplied certain information because it may put the materials they sell in a bad light. Finally, institutional interests may only be a smoke screen. For example, if data prepared by a company's research unit on quality-of-work-life experiments are found to be poor by independent analysts, individual careers may be negatively affected. In any case, it is a burden to supply information, and the benefits to an enterprise may not be worth the trouble.

ENCOURAGING DATA USE

It is obvious that merely making data accessible does not guarantee that they will be used. Scholars or other potential users may need instruction in how to obtain access to the information and how to use and evaluate it. They may need guidance about exemplary uses and critical review of their own analyses. Incentives may be needed to encourage better exploitation of the data. Most of these problems have been identified elsewhere by specialists in machine-readable data archives (e.g., Robbin, 1981a), engineering (Mindlin and Kovacs, 1979), and the social and behavioral sciences and education (Boruch et al., 1981b). The following remarks illustrate a few approaches to encouraging data use.

The National Assessment of Educational Progress has for the past 10 years conducted annual surveys of student proficiency in conventional academic subjects, such as arithmetic and reading, and less conventional ones, such as music and visual arts. The achievement tests and sample on which the surveys are based are well designed, judging from commentary on the project. The information generated has been used at local, state, and national levels to understand student performance. But until recently, the raw data on student responses to tests, though available, have not been exploited well by academic researchers. Partly to understand the utility of the data, the National Science Foundation has supported cooperative institutional research on the topic. So, for instance, exemplary analyses have been undertaken by well-known researchers to provide models of what can be done. Workshops have been organized for interested researchers to learn about the data and about new methods of analysis. The most recent round of such workshops, in 1981, included participation by science educators, economists, educational researchers, psychologists, and others. The workshops are set up so that participants in the first round prepare their own analyses and present the work for criticism in a second, and the better papers are published in an edited monograph (see Walberg et al., 1981a, 1981b, for details).

Variations on the workshop approach have been tried by the National Opinion Research Center, which developed workshops in 1980 on analysis of longitudinal data available from itself and elsewhere. Short courses on obtaining and analyzing machine-readable data files have been developed by the University of Wisconsin's Data Center (David et al., 1978), the University of Essex Data Archive (SSRC Data Archive Bulletin, January 1983), and other institutions.

A second approach to encouraging data use, supporting research that involves reanalysis of existing data, is natural for many foundations. No special efforts to focus solely on the topic seem to have been undertaken, but support under more general competitive grant programs and in special contracted research is not difficult to find. Illustrations include: work on verifying program evaluations in education, e.g., Wortman et al. (1978) on the voucher experiments in Alum Rock and Boruch and Wortman (1979) more generally, supported by the National Institute of Education; research supported by the Agency for Children, Youth, and Families that involves pooling different data sets in the interest of understanding child and family support systems; and grants for secondary analyses of publicly supported research by private foundations such as the Russell Sage Foundation, e.g., Rossi and Lyall (1976) on the New Jersey negative income tax experiments and Cook et al. (1975) on the children's television program "Sesame Street."

A third set of approaches applies to public policy research and other endeavors for which replication is difficult or impossible and the need for independent competing analyses takes precedence over proprietary interests.

So, for example, a recent report to the Congress and the U.S. Department of Education recommended that major policy research data be subject to simultaneous independent analysis in the interest of balanced information (Boruch et al., 1981a). The controversy sometimes produced by primary analysis in policy research can itself influence reanalysis, as shown in the efforts to acquire and analyze data used by Coleman et al. (1981) in their work on private and public schools (Antonoplos, 1981).

The fourth approach is to depend on professional societies for reporting research. Journals can establish policies that ensure that data are accessible and that capable reanalyses are published (see Boruch and Cordray, in this volume). Indeed, a fair number of journals in economics do publish competing analyses of the same data. Other disciplines stress original data collection more heavily, however, and are less inclined to publish reanalyses that confirm already published findings, even in short notes. Journals can carry notes on the availability of new data sets and can legitimately require full citations when a set is used as a basis for an article. Government publications that summarize data, such as Condition of Education, Social Indicators, and Science Indicators, can also do better in informing interested readers which agency maintains the data so as to encourage reanalysis (see Kruskal, 1978).

WELL-PUBLICIZED EXAMPLES OF DATA SHARING AND NOT SHARING

Some cases of data sharing or failure to share have been dramatized in the popular and professional press. The following briefly describes a few cases and the lessons that might be drawn from them.

Sociology and Education: Public and Private Schools

High School and Beyond is a longitudinal study of students based on a national probability sample, conducted for the National Center for Education Statistics (NCES) by the National Opinion Research Center in Chicago and directed by sociologist James Coleman. Begun in 1980, the project's main purpose is to follow the progress of young people during the critical transition from high school to work, college, and family, with follow-up data being collected every 2 or 3 years. It is a massive undertaking, involving over 50,000 adolescents and 1,000 schools in the initial sample, with oversampling of special groups, such as Hispanics. NCES makes the resulting data accessible to educational, economic, and other researchers (National Center for Education Statistics, 1981a). This includes storage and distribution of raw microdata tapes, tape files that are tailored to commonly used statistical analysis packages, and files that are constructed for special uses.

Coleman and his colleagues issued a draft report in April 1981 containing analyses of private and public schools, based only on the first survey wave (i.e., a cross-section), that generated considerable controversy (National Center for Education Statistics, 1981b; Coleman et al., 1981). The draft report suggested that private schools have a greater impact on student performance even after one accounts for differences in background characteristics of students, geography, and other obvious influences. Arguments against the analyses were made in the popular press, e.g., the New York Times and the Washington Post, as well as in professional conferences sponsored by the National Institute of Education (Antonoplos, 1981) and the National Research Council (1981).

The data on which these analyses were based were made available in March 1981 by NCES. Until the controversy emerged, however, no major analyses had been undertaken. The controversy appears to have spurred faster partial analyses of published statistics, notably on adequacy of sample size and on measures of academic achievement, and at least one competing analysis of the raw data (Page and Keith, 1981). There seems to be a good argument for contracting for several simultaneous competing analyses in such policy-sensitive cases.

Economics: Feldstein and the Effects of Social Security

Feldstein (1974) concluded that the Social Security system discourages household savings considerably, thereby decreasing the money available nationally for investments. The report, published in a premier journal, was called "one of the most influential" of such works, "part of the conventional wisdom" of social security economics, and "important" by popular and professional writers. The work was widely publicized and cited by economists and influenced federal policy. Other analyses of similar data had been undertaken, of course, some reaching similar conclusions about the direction though not the magnitude of the effect (e.g., Darby, 1979) and others finding no effect (e.g., Munnell, 1974, of the Federal Reserve Bank of Boston).

Several years after the publication of Feldstein's work, D.R. Leimer and S.D. Lesnoy (1980) of the Social Security Administration undertook a critical reanalysis. They initially planned to examine what they regarded as implausible assumptions in the complex set of models that Feldstein used as a basis for analysis of time-series data on savings. To initiate their examination of the sensitivity of the Feldstein models to alternative assumptions, Leimer and Lesnoy attempted to replicate the original analysis, and they discovered a programming error in the original analysis, an error whose correction dramatically changed the nature of the estimated relationship. The error involved the definition and numerical computation of gross social security wealth, an indicator of retirement benefits anticipated by present and future beneficiaries.

Subsequent analyses of the corrected time series suggested that the effect of gross and net social security wealth on savings is negligible for the data available (1930-1974) and the models tested, including Feldstein's. Leimer and Lesnoy also suggested that their conclusions remain unchanged if the original models are modified to incorporate alternative benefit and tax perceptions, are applied to different time periods, or use different models and indicators of Social Security wealth. Feldstein, who was chairing the session of the American Economic Association meetings at which the results were presented, concurred that an error had been made but made no statements suggesting a change in his beliefs about the direction of the influence of Social Security on savings.

Feldstein assisted in the discovery of the error by making both published and unpublished reports available to Leimer and Lesnoy, as indeed he should, and provided advice and reactions to the authors' questions about why their results differed from his (Leimer and Lesnoy, 1980). The authors also went a couple of steps beyond Feldstein's original analysis with the corrected data to ensure that the analyses are not sensitive to plausible alternative assumptions, and those steps are distinctive. They also recognized that other model specifications may yield still different results, though they present no options.

Zoology and Genetics: The Kammerer Affair

During 1910-1920, zoologist Paul Kammerer issued a series of reports, summarized later in book form, on experiments that purported to show that acquired characteristics could be inherited. Particular experiments involved the production of midwife toads, a species that normally does not have thumb pads but did appear to inherit them following Kammerer's techniques, and of salamanders with other characteristics.

The work was challenged by William Bateson, who tried between 1917 and 1926 to examine Kammerer's specimens. According to Zirkle (1954), he was not successful in doing so until 1926. Upon succeeding, his finding that "the acquired characteristics, which Kammerer claimed to have made hereditary, turned out to be India ink" (p. 189) was eventually published in Nature (cited in Zirkle, 1954). Kammerer eventually published retractions of his claims, maintaining that the specimens had been altered by an assistant.

The controversy appears to have clouded some writers' vision in that they maintain the Kammerer work was legitimate but not replicated. It fed politicized science in the sense of lending support to Lysenko and other Soviet geneticists for their views. The important consequence for science is identifying what was not true, i.e., that there was no evidence for the contention that species could be made to inherit characteristics acquired by their antecedents.

Pathology and Experimental Biology: Hodgkin's Disease Cell Cultures

Researcher John Long claimed success in establishing cell cultures from patients with Hodgkin's disease in work at the Massachusetts General Hospital (cited in Dickson, 1981). Recent work has concluded that the cell cultures are not authentic and are almost certainly derived from owl monkey tissue. The importance of the claim and its subsequent rejection lies partly in the need to develop such cultures to understand how to treat the disease, in the failures of other laboratory attempts to establish the culture, and in the frequent problem of contamination, i.e., original cells being supplanted by a contaminant. Long has said that he believed the cell lines authentic but now believes they were contaminated. The problem is not uncommon, to judge from the frequent contamination of cells by HeLa cells, and identification of the change is difficult. Complicating the matter, however, is the contention that the original investigator forged data, and admitted to forgery, in a major grant application (see Dickson, 1981).

The discovery that the cells were contaminated was made after four major papers on the topic were published, the papers being cited frequently in the professional literature. Discovery was possible in part because samples could be and indeed were available for independent analysis. In particular, the head of the hospital pathology department at which the investigator worked sent samples to UCLA's cell culture laboratory, and their work was later confirmed by the New England Regional Primate Center. The contaminant had indeed been used for virus research in the same laboratory. An independent audit of the work undertaken by the hospital research staff also confirmed that three of the four cell lines were nonhuman, the fourth being human but not clearly linked to Hodgkin's disease tumors (Harris et al., 1981). The results are important in understanding that cell lines have not yet been established. But it is not yet clear how much theory, constructed on the basis of spurious data from work with the cell lines, will be affected.

TYPICAL EXAMPLES OF DATA SHARING AND THEIR PRODUCTS

The controversies are interesting but do not reveal much about how data sharing is accomplished or about the products of the effort. The cases discussed briefly here have been selected for their diversity, including the size of the original project and disciplinary area and the lessons they teach. (The topical categories here overlap with those of Cecil and Griffin, in this volume, but the examples and substance differ.)

Education and Training

Of all purposes of shared data, the pedagogical one is perhaps the most obvious. Datta's (1977) history of research on Head Start preschool programs, for instance, presents persuasive evidence that both original analyses and reanalyses of the original evaluations have been used heavily in graduate training. The idea is not new, of course. Judging from the use of small sets of raw data from actual studies in classical textbooks by Kendall and Stuart and by Snedecor and Cochran, reanalysis of data is commonplace in college and university training. But there are no statistics on frequency of use. Very limited evidence from experience in a Northwestern University program suggests that half of the published papers on reanalysis are done by graduate students or postdoctoral fellows (see Boruch and Wortman, 1978). Of the papers catalogued as products of reanalysis in the Labor Department's longitudinal surveys of labor markets, in NCES's national longitudinal studies of high school students (Peng et al., 1977), and in Project Talent, at least 10 percent have appeared as graduate theses or dissertations. Some special training programs and short courses are built around a particular data set: e.g., NSF has sponsored competing analyses of data from the National Assessment of Educational Progress (Walberg et al., 1981a, 1981b). Efforts that have involved student and faculty collaboration and have resulted in published products include the Moynihan and Mosteller (1972) edited volume on reanalysis of the Equality of Educational Opportunity surveys and the Cook et al. (1975) reanalyses of data generated in field tests of the children's television program "Sesame Street."

Verification, Disconfirmation, and Robustness Analyses

Partly because weather control experiments are expensive and time consuming, and partly because the inferences drawn can have dramatic implications for social policy, at least some of these projects have been subjected to intensive reanalyses. Among others, Project Whitetop (Braham, 1979) has received considerable attention because the original work suggested that silver iodide seeding has negative effects on precipitation. Some reanalyses (e.g., Dawkins and Scott, 1979) appear to confirm this and illuminate the reasons for it. Others are skeptical that the effect is real. This particular work is due in no small measure to remarkable record-keeping in the original experiments and agreement among the original investigators to share the data.

In economic research, the Leimer and Lesnoy (1980) examination of Feldstein's (1974) original work on the effect of Social Security on capital stock (discussed above) was initiated to determine whether relaxing certain implausible assumptions had any effect on the conclusions. These analyses relied on data available from published statistical abstracts, as did the reanalyses of Ehrlich's (1975, 1981) work on capital punishment by Bowers and Pierce (1975, 1981) and others.

In educational research, Moskowitz and Wortman (1981) have reanalyzed the Riverside school desegregation data on reading achievement of Mexican-American children, and their results agreed with the original analyses, despite multiple analyses with more sophisticated methods. Bejar and Rezmovic (1981) reexamined data generated in the Cali, Colombia, randomized experiments in education for impoverished preschool children to corroborate original findings (by McKay et al., 1978) that enrichment programs did indeed exert substantial influence on children. In manpower economics, Director (1981), among others, has reanalyzed early work by Mangum (1973) and the U.S. Department of Labor's 1975 studies to argue that gains exhibited by disadvantaged enrollees are almost certainly due to regression to the mean rather than to the programs, a view that differs notably from the original analyses.

Methodological Studies

Recent Colombian experiments involve field tests of different levels of an educational enrichment program for impoverished children, augmented by a nutritional supplement program (McKay et al., 1978). The original analyses were based partly on standardized ability tests adapted to the Spanish-speaking children. The properties of the resulting statistical estimates of ability are not completely understood, though it is clear that the treatments have a differential impact on performance as registered by the tests. In order to understand whether newer methods of estimating ability could be more informative, Gomez (1977, 1978, 1981) exploited the original data using so-called Rasch models, a mathematical representation positing that observed test scores for any individual are a function of latent ability and test item difficulty, each independently estimable (the standard form of the model is sketched at the end of this section). The model appears to yield estimated ability scores on an interval scale and with reasonable statistical properties. It does not change the substantive conclusions on the remarkable effects of the education program.

Shared data from large-scale surveys and social experiments, and perhaps also from physical and engineering studies, are a natural vehicle for studies of the reliability and validity of reporting, calibration, and the like. Some of these are enumerated in Peng et al. (1977) for educational research and in Bielby et al. (1977) for manpower training. Judging from the Peng et al. report, the relative frequency of such methodological papers is notable: 15-30 percent of all published work, depending on one's definition of a methodological study.
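For readers who have not met the model, the standard dichotomous form is given below. This is the textbook statement, and Gomez's exact specification may differ in detail: the probability that person $i$ answers item $j$ correctly is

\[
  \Pr(X_{ij} = 1 \mid \theta_i, b_j)
    = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},
\]

where $\theta_i$ is the latent ability of person $i$ and $b_j$ is the difficulty of item $j$. Because a person's raw total score is a sufficient statistic for $\theta_i$, abilities and item difficulties can be estimated separately, which is what places the estimated ability scores on a common scale with the properties noted above.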

Use in Design of Studies, Programs, or Constructions

The tradition of exploiting data in handbooks is a sturdy one in the physical sciences and engineering, and there has been a recent broadening of interest, for metals and alloys at least, in pooling data for fast-moving technology. The interest is reflected in the report of the Panel on Mechanical Properties Data (National Research Council, 1980) and in the creation of vehicles for sharing, such as the Material Properties Data Center and the American Society for Metals/National Bureau of Standards alloy phase diagram project, discussed above.

The panel's 1980 report suggests that there is strong interest among industries, government, and academic research institutes in having access to data on the mechanical properties of metals and ceramics. The interest appears to be strongest for materials used in aerospace projects; nuclear, solar, and other energy production; transportation; and copper use. The data are used in materials selection, design of the configuration and size of components, manufacturing and fabrication, life estimation, life testing, and failure analysis. There are significant efforts already under way to disseminate such data, according to the report (see below), but "a broad need [still] exists for coordinated up to date reliable data bases that are accessible to different types of users through the various classical and modern methods of dissemination" (National Research Council, 1980:9). In the developing areas, there are still substantial problems of coordination (including cooperation), standardization of methods for soliciting, reporting, and accepting data, and quality control.

One specialized effort in this area is the Mechanical Properties Data Center (MPDC), designed to acquire and distribute information about the properties of materials, especially aerospace materials. When possible, raw data are entered into the system based on test results supplied by private contractors and laboratories and by publicly supported research, along with information on the nature of the tests that led to the data and on metal processing history and composition. Results of 1.5 million tests are said to be available, and some 200 "new specimens" are added each month (Battelle Columbus Laboratories, 1980a, 1980b, 1980c). There is considerable attention to making the data available to users. The vehicles include a computer-based retrieval system keyed on alloy condition and form, the type of test (e.g., for compression or tensile strength), and testing variables such as temperature and load rate (a schematic sketch of such keyed retrieval appears below). The product data are supplied in a variety of forms, including statistical summaries and graphs, individual test results, and reports, and some are consolidated in handbooks and proceedings that are updated periodically. The generic problems in the project are startlingly similar to those encountered in similar projects in social and behavioral data archives.
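The sources cited do not describe the MPDC's software itself, so the following sketch is purely schematic: every field name, record, and value is hypothetical. It indicates the kind of keyed retrieval the description implies, with test records selected by alloy condition and form, test type, and test variables such as temperature.

# Schematic sketch only: the MPDC's actual system is not described in the
# sources cited here, and every field name, record, and value is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestRecord:
    alloy: str            # e.g., "Ti-6Al-4V"
    condition: str        # e.g., "annealed"
    form: str             # e.g., "sheet"
    test_type: str        # e.g., "tensile"
    temperature_c: float  # test temperature, degrees Celsius
    strength_mpa: float   # measured strength

def query(records, *, condition, form, test_type, max_temp_c):
    """Select records by the keys the MPDC reportedly indexed:
    alloy condition and form, test type, and test variables."""
    return [r for r in records
            if r.condition == condition
            and r.form == form
            and r.test_type == test_type
            and r.temperature_c <= max_temp_c]

records = [
    TestRecord("Ti-6Al-4V", "annealed", "sheet", "tensile", 20.0, 950.0),
    TestRecord("Ti-6Al-4V", "annealed", "sheet", "tensile", 400.0, 700.0),
    TestRecord("Ti-6Al-4V", "aged", "bar", "compression", 20.0, 1050.0),
]
for record in query(records, condition="annealed", form="sheet",
                    test_type="tensile", max_temp_c=100.0):
    print(record)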

According to Mindlin and Kovacs (1979), the difficulties include: (1) obtaining access to data, especially in view of proprietary interests; (2) reformatting input data to accord with output criteria; (3) instructing potential users about the system; (4) user suspicion of data that were not generated by the user's agency; and (5) marketing.

The alloy phase diagram program is a joint venture of the American Society for Metals and the National Bureau of Standards. It is dedicated to acquiring, evaluating, and distributing data on microstructural change in alloys as a function of temperature and alloy composition, the data being summarized in standardized phase diagrams. Both private and publicly supported research laboratories supply the basic data. Cooperation among a variety of institutions is necessary, since it is impractical for any single institution to undertake the production of data on all types of alloys. The effort is international, involving research units in the United States, Germany, Japan, and other industrialized countries.

The phase diagram program stresses distribution heavily. Diagrams are published as final or provisional in a journal, the Bulletin of Alloy Phase Diagrams, whose editorial board is international. The journal also carries information on how to use the diagrams, references to source articles, and reports and related information (see Bennett, 1980; National Bureau of Standards, 1980a, 1980b).

Combining Studies

Pooling data on the same topic from several sources or examining several studies simultaneously can be an effective vehicle for better understanding of the topic, though the technical problems can be severe (e.g., nonindependence of the separate data). In the simplest case, of course, a review of the literature constitutes one kind of pooling. The more numerically oriented combinations take several forms (Glass, 1976).

In some research, for example, one level of combination addresses only statistics available in published articles, not raw microrecords (see the sketch below). The approach has been used by Gilbert et al. (1977) to understand the likelihood of success in innovative surgery, by Light and Smith (1971) to reconcile conflicting results, by Smith and Glass (1977) to assay the distribution of successful and unsuccessful methods of psychotherapy, by Gordon and Morse (1975) in examining the likelihood of success and failure in public programs, and elsewhere. There are many such routine uses in engineering. As described above, data on the properties of materials and phase diagrams are constructed from data supplied by a variety of sources (National Research Council, 1980).
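As a concrete, minimal illustration of combining published statistics rather than raw microrecords, the sketch below applies one common scheme, fixed-effect inverse-variance pooling, to invented estimates. The studies cited above did not necessarily use this particular weighting, and the caveat about nonindependence applies with full force.

# Minimal sketch of one common way to combine statistics read off published
# articles: fixed-effect, inverse-variance pooling. The estimates below are
# invented, and the studies cited in the text did not necessarily use this
# particular weighting scheme.
import math

# (effect estimate, standard error) pairs, as reported in separate articles
studies = [(0.30, 0.10), (0.12, 0.08), (0.45, 0.20)]

weights = [1.0 / se ** 2 for _, se in studies]        # precision weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
# Caveat from the text: if the studies share data or subjects, the estimates
# are not independent, and this calculation understates the pooled SE.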

Combining raw data on individuals from surveys and social experiments with records from administrative archives is not common, but it has become more common over the past 10 years, partly because the results are illuminating. Some of the effort is designed to understand the structure of error in administrative records or survey responses or both. The interagency linkage study conducted by the Census Bureau, Social Security Administration, and Internal Revenue Service illustrates the type, though sharing is confined to the federal agencies; the same is true for some program evaluations in health care and social welfare (Boruch and Cecil, 1979). In social research, the purpose of combining data sets is often policy research. Michigan's Archive on Long Term Care, which acquires data on long-term care field experiments, puts them into a uniform format, and makes the files available for policy analyses (Katz et al., 1979), and Columbia's Housing Survey Project (Beveridge and Dhrymes, 1981) fall into this category. So do recent contracts of the U.S. Department of Energy with the Research Triangle Institute for the compilation and standardized analysis of state utility demonstration projects on peak-load pricing that were analyzed earlier in nonuniform, different ways by the individual state utilities (Research Triangle Institute, 1978).

EVALUATION OF DATA-SHARING EFFORTS

While the idea of data sharing in principle is agreeable to many scientists, at least for publicly supported research, what good the sharing does is not often assayed systematically.

To be sure, peer review constitutes a kind of immediate evaluation when plans for large-scale sharing are drawn up and projects that hinge on data sharing are proposed. But these reviews are often neither open to scrutiny nor, more importantly, directed at the utility of the product. The more arrogant directors of a data collection effort may not say that the worth is self-evident, but the implication is there insofar as very little hard evidence on the utility of the information is published. The problem of evidence has become more crucial for federally supported work as a consequence of restrictions in the resources for collecting new information and of increasing congressional and administrative interest in evaluating basic and applied research programs. Apart from political incentives, the problem of understanding how to evaluate the products of data-sharing systems, how to improve them, and when to encourage or terminate them seems a reasonable intellectual problem.

The state of the art in the evaluation of information collection efforts, including the products of data sharing, is underdeveloped. Systematic theory on the cost/benefit analysis of data has only recently been developed (Spencer, 1980), and only for social survey data used in allocating resources by the Congress. Use of data, much less its value, is difficult to measure even when mission-oriented research is carried out and the resulting data are subjected to competing analyses (Boruch and Cordray, 1980).

Nonetheless, some crude methods are available, and they could be applied and refined. The documents issued as a result of analyses of shared data constitute one indicator of the productivity of an archive. But frequency counts of publications, and bibliographies that summarize the products and why they are important, are not common. Exceptions include Peng et al. (1977) and Taylor et al. (1981) on NCES's national longitudinal studies, Bielby et al. (1977) on the Department of Labor's national study of labor supply, Postlewaite and Lewy (1979) on the international educational assessment, and related products issued by the NRC medical follow-up study, Project Talent, Northwestern's Project on Secondary Analysis, etc. Logs sometimes maintained by data-sharing institutions, such as those of the National Assessment of Educational Progress and the National Center for Education Statistics, on requests for tape files, documentation, etc., constitute a major vehicle for tracing further products and their utilization (see Peng et al., 1977).

Frequency counts are at least partly corruptible and insufficient. The corruptibility is fair game for measurement research. Sufficiency might be achieved with other indicators. The quality of the product is important, but systematic research on this is even less common. Exceptions are confined to a few evaluations of medical and oceanographic research programs and of the use of peer ratings and citation counts as bases for judging the adequacy of institutional work (National Research Council, 1981). The strategies developed in those approaches are perhaps generalizable to products generated by data archives but have not been applied.

The process of sharing data, as well as products such as reports, can also be evaluated in some sense. The questions that might be addressed include: How easy is it to find out about data? How easy and efficient is the process of acquisition or distribution? What are the costs, and are they reasonable? How well are data updated, corrected, documented? And so on. Managerial questions such as these are examined at times within archives. But the experience itself is not often discussed in published papers, seems to be less orderly than it might be, and probably would profit from more concerted attention. There are a sufficient number of efforts to develop standards of documentation, by Robbin (1981b) and others, to make some evaluations of this sort possible. But evaluations of processes of other sorts, and especially of product utility, are likely to be more difficult.

Vehicles for simple routine monitoring of the extent and nature of sharing are sometimes available. For instance, the American Chemical Society's journals department head, Charles Birch, maintains records, for articles published since 1974, on the provision of supplements by authors (e.g., raw data) to the Journal of the American Chemical Society. The supplements, in microfiche form, are available through subscription or ad hoc requests, and estimates of the rates of requests for various ACS journals are available (see Boruch and Cordray, in this volume).

Not all journals have a data supplement service of this type, and a monitoring system for those that simply require authors themselves to make data available would have to be invented.

Establishing the impact of sharing, regardless of the quality or number of the physical products and regardless of the process, is likely to be most difficult. Most management decisions based on such data, e.g., at the level of the Assistant Secretary for Policy in the U.S. Department of Health and Human Services, are likely to be barely visible and tangled with other information. Consequently, making an inference about whether the data actually influenced the decision is risky. Design decisions in engineering and experimentation are typically small and forgettable, and evidence on the utility of information is hard to obtain. Deciding whether a scholarly paper, published on the basis of shared information (or for that matter on unshared information), is a distinctive contribution to scholarship is frustrating and difficult, and for some it will be impossible. The whole matter becomes much more difficult with multiple users, of course, when the users are barely identifiable.

In summary, formal evaluations of data-sharing efforts are not common, the state of the art in evaluation is underdeveloped, formal evaluation may be warranted to understand the worth of the activity, and a variety of types of evaluation may be possible.

NOTES

1. Some formal research on levels of accessibility of administrative records has been done by Gordon and Heinz (1979) and Sasfy and Siegel (1982) to understand the influence of the practice and policy of government agencies and the nature and source of demand for information.

2. The quality of reporting summary data and other aspects of research seems to have improved considerably since Pigman and Carmichael (1950) identified good reporting as an ethical obligation of scientists (p. 644): "Even casual inspection [showed] that many articles are not written so that the work can be repeated."

3. The time-series data underlying Feldstein's work and used by Leimer and Lesnoy are accessible in published statistical abstracts, e.g., the Annual Statistical Supplement to the Social Security Bulletin, the Handbook of Labor Statistics, the Current Population Reports of the Census Bureau, and others (see Leimer and Lesnoy, 1980, Appendices D and E).

REFERENCES

Adams, W., and Schreibman, F.C., eds.
1978 Television Network News: Issues in Current Research. Washington, D.C.: School of Public and International Affairs, George Washington University.
Alexander, L.
1981 Proposed Legislation to Improve Statistical Research Access to Federal Records. Unpublished report. Social Security Administration, U.S. Department of Health and Human Services, Washington, D.C.

Alexander, L., and Jabine, T.
1978 Access to Social Security microdata files for research and statistical purposes. Social Security Bulletin 41:~17.
Antonoplos, D., ed.
1981 Proceedings of the National Institute of Education Conference on Conflicting Research Results. National Institute of Education, Washington, D.C.
Battelle Columbus Laboratories
1980a Metals and Ceramics Information Center List of Technical Publications. Columbus, Ohio: Battelle.
Battelle Columbus Laboratories, Mechanical Properties Data Center
1980b Descriptive brochure. Battelle, Columbus, Ohio.
Battelle Columbus Laboratories, Metals and Ceramics Information Center
1980c Descriptive brochure. Battelle, Columbus, Ohio.
Bauman, R.A., David, M.H., and Miller, R.F.
1970 Working with complex data files: the Wisconsin assets and income studies archive. Pp. 112-136 in R.L. Biscoe, ed., Data Bases, Computers, and the Social Sciences. New York: Wiley-Interscience.
Bejar, I., and Rezmovic, V.
1981 Assessing educational and nutritional findings in the Cali experiment. In R.F. Boruch and D.S. Cordray, eds., Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Bennett, L.
1980 Editor's corner. Bulletin of Alloy Phase Diagrams 1(1):5.
Bernstein, J.
1978 Experiencing Science. New York: Basic Books.
Beveridge, A.A., and Dhrymes, P.J.
1981 Annual Housing Survey Project. Center for the Social Sciences, Columbia University.
Bielby, W.T., Hawley, C.B., and Bills, D.
1977 Research Uses of the National Longitudinal Surveys of Labor Market Experience. Madison, Wisc.: Institute for Research on Poverty.
Bishop, L.
1980 Consideration in Analyzing and Generalizing from Time of Use Electricity Pricing Studies. Paper presented at the Electric Rate Demonstration Conference, Denver.
Boruch, R.F., and Cecil, J.S.
1979 Assuring the Confidentiality of Data in Social Research. Philadelphia: University of Pennsylvania Press.
Boruch, R.F., and Cordray, D.S., eds.
1980 An Appraisal of Educational Program Evaluations: Federal, State, and Local Agencies. Report to the Congress. Office of the Assistant Secretary for Management, U.S. Department of Education, Washington, D.C.
Boruch, R.F., Cordray, D.S., Pion, G., and Leviton, L.
1981a A mandated appraisal of evaluation practices: digest of recommendations to the Congress and to the Department of Education. Educational Researcher 10(April):1~13,31.
Boruch, R.F., Wortman, P.M., and Cordray, D.S., eds.
1981b Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Boruch, R.F., and Wortman, P.M.
1978 An illustrative project on secondary analysis. New Directions for Program Evaluation 4:8~110.
1979 Implications of educational evaluation for evaluation policy. In D. Berliner, ed., Review of Research in Education 7:309-361.

Bowers, W.J., and Pierce, G.L.
1975 The illusion of deterrence in Isaac Ehrlich's research on capital punishment. Yale Law Journal 85:187-208.
1981 Capital punishment as deterrent: challenging Isaac Ehrlich's research. Pp. 237-261 in R.F. Boruch, P.M. Wortman, and D.S. Cordray, eds., Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Braham, R.R.
1979 Field experimentation in weather modification. Journal of the American Statistical Association 74:57-104.
Cain, G.G., and Watts, H.W.
1970 Problems in making policy inferences from the Coleman report. American Sociological Review 35:228-242.
Chalmers, T.C., Smith, H., Blackburn, B., Silverman, B., Schroeder, B., Reitman, A., and Ambroz, A.
1981 A method for assessing the quality of a randomized controlled trial. Controlled Clinical Trials 2(1):31-50.
Chelimsky, E.
1977 The need for better data to support crime control policy. Evaluation Quarterly 1(3):439-474.
Coleman, J.S., Campbell, E.Q., Hobson, C.J., McPartland, J., Mood, A., Weinfeld, F.D., and York, R.L.
1966 Equality of Educational Opportunity. Washington, D.C.: U.S. Government Printing Office.
Coleman, J., Hoffer, T., and Kilgore, S.
1981 Private and Public Schools: Report to the National Center for Education Statistics. National Opinion Research Center, Chicago.
1976 Reliability and Validity of National Longitudinal Study Measures. Research Triangle Park, N.C.: Research Triangle Institute.
Cook, T.D., Appleton, H., Conner, R., Schaffer, A., Tomkin, G., and Weber, S.J.
1975 Sesame Street Revisited. New York: Russell Sage Foundation.
Darby, M.R.
1979 The Effects of Social Security on Income and the Capital Stock. Washington, D.C.: American Enterprise Institute.
Datta, L.E.
1977 The impact of the Westinghouse/Ohio evaluation on the development of Project Head Start: an examination of the immediate and long-term effects and how they came about. In C.C. Abt, ed., The Evaluation of Social Programs. Beverly Hills, Calif.: Sage.
David, M., Robbin, A., et al.
1978 Instructional Materials for Microdata Collection Methods in Economics. Economics and Library Science Data and Computation Center, University of Wisconsin, Madison.
Dawkins, S.M., and Scott, E.L.
1979 Comment. Journal of the American Statistical Association 74:7~77.
Del Bene, L., and Scheuren, F., eds.
1979 Statistical Uses of Administrative Records with Emphasis on Mortality and Disability Research. Social Security Administration, Office of Research and Statistics. Washington, D.C.: U.S. Department of Health and Human Services.
DeSantillana, G.
1955 The Crime of Galileo. Chicago: University of Chicago Press.
Dickson, D.
1980 Research data: private property or public good. Nature 284:292.
1981 Contaminated cell lines. Nature 289:227-228.

Director, S.
1981 Examining potential bias in manpower training evaluations. Pp. 35~361 in R.F. Boruch, P.M. Wortman, and D.S. Cordray, eds., Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Dollar, C.M., and Ambacher, B.I.
1978 The national archives and secondary analysis. New Directions for Program Evaluation: Secondary Analysis 4:1~.
Duncan, J.W., and Shelton, W.C.
1978 Revolution in United States Government Statistics. Washington, D.C.: Office of Statistical Policy and Standards, U.S. Department of Commerce.
Ehrlich, I.
1975 The deterrent effect of capital punishment: a question of life and death. American Economic Review 65:397-417.
1981 Capital punishment as deterrent: challenging reanalysis. Pp. 262-282 in R.F. Boruch, P.M. Wortman, and D.S. Cordray, eds., Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Federal Energy Administration, Regulatory Institutions Office
1976 Experiment Guidelines for Electric Utility Demonstration Projects. Unpublished memorandum, November 8. U.S. Department of Energy, Washington, D.C.
Feldstein, M.
1974 Social security, induced retirement, and aggregate capital accumulation. Journal of Political Economy 82(5):905-926.
Flaherty, D.H.
1980 Privacy and Government Data Banks: An International Perspective. London: Mansell.
Garner, J.
1981 National Institute of Justice access and secondary analysis. Pp. 43~9 in R.F. Boruch, P.M. Wortman, and D.S. Cordray, eds., Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Gilbert, J.P., McPeek, B., and Mosteller, F.
1977 Progress in surgery and anesthesia: benefits and risks of innovative therapy. Pp. 124-169 in J.P. Bunker, B.A. Barnes, and F. Mosteller, eds., Costs, Risks, and Benefits of Surgery. New York: Oxford University Press.
Glass, G.V.
1976 Primary, secondary, and meta-analysis of research. Educational Researcher 5(10):3-8.
Gomez, H.
1977 Evaluating Longitudinal Data with the Use of the Rasch Model. Paper presented at the 41st Session of the International Statistical Institute, December, New Delhi, India.
1978 The Analysis of Growth. Ph.D. dissertation, Psychology Department, Northwestern University. (Available from University Microfilms, Ann Arbor, Mich.)
1981 Reevaluating educational effects in the Cali experiment. Pp. 28~295 in R.F. Boruch, P.M. Wortman, and D.S. Cordray, eds., Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Good, I.J.
1978 Statistical fallacies. Pp. 337-349 in W.H. Kruskal and J.M. Tanur, eds., International Encyclopedia of Statistics (Vol. 1). New York: Free Press.
Gordon, A.C., and Heinz, J.P., eds.
1979 Public Access to Information. New Brunswick, N.J.: Transaction.

Gordon, G., and Morse, E.V.
1975 Evaluation research. Annual Review of Sociology 1:339-362.
Harris, N.L., Gang, D.L., Quay, S.C., Poppema, S., Zamecnik, P.C., Nelson-Rees, W.A., and O'Brien, S.J.
1981 Contamination of Hodgkin's disease cell cultures. Nature 289:228-230.
Jaeger, R.M.
1978 About educational indicators: statistics on the conditions and trends in education. Review of Research in Education 6:27~315.
Kaase, M., Krupp, H., Pflanz, M., Scheuch, E.K., and Simitis, S., eds.
1980 Datenzugang und Datenschutz. Mannheim, Germany: Athenäum. (In German.)
Katz, S., Hedrick, S.C., and Henderson, N.
1979 The measurement of long-term care needs and impact. Health and Medical Care Services Review 2(1):1-21.
Kruskal, W.H.
1978 Taking data seriously. In Y. Elkana et al., eds., Toward a Metric of Science: The Advent of Science Indicators. New York: John Wiley & Sons.
Kruzas, A.T., and Sullivan, L.V., eds.
1978 Encyclopedia of Information Systems and Services, 3rd ed. Detroit: Gale Research Co.
Leimer, D.R., and Lesnoy, S.D.
1980 Social Security and Private Savings: A Reexamination of the Time Series Evidence Using Alternative Social Security Wealth Variables. Paper presented at the 93d Meeting of the American Economic Association, September 6, Denver, Colo.
Light, R.J., and Smith, P.V.
1971 Accumulating evidence: procedures for resolving contradictions among different research studies. Harvard Educational Review 41:429-471.
Long, P., and Long, S.
1974a Statement before the Senate subcommittee on administrative practice and procedure, February 28. Washington, D.C.
1974b Statement at hearings before the Senate subcommittee of the Committee on Appropriations for the Treasury, April 10. Washington, D.C.
1976 Statement at hearings before the Senate subcommittee of the Committee on Appropriations for the Treasury, April 22. Washington, D.C.
Long, S.B.
1979 The Internal Revenue Service: Examining the Exercise of Discretion in Tax Enforcement. Paper presented at the Annual Meeting of the Law and Society Association, May 11, San Francisco.
1980a The Internal Revenue Service: Measuring Tax Offenses and Enforcement Response. Washington, D.C.: U.S. Department of Justice.
1980b Measuring White Collar Crime: The Use of the "Random Investigation Method" for Estimating Tax Offenses. Paper presented before the Annual Meeting of the American Society of Criminology, November 5, San Francisco.
Long, S., and Long, P.
1973 Statement before the Senate subcommittee of the Committee on Appropriations for the Treasury, February 28. Washington, D.C.
Mangum, G.L.
1973 A Decade of Manpower Development and Training. Salt Lake City: Olympus.
McKay, H., Sinisterra, L., McKay, A., Gomez, H., and Lloreda, P.
1978 Improving cognitive ability in chronically deprived children. Science 200:270-278.

Mindlin, H., and Kovacs, G.J.
1979 The Mechanical Properties Data Center and numeric on-line information systems. Pp. 1~22 in J.A. Graham, ed., Use of Computers in Managing Material Property Data. MPG-14. New York: American Society of Mechanical Engineers.
Mochmann, E., and Muller, P.J., eds.
1979 Data Protection and Social Science Research. Frankfurt, Germany: Campus Verlag.
Mooney, E.D.
1979 Directory of Federal Agency Education Data Tapes. Center for Education Statistics.
Moskowitz, J., and Wortman, P.M.
1981 Reassessing the impact of school desegregation. Pp. 322-340 in R.F. Boruch, P.M. Wortman, and D.S. Cordray, eds., Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Mosteller, F., Gilbert, J.P., and McPeek, B.
1980 Reporting standards and research strategies for controlled trials: agenda for the editor. Controlled Clinical Trials 1:37-58.
Moynihan, D.P., and Mosteller, F., eds.
1972 On Equality of Educational Opportunity. New York: Vintage Books.
Munnell, A.
1974 The Effect of Social Security on Personal Savings. Cambridge, Mass.: Ballinger.
National Bureau of Standards, Alloy Data Center
1980a ASM/NBS Alloy Phase Diagram Program. Unpublished memo. Washington, D.C.
1980b Alloy Data Center Publications. National Bureau of Standards, unpublished bibliography, September. Washington, D.C.
National Center for Education Statistics
1981a Policy seminar sponsored by NCES in conjunction with the Horace Mann Learning Center. NCES Announcement, April.
1981b NCES data tapes now available for High School and Beyond. NCES 81-226a. NCES Announcement, February.
National Research Council
1977 Perspectives on Technical Information for Environmental Protection. Steering Committee for Analytic Studies for the U.S. Environmental Protection Agency. Washington, D.C.: National Academy of Sciences.
1980 Mechanical Properties Data for Metals and Alloys: Status of Data Reporting, Collecting, Appraising, and Disseminating. Panel on Mechanical Properties Data for Metals and Alloys, Numerical Data Advisory Board. Washington, D.C.: National Academy Press.
1981 Synopsis of the Ad Hoc Meeting on Private and Public Schools. Unpublished report. Committee on National Statistics, National Academy of Sciences, Washington, D.C.
Page, E.B., and Keith, T.Z.
1981 Effects of U.S. private schools: a technical analysis of two recent claims. Educational Researcher 10:1-7.
Peng, S.S., Stafford, C., and Talbert, R.J.
1977 Review and Annotation of Study Reports: National Longitudinal Study. Washington, D.C.: National Center for Education Statistics.
Pigman, W., and Carmichael, E.B.
1950 An ethical code for scientists. Science 111:643-645.
Postlewaite, T.N., and Lewy, A.
1979 Annotated Bibliography of IEA Publications (1962-1978). Stockholm: International Educational Assessment, University of Stockholm.
Privacy Protection Study Commission
1977 Personal Privacy in an Information Society. Supt. Doc. No. 052 003 00395-3. Washington, D.C.: U.S. Government Printing Office.

Research Triangle Institute
1978 Project Pooled Analyses: Feasibility of Combining Data from Several Electric Utility Rate Demonstration Projects. Report prepared for the U.S. Department of Energy, Office of Utility Systems. Research Triangle Institute, Research Triangle Park, N.C.
Robbin, A.
1981a Strategies for improving utilization of computerized statistical data by the social scientific community. Social Science Information Studies 1:~109.
1981b Technical guidelines for preparing and documenting statistical data for secondary analyses. In R.F. Boruch, P.M. Wortman, and D.S. Cordray, eds., Reanalyzing Program Evaluations. San Francisco: Jossey-Bass.
Rossi, P.H., and Lyall, K.C.
1976 Reforming Public Welfare: An Evaluation of the New Jersey Income Maintenance Experiment. New York: Russell Sage.
Sasfy, J., and Siegel, L.
1982 A Study of Research Access to Confidential Criminal Justice Agency Data. McLean, Va.: Mitre Corp.
Schreibman, F.C.
1978 Television news archives: a guide to major collections. Pp. 89-110 in W. Adams and F. Schreibman, eds., Television Network News: Issues in Current Research. Washington, D.C.: School of Public and International Affairs.
Smith, M.L., and Glass, G.V.
1977 Meta-analysis of psychotherapy outcome studies. American Psychologist 32:752-760.
Spencer, B.D.
1980 Benefit-Cost Analysis of Data Used to Allocate Funds. New York: Springer-Verlag.
Taylor, M.E., Stafford, C.E., and Place, C.
1981 National Longitudinal Study of the High School Class of 1972 Study Reports Update: Review and Annotation. Washington, D.C.: National Center for Education Statistics.
U.S. Department of Health and Human Services
1983 Compendium of HHS Evaluation Studies. Washington, D.C.: U.S. Department of Health and Human Services.
U.S. General Accounting Office
1976 Federal Information Sources and Systems: A Directory for the Congress. Washington, D.C.: U.S. General Accounting Office.
1978 Assessing Social Program Impact Evaluations: A Checklist Approach. Washington, D.C.: U.S. General Accounting Office.
1980a Federal Evaluations: A Directory Issued by the Comptroller General. Washington, D.C.: U.S. General Accounting Office.
1980b Federal Information Sources and Systems: A Directory for the Congress. Washington, D.C.: U.S. General Accounting Office.
von Hippel, F., and Primack, J.
1972 Public interest science. Science 177:1166-1171.
Walberg, H., Anderson, R.E., Miller, J.D., and Wright, D.J.
1981a Policy Analysis of National Assessment Data. University of Illinois, Chicago Circle.
1981b Probing a model of educational productivity in science with national assessment samples of early adolescence. American Educational Research Journal 18(2):23~249.
Wolins, L.
1982 A Critical Commentary on Research in the Social and Behavioral Sciences. Ames, Iowa: Iowa State University Press.

Wortman, P.M., Reichardt, C.S., and St. Pierre, R.G.
1978 The first year of the education voucher demonstration: a secondary analysis of student achievement scores. Evaluation Quarterly 2:193-214.
Zirkle, C.
1954 Citation of fraudulent data. Science 120:189-190.
