6.  

The calling sequence of a program is essentially the information that must be passed to the program for it to be run. For example, the calling sequence of a subroutine must specify its arguments and parameters; the calling sequence for a program must specify its location (e.g., its home directory). The calling sequence can be specified in many ways and is a matter of convention dictated by the structure of the computing environment in which the program will run.
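To make the distinction concrete, the short Python sketch below (illustrative only, not part of the report; all names and paths are hypothetical) contrasts a subroutine's calling sequence, which is defined by its parameters, with a program's calling sequence, which also includes where the program resides and how it is invoked.

```python
# Illustrative sketch only; function names, arguments, and paths are hypothetical.
import subprocess

# A subroutine's calling sequence: the arguments and parameters it expects.
def align_sequences(query, target, gap_penalty=-2):
    """The caller must supply two sequences; gap_penalty is an optional parameter."""
    return f"aligning {query} against {target} (gap penalty {gap_penalty})"

# A program's calling sequence: its location plus the arguments passed at invocation.
def run_external_program(program_path, *args):
    """The caller must know where the program resides and what arguments it takes."""
    return subprocess.run([program_path, *args], capture_output=True, text=True)

if __name__ == "__main__":
    print(align_sequences("ACGT", "ACGA"))
    # e.g., run_external_program("/usr/local/bin/some_tool", "-query", "seqs.fasta")
```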

7.  

One such standard, ASN.1 (Abstract Syntax Notation One), was developed by the International Organization for Standardization (ISO) for the syntactic exchange of data. At the National Library of Medicine, ASN.1 is used in the National Center for Biotechnology Information toolkit to support the semantics of several common biology data types, and it is being adopted in a number of molecular biology analysis software packages as a common data interchange format (Ostell, 1992). In other contexts, notably those involving high-performance computing, ASN.1 does not offer satisfactory performance.
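As a rough illustration of the idea (not NCBI's actual ASN.1 specification, and not taken from the report), the sketch below assumes the third-party pyasn1 library as one way to define a small ASN.1 record type in Python and exchange it in BER-encoded form.

```python
# Hypothetical record type for illustration; the field names are invented.
from pyasn1.type import univ, char, namedtype
from pyasn1.codec.ber import encoder, decoder

class SequenceRecord(univ.Sequence):
    """A toy ASN.1 SEQUENCE with an integer id, a title, and raw residue data."""
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('id', univ.Integer()),
        namedtype.NamedType('title', char.VisibleString()),
        namedtype.NamedType('residues', univ.OctetString()),
    )

record = SequenceRecord()
record.setComponentByName('id', 42)
record.setComponentByName('title', 'example nucleotide sequence')
record.setComponentByName('residues', b'ACGTACGT')

encoded = encoder.encode(record)                       # BER-encoded bytes for exchange
decoded, _ = decoder.decode(encoded, asn1Spec=SequenceRecord())
print(len(encoded), decoded['id'])
```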

8.  

A range of quality control processes exists for different purposes, from moderating (checking for topical relevance) conference proceedings or newsletters to editing (checking for accuracy) journals or books. For the journal literature, the peer review process ensures quality. For data generated by instruments operated by a single investigator or team, as is commonly the case in space physics and oceanography, the investigator is typically responsible for quality control and thus performs both data contribution and checking. In genome projects, where data collection is more distributed than in oceanography and space physics projects, contributions to the archives have traditionally been moderated but not edited, although large databases now have a curator responsible for reviewing submissions and maintaining quality control over the information that will ultimately reside in the database.

9.  

A forthcoming Computer Science and Telecommunications Board report will examine this issue in the context of academic careers for experimental computer scientists.

10.  

See Appendix D, a reprint of Appendix 3 of the final report of an NSF-sponsored workshop, Training Computational and Mathematical Biologists, held at the Banbury Center of the Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, December 9-11, 1990.

11.  

The SUMEX-AIM project, begun in 1974, brought together scientists and technologists to facilitate research on applications of artificial intelligence in medicine (AIM). The project used the ARPANET and Tymnet to link the Stanford University Medical Experimental computer (SUMEX) with a group of researchers from around the country. Although the computing resources available were primitive by today's standards, the project used electronic mail and an electronic bulletin board. The result of this collaboration was an encyclopedia of artificial intelligence tools, the AI Handbook. According to one description of the project,

[S]uch a resource offers scientists both a significant economic advantage in sharing expensive instrumentation and a greater opportunity to share ideas about their research. This is especially timely in computer science, a field whose intellectual and technological complexity tends to nurture relatively isolated groups, each pursuing its own line of investigation with limited convergence on working programs available from others. The complexity of these programs makes it difficult for one worker to understand and criticize the constructions of others, unless he has direct access to the running programs. In practice, substantial effort is needed to make programs written on one machine available on others, even if they are, in principle, written in compatible languages. In this respect, computer applications have demonstrated less mutual incremental progress from diverse sources than is typical of other sciences. The SUMEX-AIM project seeks to reduce these barriers to scientific cooperation in the field of artificial intelligence applied to health research. (Lederberg, 1978)

12.  

Some support services are as informal as the graduate student who figures out and shares a trick for working with a particular program, system, or database.

13.  

Although information infrastructure is now receiving increased attention as a matter of public policy and private enterprise, current efforts provide scientists with limited access to specialized network-based computing tools. The NSF supports the supercomputing centers, and to a lesser extent the science and technology centers, to assist scientists in using specialized, state-of-the-art hardware and software. NSF has also sponsored the development of the NSFNET backbone network and tributary, intermediate-level networks. It has maintained a modest program to assist institutions in acquiring the capital equipment needed to link to the Internet. Other government agencies such as the DOE, NASA, (D)ARPA, and NOAA provide assistance to their scientific communities, but overall access to network-based computing tools is still limited.

14.  

The anticipated interagency task force on information infrastructure might address this issue, although its focus is expected to be on meeting the needs of the general public (industry, nonprofit organizations, and individuals).


