MAKING IT BETTER: EXPANDING INFORMATION TECHNOLOGY RESEARCH TO MEET SOCIETY'S NEEDS
2. The recommendations of PITAC regarding federal funding for IT research can be found in PITAC (1999). Additional information on the Clinton Administration's IT2 initiative is available online at <http://www.ccic.gov/it2/>.
3. Numerous press accounts from the mid- and late 1990s reported on the reorientation of research labs in the IT industry and resulting concerns about long-term, fundamental research. See, for example, Uchitelle (1996), Carey (1997), Ziegler (1997), and Buderi (1998).
4. A new CSTB study on the fundamentals of computer science will emphasize the science aspects, including the interaction of computer science with other sciences. Additional information on this project is available on the CSTB Web site at <www.cstb.org>.
5. Microsoft, in particular, has been noted for tapping top academic talent for its research lab. See Leibovich (1999).
6. The Government Performance and Results Act of 1993, for example, called for explicit attempts to measure the results of government programs, including research programs. The act therefore added to the pressure for near-term and mission-oriented work.
7. In scientific computing, IT is used to model, emulate, and simulate various scientific and technical processes. Scientific computing has received considerable attention from the IT research community since its earliest days, for several reasons. First, IT researchers are technically oriented and can appreciate and understand the issues that scientific computing raises. Second, scientific computing has been critical to the scientific and engineering enterprise and to major government programs in areas such as nuclear energy, nuclear weapons, and defense, and so has been amply supported by the government. Scientific computing continues to be important to the future of computing, science, many fields of engineering, and the military enterprise and should not be deemphasized.
Because it has been strongly supported by funding agencies and the research community, scientific computing is an inspirational example of the interrelationship and synergy between application and technology. Scientific computing applications have been a major driver of high-performance computing technologies and parallel programming techniques. In turn, scientific computing has influenced engineering and science, providing not only substantial benefits but also new approaches to problem solving, such as the design of numerical algorithms that lend themselves to parallel execution. The very methodologies of science have been substantially affected in ways that make scientific progress more rapid as well as more cost-effective. Scientific computing has benefited greatly from its long-term association with the research environment.
Scientific computing also demonstrates the importance of long-term, fundamental research on technology in application contexts. There are fundamental gaps in understanding, and unexploited opportunities, that can be addressed only by taking a long-term perspective, one not generally pursued by industrial organizations with their more short-term profit motivation. Much of the benefit of such research will accrue not to individual private firms but to industry, government, and society in general. The government is a major user and developer of scientific computing applications, and in many ways its challenges are much greater than those of the private sector. If government-funded research in any facet of technology is justified, then surely research in the more effective application of technology to the needs of government and its citizens is even more strongly justified.
8. As recalled recently by Leonard Kleinrock: “In the early days of ARPA [the Advanced Research Projects Agency], even when we had only three machines, we were able to uncover logical errors in the routing protocol, or the flow-control protocol, that would cause network failures. Those errors are hard to detect ahead of time, especially as the systems get more complex. It's easy to detect the cause of a massive crash or degradation. But complex systems have latent failure modes that we have not yet excited. There is always potential for deadlocks, crashes, degradations, errant behavior. . . . As systems get more complex, they crash in incredible and glorious ways. What you want to have is a self-