
Evolving the High Performance Computing and Communications Initiative to Support the Nation's Information Infrastructure (1995)

Chapter: E Accomplishments of National Science Foundation Supercomputer Centers


INTRODUCTION

The National Science Foundation (NSF) Supercomputer Centers Program preceded the High-Performance Computing and Communications Initiative but has become an integral and important part of it. The centers were established to provide access to high-performance computing—supercomputers and related resources—for the broad science and engineering research community. The program has evolved from one comprising independent, competitive, and duplicative computer centers to a cooperative activity, one that has been characterized as a MetaCenter.

In 1992 the four NSF supercomputer centers (Cornell Theory Center, National Center for Supercomputing Applications, Pittsburgh Supercomputer Center, and San Diego Supercomputer Center) formed a collaboration based on the concept of a national MetaCenter for computational science and engineering: a collection of intellectual and physical resources unlimited by geographical or institutional constraints.

The centers' first mission was to provide a stable source of computer cycles for a large community of scientists and engineers. The primary objective was to help researchers throughout the country make effective use of the architecture or combination of architectures best suited to their work. Another objective was to educate and train students and researchers from academia and industry to use and test the limits of supercomputing in solving complex research problems. The best and most adventurous proposals for using an expensive and limited resource were sought.

In 1994, the scientific computing division of the National Center for Atmospheric Research joined the MetaCenter. In addition, the NSF established the MetaCenter Regional Affiliates program, under which other institutions could pursue projects of interest in collaboration with MetaCenter institutions. The MetaCenter thus became a unique resource and a laboratory for computer scientists and computational scientists working together on shared tasks.

IMPORTANT TECHNOLOGY ACCOMPLISHMENTS

Originally set up in 1985 to provide national access to traditional supercomputers, the NSF centers have evolved to a much larger mission. The centers now offer a wide variety of high-performance architectures from a large array of U.S. vendors. Today work at the centers is dominated by research efforts in software, in collaboration with computer scientists, focusing on operating systems, compilers, network control, mathematical libraries, and programming languages and environments.

Supercomputer Usage at NSF Centers

FIGURE E.1 Total historical usage of all high-performance computers in the NSF MetaCenter. This graph shows the total annual usage of all high-performance computers in MetaCenter facilities. Particularly striking is the growth since 1992, when microprocessors in various parallel configurations began to be employed. All usage has been converted to equivalent processor-years for a Cray Research Y-MP, the type of supercomputer that the NSF centers first installed in 1985-1986.

Table E.1 shows the growth in the number of users and in the availability of cycles at the NSF supercomputer centers from 1986 to 1994. See also Figure E.1. The increase in capacity in 1993 was owing mainly to the introduction of new computing architectures. The slight decrease in the number of users reflects the centers' effort to encourage users able to meet their computational needs with the increasingly powerful workstations of the mid-1990s to use their own institutional resources.

TABLE E.1 Supercomputer Usage at National Science Foundation Supercomputer Centers, 1986 to 1994

Fiscal Year    Active Users    Usage in Normalized CPU Hours (a)
1986              1,358             29,485
1987              3,326             95,752
1988              5,069            121,615
1989              5,975            165,950
1990              7,364            250,628
1991              7,887            361,037
1992              8,578            398,932
1993              7,730            910,088
1994              7,431          2,249,562

(a) Data prior to May 1990 include the John von Neumann Center.
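Table E.1 and Figure E.1 both reduce usage on very different machines to a single unit: CPU hours (or processor-years) on a Cray Research Y-MP. The short Python sketch below illustrates that bookkeeping in outline only; the machine names and conversion factors are hypothetical, since the appendix does not state the factors the centers actually used.

```python
# Illustrative sketch of the bookkeeping behind "normalized CPU hours":
# usage on each architecture is weighted by a conversion factor expressed in
# Cray Y-MP processor-equivalents. Machine names and factors are hypothetical.

# Hypothetical conversion factors: Y-MP-equivalent power per processor.
YMP_EQUIVALENT = {
    "Cray Y-MP": 1.00,
    "vector machine X": 2.50,
    "MPP node Y": 0.05,
}

# Raw usage records: (machine, processors used, hours of use).
usage_records = [
    ("Cray Y-MP", 8, 1_000),
    ("vector machine X", 4, 500),
    ("MPP node Y", 512, 2_000),
]

def normalized_cpu_hours(records, factors):
    """Total usage in Cray Y-MP-equivalent CPU hours."""
    return sum(procs * hours * factors[machine]
               for machine, procs, hours in records)

total = normalized_cpu_hours(usage_records, YMP_EQUIVALENT)
print(f"{total:,.0f} normalized CPU hours")
# Figure E.1 reports the same quantity as Y-MP processor-years (8,760 hours/year).
print(f"{total / 8760:,.2f} Y-MP processor-years")
```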

Architectures and Vendors

The national research community has been offered access to a wide and continually updated set of high-performance architectures since the beginning of the NSF Supercomputer Centers Program in 1985. The types of architectures and number of vendors are now probably near an all-time high (Table E.2), allowing the science and engineering communities maximum choice in selecting a machine that matches their computational needs. A short list of architectures offered through the NSF centers program includes single and clustered high-performance workstations or workstation multiprocessors, minicomputers, graphics supercomputers, mainframes with or without attached processors or vector units, vector supercomputers, and single instruction multiple data (SIMD) and multiple instruction multiple data (MIMD) massively parallel processors.


Current vendors whose top machines have been made available include IBM, DEC, Hewlett-Packard, Silicon Graphics Inc., Sun Microsystems, Cray Research, Convex Computer, Intel Supercomputer, Thinking Machines Corporation, and nCUBE, plus a number of companies no longer in existence, such as Alliant, Floating Point Systems, ETA, Kendall Square Research, Stellar, Ardent, and Stardent.

Access and New Architectures

In the 1960s, only a few universities had access to state-of-the-art supercomputers. By the early 1990s, some 15,000 researchers in over 200 universities had used one or more of the supercomputers in the NSF MetaCenter. This increased use led to new concepts and innovation:

• Achieving production parallelism. Cornell Theory Center became the first member center to achieve production parallelism on a vector supercomputer.

• Migration to the UNIX operating system. In 1987, the National Center for Supercomputing Applications (NCSA) became the first major supercomputer center to migrate its Cray supercomputer from CTSS to UNICOS, a UNIX-based operating system developed at Cray Research for its supercomputers.

• Access to massively parallel computers. NCSA introduced massively parallel processing (MPP) to the research community with the CM-2 in 1989, followed by the CM-5 in 1991. NCSA has worked closely with national users and the computer science community to create a wide range of 512-way parallel application codes that in 1995 can be moved to other large MPP architectures such as the T3D at PSC, the Intel Paragon at SDSC, or the IBM SP-2 at CTC.

• Heterogeneous processors. In 1991, the Pittsburgh Supercomputer Center was the first site to distribute code between a massively parallel machine (the TMC CM-2) and a vector supercomputer (Cray Y-MP), linked by a high-speed channel (HIPPI).

• Workstation clusters. The NSF supercomputer centers were among the first to experiment with clusters of workstations as an alternative for scalable computing. The first work was done in the 1980s with loosely coupled clusters of Silicon Graphics Inc. workstations to create frames for scientific visualizations. With the introduction of the IBM RS/6000, several centers moved to study tightly coupled networks and developed job control software. Clusters from DEC, Hewlett-Packard, and Sun Microsystems are now available as well.

Storage Technologies, File Format, and File Systems

With the vast increase in both simulation and observational data, the MetaCenter has worked a great deal on problems of storage technologies, with the greatest progress in software. The creation of a universal file format standard, a national file system with a single name space, and multivendor archiving software are some of the results of MetaCenter innovation and collaboration.
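A universal file format of this kind is self-describing: the file carries its own structure, names, and metadata, so data written at one center can be read on any vendor's machine at another. The text does not name the standard, but the best-known example of this line of work is NCSA's Hierarchical Data Format (HDF). The sketch below uses the modern HDF5 format through the h5py library as a stand-in; the dataset names, attributes, and values are invented for illustration.

```python
# Minimal sketch of a self-describing, machine-portable scientific file,
# using HDF5 via h5py as a stand-in for the universal-format work described
# above. All dataset names, attributes, and values are invented examples.
import numpy as np
import h5py

# Write: store a small 3-D wind-velocity grid along with the metadata a
# reader needs (units, grid spacing), so the file explains itself.
wind = np.random.default_rng(0).normal(size=(16, 16, 8)).astype("float32")
with h5py.File("storm.h5", "w") as f:
    dset = f.create_dataset("simulation/wind_speed", data=wind, compression="gzip")
    dset.attrs["units"] = "m/s"
    dset.attrs["grid_spacing_km"] = 1.0
    f.attrs["description"] = "hypothetical thunderstorm simulation output"

# Read: any platform with an HDF5 reader can recover both the data and its
# metadata without out-of-band knowledge of how the file was written.
with h5py.File("storm.h5", "r") as f:
    dset = f["simulation/wind_speed"]
    print(dset.shape, dict(dset.attrs), f.attrs["description"])
```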

NSFNET and Networking

The 56-Kbps connection between the NSF supercomputer centers, established in 1986, was the beginning of the NSFNET. Based on the successful ARPANET and the TCP/IP protocol, the NSFNET rapidly grew to provide remote access to the NSF supercomputer centers by the creation of regional and campus connections to the backbone. Although started by the pull from the high end, the NSFNET soon began to provide ubiquitous connectivity to the academic research community for electronic mail, file transport, and remote log-in, as well as supercomputer connectivity. As a result, the NSFNET backbone of 1995 has 3,000 times the bandwidth of the backbone of 1986. The centers have also developed prototypes for the high-performance local area networks that are needed to feed into the national backbone as well as the next generation of gigabit backbones.

Visualization and Virtual Reality

The NSF centers were instrumental in bringing the concepts and tools of scientific visualization to the research community in the 1980s. Center members developed new approaches to understanding large datasets, such as a three-dimensional grid of wind velocities and direction in a thunderstorm, by "visualizing" or creating an image from the data. This led scientists to consider visualization as an integral part of their computational tool kit. In addition, the centers worked closely with the preexisting computer graphics community, encouraging them to create new tools for scientists as well as for entertainment.

Desktop Software, Connectivity, and Collaboration Tools

The history of the centers has overlapped greatly with the worldwide rise of the personal computer and workstation. It is, therefore, not surprising that software developers focused on creating easy-to-use software tools for desktop machines. These tools have had a major influence on the usefulness of supercomputer facilities to remotely located scientists and engineers, as have tools such as NCSA's telnet, which brought full TCP connectivity to researchers using IBM and Macintosh systems, significantly broadening the base of participation beyond UNIX users.

Collaboration tools have provided the capability to carry on remote digital conferencing sessions between researchers. Both synchronous and asynchronous approaches have been explored.

Development of the nation's information infrastructure requires many software, computing, and communications resources that were not traditionally thought to be part of high-performance computing. In particular, tools need to be developed for organizing, locating, and navigating through information, a task that the NSF center staffs and their associated universities continue to address. Perhaps the most spectacular success has been NCSA Mosaic, which in less than 18 months has become the Internet "browser of choice" for over a million users and has set off an exponential growth in the number of decentralized information providers. Monthly download rates from the NCSA site alone are consistently over 70,000.

ACCOMPLISHMENTS IN EDUCATION AND OUTREACH

Each of the supercomputer centers has developed educational and outreach programs targeted to a variety of constituencies: university researchers, graduate students, undergraduates, educators at all levels, and K-12 students and teachers. Another aspect of outreach is the effort to identify and serve local and regional needs of government, schools, and communities. Activities range from the tours given at all MetaCenter installations, through the hosting of visits by national, regional, and local officials and commissions, to full-scale partnerships. Table E.3 summarizes participation in these various activities.

TABLE E.3 Supercomputer Centers' Educational Activity Support Summary

Educational Activities                        FY 1991    FY 1992    FY 1993
High school/K-12—Attendees                        715      1,370      1,985
Research institutes—Attendees                     262        377        390
Training courses and workshops—Attendees        1,700      2,400      2,100
Monthly newsletter circulation                234,986    247,692    165,176
Visitors                                       13,506     16,380     16,392

Researchers and Students

One- or two-day workshops offered by MetaCenter staff to researchers on-site and at associated institutions cover introductions to the computational environments, scientific visualization, and the optimization and parallelization of scientific code. In addition, special workshops have been offered throughout the MetaCenter on the use and extension of computational and visualization techniques specific to various disciplines.

MetaCenter institutions have contributed to the research projects of hundreds of graduate students through the provision of fellowships or similar appointments, stipends, access to resources, and relationships with MetaCenter researchers. Programs providing research experiences for undergraduates bring in students to work for a summer or a school semester or quarter on specific projects devised by MetaCenter researchers and/or faculty advisors. In many instances such projects have resulted in presentations at meetings and publications.

K-12 Educators and Students

Training of high school teachers and curriculum development are among the many MetaCenter educational efforts. Several programs have been initiated, such as ChemViz, to help students understand abstract chemistry concepts; a visualization workshop at Supercomputing '93; and SuperQuest, a program involving MetaCenter sites that brings teams of teachers and students from selected high schools to summer institutes to develop computational and visualization projects that they then work on throughout the following year.

Broad Outreach

Outreach is also accomplished by the publications programs of the MetaCenter, the production of scientific videos and/or multimedia CD-ROMs, and a collaborative program for maintaining a lively and informative presence on World Wide Web servers, which make information on the MetaCenter's programs easily accessible over the nation's information infrastructure.

A number of interactive simulation programs are now being tested in classrooms across the country and around the world. Students can change initial conditions and watch a simulation evolve as the parameter space is explored. The educational programs of the MetaCenter made available to high schools around the country demonstrate the power of the nation's information infrastructure to provide new educational resources.

SCIENTIFIC COMPUTATION AND INDUSTRIAL DEVELOPMENT

Partnerships between the MetaCenter and industry are collaborations with major industrial firms as well as small companies and venture start-ups. Most of these partnerships exist because MetaCenter expertise has been essential to the introduction of new ways of using the resources of supercomputing: the algorithms, visualization routines, and engineering codes are being combined in ways that result in such advances as high-end rapid prototyping of new products.

Commercialization of the software developed at the MetaCenter is being undertaken by a number of companies. For example, NCSA telnet has been commercialized by Intercon, and Spyglass has commercialized NCSA desktop imaging tools, as well as its Mosaic program. CERFnet, a California wide area network for Internet access, has pioneered in supplying access to library holdings and other large databases, and DISCOS/UniTree, a mass storage system, is in use at more than 20 major computer sites. A new molecular modeling system, called Sculpt, developed at the San Diego Supercomputer Center, is being commercialized by a new company, Interactive Simulations. Sculpt enables "drag-and-drop" molecular modeling in real time while preserving minimum-energy constraints; its output was featured on a May 1994 cover of Science.

IMPORTANT SCIENCE AND ENGINEERING ACCOMPLISHMENTS

Selected areas and problems, summarized below, indicate the range of projects currently being undertaken by nearly 8,000 researchers at over 200 universities and dozens of corporations and the span of disciplines now using this new tool.

Quantum Physics and Materials Science

The great disparity between nuclear, atomic, or molecular scales and macroscopic material scales implies that vast computing resources are needed to attempt to predict the characteristics of bulk matter from fundamental laws of physics. Since the beginning of the NSF centers program, researchers in this area have been among the most frequent users of supercomputers. Materials scientists have often been among the first to try out new architectures that promise higher computational speeds. Listed below are some examples of research areas important to the study of properties of bulk matter in extreme conditions, such as occur in nuclear collisions, the early universe, or the core of Jupiter; new materials such as nanotubes and high-temperature superconductors; and more practical materials used today such as magnetic materials and glass.

• Phase transitions in quantum chromodynamics
• Phase transitions of solid hydrogen
• New nanomaterials predictions
• Theory of high-temperature superconductors
• Magnetic materials

Biology and Medicine

Living creatures exhibit some of the greatest complexity found in nature. Supercomputers have therefore made possible unprecedented opportunities to explore these complexities, building on the fundamental advances made in biological research over the last 50 years. These activities include using the data from x-ray crystallography to study the molecular structure of macromolecules; learning how to use artificial intelligence to fold polypeptide chains, determined from genetic sequencing, into three-dimensional proteins; and determining the function of proteins by studying their dynamic properties.

New fields of computational science, such as molecular neuroscience, are being enabled by academic access to MetaCenter computing and visualization resources and staff. Corporations are using supercomputers and advanced visualization techniques in collaboration with the NSF MetaCenter to create new drugs to fight human diseases such as asthma. New insights into economically valuable bioproducts are being gained, for instance, by combining molecular and medical imaging techniques to create "virtual spiders" that can be dissected digitally to understand the production of silk. Finally, high-performance computers are becoming powerful enough to enable researchers to program mathematical models of realistic organ dynamics, such as the human heart. Examples of projects include the following:

• Crystallography
• Artificial intelligence and protein folding
• Protein kinase solution
• Molecular neuroscience—serotonin
• Molecular neuroscience—acetylcholinesterase
• Kinking DNA
• Antibody-antigen docking
• Tuning biomolecules to fight asthma
• Virtual spiders and artificial silk

Engineering

Man-made devices have become so complex that researchers in both academia and industry have turned to supercomputers in order to be able to analyze and modify accurate models in ways that complement traditional experimental methods. High-performance computers enable academic engineers to study the brittleness of new types of steel, improve bone transplants, or reduce the drag of flows over surfaces using riblets. Industrial partners of the individual supercomputer centers within the MetaCenter are using advanced computational facilities to improve industrial processes such as metal forming. Better consumer products, such as leakproof diapers or more efficient airplanes, are being designed. Even state agencies are able to use the MetaCenter facilities to improve traffic safety or find better ways to use recycled materials. Some 70 corporations have taken advantage of the MetaCenter industrial programs to improve their competitiveness. Examples of engineering-related problems include the following:

• Heart modeling
• Ultrahigh-strength steels
• Continuous casting of steel
• Beverage-can design
• Designing a leakproof diaper
• Bone transplant bioengineering
• Improving performance with riblets
• Designing better aircraft
• Crash-testing street signs

Earth Sciences and the Environment

The resources of the NSF MetaCenter are being used to compute and visualize the complexity of the natural world around us, from the motions of Earth's convective mantle to air pollution levels in southern California. The U.S. Army is working with academics to determine how to practice tank maneuvers without endangering the breeding habits of the sage grouse. Pollution is a difficult coupling of chemical reactions and flow dynamics that must be understood in detail if corrective measures are to be efficacious. High-performance computers also act as time machines, allowing for faster-than-real-time computation of severe storms. Finally, to improve global weather or climate forecasts, supercomputers allow researchers to study the physics of such critical processes as mixing at the air-ocean interface. Among the related problems being addressed are the following:

• Detoxification of ground water
• Storm modeling and forecasting
• Los Angeles smog
• Upper-ocean mixing
• Simulating climate using distributed supercomputers

Planetary Sciences, Astronomy, and Cosmology

As was evident in the recent impact of Comet Shoemaker-Levy 9 with Jupiter, observatories on Earth and in space have become intimately linked. Supercomputers are being integrated into observational facilities, like the Grand Challenge Berkeley-Illinois-Maryland Association millimeter observatory, and into observational programs such as the ones that have led to the discovery of new millisecond pulsars or the first extra-solar-system planet. The ability of numerical methods to solve even the most complex of fundamental physical laws, such as Einstein's equations of general relativity, is increasing understanding of the dynamics of strong-field events, such as the collision of black holes. In perhaps the grandest-scale challenge possible, the universe itself is a subject of investigation by several Grand Challenge teams using resources of the MetaCenter to discover how the large-scale structures in the universe evolved from nearly perfect homogeneity at the time of the formation of the microwave background.

• Comet collision with Jupiter
• Discovery of the first extra-solar-system planet
• Pulsar searching and discovery
• Black hole collision dynamics
• Cosmological simulations
