
Appendix B: Recent Research Accomplishments and Related Opportunities

This appendix describes research achievements in the mathematical sciences and outlines prospects and opportunities that these achievements open up. The richness and diversity of these achievements, which span core mathematics and a wide range of applications, point to the vigor, creativity, and depth and breadth of current research in the mathematical sciences. The unification and cross-fertilization of areas within core mathematics, increased reaching out to applications (which often uncovers unusual and unexpected uses of mathematics), and the growing role of the computer are all themes that are illustrated in these descriptions.

It should be emphasized that this list is only a selection that is not intended to be complete or comprehensive, nor is it intended to be an agenda for the future. Many important achievements and opportunities are not discussed for lack of space. If past patterns continue, a number of new achievements that we cannot visualize now will open up yet newer opportunities. It is interesting and significant to note how many of the achievements described in this appendix were not even suggested in the appendix "Ordering the Universe: The Role of Mathematics" in the 1984 Report.

The committee would like to thank the following individuals for their assistance in preparing this appendix: W. Ballhaus, N. Breslow, R.L. Bryant, D.M. Burns, S. Childress, W. Cleveland, R.R. Coifman, G.B. Dantzig, H. Flaschka, J. Geanakoplos, J.G. Glimm, L. Gordon, L.F. Greengard, J. Harris, T. Holst, W-C. Hsiang, A.M. Jaffe, A. Jameson, N. Jewell, D.S. Johnson, R.M. Karp, H. Kocak, A. Kupiainen, H.B. Lawson, F.T. Leighton, C.E. Leith, G.L. Lieberman, A. Majda, A. Marden, B. Mazur, W. Murray, F.M. Odeh, C.S. Peskin, P. Seymour, L.A. Shepp, T.C. Spencer, P. Switzer, M.S. Waterman, S. Winograd, and J.A. Yorke.

The following topics are discussed:

1. Recent Advances in Partial Differential Equations
2. Vortices in Fluid Flow
3. Aircraft Design
4. Physiology
5. Medical Scanning Techniques
6. Global Change
7. Chaotic Dynamics
8. Wavelet Analysis
9. Number Theory
10. Topology
11. Symplectic Geometry
12. Noncommutative Geometry
13. Computer Visualization as a Mathematical Tool
14. Lie Algebras and Phase Transitions
15. String Theory
16. Interacting Particle Systems
17. Spatial Statistics
18. Statistical Methods for Quality and Productivity
19. Graph Minors
20. Mathematical Economics
21. Parallel Algorithms and Architectures
22. Randomized Algorithms
23. The Fast Multipole Algorithm
24. Interior Point Methods for Linear Programming
25. Stochastic Linear Programming
26. Applications of Statistics to DNA Structure
27. Biostatistics and Epidemiology

SYNOPSIS OF TOPICS

In the first section, "Recent Advances in Partial Differential Equations," the items discussed are formation of shocks in nonlinear waves, recent advances in elliptic equations, free boundary problems, and finally some remarkable advances in exactly solvable partial differential equations. "Vortices in Fluid Flow" (Section 2) continues some of these general themes to discuss vortex motion in fluid flow, a phenomenon of great importance in many applications, including the accurate tracing of hurricanes, the study of blood flow through the heart, the efficient mixing of fuel in internal combustion engines, aircraft flight, and the manner in which radiotelescopes sense distant galaxies through the motion of galactic jets.

"Aircraft Design" (Section 3) illustrates the use of computational fluid dynamics, a technique that has matured so that it is now seen as the primary aerodynamic design tool for any problem. Analogous computer models are described in Section 4, "Physiology," which discusses computational fluid models of the heart and other organs.

Modern medical scanning techniques using X-rays or nuclear magnetic resonance depend critically on algorithms deriving from the mathematics of the Radon transform. Recent progress in emission tomography is based on some newly developed algorithms of a very different sort that have probabilistic elements; these developments are described in Section 5, "Medical Scanning Techniques." Finally, "Global Change" (Section 6) discusses the key role played by computational fluid dynamics in global circulation models that are used in the analysis of climate change on a worldwide scale.

Section 7, "Chaotic Dynamics," shows how ideas of Poincaré on aperiodic orbits for ordinary differential equations, complemented by ideas from topology, differential geometry, number theory, measure theory, and ergodic theory, plus the ability of modern computing facilities to compute trajectories, have led to a body of core mathematics that has many interesting and important applications.

"Wavelet Analysis" (Section 8) outlines how classical ideas growing out of Littlewood-Paley and Calderón-Zygmund theory have been developed within core mathematics and then have led to new and very efficient numerical tools for analysis of a wide variety of problems. Algorithms based on wavelet analysis promise to significantly speed up communications and signal-processing calculations.

The discussion titled "Number Theory" (Section 9) centers on a classic area of core mathematics that is actively and vigorously moving forward, spurred on in part by the resolution of the Mordell conjecture in the early 1980s. Advances of great significance for the future include new results on the local-to-global problem in number theory and in arithmetic algebraic geometry, and significant progress on Fermat's last theorem. Section 10, "Topology," notes important advances in major problems in low-dimensional topology, including remarkable connections with Yang-Mills theory, and recent advances in knot theory that involve a striking and unexpected connection with von Neumann algebras and mathematical physics.

Section 11, "Symplectic Geometry," is devoted to important recent developments in that field, including the use of nonlinear elliptic equations to establish a form of the Heisenberg uncertainty principle, the discovery of new kinds of symplectic structures, and a basic advance in the understanding of regions of stability for area-preserving maps of the plane.

"Noncommutative Geometry" (Section 12) describes a broad spectrum of very interesting developments involving a link between analysis and geometry and how the ideas of differential geometry extend to a noncommutative setting. This is an excellent example of cross-fertilization between areas within core mathematics and the building of an internal unification.

The availability of powerful computers is stimulating research in core mathematics. Section 13, "Computer Visualization as a Mathematical Tool," indicates how computer graphics can be used as a tool in minimal surface theory and other areas of geometry to enhance understanding and provide experimental evidence stimulating conjectures.

The exposition in Section 14, "Lie Algebras and Phase Transitions," displays the rich and deep interaction between this topic in statistical mechanics and a number of areas of mathematics, including especially the Kac-Moody Lie algebras. "String Theory" (Section 15) discusses another topic in physics that relies heavily on developments from core mathematics. Section 16, "Interacting Particle Systems," indicates how systems similar to those discussed in the context of phase transitions can have applications in the study of biological systems, image processing, and for medical and defense purposes. "Spatial Statistics" (Section 17) describes an area that addresses some overlapping problems and uses new statistical tools for handling data in multidimensional arrays. Section 18, "Statistical Methods for Quality and Productivity," discusses problems, opportunities, and new methods for addressing important problems of national interest and significance.

"Graph Minors" (Section 19) surveys some recent results in graph theory, which open up new avenues for research especially important in the design of algorithms. Section 20, "Mathematical Economics," describes some important recent developments and discusses how several parts of core mathematics, especially differential topology, have played key roles in the analysis of general equilibrium theory for incomplete markets, a new departure that is a better model for real markets than the now classic model for complete markets.

The next group of topics have as a common general theme the search for new and efficient algorithms. "Parallel Algorithms and Architectures" (Section 21) concerns the design of algorithms to take advantage of parallel architectures, a problem not only for computer scientists but also for mathematicians working in large-scale computation. Here the idea is to see how problems that seem to be inherently sequential can be parallelized. Section 22, "Randomized Algorithms," describes recent progress in the development of these new kinds of algorithms. Such algorithms are useful in primality testing, with resulting consequences for cryptography, in sorting and searching algorithms, in the design of distributed computing systems, and in many other areas. The subject of Section 23, "The Fast Multipole Algorithm," is a new, very efficient algorithm for computing interactions in many-particle systems. This algorithm will have many applications in the modeling of high-powered electron beam devices and in molecular dynamics, which affects theoretical studies of chemical kinetics.

The next two sections discuss recent advances in algorithms for numerical optimization. Section 24 is devoted to the new and very important interior point methods for linear programming, which provide an alternative to the classic simplex methods and are beginning to have a significant practical impact in the design of telecommunications networks and the solution of large-scale logistics planning and scheduling problems. Section 25 discusses yet another approach, stochastic linear programming, a technique that allows one to include non-deterministic elements in the formulation and solution of a problem. Thereby real problems that involve uncertainties in future behavior or availability of resources can be better modeled.

Sections 26 and 27 discuss an array of applications of mathematics in various additional areas. "Applications of Statistics to DNA Structure" includes as an application the statistical analysis of options for cutting the DNA sequence (to aid in the mapping processes) and analyzing the evolutionary process at the genetic level. "Biostatistics and Epidemiology" is devoted to the use of statistics in epidemiology, including survival analysis, analysis of incidence rate and relative risk, and deconvolution techniques for estimating infection rates and incubation periods from observed data.

1. RECENT ADVANCES IN PARTIAL DIFFERENTIAL EQUATIONS

An important trend of the last 15 years has been the great progress made in understanding nonlinear partial differential equations (PDEs). Many physical phenomena are described by partial differential equations, e.g., fluid flow, electromagnetic fields, gravity, and heat. Roughly speaking, linear partial differential equations govern small vibrations or small disturbances from equilibrium, while nonlinear equations govern large disturbances. The real world is nonlinear. Since the mid-1970s, understanding of nonlinear equations has grown much deeper. Finally, in the last few years, some of the most important equations from geometry, physics, and engineering have been successfully studied. Many other equations are still too hard, and much more work is needed. Among the important problems solved recently are the following.

Formation of Shocks in Nonlinear Waves

In one space dimension, a small, smooth initial disturbance will be propagated by any truly nonlinear wave equation into a shock after a finite time. In more than four space dimensions, such shocks do not have to form. In three dimensions, "most" wave equations lead to shocks, but only after an exponentially long time. Moreover, an important class of equations (those satisfying a natural geometric property called the "null condition") do not build up shocks.

Very recently there have been significant advances for one of the most important nonlinear equations, Einstein's equations for gravitational waves. At large distances and after a long time, one has a detailed picture of how gravitational waves behave. Very difficult and interesting questions remain in the study of Einstein's equations. Of special interest is the formation of black holes.
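
A standard illustration of the shock-formation mechanism described above, added here for concreteness and not taken from the report, is the inviscid Burgers equation

    u_t + u u_x = 0,    u(x, 0) = u_0(x),

whose solution is constant along the straight characteristic lines x = x_0 + t u_0(x_0). Wherever the initial profile u_0 has negative slope, faster fluid overtakes slower fluid, the characteristics cross, and the gradient blows up at the finite time t* = -1 / min u_0'(x): a shock forms from smooth initial data.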

Elliptic Equations

Another important class of partial differential equations arises in geometry, when one tries to construct surfaces with prescribed curvature. These equations are called nonlinear elliptic differential equations. A general theorem on regularity of solutions of elliptic equations with boundary conditions was recently proved, making it possible to treat boundary conditions that arise inevitably in real problems. This result is a basic step forward in the analysis of partial differential equations.

Important progress has been made also on some singular elliptic equations, namely those having "critical nonlinearity." If the nonlinear term in such an equation were made slightly weaker, then the equation could be regarded as a small perturbation of a linear problem, but at the critical nonlinearity this becomes impossible. An outstanding equation of this kind occurs in the Yamabe problem, in which one is asked to deform a curved manifold until it has constant (scalar) curvature. A complete solution to this problem has recently been established.

Free Boundary Problems

An iceberg melting in the sea, the flow of oil and water through a reservoir, and crystal growth are examples of free boundary problems governed by partial differential equations. For the melting iceberg, the temperature flow in the iceberg is governed by one parabolic partial differential equation, the temperature flow in the water around the iceberg by another, and the boundary between ice and water is given by a third equation. The three equations are coupled. What makes the problem very hard is the fact that the domains where the differential equations are satisfied keep changing with time, and are not known ahead of time. Recently proposed techniques have already led to new regularity theorems for free boundary problems and promise further results.

Exactly Solvable Partial Differential Equations

Remarkably, a number of nonlinear PDEs can be solved exactly. These equations admit stable solutions (solitons) that persist even after interaction with other solitons. Recently, equations for solitons have been used to solve the Schottky problem, an outstanding open problem in the theory of Riemann surfaces.

The method used to solve soliton equations may be illustrated by the case of the Korteweg-deVries (KdV) equation, which describes the propagation of water waves in a long, narrow channel.
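
In standard notation (the report does not write the equation out), the KdV equation and its one-soliton solution are

    u_t + 6 u u_x + u_xxx = 0,
    u(x, t) = (c/2) sech^2( (sqrt(c)/2) (x - c t - x_0) ),

a single hump of amplitude c/2 that travels at speed c without changing shape; the nonlinear steepening term u u_x and the dispersive term u_xxx exactly balance.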

At a single instant of time, we imagine the shape of the water wave to be frozen and rigid. We then bombard the rigid shape with imaginary quantized test particles. By studying how the test particles are scattered, one can reconstruct the shape of the wave. Thus, the scattering data provide an alternative description of the wave at a fixed time. Instead of asking how the shape of the wave changes with time, we can therefore ask how the scattering data evolve with time. When rewritten in terms of scattering data, the KdV equation becomes amazingly simple, and the complete solution may be written down by inspection. In particular, the highly stable behavior of solitons is explained for the case of the KdV equation.

More recently, a number of physically interesting PDEs have been solved completely by analogous methods, including the Kadomtsev-Petviashvili (K-P) equation for weakly two-dimensional water waves, and the sine-Gordon and nonlinear Schrödinger equations. Explicit solutions of the K-P equation successfully predicted the results of experiments in water tanks, and a combination of theoretical and numerical analysis has been applied to model the behavior of a Josephson junction. Remarkable connections have been discovered between explicit solutions for nonlinear waves, exact solutions of statistical mechanics problems in two dimensions, and the Jones polynomials for knots, some of which are discussed below in sections on phase transitions and topology.

2. VORTICES IN FLUID FLOW

Intense swirling or vortex motion is a primary feature of many problems, including the accurate tracing of hurricanes and studies of blood flow through the heart, efficient fuel mixing in carburetors, aircraft flight, and the manner in which radiotelescopes sense distant galaxies through the motion of galactic jets.

Governing much of this flow is a complicated set of nonlinear partial differential equations called the Navier-Stokes equations; these equations are derived from Newton's laws of motion and include the frictional effects of viscosity. Intuition suggests that this frictional effect is extremely small in air or rapidly moving water, and this is confirmed by experiments. The simpler partial differential equations obtained when this coefficient vanishes are called Euler equations. These are accurate enough for studying the movement and accumulation of vortices.
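
For reference (the report does not display them), the incompressible Navier-Stokes equations for the velocity field u and pressure p, with the density normalized to one, are

    u_t + (u · ∇)u = −∇p + ν Δu,    ∇ · u = 0,

where ν is the viscosity coefficient; setting ν = 0 gives the Euler equations referred to above.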

Recent ingenious large-scale computer simulations using these equations reveal unexpectedly that the sheets where vorticity typically accumulates clump and concentrate in an amazing fashion. In response to these discoveries, a new mathematical theory of "oscillations and concentrations" has developed using potential theory and fractal (Hausdorff) measures. New kinds of "solutions" for the Euler equations are being introduced. One outgrowth of this theory is an explicit criterion to check whether numerical calculations for vortex sheets actually converge to solutions of the Euler equations. Convergence has been verified for many calculations of importance.

The vortex sheets in the applications just described involve fluid moving at rapid speed but still much less than the speed of sound. Completely different phenomena transpire when the vortex sheets are supersonic, as they are for the new space planes and for galactic jets in astrophysics. One recent success in the alliance between large-scale computing and modern mathematical theory is the discovery of a new mechanism of nonlinear instability for supersonic vortex sheets. Recent large-scale simulations have demonstrated that all supersonic vortex sheets exhibit nonlinear instability, belying the predictions of stability made in the 1950s and 1960s.

One of the most important problems in fluid dynamics, an extension of the study of vortices, is the understanding of turbulence, which occurs when the frictional effect is extremely small but not negligible. Understanding turbulence requires the mathematical analysis of solutions of the Euler and the Navier-Stokes equations in the limit of small viscosity. This analysis is ongoing.

3. AIRCRAFT DESIGN

Within the last five years, full simulations of a whole aircraft have appeared. Such a computation usually starts with steady Euler equations that accurately describe the flow outside the boundary layer. Such flows are smooth until the Mach number, M, comes close to 1. For Mach numbers in the transonic range (that is, less than but close to 1), small shocks are generated from a typical airfoil that dramatically increase the drag. It is a mathematical theorem that in almost all cases such shocks cannot be avoided. Since the cruising efficiency of a plane is roughly proportional to ML/D, where L is lift and D is drag, it is imperative for manufacturers to design aircraft that minimize shocks. Of course if M exceeds 1, there is no avoiding or even minimizing shocks, and we have the inefficiency of the Concorde.

In the past 15 years, great effort has been put into designing two-dimensional airfoil cross-sections that at some cruising speed or range of cruising speeds with M less than 1 have minimal shocks. When a wing cross-section is chosen, the flow at design conditions is computed and compared with wind tunnel results.

To extend the computation to the whole aircraft, new computational capabilities have been added. The complex geometrical configurations demand new methods not only for discretizing the equations but also for handling the enormous volume of data. Currently the challenge is to resolve higher-dimensional shocks and vortex sheets to predict viscous effects as described in the previous section. The most useful end product of simulations is a determination of how surface pressure varies with such parameters as Mach number and the angle of attack. Varying parameters on the computer is much more economical than doing enormous numbers of experiments in a wind tunnel.

The simple model provided by the Euler equations is remarkably good, airplane flight being basically stable. But key elements are missing. Despite considerable effort, there is still no good mathematical model for the turbulent boundary layer, and when one is found it will increase the size of the computation at least as much as by adding a dimension. An ultimate goal of design is to pick an optimal pressure distribution and then find the aircraft shape that corresponds to it. Such inverse problems also increase drastically the computational needs. The hope is that computer hardware speedups and algorithmic improvements will combine to make these goals achievable.

One area of particular note is in the design of aircraft engines. A typical example is a turbomachinery compressor simulation where instantaneous temperature contours are calculated. This computation is based upon time-dependent Navier-Stokes equations. Simulations show viscous wakes created by the blades and how some of the blades chop or break these wakes into different pieces, creating an extremely complex flow pattern. This flow pattern would be difficult or impossible to describe and adjust without dependable mathematical models coupled with computer simulations.

For very-high-altitude high-speed conditions, numerical simulations are also being used for vehicle design. At these altitudes, air can dissociate into its atomic constituents and even eventually ionize, creating a situation that is extremely difficult to simulate in ground-based experimental facilities. As a result, numerical flow simulations, with the appropriate air chemistry models added, are currently being used as an integral part of the design process for many high-speed or atmospheric entry vehicles.

The best summary of the situation has been given by Goldhammer and Rubbert:

    The present state-of-the-art has progressed to the point where the design engineer no longer considers Computational Fluid Dynamics (CFD) to be an irritant imposed on him by a seldom seen researcher, but rather CFD is regarded as the primary aerodynamic design tool for any problem, and the wind tunnel is treated as more of an evaluation and confirmation tool.

4. PHYSIOLOGY

In the realm of physiology, mathematical modeling has come into its own over the last ten years. Today there are computational models of the heart, the kidney, the pancreas, the ear, and many other organs. Many of these models rely on fluid dynamics.

Physiological fluid dynamics has a long and illustrious history. Leonardo da Vinci first described the vortices that form behind heart valves and that enable the valves to avoid backflow by closing while the flow is still in the forward direction. Leonhard Euler first wrote down the partial differential equations for blood flow in arteries.

With the recent flowering of computer technology and numerical algorithms, there is unprecedented opportunity to simulate the fluid dynamic functions of the human body at a level of detail sufficient to be of use in the understanding and treatment of disease. For instance, blood flow in the heart is governed by coupled equations of motion of the muscular heart walls, the elastic heart valve leaflets, and the blood that flows in the cardiac chambers. Computer solutions allow one to study both normal and diseased states, and lead to the design of prosthetic devices such as artificial valves and artificial hearts.

The methods used have a very general character since they are applicable to any problem in which a fluid interacts with an elastic medium of complicated geometry. Among these are the flow of suspensions, blood clotting, wave propagation in the inner ear, blood flow in arteries and veins, and airflow in the lung. Like much of computational fluid dynamics, this work pushes computer technology to its limits, and future progress is strongly tied to the further development and availability of supercomputers.

Early research began with the development of a two-dimensional computer model of the left side of the heart. The model was designed for computer experiments on the mitral valve, which has the appropriate symmetry for two-dimensional studies. The computer experiments were successfully compared with physiological experiments, such as those studying the optimal timing of the atrial contraction in relation to that of the ventricular contraction. The computer model was trustworthy enough to use for parametric studies leading to optimal designs of prosthetic cardiac valves.

With supercomputers, it has become possible to extend this work to three dimensions. This raises the prospect of additional applications such as flow through the aortic valve, the mechanical consequences of localized damage to the heart wall, interactions of the right and left ventricle, flow patterns of blood in the embryonic and fetal heart, the fluid dynamics of congenital heart disease, and the design of ventricular-assist devices or even total artificial hearts.

A general-purpose three-dimensional fiber-fluid code has already been developed that solves the equations of motion of a viscous incompressible fluid coupled to an immersed system of elastic or contractile fibers, using the vector architecture of the Cray. The fiber-fluid code has been tested on problems involving an immersed toroidal tube composed of two layers of spiraling fibers. In one of these tests, the fibers were contractile (i.e., muscular) and peristaltic pumping was achieved by sending a wave of muscle contraction around the tube. With a sufficiently strong contraction, a small region of entrained fluid was seen being convected along at the speed of the wave.

A three-dimensional fiber-based model of the four-chambered heart and the nearby great vessels is now under construction for use with the general-purpose fiber-fluid code described above. It includes the right and left ventricles, realistic aortic and pulmonic valves complete with sinuses and the beginnings of their respective arteries, and preliminary versions of the mitral and tricuspid valves.

5. MEDICAL SCANNING TECHNIQUES

Significant progress in inverse problems in medicine has occurred in the last five years. CAT scanning itself is no longer a research topic but is an established, fixed technology. The new advances have occurred in magnetic resonance imaging (MRI) and in emission tomography, which are similar to CAT scanning from a superficial mathematical viewpoint in that they each involve indirect measurements of a three-dimensional quantity or image of interest and then use mathematical inversion of the measured quantities to reconstruct the actual image.

In MRI a large magnet and surrounding coil measure the resonating magnetic field inside the patient, due to an unknown density of hydrogen atoms, which act like little spinning magnets. The mathematics used to reconstruct the hydrogen density uses the inverse Fourier transform applied to the measured signal. This allows the determination of the density of magnetic spins, or the concentration of hydrogen atoms inside the patient, which in turn gives an image of the interior tissue similar to but much better than a CAT scan. Bones appear black instead of white because, while they have a high X-ray attenuation density, they have a low hydrogen density, being largely calcium. Just as in CAT scanning, mathematics is one of the chief technologies in MRI, the main feature being fast and accurate inversion of the Fourier transform.

The new mathematics of emission tomography (ET) is very different from that of either CAT or MRI and involves a nonlinear inversion procedure. In ET a compound such as glucose is introduced into the body with the carbon atoms in the glucose being radioactive isotopes of carbon. Counts of radioactivity are measured in a bank of detectors surrounding the body. One mathematically inverts the detected count data and reconstructs the emission density; i.e., one finds where the radionuclide was deposited by the body's metabolism. A beautiful and elegant new algorithm produces an emitter density that to a first approximation maximizes the probability of seeing the actual observed counts. This statistically based maximum likelihood algorithm has the great advantage that it addresses the main limitation of ET, namely that it is count-limited. The mathematics involves no Fourier transforms, but instead the convergence of a nonlinear iteration scheme. Given the universality of mathematics, it should not be surprising that the algorithm is new only to ET: it is a known algorithm that first arose in the 1960s in a problem in decryption of Soviet codes. Emission tomography has so far been mainly used not as a clinical tool, but to study metabolism in the human being.
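
The maximum likelihood reconstruction described above is commonly carried out with an expectation-maximization (EM) iteration. The sketch below shows the standard multiplicative update for Poisson count data; the tiny system matrix is invented for illustration and stands in for the true detector geometry.

    import numpy as np

    def em_reconstruct(A, counts, n_iters=50):
        """Maximum likelihood EM iteration for emission tomography.

        A[i, j] = probability that an emission from pixel j is recorded
        by detector i; counts[i] = observed counts in detector i.
        """
        lam = np.ones(A.shape[1])            # start from a flat emission image
        sensitivity = A.sum(axis=0)          # total detection probability per pixel
        for _ in range(n_iters):
            expected = A @ lam               # expected counts under the current image
            ratio = counts / np.maximum(expected, 1e-12)
            lam *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return lam

    # Toy example: 3 detectors viewing 4 pixels (made-up geometry).
    A = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.0, 0.5, 0.5]])
    true_image = np.array([1.0, 4.0, 2.0, 0.5])
    counts = A @ true_image                  # idealized, noise-free measurements
    print(em_reconstruct(A, counts))

Each pass keeps the image nonnegative and increases the Poisson likelihood of the observed counts, which is why this kind of iteration copes well with the count-limited data mentioned above.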

6. GLOBAL CHANGE

Of the many environmental issues that have received public attention in recent years, perhaps the most far-reaching is the possible effect of human activity on the earth's climate. The greenhouse theory holds that recent modification of the atmospheric gaseous composition will result in a gradual warming of the earth's surface as well as a cooling of the upper atmosphere, leading to an unprecedented (in historical times) modification of the earth's climate. However, natural climatic changes can mask this increase, and there is a critical need to study quantitatively the magnitude of the greenhouse effect within a global climate model under various scenarios, past, present, and future.

The basic theoretical principles rest on the notion of an equilibrium climate, where incoming solar radiation, which is absorbed in the atmosphere and at the surface of the earth, must equal the thermal energy radiated out into space. This balance determines the average temperature at the surface of the earth. The greenhouse effect occurs when the outgoing radiation is partially absorbed by particles and molecules in the upper atmosphere, or troposphere, principally the top 10 to 15 kilometers.

Three-dimensional general circulation models provide a means of simulating climate on time scales relevant to studies of the greenhouse effect. These models, which numerically solve a nonlinear system of partial differential equations, are being used to compute differences between a climate forced by increases in greenhouse gases and a control or current climate. The underlying equations are Euler's equations, with simplifications to take into account the thinness of the atmosphere relative to the distances across the surface of the earth. Long-term predictions must account for thermal adjustment of the oceans, over a time scale of decades, and models need to be devised that are suitable for this purpose. It is important to track not only mean surface temperatures, but also spatial and temporal changes in temperature variability, which can have equally important consequences. These studies will require accurate codes and precise estimates of sensitivity to forcing by the various greenhouse gases on a variety of time scales. As in other geophysical flow calculations, reliable turbulence models are needed in order to estimate turbulent transport.
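
The equilibrium balance described above can be illustrated with a zero-dimensional energy-balance calculation. This is a back-of-the-envelope sketch, not one of the general circulation models discussed in the report, and the albedo and emissivity values are typical textbook numbers rather than figures taken from the report.

    # Zero-dimensional energy balance: absorbed solar radiation = emitted thermal radiation,
    #   S0 * (1 - albedo) / 4 = emissivity * sigma * T**4
    S0 = 1361.0        # solar constant, W/m^2 (approximate)
    albedo = 0.30      # planetary albedo (typical value)
    sigma = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

    def equilibrium_temperature(emissivity):
        return (S0 * (1.0 - albedo) / (4.0 * sigma * emissivity)) ** 0.25

    print(equilibrium_temperature(1.0))    # about 255 K: no greenhouse absorption
    print(equilibrium_temperature(0.61))   # about 288 K: greenhouse absorption mimicked by a lower effective emissivity

Changing the effective emissivity is the crudest possible stand-in for absorption by greenhouse gases; the general circulation models described above resolve that absorption, and the resulting fluid motion, in three dimensions.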

At a more theoretical level, a basic goal should be to identify the "minimal" dynamical description of the atmosphere-ocean-land system that could, on the time scale of decades, provide reliable estimates of climatic change. Methods from dynamical systems theory (see the next section) can be used to reduce the dimension of the system and thereby isolate an "attractor" involving only the dynamical variables essential to greenhouse studies. Detailed but economical calculations of climate sensitivity might then be accessible, giving a new understanding of the influence various incremental combinations of greenhouse gases have on the equilibrium climate.

7. CHAOTIC DYNAMICS

The early observations of trajectories of celestial objects appeared to indicate periodic or, at worst, quasiperiodic behavior that could easily be accounted for in mathematical terms. But at the turn of the twentieth century, Poincaré realized that the behavior of trajectories of celestial bodies could be immensely complicated, displaying a "chaotic" motion, forever oscillating yet irregular and aperiodic. Moreover, Poincaré identified a crucial property of systems with chaotic trajectories: sensitive dependence on initial data, which is of particular importance for scientists because very small errors in the measurement of the current system state would result in very unrealistic long-term predictions.

In 1963, a detailed numerical examination of a specific system of differential equations from meteorology revealed unexpected chaotic trajectories. This work not only pointed out the presence of chaotic trajectories in a specific non-Hamiltonian system but also suggested new directions of research in the theory of dynamical systems. Mathematicians and scientists have come to recognize that the amazingly complicated behavior that Poincaré spoke of, and that was demonstrated in these calculations for new kinds of attractors, was in fact present in a wide variety of practical nonlinear systems from ecology, economics, physics, chemistry, engineering, fluid mechanics, and meteorology.

The advent of the computer was essential to these developments, but equally important were the deep mathematical insights. Indeed, the theory of dynamical systems has a rich mathematical tradition, one that involves many areas of mathematics: topology, number theory, measure and ergodic theory, and combinatorics have all been essential to the understanding of dynamical systems, especially the ones exhibiting chaotic behavior.

For instance, in dynamical systems with two or more attractors (that is, several types of long-term behavior depending on the initial state), the ability to predict long-term behavior requires a detailed knowledge of the boundary between these different kinds of initial states. These boundaries can be extremely complicated and strange ("fractals"). These fractal basin boundaries are currently under investigation by scientists, who need to understand physical systems, and by topologists, who see fascinating mathematical structures occurring in a natural way.

A further important development was the realization that regularities can be expected whenever chaos arises through an infinite series of "period doubling bifurcations" as some parameter of the system is varied. Ideas explaining these regularities eventually led to rigorous proofs of the phenomenon. The first rigorous proof carried out the detailed analysis on a computer using a procedure called interval analysis: all calculations are performed with error bounds, so that results lie in intervals. The computer adds or multiplies intervals in which the correct results lie so that all errors are perfectly bounded.

In the analysis of dynamical systems, there is a great need to compute dynamical entities other than chaotic attractors. Mathematicians are now beginning to create new numerical methods for computing the stable and unstable manifolds of which Poincaré spoke. In a related vein, identification of "inertial manifolds" for partial differential equations is a promising route in the quest to reduce the essential dynamics of an infinite-dimensional dynamical system to that of an appropriate finite-dimensional one. Finally, mathematical investigations of dynamical systems without the concern of immediate applicability, such as work describing complicated flows on spheres, have yielded important insights.
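
The logistic map is not mentioned in the report, but it is the standard toy example of both phenomena discussed above: the period-doubling route to chaos and sensitive dependence on initial data. The short sketch below illustrates both.

    def logistic(r, x):
        return r * x * (1.0 - x)

    def long_run_values(r, x0=0.2, n_skip=500, n_keep=8):
        """Iterate the map, discard the transient, and report the values the orbit settles on."""
        x = x0
        for _ in range(n_skip):
            x = logistic(r, x)
        values = []
        for _ in range(n_keep):
            x = logistic(r, x)
            values.append(round(x, 4))
        return values

    # Period doubling: the long-run behavior settles on 1 value, then 2, then 4, then becomes chaotic.
    for r in (2.8, 3.2, 3.5, 3.9):
        print(r, long_run_values(r))

    # Sensitive dependence: two orbits starting 1e-9 apart separate rapidly in the chaotic regime.
    x, y = 0.2, 0.2 + 1e-9
    for step in range(1, 41):
        x, y = logistic(3.9, x), logistic(3.9, y)
        if step % 10 == 0:
            print(step, abs(x - y))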

8. WAVELET ANALYSIS

Wavelet analysis, a recent and exciting development in pure mathematics based on decades of research in harmonic analysis, is now addressing important applications in a wide range of fields in science and engineering. There are opportunities for further development of both the mathematical understanding of wavelets and their ever-expanding applications.

Like Fourier analysis, wavelet analysis deals with expansions of functions, but in terms of "wavelets." A wavelet is a given fixed function with mean 0, and one expands in terms of translates and dilates of this function. Unlike trigonometric polynomials, wavelets are localized in space, permitting a closer connection between some functions and their coefficients and ensuring greater numerical stability in reconstruction and manipulation. Every application using the Fast Fourier Transform (FFT) could be formulated using wavelets, providing more local spatial (or temporal) and frequency information. In broad terms, this affects signal and image processing and fast numerical algorithms for calculations of integral operators (not necessarily of convolution type).

Wavelet analysis is an outgrowth of 50 years of mathematics (Littlewood-Paley and Calderón-Zygmund theory) during which harmonic analysts, having realized the difficulties inherent in answering the simplest questions involving the Fourier transform, developed simpler flexible substitutes. Independent of this theory within pure mathematics, we have seen variations of this multiscale (multiresolution) approach develop (over the last decade) in image processing, acoustics, coding (in the form of quadrature mirror filters and pyramid algorithms), and in oil exploration. As a companion to the FFT it has been used in analyzing rapidly changing transient signals, voice and acoustic signals, electrical currents in the brain, impulsive underwater sounds, and NMR spectroscopy data, and in monitoring power plants. As a scientific tool it has been used in sorting out complicated structures occurring in turbulence, atmospheric flows, and in the study of stellar structures. As a numerical tool it can, like the FFT, reduce considerably the complexity of large-scale calculations by converting dense matrices with smoothly varying coefficients into sparse, rapidly executable versions. The ease and simplicity of this analysis have led to the construction of chips capable of extremely efficient coding and compression of signals and images.
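
As a minimal concrete example (the report singles out no particular wavelet), the Haar wavelet transform below splits a signal into local averages and local differences at successive scales. The localization referred to above shows up directly: a feature in the signal affects only the few coefficients whose wavelets overlap it.

    import numpy as np

    def haar_step(signal):
        """One level of the Haar wavelet transform: pairwise averages and differences."""
        s = np.asarray(signal, dtype=float)
        averages = (s[0::2] + s[1::2]) / np.sqrt(2.0)
        details = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        return averages, details

    def haar_transform(signal):
        """Full multiscale decomposition (signal length a power of 2)."""
        coefficients = []
        approx = np.asarray(signal, dtype=float)
        while len(approx) > 1:
            approx, detail = haar_step(approx)
            coefficients.append(detail)
        coefficients.append(approx)
        return coefficients

    # A signal that is flat except for a bump: only the detail coefficients
    # whose wavelets straddle the edges of the bump are nonzero.
    signal = [1, 1, 1, 5, 5, 5, 1, 1]
    for level, c in enumerate(haar_transform(signal)):
        print(level, np.round(c, 3))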

9. NUMBER THEORY

There has been impressive progress in number theory in the past five years on several fronts, which in turn has opened up exciting new opportunities. One achievement has been a significant advance in our understanding of what is known as the "local-to-global" principle for certain algebraic curves. The idea of the "local-to-global" principle, which is contained in the classical theorem of Hasse-Minkowski, has to do with a single homogeneous quadratic equation (in many variables) and its solutions. To illustrate it, take a simple case,

    a·W² + b·X² + c·Y² + d·Z² + e·WX + f·YZ = 0,

where the coefficients a, b, c, d, e, f are integers. With these coefficients fixed, the question one wishes to resolve is, Are there integer values of the variables W, X, Y, and Z (not all zero) that "solve" the above equation, i.e., for which the left-hand side is zero? The Hasse-Minkowski theorem answers yes, if and only if (1) there is a positive integer D such that for each integer N there are integer values of W, X, Y, Z (depending on N) having greatest common divisor D and satisfying the constraint that a·W² + b·X² + c·Y² + d·Z² + e·WX + f·YZ is a multiple of N; and (2) there are real numbers W, X, Y, Z that "solve" the above equation.

There are reasons (which aid one's intuitive grasp of such problems) to think of the criteria (1) and (2) as asserting the existence of nontrivial local solutions of the equation, while integral solutions of the equation can be thought of as global solutions. This explains the adjective "local-to-global."

The Hasse-Minkowski theorem has been an enormously fruitful theme for research in the quest for broader domains in which the "local-to-global" principle or some modification of it remains valid. One may ask, To what extent does a similar result hold for homogeneous equations of degree higher than 2? It is an old result that such a principle is false for curves of degree 3.

A substitute for the "local-to-global" principle for equations of the above type ("curves of genus one"), which would be (if precisely controlled) every bit as useful as the original "local-to-global" principle, is the conjecture of Shafarevitch and Tate. This conjecture says that, accounting for things in an appropriate way, the "failure of the local-to-global principle" is measured by a finite group. This Shafarevitch-Tate group, which measures the extent to which the local-to-global principle is not valid, is an important object in its own right: it is the gateway to any deep arithmetic study of elliptic curves and to the very phrasing of conjectures that guide much research in this area. The conjectures of Shafarevitch-Tate and related ones of Birch and Swinnerton-Dyer comprise some of the great long-range goals of the discipline, and a piece of them has recently been established in a very interesting context.

Another facet of number theory that has seen an enormous amount of activity is arithmetic algebraic geometry. This has figured prominently in work on the arithmetic Riemann-Roch theorem, in the unifying conjectures connecting Diophantine problems to Nevanlinna theory and to differential geometry, and in recent results giving a new and very "geometric" proof of Mordell's conjecture over function fields, a proof that may translate to an analogous new proof in the number field context.

A third significant development of the last five years consists of the conjectures and results that bring some old Diophantine questions closer to the heart of issues perceived to be central to immediate progress in arithmetic. One might mention the recent conjectures of Serre, which suggest much important work to do in the theory of modular forms. Following these ideas, using a prior ingenious construction, it has recently been shown that the Shimura-Taniyama-Weil conjecture (that all elliptic curves over Q are modular) implies Fermat's last theorem.

Finally, a beautiful and simple conjecture (often called the ABC conjecture) has been formulated: there is a universal constant e such that, if A and B are nonzero integers with C = A + B, then |A·B·C| is less than the e-th power of the radical of A·B·C, where the radical of a number is the product of the distinct primes dividing it. An essentially immediate consequence of the ABC conjecture is an "asymptotic" version of Fermat's last theorem. It is also true that a vast number of other deep consequences would follow from the same conjecture.
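
The radical is easy to compute, and the flavor of the conjecture can be seen by tabulating a few triples with C = A + B. The sketch below is an illustration added here, not part of the report; it simply compares |A·B·C| with the radical for some small examples.

    from math import log

    def radical(n):
        """Product of the distinct primes dividing n."""
        n = abs(n)
        rad, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                rad *= p
                while n % p == 0:
                    n //= p
            p += 1
        if n > 1:
            rad *= n
        return rad

    # For each triple with C = A + B, compare |A*B*C| with its radical.
    for a, b in [(1, 2), (1, 8), (5, 27), (3, 125)]:
        c = a + b
        abc = abs(a * b * c)
        rad = radical(abc)
        exponent = log(abc) / log(rad)
        print(f"{a} + {b} = {c}: |ABC| = {abc}, radical = {rad}, log|ABC|/log(radical) = {exponent:.3f}")

Triples built from high prime powers, such as 1 + 8 = 9 or 3 + 125 = 128, make the exponent unusually large; the conjecture asserts that it nonetheless stays below a universal bound e.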

For manifolds M of dimension 5 and above, these problems were essentially solved in the late 1960s and early 1970s. Dimensions 3 and 4 are much harder, and are still far from being completely understood. Nevertheless, there has been dramatic progress on low dimensions in the last few years.

For instance, for problem (I) in 4 dimensions, the role of smoothness of possible equivalences of manifolds M and M' has come to the fore. Recently the following remarkable example was found: there is a smooth 4-dimensional manifold M that is topologically equivalent to ordinary 4-dimensional Euclidean space R^4 by a "crinkly" non-smooth map, but that cannot be transformed smoothly to Euclidean space. Perhaps even more amazingly, it has since been learned that the number of such different examples is uncountably infinite. This phenomenon occurs in no other dimension except 4. It came as a complete surprise, both because dimension-4 behavior is so different from the previously known behavior of other dimensions, and because of the remarkable source of the discovery. In fact, the key invariants used to distinguish the exotic 4-manifold from ordinary Euclidean space have their origin in the study of the Yang-Mills equations, originally introduced in particle physics. Thus, an important connection has arisen between particle physics and topology.

Important invariants for the study of problem (I) were also discovered for dimension 3. These invariants, called Casson invariants, recently shed light on a classical and fundamental problem of topology, the Poincare conjecture. The Poincare conjecture states that in 3 dimensions, the sphere is the only possible manifold (closed without boundary) whose simplest topological invariant, the fundamental group, is zero. The analogue of this conjecture has been proved in all dimensions except 3. The 4-dimensional case was done as part of the work described above. However, the 3-dimensional case is very difficult, and the statement may actually be false. A standard attempt to produce a counterexample was based on the so-called Rochlin invariant. The work on Casson invariants shows that this line of attack cannot yield a counterexample.

Another remarkable recent development in topology concerns Problem (II), in the classical context of knots. The main invariant in classical knot theory was the Alexander polynomial, developed in the 1930s. A weakness of the Alexander polynomial is that it fails to distinguish between a knot and its mirror image. In 1984, in the

course of study of questions on von Neumann algebras (a branch of functional analysis motivated by quantum mechanics), formulas were discovered that bore a striking similarity to classical algebraic formulas from the study of knots and braids. Pursuit of this connection led to the discovery of a powerful new invariant of knots and links, now called the Jones polynomial, which has the important advantage that it distinguishes a knot from its mirror image. It is also easy to compute.

Both the Jones polynomial and the work on exotic 4-manifolds arose through mathematical problems with strong connections to physics. This led to the conjecture that a quantum field theory, an exotic variant of the laws of particle physics, could be constructed, in which the experimentally observable quantities are the invariants described. Such a quantum theory has recently been constructed, and although the work is highly plausible, it has not been rigorously proved. Finding a complete, rigorous proof of these calculations is a challenge for future research.

11. SYMPLECTIC GEOMETRY

A fundamental development of nineteenth-century mathematics was Hamiltonian mechanics. A mechanical system composed of many particles moving without friction is governed by a complicated system of differential equations. Hamilton showed that these equations take a simple standard form when the Hamiltonian (the total energy of the system) is taken as the starting point. Hamiltonian mechanics revealed hidden symmetries in classical mechanics problems and was of tremendous importance in the discovery of statistical mechanics and quantum theory.

Today, mathematicians study Hamiltonian mechanics from a global and topological point of view. The basic object of study is a "symplectic manifold," a higher-dimensional surface on which Hamilton's procedure to pass from Hamiltonian functions to differential equations can be implemented. The study of symplectic manifolds is called symplectic geometry, and it has been revolutionized in the last few years. A major breakthrough was the use of nonlinear elliptic equations (see Section 1, "Recent Advances in Partial Differential Equations") and holomorphic curves. This yields a form of the Heisenberg uncertainty principle with many applications, including demonstrating the existence of exotic symplectic structures on Euclidean space

and leading to the solution of long-standing conjectures on the number of fixed points of symplectic transformations. The solution of these conjectures is in turn closely related to the Floer cohomology of topology.

If we restrict a symplectic structure to a surface of constant energy, we get a "contact structure." Along with the recent progress in symplectic geometry has come important work on contact structures. In particular, exotic contact structures have been discovered on Euclidean space, distinguished from the standard contact structure by "overtwisting" on embedded discs. Contact geometry has been recently used to show that any open manifold free of obvious topological obstructions can be given a complex structure and embedded into complex N-dimensional Euclidean space. (This works in dimension 6 and above.) The result thus relates complex analysis, contact geometry, and topology.

There is a very substantial branch of mathematics on the borderline between symplectic geometry and dynamical systems. It deals with the iteration of area-preserving maps of the plane. Such maps are the most basic examples of symplectic transformations. In addition to their theoretical importance in core mathematics, they arise in a range of applications from the orbits of asteroids to the confinement of plasmas in a Tokamak machine. As explained in the section on chaotic dynamics, iteration of such a map can lead to highly complicated unstable behavior. In the area-preserving case, however, the chaotic behavior coexists with a large class of orbits that are completely stable and predictable, and indeed are almost periodic. Such stable behavior occurs on a family of curves, called KAM curves, that surround those fixed points where the map twists. The discovery of KAM curves was a major development of the 1950s and 1960s.

It is an important and difficult problem to understand how the plane splits into regions of stable and unstable behavior. A particular case of this problem is to predict the size of the largest KAM curve. This is significant for applications, because the old KAM theory unfortunately could deal only with tiny curves. KAM curves of reasonable size were proved to exist in the last ten years. Recently, with computer-assisted methods, the sizes of the largest KAM curves for (presumably typical) examples of area-preserving maps have been computed to within 10%.
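The coexistence of stable and chaotic orbits described above is easy to explore numerically. The sketch below iterates the Chirikov standard map, a standard textbook example of an area-preserving twist map; the map, the parameter values, and the crude diagnostic are chosen here only for illustration and are not taken from the report.

```python
import numpy as np

def standard_map_orbit(theta, p, K, n_steps):
    """Iterate the Chirikov standard map, an area-preserving twist map:
    p' = p + K*sin(theta),  theta' = theta + p'  (theta taken mod 2*pi)."""
    orbit = np.empty((n_steps, 2))
    for i in range(n_steps):
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2 * np.pi)
        orbit[i] = (theta, p)
    return orbit

# For a small kick strength K this orbit should stay close to an invariant
# (KAM-like) curve, so p wanders little; for larger K most such curves are
# destroyed and p should range over a much wider band.
for K in (0.5, 2.5):
    orbit = standard_map_orbit(theta=1.0, p=2.6, K=K, n_steps=2000)
    print(f"K={K}: range of p over the orbit = {np.ptp(orbit[:, 1]):.2f}")
```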

More generally, the state of understanding of the breakdown of stability for area-preserving twist maps used to be that KAM curves are destroyed by nonlinear resonances that occur in orbits of unfavorable "frequency." In the last few years it has been discovered that stable behavior persists even for the resonant frequencies. Although the curves of KAM theory are destroyed by resonances, there remain stable Cantor sets of fractal dimension less than 1. This is an important change in our view of how stability can arise in complicated nonlinear systems. Much remains to be done in this field. Indeed, complete understanding of area-preserving maps of the plane is a remote goal.

12. NONCOMMUTATIVE GEOMETRY

A major mathematical development of the 1960s was to establish an intimate link between the field of analysis and the fields of topology and geometry. This unification of seemingly diverse areas of mathematics set the stage for numerous interrelations and the tone of mathematics today. The development of modern index theory provided this path. As a first step, mathematicians realized that geometric invariants of manifolds could be computed as analytic invariants of certain Laplace operators and Dirac operators on these manifolds. An abstract version of these ideas has become known as K-theory.

In the past five years we have seen a rejuvenation of K-theory, leading to the discovery of cyclic homology, cyclic cohomology, entire cyclic cohomology, and graded (i.e., super) KMS-functionals. These different topics all have been points of view within the new field of noncommutative geometry. Basically, the ideas of differential geometry have been shown to extend to a noncommutative setting. In particular, the calculus of differential forms and the homology of currents can be extended to deal with spaces such as the leaves of a foliation, the dual space of a finitely generated non-Abelian discrete group (or Lie group), or the orbit space of the action of such a group on a manifold. Such spaces are badly behaved as point sets and do not lend themselves to the usual tools of measure theory, topology, or differential geometry. They are better understood by means of associating a canonical algebra to each space. In the case of an ordinary manifold, this algebra is a commutative algebra of functions on the manifold, such as the algebra of essentially bounded measurable functions on the manifold (for measure theory), the algebra of continuous functions vanishing at infinity (for topology), or the algebra of smooth functions with compact support (for geometry). In the realm of noncommutative geometry, these algebras are replaced by noncommutative algebras. In special cases these algebras are von Neumann algebras or

C*-algebras; they lead to the generalization of de Rham cohomology and to its applications, K-theory and index theory. The basic framework to study such problems is a Z_2-graded algebra, a graded derivation of the algebra, and a trace that satisfies some basic cyclicity axioms. A basic result in this area is the association of a cyclic cocycle to this structure, and the construction of a Chern character for the derivation. In the commutative case, the construction reduces to the ordinary de Rham cohomology theory and its K-theory. In the noncommutative case, the framework is more general.

13. COMPUTER VISUALIZATION AS A MATHEMATICAL TOOL

In recent years computer graphics have played an increasingly important role in both core and applied mathematics, and the opportunities for further utilization are enormous. One core area where visualization has been of key significance is in the theory of surfaces. Complex problems that appeared to be intractable have been either completely or partially solved by insight gained from computer graphics.

One such example in surface theory, drawn from the study of soap films, has a long history. A loop of wire immersed in a soapy solution and then withdrawn will support a film of the soap solution characterized by its having the least area among all surfaces that have the given wire loop as boundary. Finding this minimal surface is easily expressed as a problem in the calculus of variations and thus reduced to the study of a certain partial differential equation, the minimal surface equation. While the solutions of this equation are not difficult to describe, at least in the small, the global behavior of the solutions is very delicate, and many questions remain open.

These problems actually have physical significance as well. For example, any physical soap film will not cross itself (i.e., it is embedded), but this property is difficult to determine from the standard representation of the solutions to the minimal surface equation. In fact, up until five years ago, there were only two known embedded minimal surfaces that were complete in the sense that they had no edges. These were the ordinary plane and a surface of revolution called the catenoid. In fact, it had been conjectured that these were the only complete embedded minimal surfaces in three-space.
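For readers who want to see the minimal surface equation concretely, the short symbolic check below writes one half of the catenoid as a graph z = u(x, y) and substitutes it into the equation. The graph form of the equation and the parametrization used are standard textbook material rather than anything taken from the report.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def minimal_surface_lhs(u):
    """Left-hand side of the minimal surface equation for a graph z = u(x, y):
    (1 + u_y^2) u_xx - 2 u_x u_y u_xy + (1 + u_x^2) u_yy."""
    ux, uy = sp.diff(u, x), sp.diff(u, y)
    uxx, uyy, uxy = sp.diff(u, x, 2), sp.diff(u, y, 2), sp.diff(u, x, y)
    return (1 + uy**2) * uxx - 2 * ux * uy * uxy + (1 + ux**2) * uyy

# Upper half of the catenoid x^2 + y^2 = cosh(z)^2, written as a graph.
catenoid = sp.acosh(sp.sqrt(x**2 + y**2))
lhs = minimal_surface_lhs(catenoid)
print(sp.simplify(lhs))                      # should simplify to 0
print(sp.N(lhs.subs({x: 1.3, y: 0.9})))      # numerical spot check, close to 0
```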

In 1983 a new example was found of a minimal surface that had the topology of a torus punctured at three points. This surface seemed, by evidence based on the theory of elliptic functions, to be a good candidate for a counterexample to the above conjecture. However, the complexity of the defining equations made a direct attack on the embedding problem difficult. When a computer was used to make sketches of the surface, the surface was seen to have extra symmetries that had been overlooked in the purely analytic description. These symmetries not only made the surface easier to visualize, but also suggested a possible line of reasoning that eventually led to a proof that the surface was indeed embedded, thus disproving the conjecture. Moreover, features of this surface suggested a generalization that allowed mathematicians to construct an infinite family of embedded complete minimal surfaces. These new examples have invigorated the subject of minimal surfaces in general, and recent progress in the subject has been closely linked to computer graphics.

More general calculus-of-variations problems have recently been approached by computer graphics techniques, which are invaluable in formulating and testing conjectures. It is clear that our understanding of global and stability problems in the calculus of variations is being tremendously enhanced by computer graphics. As an example, in 1988 the first computer simulations and visualizations of soap bubble clusters and other optimal energy configurations in three dimensions were computed and displayed. In particular, this allowed close study and experimentation with the geometry of the interfaces. Programs were also developed that in principle allow the interactive construction of minimal area surfaces: draw a knot in space, specify the topological type of surface of interest, and the program will compute and display a beautiful minimal surface in that class. Along similar lines, it has been possible for the first time to compute and visualize some striking crystalline minimal surfaces.

In an entirely different direction, a theory called "automatic groups" has been developed. This is the theory of that class of infinite groups that can be analyzed by finite-state automata, for example, word problems that can be solved by computer; the theory involves issues similar to those used in constructing word-processing programs. Typical automatic groups include the groups of geometry. A computer program has already been used in explorations of the limit set of certain quasi-Fuchsian groups. More generally, the theory is required in order

to make a catalog of hyperbolic three-manifolds by computer, an effort that is already well under way.

14. LIE ALGEBRAS AND PHASE TRANSITIONS

The past five or six years have seen a fascinating interplay between various branches of pure mathematics and the physical theory of phase transitions in a two-dimensional world. It should be noted that physics in two dimensions is not just a theoretical curiosity: surface phenomena and thin films are much-studied experimental realizations of the theories discussed here. The modern era started in 1944 when Lars Onsager solved the Ising model of ferromagnetism for two-dimensional lattices. The Ising model gives the simplest possible picture of a magnet: "spins" that can point only "up" or "down" sit on the sites of a space lattice and are coupled by pairwise short-range interactions favoring parallel alignment. Onsager's solution showed, for the first time, that a phase transition is accompanied by non-analytic behavior of various physical quantities; for example, the spontaneous magnetization vanishes at a rate proportional to (T - Tc)^β as the temperature T approaches its critical value Tc, where β is a characteristic exponent. Subsequently, other exactly soluble statistical mechanical systems were found, leading to a large class of completely integrable two-dimensional models.

A remarkable feature found in all these models (and also in heuristic studies of polymer systems, percolation problems, and other two-dimensional systems) was that the characteristic exponents describing the critical non-analyticities were always equal to rational numbers. A deep result of the mathematical developments during the 1980s is that these rational numbers are now understood to label representations of a symmetry algebra of the system, in much the same way that the mysteries of atomic spectra in the beginning of the century were understood in terms of the representation theory of the three-dimensional rotation group.

One line of the development started with the introduction of a natural set of infinite-dimensional Lie algebras (Kac-Moody algebras), central extensions of loop algebras of the classical Lie algebras. At the same time another infinite-dimensional Lie algebra, the Virasoro algebra, entered physics in the dual resonance models and string theory. While the dual models lost much of their interest for physicists in the 1970s, there were important mathematical developments that grew from them: for instance, the development of a formula for the determinant of the

contragradient form of a highest-weight module of the Virasoro algebra, formulas for the characters of integrable representations of the Kac-Moody algebras, and the explicit construction of these representations in terms of vertex operators.

The statistical mechanical developments started in 1984 with the realization that the conformal invariance expected for a physical system at a critical point is, in two dimensions, realized as a symmetry under two Virasoro algebras. The particular central extension (parametrized by a positive "charge" c) characterizes the physical system. The critical exponents turn out to be the highest weights of the representations of the algebra. It was shown that there are very special representations, so-called degenerate ones for c < 1, having special rational weights and charges, and it was argued that some of these correspond to known physical models, the Ising model in particular. Subsequently, it was shown that with the additional physical assumption of unitarity, all the c < 1 critical statistical systems could be classified and all their exponents computed. Translating this analysis to physical language resulted in explicit computations of the asymptotic correlation functions for the c < 1 theories, thus effectively showing that they are all completely soluble.

Progress in the c > 1 theories has since been made using the theory of Kac-Moody algebras. It was shown that these algebras occur as symmetry algebras of a two-dimensional field theory. The conformal symmetry now turns out to be closely connected to the algebra of the Kac-Moody symmetry: the Virasoro algebra is embedded in the enveloping algebra of the Kac-Moody algebra by an algebraic construction. This provides many new concrete Virasoro representations, and more importantly the so-called coset construction. It was shown that a given representation of a Kac-Moody algebra leads to a host of Virasoro representations, corresponding to subalgebras of the Lie algebra. Thus an even more general infinite family of critical statistical systems was identified on the basis of symmetry. The possibility of classifying so many, if not all, statistical mechanical systems exhibiting critical behavior, albeit in two dimensions, would have been considered purely utopian only ten years ago.

15. STRING THEORY

Some of the most exciting developments in recent mathematics have their origin in the physicists' string theory, the so-called "theory of everything." This development offers a classic example where a physical

science required a great deal of sophisticated mathematics, much of which had, surprisingly, already been worked out by mathematicians pursuing other "pure" mathematical motivations. The physical theory returned the favor with a cornucopia of new intuitions and insights. The turnaround time for such cross-fertilization may have set speed records in this instance!

To the nonspecialist the most striking feature about string theory is that it replaces the idea that the smallest idealized physical particle might be thought of as a concentrated "point particle" with the idea of exceedingly small but extended strings. A point particle moving through space with time traces out a trajectory, called the "world line" of the particle, which summarizes its physical history. A string moving through space with time traces out a surface, called the "world sheet." The underlying mathematics of surfaces is much more sophisticated than that of curves, so the basics of string theory are much more complex than previous physical theories.

The entry of new mathematics into string theory is forced by principles of invariance: one must eliminate superfluous parameters from the description of the theory. Three such principles emerge: parameter invariance on the string (the labeling of positions on the string is physically irrelevant); conformal invariance on the world sheet (only angles and not lengths are important prior to the appearance of mass); and gauge invariance on the string (physical quantities will be independent of the measuring frames of reference). This last principle has already had a profound effect in physics and mathematics, being the basis of all current descriptions of electromagnetism and elementary particles. Parameter invariance calls upon the theory of an infinite-dimensional symmetry group of the circle, the diffeomorphism group. Gauge invariance calls upon the theory of the infinite-dimensional Kac-Moody algebras and groups. These rose to prominence in mathematics both for mathematical reasons and because of their use in earlier physics as "current algebras." Finally, conformal invariance calls upon a vast wealth of algebraic geometry and moduli theory for Riemann surfaces (Teichmuller theory). This is most surprising, since previously algebraic geometry had seemed largely remote from the physical world.

When matter appears and gravitational effects must be described, current string theory calls for the replacement of points in space-time by very small closed six-dimensional surfaces! In order to reproduce

the physics we already know at larger length scales, the geometry of these surfaces will have to obey an analogue of Einstein's equations from general relativity. These had already been studied by mathematicians. Their work was motivated by algebraic geometry and partial differential equations, the latter being classically the main vehicle for exchange between mathematics and physics.

Many mathematical problems for future study have been posed by string theory. The most significant appear related to the study of surfaces considered up to conformal equivalence (as in conformal invariance above), and the topology and geometry of other low-dimensional figures (three- and four-dimensional surfaces). Indeed, Witten has developed a physical dictionary of the entirety of low-dimensional geometry. This dictionary suggests, for physical reasons, a long list of deep questions and constructions. For example, in the study of knots in three-space, this physical picture contains whole new outlooks on even the most subtle recent studies of knots (including the Jones polynomial; see Section 10). On the other hand, string physics is giving more shape and direction to our study of infinite-dimensional geometry. This will be an open task for years to come.

It is eerie and uncanny, both to physicists and mathematicians, that what was considered central and important for pure or aesthetic reasons by mathematicians has proved ultimately to be the same mathematics required by physical theory. It should be emphasized that this mathematics derives from a period considered the most abstract in the history of the field.

16. INTERACTING PARTICLE SYSTEMS

This area of probability deals with configurations of particles that evolve with time in a random manner. Typically, a particle moves, dies, or gives birth according to its specified law, which depends only on the state of the system in some neighborhood of the particle. Interacting particle systems have their roots in the study of the Ising model described above but now pertain to a wide variety of applications, from the study of biological systems to image processing for medical and defense purposes. The contact process, a basic model for the spread of a biological population, was introduced in 1974. In contrast to branching process models, this system allows there to be only a bounded number of individuals per unit area. This physically

reasonable constraint makes the system very difficult to study analytically. The basic properties of the one-dimensional case (linear growth of the system when it survives and exponential decay of the survival probability when it dies out) were settled early in this decade, but only very recently have the corresponding facts been proved for the important two-dimensional case.

Variations of the contact process with several types of particles are now being applied to the study of competition of species and host-parasite or predator-prey systems. Other models more closely related to percolation are being applied to the study of the recent forest fires in Yellowstone. A third example is the study of the distribution and dynamics of Antarctic krill. This last system is particularly interesting since it displays patterns on multiple spatial and temporal scales. The preceding are just three of a growing list of examples that show interacting particle systems are an appropriate framework for understanding the mechanisms at work in various ecological phenomena.

17. SPATIAL STATISTICS

The keen interest in the development of theory and methods for spatial statistics is strongly driven by an array of applications including remote sensing, resources estimation, agriculture and forestry, oceanography, ecology, and environmental monitoring. The common thread is the characterization and exploitation of proximity. Some of the outstanding opportunities for future progress are outlined here.

In geophysical remote sensing, data usually arrive in gridded form, commonly corrupted by unwanted atmospheric effects and positional and measurement errors, often using multiple wavelength bands making the data multivariate, and almost always in large quantities. Typical questions are, How does one suppress errors, how does one combine the information from different wavelengths, how does one extract patterns, how well is one doing, how far can the data be pushed, and what would be the value of additional data? Statistical approaches to answering all such questions will be profoundly affected by proximity considerations.

Procedures for suppression of unwanted effects must often make do with weak specifications of those effects. Procedures for extracting underlying patterns from the combination of multiband information should be strongly guided by probabilistic models with sufficient richness.

On the other hand, estimates of statistical precision should be as model-free as possible. These requirements present important challenges to research statisticians, with technical and computational difficulties considerably surpassing those associated with related problems in time series analysis or one-dimensional stochastic processes.

Spatial data for resources estimation and environmental monitoring are typically not obtained on regular grids and are comparatively sparse. With regard to resources estimation, while data obtained from core drilling may have good vertical resolution, the cores may be preferentially located in high-grade zones. What one usually wants is an estimate of total reserves, the frequency distribution of resource blocks, and a rational exploitation plan based on localized resource estimates, together with measures of uncertainty to guide the need for further exploration efforts. The original and still widely used statistical methodology, based fundamentally on Gaussian process modeling, is not particularly well adapted to the highly erratic nature of resources data, and considerably more research is needed to deal honestly with the particular qualities of resources data.

In environmental monitoring, highly impacted areas may be preferentially sampled. The unevenness and selectivity of environmental data present important challenges for statistical modeling and inference. Furthermore, at each monitoring location there will typically be available a time series of data with seasonal variation. What one usually wants to know is how environmental quality is changing. Since data records are usually short in relation to the amount of change that can be directly observed amidst large natural variability, a sound methodology is needed to combine information from multiple monitoring locations. Also, the augmentation and rearrangement of monitoring resources require research in statistical design problems that goes well beyond the simple idealized design problems for which we have some answers.

During the last decade substantial strides have been made in the development of appropriate theory and methods for solving spatial problems using statistical tools. Thus the needed research described above has a substantial foundation on which to build. An important recent advance in spatial statistical research is the development and application of flexible spatial Markov lattice models for image processing and the application of powerful techniques such as the Metropolis algorithm for implementation of these models.

Other developments include spatial smoothing techniques that adjust to the local complexity of spatial patterns, dimensionality-reduction methodology for multivariable spatial data based on spatial structure criteria, development of mathematically tractable non-Gaussian spatial field models for continuous parameter fields, non-linear non-parametric spatial interpolation methods for discontinuous spatial fields, demonstration of the theoretical links between spline methods, kriging methods, and Wiener filters, and cross-validation methodology for calibrating spatial estimation techniques and assessing their precision.

18. STATISTICAL METHODS FOR QUALITY AND PRODUCTIVITY

During the decades since World War II the industries that have raised their productivity and the quality of their products have survived and prospered. Those that have not have done poorly or gone out of business. Statistical methods to analyze production processes are indispensable tools for engineers to increase quality and productivity. There are four areas of statistical methods that are particularly heavily used: (1) statistical process control, (2) statistical experimental design, (3) reliability, and (4) acceptance sampling.

Statistical process control consists of methods for assessing the behavior of an engineering process through time, in particular for spotting changes; the major methods in this category are control charts, a topic in need of new thinking. Most of the methods now in place go back decades and were invented in an era when computation was done by hand and when data were often assumed to be normally distributed since not doing so led to intractable computation. For example, variability in control-chart methods is often measured by the range of the data, an easy number to compute. Control-chart methods need to be rethought from the ground up.

Experimental design is a crucial technology for isolating factors that can be changed to improve processes; thus, to achieve continuous improvement of an engineering process, design experiments probing the process must be continuously run. Most research in this area has focused on understanding how the mean level of a response depends on the factors; models are used in which the variance is either constant or, if it varies, is viewed as a nuisance parameter. But for many engineering processes, variance changes and is as crucial an object as the change in mean level; this is the case, for example, in robust design.

Robust design recognizes that the quality of most products depends on a large number of factors. Some factors, referred to as "control parameters," are inexpensive to control, while others, referred to as "noise parameters," are costly or even impossible to control. The goal is to design a product whose performance is insensitive, or robust, to the noise parameters.

Methods of reliability are used to study the lifetimes of components and systems and to relate these lifetimes to factors that determine performance. In this area, the research community needs to develop methods with which engineers can shift from analysis of time-to-failure measures to the analysis of measures of degradation, which are more informative. Also in this area, more work is needed on models for data from accelerated failure-time experiments.

Acceptance sampling consists of methods for inspecting a lot of a product to determine if it is acceptable; in some cases every item in the lot is tested, but more typically a sample of items is selected for testing, and inferences are made about the entire lot. New attacks on methods are needed; some past work has suggested that Bayesian approaches are a fruitful avenue to follow.

19. GRAPH MINORS

A minor of a graph is any graph obtained from a subgraph of the original graph by contracting edges. A number of interrelated results on graph minors, some in "pure" graph theory and some relevant to the design of algorithms, are among recent achievements. The results open up new avenues for research and suggest a number of problems and opportunities.

An old and completely solved question in network flow theory supposes that one is given a graph in which some vertices are called "sources" and some are called "destinations." It is to be decided whether there are, say, ten paths in the graph, running from the sources to the destinations and not meeting one another. How can one program a computer to decide this? One way is to list all the paths, and try all combinations of ten of them, but that takes far too long, even for quite a small graph. Fortunately (because this is a very important problem, with a huge number of applications) there is another algorithm, more indirect but very efficient. Thus, this problem can be viewed as completely solved.
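One classical way to answer such questions indirectly uses network flow: with unit capacities, the maximum flow from the sources to the destinations equals the largest number of disjoint source-to-destination paths. The sketch below is a minimal illustration of that idea for edge-disjoint paths on a small made-up directed graph; the graph, the function name, and the super-source/super-sink device are ours, offered only as an illustration of the flow technique.

```python
from collections import deque

def max_unit_flow(adj, source, sink):
    """Count edge-disjoint paths from source to sink by repeatedly finding
    augmenting paths with BFS (unit capacities; the residual graph is kept
    as a dictionary of edge sets)."""
    residual = {u: set(vs) for u, vs in adj.items()}
    for vs in adj.values():
        for v in vs:
            residual.setdefault(v, set())
    paths = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in residual[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return paths
        # Augment: remove forward edges along the path, add reverse edges.
        v = sink
        while v != source:
            u = parent[v]
            residual[u].discard(v)
            residual[v].add(u)
            v = u
        paths += 1

# A small made-up network: a super-source 's' feeds the sources,
# and the destinations feed a super-sink 't'.
graph = {
    's': {'a', 'b'}, 'a': {'c', 'd'}, 'b': {'d', 'e'},
    'c': {'t'}, 'd': {'t'}, 'e': {'t'}, 't': set(),
}
print(max_unit_flow(graph, 's', 't'))   # prints 2: two edge-disjoint s-t paths
```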

If the question is changed slightly, and it is posited instead that there are ten sources and ten destinations and one requires the first path to run from the first source to the first destination, the second path to run from the second source to the second destination, what then? This new problem is much more difficult. Even the two-paths problem (with two sources and two destinations instead of ten) is difficult, and until recently the three-paths problem was unsolved. One of the main results about graph minors is that for any number (say ten), there is an efficient algorithm to solve the ten-paths problem. ("Efficient" here is used in a technical sense, meaning "with running time bounded by a polynomial in the number of vertices of the graph.")

A second result was a proof of an old conjecture of Wagner, that in any infinite collection of graphs there will be one containing another. (A graph "contains" another if the second can be obtained from a subgraph of the first by contracting edges.) This is of interest in "pure" graph theory, but it also has algorithmic applications if it is used in combination with the algorithm described earlier. For instance, suppose that one would like to know if a certain graph can physically be constructed, using wires for edges, in such a way that no circuit of the graph is "knotted." No efficient algorithm is known to decide this. But it follows that an efficient algorithm for the problem exists, even though no one has found it yet.

One can show a similar result in great generality that has a considerable number of applications in theoretical computer science. Suppose that it is desired to design an efficient algorithm to test if a graph has a certain property. For some properties it is impossible to find such an algorithm; but suppose that no graph with the property contains any graph without the property. (For instance, the property of being knotlessly constructible satisfies this condition.) Then there is an efficient algorithm to test if a graph has the property, although it may not have been found yet. It should be emphasized that knowing that an efficient algorithm exists, even though one has not yet been found, is an important and significant piece of information.

20. MATHEMATICAL ECONOMICS

Although mathematical discussion of the operation of markets began in the last century, the first rigorous mathematical description of the fundamental economics in the operation of markets came in the late 1940s and early 1950s. This start culminated in the famous model of

general equilibrium (GE) under the hypothesis of complete markets, that all commodities can be bought and sold in markets that meet at the same time, and with perfect credit. In equilibrium, supply equals demand, and each household acts in its own interest, given its budget constraint.

However, the complete-markets model just described suffers from a major drawback. Planning for future contingencies is an essential part of the economic allocation problem, and the GE model can be interpreted as incorporating time and uncertainty by indexing the commodities by event and date. The single budget constraint, however, forces on this interpretation the unrealistic view that all trades are negotiated at once. Recent work on incomplete market models that remedy this difficulty has achieved significant progress and has opened up a number of new questions and opportunities.

In the general equilibrium model with incomplete markets (GEI), agents cannot trade all possible commodities at one time. For simplicity, suppose that there are perishable commodities that can be consumed at time zero, or under any of the S states of nature at time one. Moreover, agents may be wealthy in some states, and poor in others. At time zero, agents are also allowed to trade a limited number of assets that promise delivery in various combinations of the goods, depending on the state of nature. The stock of a firm, for example, is an asset that delivers a large quantity of goods in those states when the production plans work well, and many fewer goods otherwise.

Several very surprising properties can be shown to hold for GEI equilibrium, using tools from differential topology. First, in great contrast to the complete markets model, if there are fewer assets than states, then the GEI equilibria are "generically" inefficient. Indeed the equilibria are inefficient not only because there are missing asset markets but also because the markets that do exist are not used properly.

A more surprising attribute of the GEI model is the special properties of monetary equilibria that it permits. If there is a commodity that has no effect on utility, and is not part of the initial endowment of any agent, such a good has two of the properties of money. If the assets promise delivery in this money, then under the conditions of the inefficiency theorem there are generically S - 1 dimensions of distinct equilibrium commodity allocations. Yet, if the asset market were "complete," then there would be typically only isolated equilibria.

Removing just one asset creates a jump in indeterminacy from zero dimensions to S - 1 dimensions; this dimension is constant no matter how few assets there are. This result could not reasonably have been predicted, and certainly not convincingly demonstrated, without the aid of tools from differential topology. Note that without a money good, there are typically a finite number of equilibria. In the GEI model, money has a prominent role to play.

There is another application of these mathematical ideas to the GEI model. It was thought for a long time that there might not be a GEI equilibrium in any robust sense. The trick to the correct analysis of the problem was to recognize that the GEI budget constraint can be re-expressed in terms of the span of the monetary payoffs across the S states, and hence in terms of a Grassmann manifold constructed from the state space. Arguments from degree theory show that the simultaneous equations defined by GEI equilibrium on this Grassmann manifold generically have a solution.

21. PARALLEL ALGORITHMS AND ARCHITECTURES

Dramatic advances in computer fabrication technology have had an equally dramatic effect on the way that we use computers. For example, the most powerful computers today actually consist of many smaller component processors (i.e., chips) that are integrated together to form a single parallel machine. In a parallel machine, the processors are usually traditional sequential machines that are working together to solve a single large problem. By collecting N processors together to work on a single task, one hopes to perform the task N times faster than with only one processor. Although it seems intuitive that N processors should be able to solve a problem N times as fast as a single processor, this is not always possible. In fact, it can be very hard to design parallel algorithms for N-processor machines that run N times faster than on a uni-processor machine.

As a very elementary example of what can go wrong when one tries to parallelize a sequential algorithm, consider the "trivial" task of adding two N-digit numbers. Addition is certainly easy to perform in N steps sequentially, but can we do it in one step with N parallel processors? In fact, we cannot. Even worse, at first glance it would seem that we cannot solve the addition problem any faster with N processors than we can with one processor, because before one can compute any digit in the sum, one needs to know if there is a carry from the next digit to the right, and so on. Therefore, the addition example seems to be very discouraging at first, for if one cannot efficiently parallelize addition, then what hope can there be for efficiently parallelizing other problems?

Fortunately, it is now possible to derive fast parallel algorithms for many important scientific and engineering computing problems. For example, in the simple case of addition, there is an algorithm that takes just log(N) steps using N/log(N) processors, although, as we have just seen, it is not at all obvious that such an algorithm exists. In fact, dramatic progress has been made in the last five years in uncovering nontrivial and highly efficient methods for parallelizing important problems that seem to be inherently sequential. Examples include algorithms for arithmetic, matrix calculations, polynomial manipulation, differential equations, graph connectivity, pointer jumping, tree contraction and evaluation, graph matching and independent set problems, linear programming, computational geometry, string matching, and dynamic programming.
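One standard way to organize a logarithmic-step addition of the kind mentioned above is a carry-lookahead scan: each digit position is summarized by whether it kills, propagates, or generates a carry, and the summaries are combined by an associative operator in about log2(N) rounds. The sketch below simulates that idea sequentially (each round's positions could be handled by separate processors); the scan formulation is textbook material and is offered only as an illustration, not as the specific algorithm the report has in mind.

```python
import math

def combine(left, right):
    """Associative carry-status operator: a block propagates only if its
    right part propagates; otherwise the right part decides."""
    return left if right == 'p' else right

def add_with_carry_scan(a_digits, b_digits):
    """Add two equal-length base-10 digit lists (least-significant digit first)
    using a carry-lookahead scan with about log2(N) combining rounds."""
    n = len(a_digits)
    sums = [a + b for a, b in zip(a_digits, b_digits)]
    # 'g': position generates a carry, 'p': propagates one, 'k': kills it.
    status = ['g' if s > 9 else 'p' if s == 9 else 'k' for s in sums]

    scan = status[:]          # after the rounds, scan[i] = status of block 0..i
    rounds = max(1, math.ceil(math.log2(n)))
    for d in (2 ** k for k in range(rounds)):
        scan = [combine(scan[i - d], scan[i]) if i >= d else scan[i]
                for i in range(n)]

    carry_in = [False] + [s == 'g' for s in scan]   # carry into each position
    digits = [(sums[i] + carry_in[i]) % 10 for i in range(n)]
    if carry_in[n]:
        digits.append(1)
    return digits

# 2718 + 9999 = 12717, digits written least-significant first.
print(add_with_carry_scan([8, 1, 7, 2], [9, 9, 9, 9]))   # [7, 1, 7, 2, 1]
```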

Much progress has also been made on the problems inherent in designing parallel machines. For example, a variety of clever communication networks have been invented for linking the processors of a parallel machine together, and fast algorithms have been developed for routing the right data to the right place at the right time. This work has been highly mathematical in nature, drawing extensively on techniques from combinatorics, probabilistic analysis, and algebra. Indeed, parallel computers commonly use combinatorial-based interconnection networks and routing algorithms. Again, many opportunities for further advances flow from these already substantial achievements.

22. RANDOMIZED ALGORITHMS

Over the past 15 years computer scientists have come to recognize the many advantages of algorithms that toss coins in the course of their execution. For a wide variety of tasks, ranging from testing whether a number is prime to allocating resources in distributed computer systems, the simplest and most efficient algorithms currently known are randomized ones. Therefore, expanding our understanding of such algorithms is a challenge and opportunity for the future.

Almost from the beginning of the computer era, random number generators have been applied to the simulation of complex systems involving queueing and other stochastic phenomena and to the estimation of multidimensional integrals and other mathematical quantities, using various sophisticated sampling techniques known collectively as the Monte Carlo method.

A major factor in drawing the attention of computer scientists to the wider uses of randomization was the discovery, around 1975, of two efficient randomized algorithms for checking whether a number is prime. Each of these algorithms is based on the concept of a witness to the compositeness of a number. A simple illustration of this concept is based on a theorem due to Fermat, which says that if n is a prime number, then, for any integer m that is not a multiple of n, m^(n-1) - 1 is a multiple of n. If this calculation is performed for some m, and one does not get the result predicted by Fermat's theorem, then n is composite (i.e., not prime); in this case, m is called a witness to the compositeness of n. The tests mentioned are based on slightly more complicated kinds of witnesses. The effectiveness of these tests stems from theorems that show that, if n is composite, then most of the integers between 1 and n - 1 will serve as witnesses. An interesting aspect of these tests is that they do not provide witnesses for primality, but this weakness was rectified in work that defined witnesses for primality rather than compositeness, showing that if n is prime, most randomly chosen numbers will bear witness to that fact. There are many other randomized algorithms based on the abundance of witnesses.

Randomized techniques have also proved to be a very effective tool for algorithm construction in the areas of sorting, searching, and computational geometry. A simple illustration is the problem of listing all intersections among a set of line segments in the plane. There is a fairly obvious incremental algorithm that considers the segments one at a time and reports the intersections of each new segment with all the previous ones. If the segments are read in a particularly unfortunate order then the run time of this algorithm will be excessively long; however, it can be shown that if the segments are processed in a random order, then with extremely high probability the algorithm will be very fast.
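A minimal sketch of the Fermat-witness idea described above follows; the function names and sample inputs are ours, and, as the text notes, the actual primality tests use more refined witnesses than this simple one.

```python
import random

def is_fermat_witness(m, n):
    """True if m exposes n as composite: for prime n and m not a multiple of n,
    m^(n-1) - 1 must be a multiple of n, i.e. pow(m, n-1, n) == 1."""
    return pow(m, n - 1, n) != 1

def appears_prime(n, trials=20):
    """Randomized compositeness test.  A return value of False is a proof that
    n is composite; True only means that no witness of this simple kind was
    found in the chosen number of random trials."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        m = random.randrange(2, n - 1)
        if is_fermat_witness(m, n):
            return False
    return True

print(appears_prime(221))          # 221 = 13 * 17; a witness is almost surely found
print(appears_prime(2**61 - 1))    # a prime, so no Fermat witness exists
```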

In addition, randomization plays a crucially important role in the design of distributed computing systems, in which many geographically dispersed computers connected by suitable communication links work together as a single system. Such systems must cope with the possibility that individual computers or communication links may fail or may run asynchronously at different speeds, and must ensure that the overhead of communication between processors will not become an insurmountable obstacle. Randomization is particularly effective in allocating computational tasks to the individual processors and in choosing the communication paths along which data shall flow. It can be shown in a variety of settings that random allocation of tasks to processors and data to memory modules, together with randomized routing of messages, yields near-optimal performance with high probability.

All the applications of randomization that we have mentioned depend on the assumption that algorithms, or computer programs, have access to a stream of independent random bits. More commonly, computers use pseudorandom numbers that are generated from an initial number, called the seed, by some purely deterministic iterative process. These generators are typically subjected to certain statistical tests in order to confirm that the streams of numbers they generate have some of the properties of random sequences, even though they are generated by a purely deterministic process. Currently, a deep line of research into the properties of pseudorandom number generators is being pursued. The goal of this research is to show that, as long as the seed is random, the output of the generator cannot be distinguished from a purely random sequence by any polynomial-time computational test whatsoever.

Finally, recent theoretical research has focused on a connection between pseudorandom generators and the concept of a one-way function, which is fundamental in cryptography. A one-way function is a function that is easy to compute but hard to invert. It has been shown that any one-way function can be used to construct a rigorously justified pseudorandom number generator. Unfortunately, researchers in computational complexity theory have not yet determined whether one-way functions even exist. This is one of the many important problems remaining to be addressed.

23. THE FAST MULTIPOLE ALGORITHM

There are great opportunities for progress in algorithms dealing with problems such as particle beams in plasma physics, underwater acoustics,

molecular modeling, and even very important aspects of aerodynamics. A basic calculation of central importance in these applications is the calculation of interactions in a many-particle system. These interactions are often long-range, so all pairs of particles must be considered. Because of the latter constraint, the direct calculation of all interactions requires on the order of N^2 operations in a system with N particles. We will refer to this calculation as the N-body potential problem.

There have been a number of efforts aimed at reducing the computational complexity of the N-body potential problem. The oldest approach is that of particle-in-cell (PIC) methods, requiring on the order of N log(N) operations. Unfortunately, they also require a mesh that provides limited resolution and is inefficient when the particle distribution is nonuniform. A more recent approach is that of the hierarchical solvers, which are gridless methods for many-body simulations, having computational complexities also estimated to be of order N log(N).

The Fast Multipole Method (FMM), which has recently been developed, requires an amount of work only proportional to N to evaluate all pairwise interactions to within roundoff error, irrespective of the particle distribution. Like the hierarchical solvers, the FMM is a divide-and-conquer algorithm, based on the idea of clustering particles on a variety of spatial length scales. The method is in fact based on a combination of physics (multipole expansions), mathematics (approximation theory), and computer science, and its use in applications is growing.

There are several immediate industrial applications for the techniques being developed. The payoff should be substantial and almost immediate in the straightforward use of particle simulations. Simulations of this type are performed in the modeling of high-powered electron beam microwave devices (e.g., gyrotrons and free-electron lasers), particle beams, controlled fusion devices, and so forth.

A second immediate industrial application is in molecular dynamics, a technique for studying the properties of fluids (and other phases of matter) by computer simulation. Once initial positions and velocities are chosen for some number of representative particles, their trajectories are followed by integrating Newton's second law of motion.
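For contrast with the FMM, the sketch below spells out the direct evaluation it is designed to replace: a double loop that visits every pair of particles, which is why its cost grows as N^2. The Coulomb-style potential and the random test data are placeholders chosen only for illustration.

```python
import random

def direct_potentials(positions, charges):
    """Direct O(N^2) evaluation of pairwise potentials phi_i = sum_{j != i} q_j / r_ij.
    Every pair of particles is visited explicitly; this is the cost that the
    fast multipole method reduces to work proportional to N."""
    n = len(positions)
    phi = [0.0] * n
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(n):
            if i == j:
                continue
            xj, yj, zj = positions[j]
            r = ((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2) ** 0.5
            phi[i] += charges[j] / r
    return phi

random.seed(0)
points = [(random.random(), random.random(), random.random()) for _ in range(500)]
charges = [random.choice((-1.0, 1.0)) for _ in points]
print(direct_potentials(points, charges)[0])   # potential felt by the first particle
```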

In early simulations, only nonpolar fluids were considered, where the amount of computation per time step is proportional to the number of particles N. In polar fluids, the situation is quite different. A coulombic term is added to the potential function and all pairwise interactions need to be accounted for, a standard N-body calculation. The FMM allows for the simulation of much larger chemical systems than was previously possible. The study of a protein molecule in water, for example, requires following the motion of tens of thousands of atoms over tens of thousands of time steps. Real gains are possible in the long term, beginning with detailed theoretical studies of chemical kinetics.

24. INTERIOR POINT METHODS FOR LINEAR PROGRAMMING

Many problems in resource allocation can be modeled by what is called the "linear programming" problem, in which one attempts to optimize a linear function over the vertices of a multidimensional polytope. The traditional Simplex algorithm for this problem, which works by traveling along the boundary of the polytope, has had immense value and influence during the 40 years since it was discovered. It has a significant theoretical drawback, however: its running time can, in pathological cases, grow as an exponential function of the number of variables. Much more desirable, at least from a theoretical point of view, would be an algorithm with polynomially bounded worst-case running time.

In 1976, the first such algorithm, the Ellipsoid method, was discovered. Its running time was O(n^4 L^2), where n is the number of variables and L is a measure of the number of bits needed to describe the input. This algorithm has the additional desirable property that it applies to more general "convex programming." Moreover, it does not require a complete description of the convex body over which optimization is to take place, but merely a "black box" subroutine that, given a point, tells whether that point is in the polytope, and if it is not, will identify a hyperplane that separates the point from the polytope. The Ellipsoid method is thus applicable to a much broader class of problems than is the Simplex method, and its existence has led to a wide variety of polynomial-time algorithms for previously open problems. For linear programming, however, researchers quickly discovered that its improved worst-case running-time bound did not correlate with better performance in practice.

Polynomial-time linear programming algorithms thus seemed an impractical theoretical nicety. In 1984 this was changed with the introduction of a new breed of polynomial-time linear programming algorithm, based on a clever variant of an old idea. The idea, one that had been abandoned long ago as impractical, is to cut across the interior of the polytope in searching for an optimum vertex, rather than traversing the outside as does Simplex. The difficulty in making such an "interior point" approach work lies in finding the right direction and distance to travel at each step. The solution involves the use of projective transformations and a logarithmic potential function to guide the search, and yields a running time of O(n^3.5 L^2). The theoretical improvement over the Ellipsoid method running time was not the main story, however; more important, this algorithm (along with several of its variants) appears to be competitive with Simplex when implemented with appropriate sparse matrix techniques. Moreover, it appears to have substantial running-time advantages for large and/or degenerate instances. Indeed, important practical applications have been found in the design of large telecommunication networks and in the solution of large-scale logistics planning and scheduling problems.

Since the first reports of the potential practicality of the approach, there has been a torrent of research on interior point methods. Relations between this algorithm and earlier algorithms have been extensively explored. For instance, the algorithm can be viewed as a type of "logarithmic barrier function" algorithm, or even as an application of Newton's method (in an appropriately transformed space). In the limit of infinitesimal step length, it generates a vector field inside the polytope, all of whose limiting trajectories go to the optimal vertex. In this context, it can be viewed as attempting to follow such a trajectory approximately, by taking short steps along tangent lines. This in turn suggests variants in which one steps along curves that represent higher-order power series approximations to the vector field. Other variants concentrate on approximately following a particular trajectory, the so-called "central trajectory." These latter have led to better and better worst-case running times, with the current champion having a running time of O(n^2.5 L^2), based on a clever use of recent developments in the field of fast matrix multiplication.
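To make the "logarithmic barrier function" view concrete, here is a toy numerical sketch for a tiny linear program: the inequality constraints are folded into a barrier term, damped Newton steps approximately follow the central trajectory, and the barrier weight is increased between rounds. The problem data, parameter values, and stopping rules are ours and illustrate only the general idea, not any of the production algorithms discussed in the text.

```python
import numpy as np

def barrier_lp(c, A, b, x0, t=1.0, mu=8.0, outer_iters=12, newton_iters=30):
    """Toy log-barrier method for  min c@x  subject to  A@x <= b.
    Each inner loop applies damped Newton steps to t*c@x - sum(log(b - A@x));
    increasing t moves the minimizers along (roughly) the central trajectory."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        def barrier(z):
            return t * (c @ z) - np.sum(np.log(b - A @ z))
        for _ in range(newton_iters):
            slack = b - A @ x                     # strictly positive while feasible
            grad = t * c + A.T @ (1.0 / slack)
            hess = A.T @ ((1.0 / slack**2)[:, None] * A)
            step = np.linalg.solve(hess, -grad)
            # Backtrack until the step stays strictly feasible and decreases
            # the barrier objective (a crude Armijo condition).
            alpha = 1.0
            while (np.any(b - A @ (x + alpha * step) <= 0)
                   or barrier(x + alpha * step) > barrier(x) + 1e-4 * alpha * (grad @ step)):
                alpha *= 0.5
                if alpha < 1e-12:
                    break
            x = x + alpha * step
            if np.linalg.norm(alpha * step) < 1e-10:
                break
        t *= mu
    return x

# Tiny example:  minimize -x1 - 2*x2  with  x1 + x2 <= 4,  0 <= x1 <= 3,  0 <= x2 <= 3.
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 3.0, 0.0, 0.0])
print(barrier_lp(c, A, b, x0=[1.0, 1.0]))   # should approach the optimal vertex (1, 3)
```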

25. STOCHASTIC LINEAR PROGRAMMING

Deterministic models for linear programming problems and their solutions, ever since their first appearance 40 years ago, have kept pace with the extraordinary advances that have taken place in computing power. At the present time, industry and government routinely solve models in which there is no uncertainty; that is, they solve deterministic linear and mathematical programs for planning and scheduling purposes, some involving many thousands of variables with a linear or nonlinear objective function and many thousands of inequality constraints. These models assume, for example, that the future technologies available to choose from are known with certainty. As a result, the solutions obtained from deterministic models are incomplete because they do not properly take account of future contingencies. Although it is easy to reformulate the mathematical models to include uncertainty, the resulting size of the mathematical systems to be solved becomes too enormous in most practical applications. The bottleneck to solving stochastic programs has been, and remains, computing capability.

Therefore, despite the progress made, there remains an unresolved class of decision problems of great importance: that of finding an "optimal" solution to stochastic mathematical and linear programs. Stochastic here means uncertainty about, for example, the availability of resources, foreign competition, or the effects of possible political upheavals. Since such problems concern the optimal allocation of scarce resources under uncertainty, it is of fundamental importance to include uncertainty in the problem setup. If such problems could be solved in general, it would significantly advance our ability to plan and schedule complex situations.

At the present time there is intense activity in the United States and elsewhere to solve certain relevant classes of stochastic linear and nonlinear programs. Important new developments in computer hardware are spurring this effort, particularly the availability of multiple vector processor mainframes and parallel processors. It is hoped that the combination of three techniques will bring about important advances in the relatively near future: improved ways to mathematically decompose large-scale problems into smaller ones, improved techniques of Monte Carlo (importance) sampling, and the use of parallel processors.
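To make the Monte Carlo sampling idea concrete, the sketch below replaces the expectation in a tiny two-stage decision problem with an average over sampled scenarios, which is the basic move behind the sampling-based methods mentioned above. It is a minimal illustration under invented data (a single order-quantity decision with random demand), not one of the large-scale decomposition codes discussed in the text.

```python
# Sample-average (Monte Carlo) approximation of a tiny stochastic program:
# choose a first-stage order quantity x to minimize the expected cost
#     E[ c*x + p*max(D - x, 0) + h*max(x - D, 0) ]
# where demand D is random.  The expectation is replaced by an average over
# sampled scenarios; all data below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
c, p, h = 1.0, 4.0, 0.5            # purchase, shortage, and holding costs
demand_samples = rng.normal(loc=100.0, scale=20.0, size=5000)  # scenarios

def sampled_cost(x, demands):
    """Average cost over the sampled scenarios for order quantity x."""
    shortage = np.maximum(demands - x, 0.0)
    surplus = np.maximum(x - demands, 0.0)
    return np.mean(c * x + p * shortage + h * surplus)

# Crude search over candidate order quantities; a realistic stochastic program
# has many coupled decision variables and is attacked by decomposition instead.
candidates = np.linspace(50.0, 200.0, 601)
best = min(candidates, key=lambda x: sampled_cost(x, demand_samples))
print(best, sampled_cost(best, demand_samples))
```

Importance sampling enters when the scenarios that matter most, such as rare but costly contingencies, are sampled preferentially and reweighted, so that far fewer scenarios are needed for a given accuracy.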

26. APPLICATIONS OF STATISTICS TO DNA STRUCTURE

A strand of DNA can be represented as a string of bases, A, C, G, T, that carries the information governing the growth and function of an organism. Great effort has therefore been expended in determining DNA sequences. Rapid sequencing methods were introduced in 1976 and were followed by an explosion in quantitative genetic information. Today over 25 million letters of sequence have been determined, in segments averaging 1,000 letters in length, from a wide variety of organisms. Improvements in sequencing technology continue to be made, and the associated discoveries in basic biology are staggering.

Two kinds of maps are constructed for DNA: genetic maps and physical maps. Both types are generated from the use of restriction enzymes that cut DNA at specific patterns in the sequence, producing fragments whose lengths can be measured with some degree of inherent experimental error. It was suggested in 1980 that slight variations in DNA sequence could produce differing restriction fragment lengths that could be used as "markers": traits that could be approximately mapped to specific locations on specific chromosomes. The amount of data subsequently available has created a number of new statistical problems.

Physical maps give the relative locations of identifiable and clonable pieces of DNA. Availability of a physical map facilitates the complete sequencing of a DNA strand. Given the mapped locations of a complete library of clones, each having a length on the order of several tens of thousands of nucleotides, a number of laboratories working in coordination could then proceed simultaneously to sequence the individual clones. We can expect statistical analysis of design options, such as the number and choice of cutting enzymes, to yield benefits in the mapping process. The process of physically locating clones along the genome should be substantially facilitated by an understanding of the design parameters and sources of variation inherent in the process.
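The raw material of such maps is easy to illustrate. The sketch below computes the fragment lengths produced when a restriction enzyme cuts a sequence at every occurrence of its recognition pattern; the sequence and the cut-site convention used here are invented for the example (real enzymes cut at a fixed offset within the recognition site, and real measurements carry the experimental error discussed above).

```python
# Fragment lengths produced by cutting a DNA sequence at each occurrence of a
# restriction enzyme's recognition pattern.  Toy example with an invented
# sequence; the pattern GAATTC is the well-known EcoRI site, and for
# simplicity we cut at the start of each occurrence.
def restriction_fragments(sequence, site):
    cuts = []
    pos = sequence.find(site)
    while pos != -1:                      # record every occurrence of the site
        cuts.append(pos)
        pos = sequence.find(site, pos + 1)
    boundaries = [0] + cuts + [len(sequence)]
    return [boundaries[i + 1] - boundaries[i]
            for i in range(len(boundaries) - 1)
            if boundaries[i + 1] > boundaries[i]]

dna = "ATGGAATTCCGTACGGAATTCTTAGGGAATTCAT"    # invented sequence
print(restriction_fragments(dna, "GAATTC"))    # fragment lengths: [3, 12, 11, 8]
```

Slight sequence variations between individuals add or remove such sites and so change the pattern of fragment lengths; these differences are the "markers" referred to above.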

Obtaining DNA sequence data is only a first step in modern molecular biology. The sequence is next subjected to extensive analysis, to relate it to what is already understood about DNA sequences. Because evolution operates to preserve biological features of importance, including sequence features, these analyses can be very important in understanding the function of specific portions of the sequence. Use of computers to implement complex algorithms is often required; mathematical analysis of algorithms is essential, both to ensure an unambiguous, informative interpretation of the results and to ensure that a programmed algorithm will complete its operations rapidly enough.

The study of molecular evolution has developed with the ability to read DNA and protein sequences. It is just now becoming possible to sample the sequence for a gene within a defined population. This opens up many new questions. How does molecular evolution proceed, in the long term and in the short term? Constructing evolutionary trees and determining rates of evolution can both be accomplished with stochastic process models of molecular evolution.

Some of the most central work goes into identifying protein coding regions, or genes, in DNA. Locating a gene of 600 letters that is spread out in small segments along 10,000 or 20,000 letters of DNA is a daunting but essential task, requiring sophisticated combinatorial and statistical analysis.
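The crudest first filter in that task can be sketched directly: scan one strand for open reading frames, that is, stretches that begin with a start codon and run to a stop codon in the same reading frame. The code below is an invented toy along those lines; locating real genes, especially genes split into small segments, requires the far more sophisticated combinatorial and statistical methods the text refers to.

```python
# A crude first filter for protein coding regions: scan one strand of a DNA
# string for open reading frames (an ATG start codon followed, in the same
# reading frame, by a stop codon).  Invented example sequence.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def open_reading_frames(seq, min_codons=3):
    """Return (start, end) index pairs of ORFs on the given strand."""
    orfs = []
    for frame in range(3):                              # three reading frames
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i                               # open a candidate ORF
            elif start is not None and codon in STOP_CODONS:
                if (i - start) // 3 >= min_codons:      # keep it if long enough
                    orfs.append((start, i + 3))
                start = None
    return orfs

dna = "CCATGGCTGATTTAACGATGAAACCCGGGTTTTAGACC"   # invented sequence
print(open_reading_frames(dna, min_codons=2))     # one ORF: [(2, 35)]
```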

The science of molecular biology is currently undergoing rapid advancement. The anticipated quantity of DNA and protein sequence data makes it an area ripe for mathematical development. The nature of the science makes it necessary that mathematical and biological scientists communicate closely. Both sciences will surely benefit from such collaborative effort.

27. BIOSTATISTICS AND EPIDEMIOLOGY

Epidemiology concerns the distribution and determinants of disease in human populations. It encompasses such varied subjects as the worldwide geographic variation in disease incidence rates, the setting of radiation exposure standards in the workplace, and the evaluation of vaccine efficacy using randomized field trials.

Two distinct study designs, cohort and case-control, are used for much of the research in chronic disease epidemiology. In cohort studies, exposures and covariables are measured on a defined population of disease-free persons, who are then followed forward in time to determine which ones develop or die from the disease of interest. The methods and concepts of survival analysis, particularly the proportional hazards regression model, have greatly affected the statistical treatment of cohort data. They provide a mathematically rigorous framework for elaboration of the key epidemiologic notions of incidence rate and relative risk. Older epidemiologic techniques are given a new interpretation in terms of classical statistical theory, while the way is paved for the development of more flexible and powerful methods of data analysis.

Case-control studies involve samples of diseased and nondiseased persons whose history of exposure is known. The demonstration that the exposure odds ratio calculated from case-control data approximates the ratio of disease incidence rates among the exposed and the nonexposed was of paramount importance in establishing the scientific validity of this design. More recently, biostatisticians and econometricians have independently developed methods for the analysis of case-control and other data where the sample is selected on the basis of the outcome of primary interest. Further elaboration of this methodology is needed to handle more general stratified designs and situations where only partial information is available for a large number of the sampled subjects.
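The approximation mentioned above can be seen in a small numerical sketch. The counts below are invented; the point is that when the disease is rare, the exposure odds ratio computed from cases and a sample of controls is close to the ratio of disease risks one would have obtained from the full cohort.

```python
# Invented illustration of why the exposure odds ratio from a case-control
# study approximates the ratio of disease incidence rates when disease is rare.
exposed_total, unexposed_total = 10_000, 10_000      # hypothetical full cohort
exposed_cases, unexposed_cases = 60, 20              # rare disease

risk_ratio = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Case-control view: take all cases and a 2% sample of the non-cases as controls.
cases_exposed, cases_unexposed = exposed_cases, unexposed_cases
controls_exposed = (exposed_total - exposed_cases) // 50
controls_unexposed = (unexposed_total - unexposed_cases) // 50

odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

print(round(risk_ratio, 2), round(odds_ratio, 2))    # both close to 3
```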

Additional work is also needed on methods of analysis of epidemiologic data with dependent outcomes, such as arise in longitudinal studies with repeated measurements on the same person over time or in genetic studies of the patterns of disease in families. Better techniques are needed for assessing the magnitude of measurement errors and for correcting the tendency of such errors to dilute the observed association between exposure and disease. Recent advances in statistical computing and in the statistical theory of quasi-likelihood analysis based on generalized estimating equations promise rapid advances in this field.

Finally, AIDS, the major epidemic of our time, poses urgent challenges to the biostatistician and epidemiologist. One problem is to estimate the past, present, and future rates of infection with HIV so as to determine the future number of AIDS cases. Another is to understand better the patterns of HIV transmission within and between various pools of susceptible individuals so as to be able to plan the optimal organization of community resources for the prevention of further infection. The collection of data related to AIDS is seldom routine and often suffers from a lack of key pieces of information, so that studies are rarely amenable to straightforward analysis. Mathematically, the problem of estimating HIV infection rates, using data on AIDS incidence rates and the distribution of time from infection to diagnosis, can be viewed as an unusual type of deconvolution problem. It is related to problems that occur in the field of image processing, where rapid progress has been made. But the lack, or poor quality, of key types of data makes it much more formidable.
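The deconvolution structure of this back-calculation problem can be sketched in a few lines. In the toy example below, invented infection counts are convolved with an assumed incubation-time distribution to produce AIDS diagnosis counts, and the infection counts are then recovered by ordinary least squares; a serious analysis would add non-negativity and smoothness constraints and account for the data problems described above.

```python
# Toy back-calculation: AIDS diagnoses are modeled as a convolution of past
# HIV infections with the incubation-time distribution, and the infection
# series is recovered by solving the resulting linear system.  All numbers
# here are invented for illustration.
import numpy as np

incubation = np.array([0.05, 0.10, 0.20, 0.25, 0.20, 0.20])  # P(diagnosis k years after infection)
true_infections = np.array([100., 200., 400., 700., 900., 800., 600., 500.])

T = len(true_infections)
# Convolution matrix: diagnoses[t] = sum_s infections[s] * incubation[t - s]
C = np.zeros((T, T))
for t in range(T):
    for s in range(t + 1):
        if t - s < len(incubation):
            C[t, s] = incubation[t - s]

diagnoses = C @ true_infections                     # "observed" diagnosis counts

# Recover the infection series from the diagnoses by least squares (deconvolution).
estimated, *_ = np.linalg.lstsq(C, diagnoses, rcond=None)
print(np.round(estimated))                          # matches the invented infection series
```

In practice only the diagnoses are observed, they are subject to reporting delays, and the incubation distribution is itself uncertain, which is what makes the real problem so much more formidable than this sketch.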
