4
THE ROLE OF THE MATHEMATICAL SCIENCES
IN THE GRAND CHALLENGES
The ultimate driving force behind the HPCC initiative is the promise of making so-called
grand challenge problems computationally tractable. These are problems of significant
societal and intellectual importance whose solution demands computing power far in excess
of that available today. However, even with all the computing power to be available with
successful completion of the HPCC program, the grand challenges cannot be conquered
without significant mathematical sciences involvement. Such involvement should generally
be through multidisciplinary teams or informal collaborations, since the scientific
background required to contribute to these problems is formidable for one person, and
often multifaceted. This section examines the interaction of the mathematical sciences
with some of the grand challenges in recent years and describes some promising
mathematical directions for the near future.
Turbulence Research
The quantitative statistical properties of fluid flows remain largely unknown. Yet the need
to estimate mean turbulent transport is crucial to a wide range of computational fluid
dynamics applications, from weather prediction to aircraft design to chemical processing.
In recent decades, the mathematical theory of turbulence has dealt primarily with the
properties of homogeneous isotropic turbulence and has turned to massive numerical
simulations of fluid flows for experimental verification. From this work have come
stochastic models of turbulence in which the original nonlinear chaotic dynamical system
is modeled by more tractable linear random dynamical systems. These provide normal
or gaussian approximations to the original unknown nongaussian statistical properties.
More recent work on nonlinear mapping schemes is attempting to recapture the observed
nongaussian properties of turbulence.
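The gaussian character of such linear random surrogates can be seen in the simplest case, a scalar Langevin (Ornstein-Uhlenbeck) model. The sketch below, with invented parameters, integrates one and checks its stationary variance against the exact gaussian value sigma^2/(2*gamma); it is an illustration of the modeling idea, not any specific turbulence closure from the literature.

```python
import numpy as np

# Toy linear random dynamical system (Langevin / Ornstein-Uhlenbeck model):
#   du = -gamma * u * dt + sigma * dW
# Its stationary statistics are exactly Gaussian, with variance
# sigma^2 / (2 * gamma), illustrating how a linear stochastic model supplies
# a normal (gaussian) surrogate for unknown nongaussian statistics.
rng = np.random.default_rng(0)
gamma, sigma, dt, nsteps = 1.0, 0.5, 1e-2, 200_000

u = 0.0
samples = np.empty(nsteps)
for n in range(nsteps):
    # Euler-Maruyama step of the stochastic differential equation
    u += -gamma * u * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    samples[n] = u

burn = samples[nsteps // 10:]          # discard the initial transient
var_theory = sigma**2 / (2 * gamma)    # exact stationary variance
print(burn.var(), var_theory)
```

The sampled variance agrees with the gaussian prediction to within sampling error, which is the sense in which these models "provide normal approximations."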
In practical engineering applications, however, turbulence is neither isotropic nor
homogeneous, and the best hope has been to use numerical simulation of the
three-dimensional time-dependent flows of interest. The range of scales of motion
involved far exceeds that resolvable on any present or future computer, but fortunately the
important eddy transport properties of turbulent flows are determined by the larger,
resolvable scales of motion. There remains, though, the problem of accounting for the
effects of unresolvable motions on these larger eddies. Such turbulent subgrid-scale
models have for many years been used to estimate the mean damping effects of associated
eddy viscosity. Recently, the random forcing effect of unresolvable eddies on the large
eddies has been approximated [Mason and Thompson, 1991]. The form of such forcing
terms has been determined by earlier work in the mathematical theory of turbulence.
As the goals of the HPCC program are reached and the range of resolvable scales is
increased, the behavior of subgrid-scale fluctuations is expected to become more purely
probabilistic, and thus more amenable to stochastic modeling. Further research on a
mathematical theory of the interaction between subgrid-scale modeling and the properties
of finite approximations to the Navier-Stokes equations is needed in order to derive full
benefit from the HPCC program. Further study is also required of their long-time
dynamical properties, such as the persistence of patterns and the occurrence of periodic
and chaotic behaviors. In this connection, the new mathematical theory of approximation
dynamics, which seeks to compare the long-time dynamical properties of a given equation
with those of an approximate equation, is relevant [Pliss and Sell, 1991].
The recent discovery of inertial manifolds and related topics has enabled mathematicians
to give a rigorous foundation to the observation that the formation of patterns and the
occurrence of periodic and chaotic behavior in certain turbulent fluid flows can be
described exactly in terms of finite systems of ordinary differential equations. Because this
reduction of an infinite-dimensional problem to a finite-dimensional one is exact and
error-free, it holds the promise that approximation schemes based on the inertial manifold
paradigm can lead to greater computational efficiency and better modeling of complicated
systems, such as global weather phenomena or cardiac networks. Nonlinear Galerkin [e.g.,
Marion and Temam, 1989, 1990], or approximate inertial manifold (AIM) methods, are
based on this development and show good computational promise, but more work is
needed.
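The slaving idea behind approximate inertial manifolds can be sketched on a hypothetical slow-fast toy system (not any model from the works cited above): instead of discarding the fast mode, as a flat Galerkin truncation would, the AIM expresses it as a function of the slow mode by balancing its right-hand side.

```python
import numpy as np

# Hypothetical slow-fast toy system illustrating the AIM idea:
#   dp/dt = -p + q            (slow, "resolved" mode)
#   dq/dt = -lam * (q - p^2)  (fast, "unresolved" mode, lam large)
# Flat Galerkin truncation sets q = 0; the AIM instead slaves the fast mode
# to the slow one via the balance q ~ phi(p) = p^2.
lam, dt, T = 50.0, 1e-3, 5.0
steps = int(T / dt)

def full(p, q):
    """Integrate the full two-mode system with explicit Euler."""
    for _ in range(steps):
        p, q = p + dt * (-p + q), q + dt * (-lam * (q - p**2))
    return p

def aim(p):
    """Reduced system with the fast mode slaved: q replaced by phi(p)."""
    for _ in range(steps):
        p = p + dt * (-p + p**2)
    return p

def flat(p):
    """Crude truncation: the fast mode is simply dropped (q = 0)."""
    for _ in range(steps):
        p = p + dt * (-p)
    return p

p0 = 0.5
print(full(p0, 0.0), aim(p0), flat(p0))
```

The AIM-reduced trajectory tracks the full system far more closely than the flat truncation, which is the computational promise the text refers to.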
Turbulence--and computational fluid dynamics in general--is of major importance to the
chemical industry, among others. Chemical processing often involves multiphase chemically
reacting flows, which lead to stiff time-dependent mathematical problems with moving
boundaries. Such problems are beyond the capability of current computational science,
without even attempting to include turbulent effects. Due to the size of the chemical
industry, mathematical methods that could simulate, and hence help to optimize, these
processes would have enormous economic impact.
Computational Biology
Proteins are the fundamental molecules of all living organisms. They are essentially linear
chains of subunits called amino acids. There are 20 amino acids, and thus we can think
of a protein as a word constituted from this 20-letter alphabet; this word is referred to as
the protein's primary structure. The primary structure is encoded in the DNA of an
organism by the genetic code: each amino acid is represented by a triplet of DNA base
pairs. Proteins fold up into complicated three-dimensional structures to form the
geometrical shapes, called secondary and tertiary structures, needed to carry out their
function. It has become a standard laboratory procedure to determine the DNA sequence
encoding for a protein and translate it, by applying the genetic code, into an amino acid
sequence. However, it is quite difficult to determine the three-dimensional structure of
a protein given its primary structure, even though all the information required is there.
(Mathematical advances from areas as disparate as optimization methods and knot theory
have helped with this problem, though.) This difficulty has led to a large discrepancy in
the size of the sequence databases: there are approximately 25,000 proteins (5 million
amino acids) in the protein sequence database and about 41,000 DNA sequences (51
million nucleotides) in the GenBank DNA sequence database, but the structure database
currently has only about 400 proteins. If the plan to sequence the entire human genome
becomes a reality, the database of human protein sequences will increase some 1000-fold
in the next decade, thus intensifying the need for an effective computational means of
analyzing this information.
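The translation step described above, reading DNA in triplets and applying the genetic code, is simple to state in code. The sketch below uses only a handful of the 64 standard codon assignments; a real translator carries the full table.

```python
# Minimal illustration of translating a DNA coding sequence into an amino
# acid string (the protein's primary structure). Only a few of the 64
# standard codon assignments are included; the rest map to "X" here.
CODON_TABLE = {
    "ATG": "M",  # methionine (start)
    "TTT": "F",  # phenylalanine
    "AAA": "K",  # lysine
    "GGG": "G",  # glycine
    "TGG": "W",  # tryptophan
    "TAA": "*",  # stop
}

def translate(dna: str) -> str:
    protein = []
    for i in range(0, len(dna) - 2, 3):          # read triplets (codons)
        aa = CODON_TABLE.get(dna[i:i + 3], "X")  # X = not in this mini-table
        if aa == "*":                            # stop codon ends translation
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTAAAGGGTAA"))  # -> MFKG
```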
Because the problem of determining the function and structure of a protein from its
primary sequence is so difficult, biologists have looked for other approaches. One highly
successful technique is to compare a newly sequenced protein with all of the known
proteins in the database. Similarities in the amino acid or nucleotide sequence are used
to infer similarities in the structure or function of the related proteins [Doolittle, 1986].
This approach makes sense for two reasons. First, proteins with similar biochemistry are
apt to fold and to act in very similar ways under the laws of physics. Also, proteins that
have large similar regions are often evolutionarily closely related, which indicates that the
proteins may perform related functions or catalyze reactions in very similar ways. These
comparison searches have been quite successful in uncovering relationships between
apparently unrelated proteins. Several years ago it was discovered that some
cancer-causing genes have a close similarity to genes that encode for proteins such as
human growth factors [Doolittle et al., 1983], which suggests a possible mechanism by
which these cancer-causing genes produce uncontrolled cell growth.
The methodology just described has raised a number of interesting mathematical questions
that impact the use of high-performance architectures in the area of computational biology.
The main search algorithm that is used for finding similar regions in proteins is the
dynamic programming algorithm first described in the 1950s [Bellman, 1957] and
rediscovered by biologists for use in this problem in the 1970s [Needleman and Wunsch,
1970; Smith and Waterman, 1981]. Current versions of this algorithm allow greater
sensitivity in the searches and the use of different metrics based on biochemical properties
of the amino acids and on evolutionary relationships between them. It is quite time
consuming to do large numbers of comparisons on conventional computers, and so recent
work has focused on developing faster serial algorithms for dynamic programming [Yao,
1982; Lipman and Pearson, 1985] and implementations of dynamic programming algorithms
for parallel architectures [Coulson et al., 1987; Edmiston and Wagner, 1987; Lander et al.,
1988] and special-purpose hardware [Lipton and Lopresti, 1985; Hunkapiller et al., 1990].
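The dynamic programming recurrence at the heart of these comparisons can be sketched compactly. The version below computes the global alignment score in the style of Needleman and Wunsch (1970); the scoring values are illustrative, not the biochemical metrics discussed above.

```python
# Minimal global-alignment score by dynamic programming (Needleman-Wunsch
# style). Scoring values here are illustrative placeholders, not a
# biologically derived substitution metric.
def nw_score(a: str, b: str, match=1, mismatch=-1, gap=-1) -> int:
    n, m = len(a), len(b)
    # F[i][j] = best score aligning the prefix a[:i] with the prefix b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap            # align a[:i] against all gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap            # align b[:j] against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

print(nw_score("GATTACA", "GCATGCU"))
```

The quadratic table fill is what makes large database searches time consuming on serial machines, and it is also what parallel and special-purpose implementations accelerate.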
Work on improving search techniques for the databases will have to continue as their
size increases. Moreover, the improved searching will create large databases of similarities
that will have to be analyzed. The problem of the statistical significance of a match or
a similarity is understood only in the simplest of cases, and much work will continue to
be needed in this area [Arratia and Waterman, 1985; Arratia et al., 1986; Waterman et al.,
1987]. Also, there is work to be done in analyzing the effects of varying certain
parameters of the scoring metrics in the commonly used versions of the dynamic
programming algorithm. The ability to do many searches with varying metrics at high
speed will help provide the data for such a study [Lander et al., 1989], and perhaps lead
to a more general theory.
Dynamics of Cardiac Models
One of the early successes in the area of HPCC has been the use of the new massively
parallel computers by physiologists to do numerical studies of models of major tissues in
the heart [Winslow et al., 1991]. These models consist of a network of single heart cells
that obey certain rules on the transmission of electrical currents between neighboring
cells. The behavior within each single cell is governed by a system of ordinary differential
equations that describes the chemical concentrations and the electrical currents within
that cell. It turns out that each cell is an oscillator with a stable periodic orbit for a
certain range of the physiological parameters. The present massively parallel computers
are large enough so that an entire mammalian sinus node, for example, can be simulated
on the machine. These models become large systems (of order 10^5) of coupled oscillators.
The challenging mathematical issue is to understand the long-time dynamics of these large
systems. Since large mammalian hearts have physiological features that differ significantly
from those of small hearts, one cannot expect that the dynamics of small coupled systems
of oscillators will, or even can, emulate the dynamics of large hearts. New methods, which
combine analysis and numerical studies, are needed to study complicated behavior like
atrial fibrillation or dysrhythmia.
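A drastically simplified caricature of such a coupled-oscillator network is the Kuramoto phase-oscillator model; it omits all of the ionic detail of real cell models but already exhibits the collective synchronization whose large-scale analogue is at issue. The parameters below are invented for illustration.

```python
import numpy as np

# Caricature of a coupled-oscillator network: Kuramoto phase oscillators
# with mean-field coupling. Far simpler than physiological cell models,
# but enough to show a coupled population falling into synchrony.
rng = np.random.default_rng(1)
N, K, dt, steps = 100, 2.0, 0.01, 5000

omega = rng.normal(1.0, 0.1, N)        # natural frequencies of the cells
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases

def order_parameter(theta):
    """|r| near 1 means phase synchrony; near 0 means incoherence."""
    return abs(np.exp(1j * theta).mean())

r0 = order_parameter(theta)
for _ in range(steps):
    # each oscillator is pulled toward the population's mean phase
    mean_field = np.exp(1j * theta).mean()
    coupling = K * abs(mean_field) * np.sin(np.angle(mean_field) - theta)
    theta = theta + dt * (omega + coupling)

r1 = order_parameter(theta)
print(r0, r1)
```

With coupling well above the critical strength, the initially incoherent population locks to a common rhythm; understanding when and how such locking fails in very large, physiologically detailed networks is the mathematical challenge described above.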
Global Change
Understanding and predicting global climate change is a leading grand challenge problem
that will require great increases in the availability and utilization of new computing
resources. In this context, "high-performance" means highly parallel, and a central problem
is to move climate modeling work onto the coming generations of highly parallel
computers. It is clear, however, that computing power alone is not enough, and much
needs to be done to improve the effectiveness with which available and forthcoming
computing power is used. It is in this latter area that the mathematical and statistical
sciences may play the largest role.
The atmosphere is a reasonably well measured fluid, but the other major governors of
climate, the ocean and some aspects of biogeochemical cycles, are not well measured.
The thermodynamic and flow state of the global atmosphere is determined at least once
per day by assimilating new observations into a prediction model, so that past information
is included in the estimate, as are certain nonlinear dynamical constraints. There remain
many fundamental statistical and mathematical problems associated with this process of
finding the best statistical estimates on a nonlinear dynamical manifold in a state space of
myriad dimensions. There is also much to be done yet in applying the mathematical
methods of classical statistical physics to relate day-to-day weather fluctuations and
long-term climate trends. In the validation of atmospheric climate models, in the design
of new climate observing systems, and in the use of fluctuation dissipation relations to
estimate the properties of the system related to its rapid response to external influences,
determination of the fast space- and time-correlation properties of the atmosphere is
crucial.
Only within the past few years have global ocean models begun to be used to study the
current systems coupling the various ocean basins. The time scales in the ocean vary from
weeks in the surface mixed layer to millennia in the abyss, and the space scale of
important eddies is much smaller in the ocean than in the atmosphere. These facts
combine to make ocean models require much more computing power in spite of the
simpler physics involved. The further addition of atmospheric chemistry to the climate
models adds the dimensions of species space to the overall problem, and the range of
reaction rates is so large that the associated differential equations are exceedingly stiff.
The fundamental mathematical problem here is the development of a theory that could
guide the lumping together of species and reactions to reduce the size of species space
and to remove stiffness at a minimal sacrifice in accuracy.
Both atmospheric and ocean models attempt to describe a wide range of scales of fluid
motion. They are thus faced with the usual turbulence problem of being unable to resolve
all scales with available computing power. Difficult as this problem is for turbulence
modeling, it is far worse in climate models where important dynamical processes in the
ocean are occurring at about the limit of resolution, and in the atmosphere many physical
processes remain unresolved. In general the important transports in the atmosphere are
carried by resolved eddies, so that representing the unresolved eddies by a subgrid-scale
mode] has proved workable. The situation in the ocean is less clear, but progress on this
fundamental problem would be important for efficient use of computers when the only
alternative would be to increase spatial resolution.
Climate modeling is beset by a complex interplay of uncertainty arising from (1) the
imperfect match between the mathematical model and reality, (2) the approximate nature
of numerical solutions, and (3) uncertainty in the input data. The merging of information
gained from real observations, physical experiments, and numerical experiments creates
statistical issues that must be understood. The use of prior distributions in developing
methods for assessing accuracy looks very promising.
The discussion so far has been of problems that arise in the more or less conventional
approaches to climate research. There are also a number of untried new approaches
with higher risk but potentially higher rewards. Two relate to how the continuous fields
of relevance to climate are represented in models. Traditionally, they have been
approximated by grid-point values or spectral amplitudes in a surface harmonic expansion.
An alternate approach that has often been suggested is to use instead empirical orthogonal
functions obtained by diagonalizing the spatial covariance matrix. These have the
advantage of representing the variance in the system with the fewest variables, but the
disadvantage of having more complicated dynamical laws [Rinne and Karhila, 1975]. A
newer possibility is the use of wavelets for the representation. These combine the
localness of grid-point representations and the scale properties of spectral representations.
In recent years a wavelet transform theory has been developed in multidimensional
Cartesian space complete with a fast transform [Daubechies, 1988]. For global climate
models it would be useful to find a wavelet base for the surface of the sphere. The
wavelet approach has been found to have many interesting features for the representation
and display of fields. It may be less satisfactory for the formulation of dynamical laws.
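The combination of localness and scale separation can be seen in the simplest member of the wavelet family, the Haar wavelet, for which the fast transform is a few lines of code. This is a one-dimensional sketch only; the multidimensional constructions cited above are more elaborate.

```python
import numpy as np

# Minimal fast wavelet transform using the Haar wavelet. Each level splits
# the signal into coarse averages (scale information) and localized detail
# coefficients (position information); the transform is orthonormal, so it
# is exactly invertible and preserves energy.
def haar_forward(x):
    x = np.asarray(x, dtype=float)   # length must be a power of two
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse scale
        det = (x[0::2] - x[1::2]) / np.sqrt(2)   # localized detail
        coeffs.append(det)
        x = avg
    coeffs.append(x)                 # final coarse coefficient
    return coeffs

def haar_inverse(coeffs):
    x = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        out = np.empty(2 * len(det))
        out[0::2] = (x + det) / np.sqrt(2)
        out[1::2] = (x - det) / np.sqrt(2)
        x = out
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
coeffs = haar_forward(signal)
print(np.allclose(haar_inverse(coeffs), signal))  # exact reconstruction
```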
It is realistic to expect that the theory of nonlinear dynamics, and especially the promising
developments expected as a result of the HPCC program, will offer opportunities for
mathematical scientists to make significant contributions to global weather and climate
predictions. Hardware and software capabilities are fast approaching a level that will allow
serious investigations of the dynamics of the attractors arising in the very large systems of
ordinary differential equations used to model atmospheric flows.
Another promising development is the recent discovery of global attractors for the Navier-
Stokes equations on thin three-dimensional domains and the related fact that these
attractors are well approximated by associated fluid flows on two-dimensional surfaces
[Raugel and Sell, 1989]. This may be relevant to atmospheric modeling because, by
rescaling, the atmosphere can be mapped to a thin domain on the surface of a sphere.
A new paradigm for the study of systems with many effective degrees of freedom suggests
that they may be attracted to states of self-organized criticality in which small disturbances
can have consequences on widely varying space and time scales [Bak et al., 1988]. The
sandpile is taken as a prototype in which an added grain of sand may displace one other
or create an avalanche down the whole pile. This is proving to be an interesting model
of the ubiquitous 1/f noise found in so many physical and engineering applications. It is
not immediately evident that the climate system should be in such a state, but if for some
reason it were, then its low-frequency variability would be better understood.
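A toy version of the sandpile prototype makes the mechanism concrete: grains are dropped at random; a site holding four or more grains topples, passing one grain to each neighbor, and a single grain can trigger anything from no topplings to a system-spanning avalanche. Grid size and grain count below are arbitrary choices for illustration.

```python
import numpy as np

# Toy sandpile model of self-organized criticality in the spirit of Bak,
# Tang, and Wiesenfeld: a site with 4 or more grains topples, sending one
# grain to each of its neighbors (grains fall off the edges). After an
# initial transient, avalanche sizes span many scales.
rng = np.random.default_rng(2)
L = 20
grid = np.zeros((L, L), dtype=int)

def add_grain(grid):
    """Drop one grain at a random site, relax fully, return avalanche size."""
    i, j = rng.integers(0, L, 2)
    grid[i, j] += 1
    topples = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return topples
        for i, j in unstable:
            grid[i, j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    grid[ni, nj] += 1   # edge grains are simply lost

sizes = [add_grain(grid) for _ in range(20_000)]
print(max(sizes), sum(sizes) / len(sizes))
```

Most added grains cause small or no avalanches, while rare events topple a large fraction of the pile, the broad-band variability associated with 1/f noise.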

Materials Science
Understanding the macroscopic properties of materials--such as fiber-reinforced composites,
metal alloys and shape-memory materials, polymers, and ceramics--from their microscopic
structure is of great technological interest. The broad interaction between the
mathematical sciences and materials science was recently surveyed [National Research
Council, 1991]. The correct modeling of the macroscopic or bulk properties of complex
materials is an extremely difficult problem that will benefit substantially from the
accomplishments of the HPCC program.
Many of the most promising materials on the horizon, especially for structural purposes,
are composites, wherein the microstructural arrangement of constituents is critical. At
present, we are unable to predict the properties of composites based on their
microstructure, but mathematical scientists have begun to make progress toward this.
Advances in the understanding of weak convergence, homogenization, Young measures,
and numerical nonlinear analysis have been critical, while methods for averaging and
computing the solutions to nonlinear systems of partial differential equations have
contributed more generally.
Mathematical challenges also abound in creating a theoretical description of polymers so
as to relate bulk properties to the structures and interactions of the individual intertwined
molecules. Many mathematical descriptions--including self-avoiding random walks, lattice
models, Wiener functional integrals, and renormalization group methods--have been helpful
in characterizing polymers. Established methods of statistical mechanics have helped in
the prediction of some solid-phase polymeric properties--for instance, in predicting the
elastic constants for glassy polypropylene. In the area of polymer processing, many serious
and important mathematical problems are raised by attempts to model the non-Newtonian
flows involved.
Semiconductor Modeling
Of all the grand challenge problems, semiconductor modeling is most closely linked with
the success of the main HPCC hardware initiative. Future generations of semiconductors
will involve the use of more exotic materials and processing techniques, as well as
decreased feature size and expanded capabilities. This in turn demands more sophisticated
physical models and simulation tools.
For example, below one-half micron, the physical simplifications underlying the standard
drift-diffusion model, which is the basis for most current device simulators, begin to lose
validity. As devices become more densely packed, possible unintended interactions
between nearby devices must be studied, which will require three-dimensional calculations.
At the same time, more efficient iterative techniques must be developed to handle the
larger sets of nonlinear equations that result from more complex models, and careful
numerical analysis must be performed to ensure stable and accurate discretizations. Fast
prototyping of these more demanding simulations through statistical modeling may become
preferred to numerical experimentation, as the former is more adaptable to achieving
robust engineering design.
Quantum mechanics (QM) is essential to the study of semiconductors as well as many
other materials. The present methods for accurate QM simulations have large scaling laws
(O(N^3) to O(N^7)), which restrict their application to small- to moderate-sized systems of
atoms. The development of new mathematical algorithms, probably utilizing massively
parallel computing, would significantly increase the ability of researchers to study
industrially important problems dependent on the quantum-mechanical nature of materials.
Geophysical Modeling
Propagation of elastic waves in the earth is of great interest in geophysics, with important
applications to seismology, exploration and reservoir characterization for petroleum
production, nuclear waste disposal, and remediation of contaminant leakage. The inverse
problem of determining the structure and properties of the earth's interior from the
elapsed travel time of sound waves is highly nonlinear. The solution of such problems
involves repeatedly solving the forward problem--which can routinely require an hour of
time on a Cray Y/MP for each run--with different assumptions about the unknown
properties. The inverse problem is highly ill-posed and must deal with weak and noisy
data. Mathematicians, as part of multidisciplinary teams, have made significant progress
toward understanding and quantifying the nonuniqueness of the problem. They have also
made advances in the stabilization and solution of the resultant least-squares formulation
of the problem [Santosa and Symes, 1989]. Mathematicians have also developed boundary
and interface conditions necessary for discretizing complex geometries [Sochacki et al.,
1991] and have also begun developing solution programs for parallel computers.
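The need for stabilization can be illustrated on a hypothetical linear analogue of such inverse problems: recovering a model from data produced by a smoothing (hence nearly rank-deficient) forward operator. The operator, noise level, and Tikhonov penalty below are invented for illustration; they stand in for, and are far simpler than, the seismic least-squares formulations cited above.

```python
import numpy as np

# Why stabilization matters for ill-posed inversion: recover a model m from
# noisy data d = G @ m + noise, where G is a smoothing (ill-conditioned)
# forward operator. Plain least squares amplifies the noise; a Tikhonov
# penalty lam * ||m||^2 is one standard stabilization.
rng = np.random.default_rng(3)
n = 50
x = np.linspace(0, 1, n)
# Gaussian averaging kernels, normalized so each row integrates to one.
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
G /= G.sum(axis=1, keepdims=True)

m_true = np.sin(2 * np.pi * x)
d = G @ m_true + 0.01 * rng.standard_normal(n)

# Naive least-squares solution vs. Tikhonov-regularized solution.
m_naive = np.linalg.lstsq(G, d, rcond=None)[0]
lam = 1e-3
m_reg = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

err = lambda m: np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
print(err(m_naive), err(m_reg))
```

The unregularized solution is destroyed by amplified noise, while the stabilized one recovers the smooth model; choosing the penalty, and quantifying the remaining nonuniqueness, is where the mathematical work lies.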
Due to the ill-conditioning of the inverse problem, a good initial guess is required for the
iterative process. The increasing use of three-dimensional visualizations of inputs and
outputs can give enormous intuition for understanding various seismic signatures. This
adds to other depositional information to suggest a good initial guess for the true structure.
Efficient three-dimensional visualization calls for data compression algorithms; the latest
and most promising are based on wavelet technology.
Geophysical techniques and inverse problems at different length scales are also required
to characterize a reservoir once one is located. These methods must be coupled with
complicated models of the multiphase flow processes in heterogeneous media to obtain
reservoir properties through history matching. Control and optimization techniques have
allowed great strides in automating the inverse problem, but they have not yet been
successfully applied to the forward problem. The importance and role of mathematics in
these applications have been discussed in some detail in Glimm [1991].
Machine Vision
Automation is a cornerstone of modern military and industrial competitiveness. An
important impediment has been a traditional and growing gap between often spectacular
improvements in computational speed and memory as well as other hardware advances,
on the one hand, and a relative lack of progress in the development of algorithms and the
associated software, on the other hand. Substantial progress in algorithm development
often first requires appropriate tools from the mathematical sciences. Robotics, for
instance, increasingly draws upon (and contributes to) modern control theory and other
aspects of dynamical systems theory, while machine vision relies fundamentally on the
theories of filtering, probability, statistics, and stochastic processes. For example, modern
statistical tools have been used in the interpretation of satellite data for weather and crop
yield prediction, pollution assessment, and geological applications. Image formation in
both transmission tomography (CAT) and emission tomography (PET and SPECT) is
based upon sophisticated mathematical and statistical formulae.
Other examples of the interplay between machine vision and the mathematical sciences
include the use of methods from partial differential equations and numerical analysis in
the definition and computation of natural boundaries [Mumford and Shah, 1989], the
computation of three-dimensional surfaces from shading information [Horn, 1986], and the
restoration of blurred or noisy images [Lions et al., 1991]; the use of spatial stochastic
processes, such as Markov random fields, for texture analysis [Cross and Jain, 1983], image
restoration [Grenander, 1983; Besag, 1986], and reconstruction in emission tomography
[Geman and McClure, 1987; Green, 1990]; and the use of regularization methods for
various inverse problems, such as motion detection and stereo mapping [Poggio et al.,
1985].
Not surprisingly, it is the high-level machine vision tasks that have proven the most
challenging, and it is here that mathematics is likely to play a most pivotal role. The issue
of accommodating textures in automated industrial inspection was already discussed in
Section 3. A second example is the problem of machine recognition of handwritten
characters. Among many other applications, a robust solution would allow for fully
automated processing of checks and credit card slips at great savings to banks and various
other companies, such as utilities, that handle large volumes of checks. All that is needed
is the recognition of the ten numerals, zero through nine, yet no company has yet been
able to offer a commercially viable solution. Model-free approaches have largely failed.
Explicit models are needed, especially for the infinite variety of stroke shapes and
sequences that constitute the allowable presentations of a particular digit. This
combination of structure and variability again suggests the use of stochastic processes. A
promising example is the class of hidden Markov models, which have already been
successfully applied in automatic speech recognition [Rabiner, 1989]. These processes lend
themselves to a highly efficient computational algorithm known as "dynamic programming,"
and are provably versatile: Every stochastic process can be arbitrarily well approximated
by a hidden Markov model.
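The dynamic programming computation in question is the Viterbi decoder, which finds the most probable hidden state path given an observation sequence. The two-state model below uses invented probabilities purely for illustration.

```python
import math

# Minimal Viterbi decoder for a two-state hidden Markov model -- the
# "dynamic programming" computation mentioned above. All model numbers
# are invented for illustration.
states = ("A", "B")
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}

def viterbi(obs):
    # V[s] = best log-probability of any state path ending in state s
    V = {s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}
    back = []
    for o in obs[1:]:
        prev, V, ptr = V, {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] + math.log(trans[r][s]))
            V[s] = prev[best] + math.log(trans[best][s]) + math.log(emit[s][o])
            ptr[s] = best            # remember where the best path came from
        back.append(ptr)
    # trace back the most probable state path
    last = max(states, key=lambda s: V[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return "".join(reversed(path))

print(viterbi("xxyyx"))
```

The cost is linear in the sequence length and quadratic in the number of states, which is what makes the approach practical for recognition tasks.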
In these and other machine vision problems, the central issue is modeling, both for the
articulation of regularities and for the accommodation of variability. The mathematical
sciences offer some basic tools, mostly involving probability, statistics, and stochastic
processes. It is unreasonable to expect engineers, computer scientists, cognitive scientists,
or others involved in the practice of machine vision to have a working knowledge of the
various mathematical methods that may be brought to bear. Indeed, it is certainly true
that existing mathematical tools will need to be extended and improved before they are
suitable for applications. Thus, direct involvement of mathematical scientists would appear
to be an essential ingredient in any effort to fill the growing gap between sophisticated
computing hardware and available algorithms and software for machine vision.
Human-Machine Communication
Natural voice is the preferred means for information exchange by humans. As computers
and communications systems become more complex, the incentive for ease of use will
grow. Voice control and conversational interaction will be expected to bear more of the
burden for easy, natural, and effective human-machine communication. Meeting this
expectation will depend on expanded mathematical understanding. Several specific areas
can be mentioned.
Fluent conversational interaction with machines requires large recognition vocabularies
and extensive computational models of language. Algorithms for the recognition of large
vocabularies are not yet in hand. Mathematics research on higher-order Markov models
is likely to contribute to this goal. Then, given accurate, reliable acoustic pattern
recognition to produce word-string candidates, semantic and pragmatic interpretation must
be accomplished. Present indications are that statistical models of language can contribute
strongly to this capability, and again mathematical techniques for programming and
manipulating probabilistic models of natural language are needed [Flanagan, 1991]. Very
similar pattern recognition and semantic interpretation problems exist in a variety of fields
as diverse as underwater acoustics, biology, medicine, and image compression.
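A toy example of a statistical model of language is the smoothed bigram model sketched below: word-pair probabilities estimated from a (here, three-sentence) corpus, with add-one smoothing so that unseen pairs retain nonzero probability. Real systems use vastly larger corpora and, as noted above, higher-order models.

```python
import math
from collections import Counter

# Toy statistical language model: bigram probabilities with add-one
# (Laplace) smoothing, estimated from an invented three-sentence corpus.
corpus = [
    "call the machine".split(),
    "call the operator".split(),
    "the machine answers".split(),
]

unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter((a, b) for sent in corpus for a, b in zip(sent, sent[1:]))
V = len(unigrams)   # vocabulary size, used by the smoothing

def p_bigram(a, b):
    """Smoothed estimate of P(b | a); unseen pairs get a small probability."""
    return (bigrams[(a, b)] + 1) / (unigrams[a] + V)

def log_prob(sentence):
    words = sentence.split()
    return sum(math.log(p_bigram(a, b)) for a, b in zip(words, words[1:]))

# A frequently observed bigram scores higher than an unseen one.
print(p_bigram("call", "the"), p_bigram("call", "answers"))
```

Such scores are what allow a recognizer to rank competing word-string candidates by linguistic plausibility.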
On the other side of the coin, our knowledge has not yet reached the point of being able
to provide natural-sounding synthetic voices for computers. One reason is that the
computation available cannot support the physical detail required, and the necessary
mathematical understanding is not complete. For example, representation of the unvoiced
sounds of speech requires accurate simulation of turbulence within a simulated larynx and
oral cavity. The mathematical understanding and the computational techniques for this
are not yet established. Detailed application of Navier-Stokes computations of turbulence
will aid this understanding.
Finally, natural communication with machines (as well as large-group conferencing) benefits
from hands-free sound pickup with stereo spatial reality. Autodirective microphone arrays
are now possible thanks to inexpensive high-quality microphones and single-chip signal
processors that can be dedicated to individual sensors. Estimates for full spatial realism
in sound capture and projection suggest the use of hundreds of microphones and a
teraflop computer to achieve three-dimensional spatial selectivity [Flanagan et al., 1990].
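The principle behind autodirective arrays, delay-and-sum beamforming, can be sketched on a simulated linear array: compensating the per-microphone delays for a chosen direction makes signals from that direction add coherently while off-axis sound partially cancels. The array geometry, tone, and narrowband steering below are invented simplifications; a real system needs broadband processing, interpolation, and adaptive steering.

```python
import numpy as np

# Sketch of delay-and-sum beamforming for a uniform linear microphone
# array receiving a far-field tone. Aligning the per-microphone delays for
# the source direction reinforces it; mis-steering partially cancels it.
c = 343.0                     # speed of sound, m/s
fs = 16_000                   # sample rate, Hz
n_mics, spacing = 8, 0.05     # 8 microphones, 5 cm apart
f0 = 500.0                    # source tone frequency, Hz

t = np.arange(0, 0.1, 1 / fs)
positions = np.arange(n_mics) * spacing

def array_signals(angle_deg):
    """Plane wave from angle_deg (0 = broadside) sampled at each mic."""
    delays = positions * np.sin(np.radians(angle_deg)) / c
    return np.array([np.sin(2 * np.pi * f0 * (t - d)) for d in delays])

def steer_power(signals, angle_deg):
    """Narrowband delay-and-sum (phase-shift at f0) steered to angle_deg."""
    delays = positions * np.sin(np.radians(angle_deg)) / c
    analytic = signals * np.exp(2j * np.pi * f0 * t)      # crude demodulation
    phased = analytic.mean(axis=1) * np.exp(-2j * np.pi * f0 * delays)
    return abs(phased.sum()) ** 2

sig = array_signals(30.0)       # sound arriving from 30 degrees
p_on = steer_power(sig, 30.0)   # array steered at the source
p_off = steer_power(sig, -60.0) # array steered well away from it
print(p_on, p_off)
```

The large on-axis to off-axis power ratio is the spatial selectivity referred to above; achieving it in three dimensions, over the full speech band, with hundreds of microphones is what drives the computational estimates.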