Computational and Theoretical Techniques for Materials Science (1995)

Chapter 2

Challenges in Materials Research for the Remainder of the Century

NEW MATERIALS

Background

Over the ages, new materials have been a dominant factor in driving advances in materials usage and materials technologies. Recent developments in materials science and condensed-matter physics are no exception. In the past decade, much of the progress in fundamental knowledge and technological applications in this area was related to unexpected discoveries of new materials with novel and desirable properties. Examples are numerous. Recent new materials that have generated much excitement include high-temperature copper oxide superconductors, fullerenes and fullerides, nanophase materials, superhard materials, semiconductor quantum wells and superlattices, magnetic superlattices, and other artificial structures such as quantum dots and quantum wires. Many of these materials have already made the transition from objects of basic research interest to practical application. Quantum well lasers, high-Tc superconductor devices, and magnetic superlattices with giant magnetoresistance in recording heads are examples.

In the study of these new materials, theory and computation have played an important role in unraveling their properties. The theoretical approaches range from very empirical schemes to ab initio methods that require no experimental input. In some cases, predictions of new materials were made and subsequently confirmed by measurements. Researchers are thus in the early stages of using theory and computation to “design” materials.

Present Status and Critical Issues

Understanding new materials requires fundamental knowledge of their structure; phase stability; and various structural, electronic, vibrational, and mechanical properties. These quantities are intimately related to the electronic structure of the solid. Until the 1970s, many of these properties could be studied only empirically or by using model calculations. In general, theoretical predictions for specific real materials were lacking.

That situation has changed dramatically in the past decade. In particular, there have been many important advances in electronic structure theory and algorithmic development for the study of real materials. Among these advances are highly efficient methods for local density approximation (LDA) total energy calculations, new pseudopotentials, a first-principles molecular dynamics (e.g., Car-Parrinello) approach for dynamical and thermodynamical properties, realistic tight-binding total energy schemes, a first-principles method for electronic excitation (quasi-particle) energies, and quantum Monte Carlo methods for correlated electron effects. These theoretical advances, together with dramatic improvements in computing power, have permitted, in the past few years, the computation and prediction of a broad range of properties of real materials of increasing complexity.

The structural, vibrational, mechanical, and other ground-state properties of systems containing up to several hundred atoms can now be computed using the LDA. In these calculations the development of iterative schemes, together with new soft-core pseudopotentials, has made possible the study of very large systems. Select examples of recent successes include the unraveling of the normal-state properties of high-Tc superconducting oxides (e.g., structural parameters, phonon spectra, Fermi surfaces); prediction of new phases of materials under high pressures (e.g., superconductivity in the simple hexagonal phase of Si); prediction of a superhard material (C3N4); determination of the structure and properties of surfaces, interfaces, and clusters; and calculation of the structure and properties of the fullerenes and fullerides.

The Car-Parrinello-type ab initio molecular dynamics approach and the less accurate but less time-consuming tight-binding molecular dynamics schemes have made possible the quantum mechanical calculation of the dynamical and thermodynamical properties of systems in the solid, liquid, and gaseous states. The first-principles quasi-particle approach based on the GW approximation (which evaluates the electron self-energy to first order in the electron Green's function G and the screened Coulomb interaction W) has been used to calculate electron excitation energies in solids that, in turn, are used for the quantitative interpretation of spectroscopic measurements. The excitation spectra of crystals, surfaces, and materials as complex as the C60 fullerites have been computed. Quantum Monte Carlo methods have yielded cohesive properties of unprecedented accuracy for covalent crystals and have provided the means to study highly correlated electron systems, such as two-dimensional electrons at semiconductor heterojunctions in a strong magnetic field.

Very accurate determination of the properties of specific materials is not by itself sufficient, however, for the general goal of materials by design. A critical issue is how one can intelligently sample the vast phase space of combining different elements from across the periodic table in various proportions to make a material with specific desired properties. Even with computers several orders of magnitude more powerful than existing machines, it would not be possible to sample this phase space adequately. Together with accurate methods, there must be guiding principles (e.g., those based on structure-property relationships) in the theoretical search for new materials. The discovery of general principles regarding materials behavior is thus as important in the theoretical design of materials as are accurate computational methods.

Another important related issue is the process of starting from conception to the final synthesis of a useful new material. Because of the large phase space discussed above, it is unlikely that a material of optimal desired properties will be obtained on the first try. Many iterations involving theoretical prediction, experimental synthesis, and characterization are needed. A major challenge then is to find ways to accelerate convergence in this iterative process.

A case in point is the recent provocative and potentially useful prediction of a new material, carbon nitride, which rivals or exceeds the hardness of diamond. In 1989, based on an empirical idea and through first-principles total energy calculations, a new compound, C3N4, was predicted to be stable and to have a bulk modulus comparable to that of diamond. Its structural and electronic properties were predicted by using the LDA. Subsequent to the theoretical work, several groups proceeded to synthesize and characterize this possible material in the laboratory. In 1992 independent experimental evidence from three groups lent support to the theoretical prediction. This case provides a concrete example of how ideas, computations, and experimental characterization may work together in the design of materials of intrinsic scientific interest and potential utility.

Future Theoretical Developments and Computing Forecast

In the past several years much effort has been devoted to developing algorithms to extend the applicability of the above new methods to ever larger systems. The LDA Car-Parrinello-type calculations have been successfully implemented on scalable parallel machines. Recent calculations of semiconductor systems using this method have reached the size of supercells containing ~1,000 atoms. Methods for calculating free energies using quantum molecular dynamics are being developed to study melting and other phase transitions. The quantum Monte Carlo approaches are now readily amenable to massively parallel machines. It should now be feasible to implement the quasi-particle calculations on massively parallel machines with a gain in efficiency and power (in particular with a real-space formulation) similar to that of the LDA-type calculations. Thus, the use of these methods on the new massively parallel machines will greatly enhance our ability to investigate new and more complex materials.

Another exciting recent development is work on new and more efficient algorithms, such as real-space methods (e.g., wavelets, finite differences) and methods that scale better for large systems. For example, there has been recent work on circumventing the N³ scaling limitation of LDA calculations, where N is the number of atoms in the system. Significant progress has been made by several groups in developing methods that scale linearly with N. The success of these approaches would enhance our ability to compute and predict in the near future the properties of very large molecular and materials systems, including systems with perhaps tens of thousands of atoms.
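To make the appeal of real-space formulations concrete, the sketch below (with the grid size and model potential invented purely for illustration) assembles a one-dimensional finite-difference Hamiltonian and diagonalizes it. The operator touches only neighboring grid points, so it is a sparse, banded matrix; that locality is the structural feature real-space and linear-scaling methods exploit.

```python
# Minimal sketch of a real-space finite-difference approach (illustrative only):
# solve -1/2 psi'' + V psi = E psi for a 1-D harmonic well on a grid.
# All parameters here are arbitrary choices for the example, not from the report.
import numpy as np

n, L = 200, 10.0                     # grid points and box length (atomic units)
h = L / (n + 1)                      # grid spacing
x = np.linspace(-L/2 + h, L/2 - h, n)

# Second-order finite-difference Laplacian: (psi[i-1] - 2 psi[i] + psi[i+1]) / h^2
lap = (np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2

V = 0.5 * x**2                       # model potential (harmonic oscillator)
H = -0.5 * lap + np.diag(V)          # Hamiltonian on the real-space grid

E = np.linalg.eigvalsh(H)[:3]
print(E)                             # ~[0.5, 1.5, 2.5], the exact oscillator levels
```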

It should be noted that, although the LDA for ground-state properties and the GW method for excited-state properties have given impressive results for large classes of materials, there are some systems, such as highly correlated magnetic materials, to which these ab initio methods are less applicable at this time. This is because of the necessarily approximate treatments of many-electron exchange-correlation effects in these large-scale first-principles calculations for real materials. Theoretical developments for better treatment of many-electron effects will thus be another important direction for computational materials research. At the other end of the spectrum, further improvement of tight-binding molecular dynamics and related methods will enhance our ability in the near future to examine materials phenomena that cannot be modeled with only several hundred or a few thousand atoms.

SEMICONDUCTORS

Recent technological advances ranging from pocket radios to video cassette recorders to powerful supercomputers were all made possible by our improved understanding of how to synthesize and process electronic materials (i.e., semiconductors). The development of new semiconductor technologies has played a significant role in contemporary economic competition. Semiconductor technology and related applications have played a major role in the past 50 years and will continue to do so in the foreseeable future with the development of new artificially structured semiconductor materials, such as superlattices and multiple quantum wells.

In electronic materials research, scientific and technological advances are often intimately connected. The technological need for ultrapure, well-characterized semiconductors has resulted in the development of new experimental and theoretical methods. The application of these methods has tremendously improved our understanding of semiconductors such as silicon and gallium arsenide. It may be that our understanding of the crystalline state of semiconductors exceeds that of any other material.

Often, semiconductor-related activities can be divided into (1) understanding the fundamental electronic structure and atomistic processes and (2) modeling the fabrication, processing, and characterization of devices. The fundamental issues in semiconductor materials science research involve knowledge not only of the ideal crystalline material but also of the role of line and point defects, intrinsic and extrinsic impurities, dopants, grain boundaries, and surfaces. Much work in this area has been done, but much remains to be done. Methods for examining the electronic and structural properties are further developed than in other areas.

There are a number of approaches to understanding the chemical bond in semiconductors. At the fundamental level, one can consider ab initio methods in which the only input is the atomic number, and perhaps the atomic mass, of the constituents. This approach is the most powerful one in that no experimental information is required and accurate techniques have been developed. For example, ab initio pseudopotentials constructed in the LDA have been used to model amorphous materials, clusters, surfaces, and liquids. The accuracy of these approaches is quite good. Typically, structural and vibrational properties can be determined within a few percent. Excitation spectra and optical properties are more complex, but these properties also can be addressed. Another fruitful approach is based on tight-binding methods. These require input either from experimental sources or from ab initio procedures. However, they are more computationally tractable and have led to accurate descriptions of semiconducting materials and surfaces.

At the other end of the spectrum are empirical methods. One empirical approach has been to use the pseudopotentials themselves as adjustable parameters. This approach is not appropriate for structural studies but can be used for the analysis of optical spectra. Often, only a few Fourier coefficients of the potential are required to fit the optical spectrum of a crystal. For structural properties, empirical interatomic potentials can be used. This approach is conceptually difficult, though, since quantum mechanical interactions must be mapped onto classical interactions. Using such an array of tools, it is almost routine to examine such properties as lattice parameters, vibrational modes, and phase stability.

Given the computational tools, software, and hardware to achieve a two-orders-of-magnitude increase in both speed and memory, what new and interesting problems would become feasible? With such an increase in computational power, attention could be focused on (1) static effects, such as extended and charged defects, amorphous semiconductors, grain boundaries, incommensurate overlayers, doping, and point defects, and (2) dynamical effects, such as diffusion, ion implantation and laser annealing, temperature effects, transport phenomena, and excited states.

Even with a hundredfold increase in computing power there will remain limitations. For example, no dynamical simulation is likely to be realistic in terms of experimental time frames. Dynamical simulations rarely use time steps longer than a few femtoseconds. With such time scales it would take years of CPU time to model a system of a few hundred atoms for simulation of a second of “real” time. Despite this limitation, it may still be useful to examine the dynamical behavior of systems on a picosecond, or nanosecond, time frame. For example, the diffusion constants of semiconductor liquids have been accurately modeled on a picosecond time frame. Nonetheless, new methods for the “intelligent” sampling of phase space should be developed. A recent example of such an approach is based on “genetic” algorithms.
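As a concrete illustration of such "intelligent" sampling, the toy sketch below runs a bare-bones genetic algorithm: a population of candidate configurations is ranked by a stand-in "energy," and the fittest are recombined and mutated. The fitness function and every parameter here are invented for the example; a real materials search would rank candidates by a computed total energy or a target property.

```python
# Toy genetic-algorithm sketch for sampling a configuration space.
# The "energy" function and all parameters are invented for illustration.
import random

def energy(x):                        # stand-in fitness: lower is better
    return sum((xi - 0.3) ** 2 for xi in x)

def crossover(a, b):                  # mix two parent configurations gene by gene
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate=0.1, step=0.2):    # random perturbation of some components
    return [xi + random.uniform(-step, step) if random.random() < rate else xi
            for xi in x]

pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(40)]
for gen in range(100):
    pop.sort(key=energy)              # rank by fitness
    parents = pop[:10]                # keep the fittest candidates
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(30)]

print(energy(min(pop, key=energy)))   # best "energy" found
```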

Another limitation is the size of the system to be examined. It is not likely that systems of more than a few hundred atoms will be routinely examined using ab initio methods. New algorithms that take full advantage of parallel architecture will need to be developed, along with methods that scale more efficiently with the size of the system. Also, new techniques will need to be developed to handle some fundamental issues. It would be desirable to examine the free energy of semiconductors to predict phase diagrams and other issues that involve thermal properties (e.g., thermal expansion coefficients, diffusion). Current methods are in the developmental stage, and there are questions as to the accuracy of local density methods for such applications.

OPTICAL PROPERTIES

Background

Of the many properties of materials, the optical properties are undoubtedly among the most useful and interesting, especially for semiconductors. Many device applications, including semiconductor lasers and light-emitting diodes, rely on specific optical responses of a material. Linear and nonlinear optical spectroscopies are now common probes for investigating and characterizing the underlying microscopic nature of materials. With devices becoming increasingly smaller and dependent on manmade structures, the relationship between microstructure, defects, and the optical properties of materials is one of increasing importance. For photovoltaic applications the dominant issue today is the minority carrier lifetime. However, concern for more environmentally safe fabrication may drive a search for new candidate materials with the requisite characterization of their optical properties. Interest in thermo-photovoltaics for lower-temperature terrestrial heat sources will require characterization of materials with band gaps approaching half an electron volt. Hence, the ability to calculate optical responses is crucial for understanding and employing new materials, surfaces and interfaces, nanostructures, clusters, materials with defects, amorphous materials, and materials under various extreme conditions such as high pressures.

Unlike the sharp optical spectra of atomic and molecular systems, which arise from transitions between narrow energy levels, solid-state spectra arising from electron transitions between bands are broad and relatively featureless. Unraveling these spectra for microscopic information necessarily requires extensive theoretical analysis. In addition, the electron-ion and electron-electron interactions in a solid are complex. The nonlinear optical properties are, of course, even more difficult to compute or predict because of the additional intermediate states involved. For linear optical properties an early successful approach was the empirical pseudopotential method developed in the 1960s, which explained the optical spectra of many materials and led to a detailed description of the band structure and bonding nature of solids. This approach employs pseudopotentials with several empirical parameters. The resulting electronic structure is then used to derive response functions for comparison with optical and photoemission spectra. Similar calculations based on empirical band structures have been carried out for the nonlinear spectra but with less success. This and other similar empirical approaches require extensive experimental input and hence are less suitable for investigating new materials or systems such as surfaces and interfaces, for which detailed experimental data on the structure and other properties are often much less available.

Nonlinear optical materials are central to laser science and science using lasers. Uses range from upconversion of frequency to ultrafast detection schemes involving the interaction of an (attenuated) original probe laser with the scattered probe to data storage via the photorefractive effect. A nonlinear material must meet a number of criteria that dictate its usefulness. For example, not only must the second-harmonic coefficient be large, but the two dielectric constants must permit (by phase matching) a large interaction volume for second-harmonic generation. Accordingly, there is a continuing search for materials with desired properties in an ever-broadening frequency range.
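As a schematic of how a band structure is turned into a response function, the sketch below builds a toy absorption spectrum as a broadened joint density of states between two invented one-dimensional bands. A real calculation would use full band structures, momentum matrix elements, and three-dimensional Brillouin-zone sums; this only shows the mechanics.

```python
# Illustrative toy "absorption" as a broadened joint density of states.
# Band shapes and broadening are invented; momentum matrix elements are omitted.
import numpy as np

k = np.linspace(-np.pi, np.pi, 2000)            # 1-D Brillouin zone
E_v = -1.0 - 0.5 * np.cos(k)                    # model valence band
E_c = 1.0 + 0.8 * np.cos(k)                     # model conduction band

omega = np.linspace(0.5, 5.0, 500)              # photon energies
eta = 0.05                                      # Lorentzian broadening
# eps2(w) ~ sum_k delta(E_c - E_v - w), with delta -> Lorentzian of width eta
diff = (E_c - E_v)[None, :] - omega[:, None]
eps2 = (eta / np.pi / (diff**2 + eta**2)).sum(axis=1) / k.size

print(omega[eps2.argmax()])                     # peak sits at a band-edge
                                                # (van Hove) singularity
```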

The emergence of synthetically modulated nanostructures, such as superlattices and quantum wells, where the chemical composition can be varied on an atomic scale, has had a large impact on the search for and discovery of novel optical materials with desirable properties without analogs in the bulk. Semiconductor superlattices are currently of particular technological interest because they afford the possibility of tailoring the electronic structure by controlled modifications of the growth parameters—layer thickness, alloy compositions, strain, growth orientation, and so on. These controlled modifications of the chemical and structural parameters lead to precise control of the degree of quantum confinement of the carriers and, as a result, to the tunability of the electronic and optical properties of synthetically modulated structures. Current applications where the flexibility of tuning the electronic structure of semiconductor superlattices serves as the basis for the design of materials and devices exhibiting desirable properties are numerous. This feature is used in the fabrication of semiconductor diode lasers, electrooptical modulators, nonlinear optical devices, new designs for long-wavelength infrared detectors based on III-V InAs/GaInSb strained-layer semiconductor superlattices, and blue-green diode lasers synthesized from wide-gap II-VI semiconductors and quantum wells such as CdZnSe/ZnSe.

In addition, there has been a recent resurgence in the use of conjugated polymers for micro- and optoelectronics applications with the discovery in 1990 that certain organic polymers could be stimulated by carrier injection to emit visible light. This constituted a fundamental breakthrough in solid-state physics that is of great potential industrial importance. This discovery has raised the possibility of new applications of polymers to electronics, particularly in flat-panel display technology.

Present Status and Critical Issues

Linear Optics

Ab initio calculation of the optical response of a solid is conceptually and computationally challenging because of many-electron effects. Even at the simple level of neglecting electron-hole (excitonic) interactions, computation of the dielectric function requires knowledge of the excited electron (quasi-particle) energies and wavefunctions. Being based on ground-state theory, standard electronic structure methods such as the local density functional formalism, which yields excellent results for structural and other ground-state properties of many solids, do not give accurate electron excitation energies. Before the mid-1980s it was not possible to determine from first principles whether a crystal such as Ge was a semiconductor or a metal—let alone the quantitative value of its optical band gap and excitation spectrum. In general, LDA calculations underestimate the band gap by 50 percent or more, while Hartree-Fock calculations typically overestimate it relative to experiment. The reason for these discrepancies is that exchange-correlation (self-energy) effects can significantly modify the properties of the excited electrons from those of an independent-particle picture. Optical transition energies, even at the simplest level, need to be properly computed as transitions between quasi-particle states.

First-principles calculation of the quasi-particle energies in real materials became possible in 1985 with the development of methods based on the GW approximation for calculating the electron self-energy. This approach is based on an evaluation of the self-energy operator expanded to first order in the dynamically screened (with local fields) Coulomb interaction and the electron Green's function. The method has been applied to a range of systems, including semiconductors and insulators, simple metals, surfaces, interfaces, clusters, materials under pressure, and materials as complex as the C60 fullerites.
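For reference, the self-energy evaluated in this approach can be written compactly (in the standard convention of the GW literature, with δ a positive infinitesimal; the formula is quoted from that literature, not reproduced from this report) as

    \Sigma(\mathbf{r}, \mathbf{r}'; E) = \frac{i}{2\pi} \int d\omega \; e^{i\delta\omega} \, G(\mathbf{r}, \mathbf{r}'; E + \omega) \, W(\mathbf{r}, \mathbf{r}'; \omega)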

Nonlinear Optics

Historically, new nonlinear optical materials (e.g., urea, lithium niobate, ZnGeP2) have been found empirically, and considerable time has been spent growing crystals pure enough and large enough to check that the material satisfies all the needed criteria. At the moment, most levels of theory can provide rough estimates of the properties of any proposed nonlinear optical material. There are no truly first-principles approaches; indeed, most are semiempirical. Such methods are good at identifying possible systematics among related materials—that is, via interpolation—but they are not good at extrapolating to new classes of materials. The few (nearly) first-principles calculations can reasonably consider only a few simple materials; even then, simplifying approximations are made.

Future Theoretical Developments and Computing Forecast

With computational effort typically several times greater than that of an LDA band calculation, the GW quasi-particle approach at present remains the only successful ab initio method for calculating electron excitation energies in solids. Researchers are pushing forward in several different directions. One direction is continuing the applications of the full method and extending it to other systems. For example, this approach is now being refined for application to transition metals and transition-metal oxides. Another direction is developing algorithms and codes to reduce the computational effort of the calculations, such as reformulating the present k-space formalism for the self-energy operator into a real-space formalism. At present, ab initio quasi-particle calculations are basically limited to systems with fewer than about 100 atoms. Yet another direction is developing less accurate but simpler methods for approximate self-energy calculations. Some of the models tried, with limited success, include the “scissors operator” approximation and simplified local forms for the self-energy operator.
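The "scissors operator" correction mentioned above amounts to a rigid upward shift of all empty states. A minimal sketch of the bookkeeping, with invented band energies and an invented shift, is:

```python
# Minimal "scissors operator" sketch (illustrative): rigidly shift all
# conduction-band energies upward by a constant to mimic the quasi-particle
# gap opening. The energies and shift below are invented for the example;
# in practice the shift would be fitted to a GW gap or to experiment.
import numpy as np

lda_bands = np.array([-3.2, -1.1, 0.0,       # occupied (valence) states, eV
                       0.6,  2.4, 4.0])      # empty (conduction) states, eV
n_occ = 3                                    # number of occupied bands
delta = 0.7                                  # rigid shift, eV

qp_bands = lda_bands.copy()
qp_bands[n_occ:] += delta                    # shift only the empty states

print("LDA gap:      ", lda_bands[n_occ] - lda_bands[n_occ - 1])   # 0.6 eV
print("corrected gap:", qp_bands[n_occ] - qp_bands[n_occ - 1])     # 1.3 eV
```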

Calculation of the intensity of linear optical spectra actually requires computation of the two-particle Green's function with the proper vertex function included to take into account electron-hole interaction (or excitonic) effects. This kind of calculation has yet to be carried out from first principles for real materials. Previous theoretical investigations on this subject were mainly restricted to model tight-binding Hamiltonian studies. For semiconductors such as silicon, standard random phase approximation calculations for the dielectric function yield band-edge optical absorption strengths that are typically off by more than a factor of two. Clearly, this is an important issue and needs to be addressed before quantitative prediction of optical absorption can be achieved.

Of course, the higher-order optical response of solids is even more difficult to calculate. The results are highly sensitive to the electronic excitation spectrum because of the multiple energy denominators. Current work in this area is basically at the level of using empirical band structures in an independent-particle picture or using LDA results together with the “scissors operator” approximation. However, several groups are now working toward better treatment of these quantities.

In the future we can expect a much wider class of nonlinear optical materials to be produced, including, for example, polymers and metallorganic materials. Nanostructures also are a source of nonlinear optical activity. The challenge to theorists will be to devise effective means to explore the parameter space of these materials. Meeting this challenge will require not only greatly enhanced computing capability but also improved understanding of the effect of electron-electron interactions (for self-energy and excitonic effects).

SURFACES AND INTERFACES

At the heart of essentially all modern-day critical technologies is the need to describe how atomically and chemically dissimilar materials bond to each other to form solid interfaces. The central importance of understanding the behavior of solid surfaces and interfaces can be readily demonstrated from a consideration of key industrial segments of the economy: metal-semiconductor, semiconductor-semiconductor, and semiconductor-oxide interfaces form the cornerstone of the microelectronics industry. Adhesion, friction, and wear (tribological phenomena) between solid surfaces are ubiquitous in all manufacturing processes. The microstructural evolution of grain boundaries is central to materials performance in metallurgy. An understanding of the chemical processes occurring at surfaces is key to the development of cost-effective semiconductor etching and catalysis processes. An understanding of the structure of surfaces and how they function as catalysts offers opportunities for the design of completely new types of catalytic systems leading to revolutionary applications.

While the behavior of surfaces and interfaces often determines the performance of existing materials, one cannot overestimate the scientific and technological impact of designing new materials based on our ability to synthesize atomically structured materials. All synthetic, atomically modulated structures involve interfaces between thin layers of chemically different substances. Because of the dramatic effects caused by the interfaces (e.g., quantum confinement of the carriers), these atomically modulated structures exhibit novel properties—electronic, optical, magnetic, and mechanical—that have no analogs in the bulk constituents from which they are synthesized. It is the emergence of advanced crystal growth techniques that has allowed the atom-by-atom synthesis of novel materials exhibiting unique and unexpected properties. An important class of such artificial crystals is multilayer or “superlattice” structures, which are synthesized by alternately growing atomically thin layers of chemically different materials in a periodic sequence. By adjusting growth parameters such as the thickness, composition, growth axis, and sequence of the layers, it is possible to “atomically engineer” the optical, electronic, magnetic, and mechanical properties of the resulting superlattice structure. It is this capability to atomically engineer the properties of synthetically modulated materials that has led to a revolution in modern materials science by qualitatively altering our approach to materials design at the atomic scale.

Critical Issues

Surfaces and interfaces are complex, heterogeneous, low-symmetry systems. Their description by accurate quantum-mechanical methods is a challenging task because the reduced symmetry of these systems implies that large unit cells must be utilized. Moreover, solid surfaces exhibit a much larger variety of atomic structures than their bulk counterparts. In fact, because of the different possible crystallographic orientations and the numerous metastable structural phases for a given orientation, the number of possible atomic structures is essentially infinite. The study of solid interfaces and chemisorbed species presents an even greater challenge with regard to the possible number of systems. Nevertheless, the close synergy between experimental and theoretical activities that is characteristic of the field of surface science has allowed rapid progress in the development of general physics-based guiding principles to predict the atomic geometry and electronic structure of solid surfaces and interfaces. The advent of ultrahigh vacuum technology and the accelerating theoretical developments in electronic structure calculations, coupled with the emergence of high-performance computing environments, have greatly enhanced our understanding of chemical bonding at solid surfaces and interfaces. Below is a discussion of critical issues germane to the theoretical description of surfaces and interfaces.

Since the mid-1970s, there have been tremendous advances in our ability to describe the ground-state properties and phase transformations of bulk materials using ab initio methods. These same ab initio electronic structure methods have now been used to determine the atomic geometry and electronic structure of clean and adsorbate-covered surfaces. Modern surface science has greatly benefited from the continuous development of powerful methods, such as the density functional method, which, together with the efficient implementation of pseudopotential and all-electron formalisms, has enabled very accurate calculations of the ground-state properties of surfaces and interfaces. Moreover, significant conceptual developments in electronic structure theory have enabled dramatic increases in our ability to perform ab initio calculations of the static and dynamic properties of large-scale materials systems. Prominent among these developments are the advances made by Car and Parrinello in calculating structural and dynamical properties. Ab initio quasi-particle calculations based on the GW approximation have permitted the determination of the electronic excitation spectrum at surfaces, in excellent agreement with experimental spectroscopic observations. Also to be emphasized is the critical role played by empirical quantum mechanical methods, such as the tight-binding method, in the early determination of the atomic and electronic structures of clean surfaces. The experimental validation of these early theoretical predictions demonstrated the predictive power of quantum mechanical electronic structure methods and laid the foundation of modern theoretical surface science.

In surface and interface science the role of computational materials physics is to complement experimental work by addressing critical issues that cannot be measured directly and to provide physical insight into observed phenomena. Foremost among these critical issues are the following:

  • The nature and origin of atomic reconstructions at surfaces, interfaces, and grain boundaries;

  • The electronic structure (electronic surface and interface bound states and resonances, chemisorption-induced surface states, and so on) of clean and adsorbate-covered surfaces and interfaces;

  • The attachment sites and binding energies of chemisorbed atoms and molecules on reconstructed surfaces;

  • The effects of steps, defects, and impurities on the physical properties of surfaces and interfaces;

  • The determination of the rectifying potential (Schottky barrier) at metal-semiconductor contacts;

  • The determination of energy band offsets at semiconductor heterojunctions; and

  • The prediction of novel electronic, optical, magnetic, and mechanical properties of semiconductor superlattices and metallic multilayer materials.

Forecast and Impact of High-Performance Computing

As in the case of most areas germane to the field of theoretical and computational materials physics, the advent of high-performance computing environments will have a significant impact on the field of surface science. In particular, the size and complexity of the systems that can be described will increase with computer power and memory. Work in this area has already begun—for example, the ab initio density functional calculation of the Si(111)-(7 × 7) surface reconstruction performed on a parallel computer using approximately 1,000 atoms. As massively parallel processor (MPP) computing environments mature from the development phase to the production phase and become available to a wider user base, it is likely that similar large-scale calculations will be performed on a routine basis, bringing tremendous benefits to the field.

Another aspect of surface and interface science that will greatly benefit from the wide availability of high-performance computing environments is the bridging of the length-scale gap by physics-based multiscale modeling, from the atomistic level (atomic geometry and electronic structure) to the continuum (elasticity, plastic deformation, and so on). Much theoretical work remains to be done in this area. As an illustration of the impact of multiscale modeling, consider the task of predicting the deformation under load and the eventual failure of a polycrystalline metal with incorporated atomic impurities that give rise to a wide spectrum of grain boundary strengths. Current continuum-based, finite-element methods cannot be used to perform simulations unless they are augmented to include microscopic processes involving dislocation movement. Consequently, significant effort should be expended in extracting atomic-level parameters from ab initio quantum mechanical calculations in order to augment constitutive models used in continuum-like simulations. For the particular case of predicting the microstructural evolution of grain boundaries in polycrystalline metals, essential atomic-level parameters are grain-boundary potentials, which are total-energy curves obtained from tensile and shear deformation of the boundary. The next section on thin-film growth describes strategies to integrate modeling at different length and time scales as they relate to simple growth.
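As a schematic of how such grain-boundary potentials might be handed to a continuum code, the sketch below differentiates a model energy-versus-opening curve to obtain a traction law. The functional form and constants are invented stand-ins for an actual set of ab initio total-energy points.

```python
# Hedged sketch: turning an energy-vs-separation curve for a grain boundary
# into a traction (stress) law usable by a continuum simulation. The
# universal-binding-style curve below is a model, not real ab initio data,
# and the units are loose (illustration only).
import numpy as np

d = np.linspace(0.0, 8.0, 200)               # boundary opening (angstroms)
gamma, ell = 2.0, 0.8                        # work of separation, length scale
E = gamma * (1.0 - (1.0 + d / ell) * np.exp(-d / ell))   # energy per unit area

traction = np.gradient(E, d)                 # traction = dE/d(opening)
d_peak = d[traction.argmax()]
print("peak traction at opening ~", round(d_peak, 2), "angstrom")   # ~ ell
```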

GROWTH OF ARTIFICIALLY ENGINEERED FILMS

Recent technological advances have led to the development of electronic and photonic devices of smaller and smaller sizes. The fabrication of these structures requires high material uniformity and interfacial homogeneity on the atomic level, currently achieved by using such techniques as molecular beam epitaxy (MBE) and chemical vapor deposition (CVD). In addition, scanning tunneling microscopy (STM) has made it possible to observe the formation and shapes of small clusters of material at the initial stages of deposition and the layer-by-layer growth of a crystal that often takes place on sharply defined steps between adjacent layers of the material. These processes are governed by physics at the atomic scale and cannot be explained by traditional nucleation theory or phenomenological continuum approaches. The most useful theoretical approaches are computational in nature and involve ab initio calculations of interatomic potentials, molecular dynamics to probe the short-time dynamics, and larger-scale kinetic Monte Carlo (KMC) calculations, which are relevant to growth processes involving the deposition of many atoms.

Figure 2.1 Schematic view of growth on a vicinal surface by MBE, showing deposition and the different kinds of diffusion processes. In some cases, such as silicon, surface reconstruction can produce inequivalent steps that are alternately rough and smooth.

The KMC simulations take as input specified rates for atomic deposition and diffusion and then use those rates to simulate the kinetic processes by which atoms are added and moved about on the growing surface (see Figure 2.1). However, the rates, currently determined from a combination of experimental and theoretical information, are not well established. While KMC has shown great promise in its ability to reproduce growth patterns observed experimentally, more detailed microscopic modeling of the relevant rates should be possible with the next generation of computers and would greatly enhance the predictive power of this technique. The STM also makes possible a direct comparison of theoretical and computational modeling of the kinetic processes governing the growth of these structures. The use of this experimental technique, along with advances in computational capabilities, will improve our understanding of the kinetics of growth and our ability to fabricate smaller and better technological devices in the next decade.
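A bare-bones rejection-free KMC loop of the kind described here is sketched below for deposition plus surface hopping on a one-dimensional lattice. The rates, lattice size, and the downhill-only hop rule are invented simplifications; a production simulation would draw its rate catalog from the experimental and theoretical inputs discussed above.

```python
# Minimal rejection-free kinetic Monte Carlo sketch: deposition plus surface
# diffusion on a 1-D lattice in a solid-on-solid picture. All rates and sizes
# are toy values for illustration.
import math, random

L = 50                                   # lattice sites
h = [0] * L                              # column height at each site
R_dep = 1.0                              # deposition rate per site
R_diff = 5.0                             # hop-attempt rate per site (toy:
                                         # every column top is treated as mobile)
t = 0.0

for _ in range(10000):
    total = L * R_dep + L * R_diff       # total rate of all possible events
    t += -math.log(random.random()) / total   # exponential waiting time
    if random.random() < (L * R_dep) / total:
        h[random.randrange(L)] += 1      # deposit an atom on a random site
    else:
        i = random.randrange(L)          # attempt to hop the top atom at i
        j = (i + random.choice((-1, 1))) % L
        if h[i] > 0 and h[j] < h[i]:     # toy rule: hop only downhill
            h[i] -= 1
            h[j] += 1

print("time:", round(t, 2), "mean height:", sum(h) / L)
```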

Many of the problems that arise in modeling growth are similar to those encountered in the study of other nonequilibrium processes. The most interesting phenomena observed experimentally often involve behaviors that occur over a broad range of spatial and temporal scales. Realistic parameter regimes are impossible to achieve with the computers that are currently available, and, for systems that stretch the limits of the current capacity, it is difficult to run a sufficient number of realizations to ensure the statistical significance of the results. This makes it difficult to relate numerical results to observed behavior or, more ambitiously, to use simulations to predict future directions for experimental study. Anticipated improvements in machine performance and parallelized codes should allow these problems to begin to be addressed. Some specific issues of interest, starting with those associated with the smallest scales, are discussed below. Integrating the modeling done at different length and time scales is the foremost challenge to a comprehensive understanding of the process of growth.

Determination of Phenomenological Potentials

From a computational viewpoint, we are still far from being able to do first-principles calculations on large (~10⁵ to 10⁷) numbers of static atoms. Modeling nonequilibrium systems thus depends crucially on the use of accurate empirical potentials within KMC algorithms. The KMC simulations employ these potentials to set the relative rates for the possible moves (e.g., deposition, surface diffusion) that take place on each Monte Carlo step. For understanding basic trends and general principles, a detailed understanding of the potentials is somewhat less important, although care must be taken to properly identify essential properties. Furthermore, to improve the predictive power of KMC and the ability of this method to provide quantitative results, much work needs to be done to improve our ability to extract these potentials from ab initio calculations or from techniques such as effective medium theory.
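The standard route from a potential to a KMC rate is an Arrhenius expression, rate = ν₀ exp(−E_b / k_B T), with the barrier E_b extracted from the potential. A minimal sketch (the prefactor and barriers below are typical orders of magnitude, not values from the report):

```python
# Sketch: converting an energy barrier (from an empirical potential or an
# ab initio calculation) into a KMC hop rate via an Arrhenius form.
import math

def hop_rate(barrier_eV, T=600.0, nu0=1.0e13):
    """Attempt frequency nu0 (1/s) times the Boltzmann success probability."""
    kB = 8.617e-5                      # Boltzmann constant in eV/K
    return nu0 * math.exp(-barrier_eV / (kB * T))

for Eb in (0.5, 0.8, 1.2):             # illustrative diffusion barriers, eV
    print(Eb, "eV ->", f"{hop_rate(Eb):.3g}", "hops/s")
```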

Modeling Growth of Submonolayer Structures

In the low-density limit, KMC can be used to model the self-assembly of small clusters of atoms deposited on a surface. Because the system sizes considered are relatively small (on the order of hundreds or thousands of atoms), it is in this intermediate regime that interaction potentials can be incorporated in the greatest detail and the quantitative predictive powers of KMC may be the greatest. Recent experimental progress on growth techniques for quantum wires and quantum dots should yield the next generation of smaller and faster devices. Thus, results obtained by modeling the self-assembly of small structures will have increasing technological relevance.

Systems Composed of Many Layers

Realistic simulations of the growth of many layers of atoms on a substrate pose difficult if not impossible challenges for the current generation of computers. However, with the development of faster, more powerful computers, there will be a new opportunity for numerical simulations to play a role in the development of new growth techniques, where, to date, results obtained by modeling have lagged significantly behind the experimental forefront. While it is known experimentally that the long-time behavior is important in setting growth parameters and that the growth morphology changes as a system evolves from a few to many layers, with today's generation of computers we are still limited to crystals on the order of 10⁷ atoms (e.g., 10 layers of 1,000 × 1,000 atoms each).

Steps

As crystalline layers are deposited using MBE, step-like interfaces exist between regions with different numbers of layers. In some cases, growth that occurs at step edges is highly uniform, making such steps desirable. In fact, substrates are often cleaved at an angle to the crystalline axis in order to initiate growth on a stepped surface. However, in other situations, steps may aggregate on the surface or individual steps may become nonuniform, leading to defect trapping, grain boundaries, and inhomogeneous growth. Learning how to promote the desirable properties of step growth and prevent the undesirable ones is of fundamental technological importance, and a better understanding of step-edge barriers and surface reconstruction (e.g., dimerization in Si/Ge) will be essential for achieving it.

Strained Layer Superlattices

The misfit in lattice constant between the substrate and adlayer in growing films cannot be accounted for without the inclusion of elastic forces. Consideration of models that do not constrain atoms to periodic lattice positions will be essential for the study of strain relief and the formation (or elimination) of dislocations. To understand real heteroepitaxial film growth and the dependence of these effects on the thickness of the film, the interplay between elastic and relaxation effects must be determined.

Surfactants

A new technique for controlling growth involves the use of surfactants to aid in the growth process. Here the key discovery is that in many cases a small amount of a particular growth surfactant can lead to improved homogeneity at interfaces between similar or dissimilar materials. For example, a small amount of As, a surfactant used in the deposition of Ge on Si, causes the Ge to deposit uniformly on the Si surface, whereas without the As the Ge tends to clump, in effect not wetting the surface. Recent work on Fe/Cu(100) has demonstrated dramatic effects of surfactants on the nature of layer-like growth and even the structure of growing films. While the use of surfactants shows much promise experimentally, at this initial stage very little is known about the microscopic kinetic processes that make the method effective. Computational advances will enhance our ability to predict effective surfactants for specific growth processes and methods of improving the homogeneity of surfactant-aided deposition. Currently, one of the major difficulties lies in the fact that effective surfactants operate at low densities relative to the underlying crystal, and with today's generation of computers it is difficult to incorporate enough atoms of the growing crystal into the simulations to obtain a realistic contrast in densities.

High-Speed Deposition

In industrial applications materials are much more commonly grown using CVD, which takes place at much higher densities and deposition rates than MBE. To date, numerical modeling has focused on the comparably simpler problem of MBE. However, with substantially faster computers it may be possible to address issues encountered with CVD techniques, such as the inclusion of complex chemistry that is not readily accessible to experimental study. A complete set of reaction rates for silane deposition of silicon has been determined by quantum chemical calculations; this necessitated performing accurate calculations for the very extensive number of different reactions possible in the Si-H system. The modeling of other important systems is seriously limited by the inability to perform all the extensive calculations necessary. With two orders of magnitude more computational power, it would become possible to deal with ternary systems and somewhat heavier atoms. However, the quantum chemical calculations scale with such a high power of the number of electrons that raw computing power alone will not solve the problem of heavier atoms. Either a significantly enhanced density functional theory or elimination of the core electrons using frozen orbitals or pseudopotentials will be needed to deal with that problem.
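To indicate how such tabulated rates enter a growth-chemistry model, the sketch below integrates first-order rate equations for an invented three-species network; the species and rate constants are placeholders, not the actual Si-H chemistry.

```python
# Toy sketch: integrating rate equations for a small model reaction network
# A -> B -> C. Species names and rate constants are invented for illustration.
from scipy.integrate import solve_ivp

k1, k2 = 2.0, 0.5                      # model rate constants (1/s)

def rhs(t, c):
    A, B, C = c
    return [-k1 * A,                   # A is consumed
            k1 * A - k2 * B,           # B is produced from A, consumed to C
            k2 * B]                    # C accumulates

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])                    # concentrations of A, B, C at t = 10
```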

NANOENGINEERING OF THE DYNAMICAL BEHAVIOR OF MATERIALS

The ubiquitous interface between solid and fluid materials governs much of the desired (and undesired) behavior of technologically important systems and processes. At nanoscale size and time dimensions, collective atomistic dynamics dictates the complex interfacial performance. For nanotechnology it is this performance that must be engineered to some specific function. While the process of designing static interfacial functions is a well-developed engineering art, our ability to control interfacial dynamics is relatively primitive. It is easy to name some pressing examples. We need to engineer the material properties defining the solid-solid and solid-liquid interfaces to control their friction and wear; to optimize dry and wet lubrication; to reduce degradation from impact, fracture, and cavitation; and to design repetitive dynamical contact and separation processes associated with specific technological needs.

For example, the multibillion-dollar magnetic recording industry expects to increase areal recording densities in hard disk drives by about two orders of magnitude over the next decade by flying the recording heads substantially closer to the spinning disk than today's 100-nanometer flying heights; the heads may even come into continuous contact with disk surfaces. This will place extreme demands on the tribology of the slider-disk interface. If future improvements in recording densities are to be achieved, novel thin-film recording media and lubricants will have to be developed to meet the increasingly severe tribological demands of this slider-disk interface. Key to these developments is a molecular-level understanding of contact wear, bonding failure of thin films due to static and dynamical loading, lubrication using a few monolayers of molecules, and capillary stiction created when breaking or making the lubricating contact. A major consequence of this lack of understanding is that in most practical applications today the recording media, disk-head substrates, and lubricants are designed mainly by trial and error. Many other emerging high-technology industries will depend on rugged, reliable, and long-lived miniaturized devices incorporating micromechanical moving parts. The success of these components, and the resulting technologies, will likely depend critically on the development of superior molecular thin films for coating and lubricating the tiny parts.

Classical continuum physics has historically provided engineers and technologists with most of the needed theoretical and computational tools, such as fluid dynamics and thermodynamics. As a technology enters the realm of nanoscale physics, however, this continuum approach is no longer valid. The nanophysics of materials presently belongs to the domain of condensed-matter physics, statistical mechanics, quantum chemistry, and, in the broadest sense, the science of nonlinear complex behavior. With the advent of scalable, massively parallel computers, the computational aspects of these disciplines now have the potential to provide very powerful tools and immediately useful solutions to this emerging field of nanoengineering technology.

One area of need in computational materials science is the development of general computational tools for studying the dynamics of condensed-phase interfaces, with the goal of making them useful and available to research design engineers and scientists working in a broad spectrum of activities related to materials technology. This capability should grow with increasing applications through collaborative interactions with industrial researchers. The computational tools should include molecular dynamics and Monte Carlo computer simulation programs for atomic, molecular, and polymer systems, as well as the visualization software so useful in allowing scientists to “see” the complex processes being simulated on the computer. Specific applications should be chosen so as to hasten tool development and to demonstrate the power and versatility of this computational approach to nanoengineering.

Initial applications could include (1) the dynamical fatigue and fracture of solids and thin films bonded to solid surfaces under load, (2) solid surface damage due to cavitation of a contacting fluid, and (3) thin liquid-film lubrication between sliding solid surfaces. These applications are discussed in the paragraphs below.

Although much work on fracture in brittle materials and at interfaces has been done over some 70 years, the mechanisms that govern the structure and dynamics of cracks are not well understood. An obvious difficulty is that we do not yet understand why cracks attain a limiting velocity that is about one half the velocity predicted by linear elasticity theory. Experiments suggest that dynamical instabilities of the crack tip may govern the crack velocity and typical morphology sequence (called “mirror, mist, and hackle”). In a typical fracture sequence an initially smooth and mirror-like fracture surface begins to appear misty and then evolves into a rough hackled region. In some brittle materials the crack pattern can also exhibit a wiggle of characteristic wavelength. All of these features are unexplained by continuum elasticity theory. For truly fundamental understanding we must go to the complex microscopic level, and molecular simulation could give us this capability.

Much experimental and theoretical work has been done on “cavitation”—the collapse of vapor bubbles onto solid surfaces that are submerged in liquids (e.g., ship propellers)—and the resulting damage to the surface, which can adversely affect not only the material itself but also the component's performance. Owing to experimental difficulties, few studies have examined the effects of cavitation in thin films, such as thin disk-lubricant films under the high shear forces typical of disk-drive operation. Thus, the intuition gained from hydrofoil cavitation experiments in high-speed water tunnels may not apply to such microtribological situations. Recently, a surface-force apparatus technique was used to observe cavitation in thin liquid films bounded by moving solid surfaces. The experiments indicate that under certain conditions the formation of a cavitating bubble is a much more violent and destructive event than its eventual collapse. This finding contradicts the currently held belief that cavitation damage is due solely to the extremely large implosive pressure generated at the moment the bubble collapses, a conclusion based on Rayleigh's classic 1917 paper. Because the thin liquid films are about 10 nanometers thick, a calculated molecular description is probably required to get an accurate physical understanding.

The friction and wear that occur between two rubbing surfaces can be greatly reduced by separating the surfaces with a film of lubricating molecules. The key properties that enable a molecular film to provide good lubrication are low shear strength and resistance to penetrating asperities. Nevertheless, a molecular picture of how molecules lubricate has yet to be developed. Molecular dynamics simulations could study the mechanics of adsorbed molecules. Surface forces would be calculated as a function of the surface coverage to see how the lubricant's molecular mechanics change as they go from being isolated on the surface to packed together in a complete monolayer. Atomic-force microscope experiments and scanning tunneling microscope experiments are being developed to study lubricant molecules on well-characterized surfaces. By combining these experimental results with simulations, we could hope to develop a detailed knowledge of how lubricant molecules can be designed to be
adsorbed on surfaces and provide enhanced performance. These studies should also provide fundamental insights into the nature of the chemical bonding of lubricant molecules to surfaces. This type of information should provide the necessary scientific underpinning needed to develop novel lubricant systems.

CHEMICAL DYNAMICS: SURFACE CHEMISTRY, CORROSION, EXPLOSIONS

Background

Computational chemistry is concerned with the structure and properties of different molecular species and the “reaction pathways” that connect them. In the Born-Oppenheimer approximation the electronic and nuclear motions are treated as decoupled. The electronic problem is solved for different arrangements of the nuclei, and the resulting electronic energies, as a function of nuclear position, define a potential energy surface that governs the nuclear motion. Armed with a potential energy surface, it is possible in principle to calculate the rates of dynamical processes, such as chemical reactions or conformational changes. The stationary points on potential energy surfaces have special significance: minima correspond to stable chemical species, while first-order saddle points (stationary points at which the Hessian has one direction of negative curvature) correspond, naively, to the barriers on pathways to chemical reaction.
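
The role of the Hessian in this classification can be made concrete with a minimal numerical sketch. The two-dimensional surface below is purely hypothetical (a double well, not a chemical potential energy surface), chosen only because it has both a minimum and a first-order saddle point; a Newton search locates a stationary point, and the signs of the Hessian eigenvalues classify it.

    import numpy as np

    # Hypothetical model surface V(x, y) = (x**2 - 1)**2 + y**2, with
    # minima at (+/-1, 0) and a first-order saddle point at (0, 0).
    def gradient(p):
        x, y = p
        return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

    def hessian(p):
        x, y = p
        return np.array([[12.0 * x**2 - 4.0, 0.0],
                         [0.0, 2.0]])

    def find_and_classify(p0):
        """Newton-iterate to a nearby stationary point (grad V = 0),
        then classify it by counting negative Hessian eigenvalues."""
        p = np.asarray(p0, dtype=float)
        for _ in range(50):
            p = p - np.linalg.solve(hessian(p), gradient(p))
        n_negative = int(np.sum(np.linalg.eigvalsh(hessian(p)) < 0.0))
        labels = {0: "minimum (stable species)",
                  1: "first-order saddle (naive reaction barrier)"}
        return p, labels.get(n_negative, "higher-order saddle")

    print(find_and_classify([0.9, 0.1]))   # converges to the minimum at (1, 0)
    print(find_and_classify([0.1, 0.1]))   # converges to the saddle at (0, 0)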

Such an analysis results from an essentially “molecular” viewpoint. Applications to problems that involve infinite (or just very large finite) systems require some extension. For periodic systems it is possible to generalize the methods of molecular quantum mechanics to deal with translational symmetry, but this is ineffective or very inefficient in situations such as low-density chemisorption. One alternative, the method of “cluster abstraction,” is to represent the bulk system by a finite (usually relatively small) cluster. The study of heterogeneous catalysis then involves investigating the reaction path and energetics of the substrate molecule and a cluster representation of the catalyst. Sometimes attempts are made to saturate dangling bonds at the periphery of the cluster. A more elaborate approach is to use a cluster model for an explicit representation of part of the bulk and then to embed this finite description in a bulk model. The issues of accuracy of the bulk description and continuity between cluster and bulk description are key elements of embedding techniques.

The calculation of electronic wave functions and properties for molecules and finite clusters is the province of quantum chemistry. Quantum-chemical methods form a spectrum, from ab initio methods, in which there are (in principle) no adjustable parameters, to purely empirical methods, in which a potential energy surface is built up entirely from functional forms with fitted or estimated parameters. Between these extremes are semiempirical methods, in which the electronic motion is described using the same (or approximately the same) equations as the ab initio methods, but parametrized against experimental data or otherwise estimated.

Ab initio methods comprise two distinct classes. In one class, which characterizes the more traditional approach in chemistry (but not physics), an independent-particle model is first solved for the electronic motion (the Hartree-Fock equations). Refinement of this model involves the incorporation of electron correlation, omitted at the Hartree-Fock level, by perturbational or variational methods. The second class comprises density functional methods, in which the electronic structure treatment is based on considering the electron density rather than the wave function. The simplest density functional method assumes a local density model for the exchange correlation potential (i.e., LDA) but this can be refined by approximate methods that account for nonlocal effects. Broadly speaking, elaborate density-functional-based calculations for molecules are similar in computational effort to Hartree-Fock calculations but commonly yield better results
because they account for some electron correlation effects. Traditional methods of accounting for electron correlation in molecules yield higher accuracy but are more expensive. Local density model calculations can be formulated very efficiently by using pseudopotential methods and plane-wave basis sets, at least for some elements.

Armed with a potential energy surface for a given system, it is possible to compute reaction rates for chemical reactions or inelastic scattering (such as changes of the internal quantum state in the colliding systems or scattering of a molecule from a surface). For semiquantitative accuracy it is possible to solve the scattering equations within the framework of classical mechanics, in which the reacting system traces out trajectories in phase space. For small systems it is also possible to solve the scattering equations quantum mechanically, but this becomes very demanding computationally even for systems with three atoms.

A potential energy surface can also be used to follow the time evolution of a system by molecular dynamics (MD), most commonly Newtonian dynamics. In traditional MD calculations the potential is specified empirically in terms of a generalized force field. However, this is inadequate for representing more complicated phenomena such as the forming or breaking of chemical bonds. A further difficulty is that because small (typically femtosecond) timesteps are required, simulating a process for more than a few picoseconds is very demanding, and nanoseconds are at the limit of what can be achieved. Since many chemical phenomena take place on a time scale of microseconds or milliseconds, there is a very considerable gap (as much as six orders of magnitude or more) between what can be achieved and what is desired. In addition to this problem of time scale, there are problems associated with multiple minima on potential surfaces (e.g., global optimization problems such as protein folding). It should be noted that MD simulations of liquids and macromolecules are widespread and very successful, especially in a biological context.
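
The time-scale problem can be made concrete with a minimal sketch of the core MD propagation loop (velocity Verlet); the harmonic force and all parameters below are purely illustrative, not a real force field. At a 1-femtosecond timestep, 1 millisecond of simulated time would require 10¹² force evaluations, compared with the roughly 10⁶ steps (nanoseconds) that are routinely achievable.

    import numpy as np

    def velocity_verlet(x, v, force, mass, dt, n_steps):
        """Propagate Newtonian dynamics with the velocity Verlet integrator."""
        f = force(x)
        for _ in range(n_steps):
            x = x + v * dt + 0.5 * (f / mass) * dt**2    # position update
            f_new = force(x)
            v = v + 0.5 * (f + f_new) / mass * dt        # velocity update
            f = f_new
        return x, v

    # Hypothetical harmonic "force field" in reduced units (illustrative only).
    force = lambda x: -1.0 * x
    print(velocity_verlet(x=1.0, v=0.0, force=force, mass=1.0, dt=0.01,
                          n_steps=1000))

    # The gap quoted above: 1 ms of dynamics at a 1-fs timestep.
    print(1e-3 / 1e-15)   # ~1e12 steps required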

One obvious approach to avoiding the limitations that empirical potentials impose on MD is to incorporate some sort of electronic structure calculation (ideally nonempirical) into the MD calculation, so that the potential energy and forces are calculated explicitly at each geometry required. Such an approach—sometimes called quantum molecular dynamics (QMD)—is limited to relatively fast methods of electronic structure calculation, such as the local density model or semiempirical methods for MD, although the same strategy can be employed with more elaborate methods for classical scattering. The advantage of this approach is that it is unbiased (apart from the choice of electronic structure methodology), but it suffers from the same difficulties with multiple minima and the time scale as conventional molecular dynamics.

With current computing resources (i.e., 100 to 1,000 Mflops capability), it is possible to treat systems of up to 100 atoms using QMD. For calculations sampling only one or a few geometries, it is possible to treat systems of 50 or perhaps 100 atoms using traditional correlated electronic structure methods or density functional methods and larger using semiempirical methods. MD simulations using empirical potentials can be performed for millions of atoms.

Important Problems

Computational Methodology

There are many major unsolved problems in the development and application of computational chemistry methods. One of the most important is the extension of existing methods to much larger systems. This would not only allow treatment of larger molecules and clusters but would alleviate some of the difficulties in embedding approaches, since the abstracted system would be larger and the errors in the embedding procedure could be expected to be less significant. Most of the computational methods in use scale as N³ or worse (up to N⁷ scaling for the most elaborate traditional methods) in principle, although for
very large systems simple electrostatics indicates an N² scaling. Some effort has begun on improving this situation, with a focus on methods scaling to order N. For large periodic systems, fast multipole methods are beginning to see some use. Of course, the ability to treat larger systems magnifies other problems, such as dealing with multiple minima. Hence, attempts to extend methods to treat larger systems should go hand in hand with methods for handling the global optimization problem.

A different methodological problem is related to the accuracy of methods. In addition to increasing the size of the system we can treat, it is important to be able to improve the accuracy of existing methods. Typically, the best quantum chemical calculations can attain an accuracy of a few kilocalories per mole (kcal/mol) in bond energies; this is somewhat outside the 1 kcal/mol accuracy of experimental thermochemistry. If theory is to be able to substitute for experiment in the estimation of molecular properties, significant improvement in accuracy (a factor of at least three) will be required.

At the interface of theoretical methodology and computer science is the development of algorithms and the implementation of different methods. While there is a long and extremely successful history of computational chemistry on scalar and vector computers, the use of parallel machines is still in its infancy. Some of the challenges of parallel computational chemistry can be met by suitable reprogramming of parts of existing codes, but there is little doubt that much of the underlying theoretical formulation has been devised exclusively with serial computation in mind. The most successful parallel computational chemistry methods will require at the least new algorithms and possibly different formulations.

Applications

Any scientific discipline can generate a list of applications that represent important challenges for computational scientists. In the context of materials science, several areas seem ripe for investigation using the techniques of computational chemistry to obtain a microscopic understanding. Perhaps the most important is chemistry at solid surfaces, under which heading the panel includes oxidation and corrosion as well as catalysis. If catalysts or corrosion protection agents are to be designed from first principles, it will be necessary to have a detailed microscopic understanding of the reaction mechanism being catalyzed or of the stepwise mechanism of corrosion. Since there are no empirical potentials that provide an accurate model of chemical reactions, it is difficult to see how such a microscopic understanding can be obtained without reliable first-principles calculations.

Another area with significant demands on reliable calculations is the determination of cluster structures (i.e., the geometries and relative energetics of cluster species) and of the growth of clusters—how the structure adjusts to the addition of one or more atoms. Using LDA-based QMD, this can be studied for clusters up to 50 or more atoms in size but only for certain atoms. Further, the reliability of these results depends entirely on the ability of LDA to describe the breaking and forming of chemical bonds. The correlation effects that arise when bonds are formed or broken are multiconfigurational in nature, and it seems unlikely that LDA can describe these quantitatively.

A different coupling of chemical reactions to bulk material is the propagation of shock waves, generated by a reaction, through materials. Of particular importance are shock waves generated by rapid energetic processes, such as detonation of an explosive, as described in the next subsection. For a qualitative understanding it may be possible to approximate the chemical kinetics by simple models, but for investigation of real materials a more reliable description of the elementary kinetics will be required. Typically, in any complex reaction scheme only a few (if any) of the rate constants can be measured or deduced experimentally, so it is possible for theoretically computed numbers to have considerable impact on combustion or explosion modeling.

Molecular properties other than structure and energetics can be calculated theoretically. For example, the nonlinear optical properties of molecules are determined by electric hyperpolarizabilities, related to the response of a molecule to a static or time-varying applied electric field. The calculation of such quantities can thus play a role in the design or improvement of nonlinear optical materials. Hyperpolarizabilities are strongly influenced by electron correlation and require very elaborate electronic structure calculations. Hartree-Fock values are quite inadequate, and density functional methods do not perform as well for these quantities as they do for other properties.
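
The finite-field route to these quantities can be sketched simply. Writing the energy in a static field F as E(F) = E(0) − μF − (1/2)αF² − (1/6)βF³ − ..., the dipole, polarizability, and first hyperpolarizability are successive field derivatives of the energy, which can be estimated by finite differences of energies computed at a few field strengths. In the sketch below the energy function is a hypothetical stand-in for an electronic structure code, with known μ, α, and β built in so that the recovery can be checked.

    import numpy as np

    # Hypothetical stand-in for an electronic structure code: the energy
    # of a molecule in a static field F, with "exact" values built in.
    MU, ALPHA, BETA = 0.5, 10.0, 80.0
    def energy(F):
        return -MU * F - 0.5 * ALPHA * F**2 - (1.0 / 6.0) * BETA * F**3

    h = 1e-3   # field step; in practice this must balance truncation
               # error against numerical noise in the computed energies
    mu    = -(energy(h) - energy(-h)) / (2 * h)
    alpha = -(energy(h) - 2 * energy(0.0) + energy(-h)) / h**2
    beta  = -(energy(2 * h) - 2 * energy(h)
              + 2 * energy(-h) - energy(-2 * h)) / (2 * h**3)
    print(mu, alpha, beta)   # recovers ~0.5, ~10.0, ~80.0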

Computing Forecast

The implications of increases in computer power discussed in this section assume an increase of two to three orders of magnitude in real performance, resulting largely from scalable parallel architectures.

The pressing need for any research group wishing to take advantage of increased computer power is to adapt its codes to these parallel architectures in ways that can properly exploit scalability. Parallelization of molecular electronic structure codes, whether traditional or based on density functional methods, is often difficult because the programs are very large and were commonly developed by somebody other than the current user(s). Nevertheless, considerable efforts have been made to parallelize most approaches to molecular electronic structure calculations, and there is considerable expertise available in this area. Scalability has so far proven only fair—most of the obvious parallelization strategies scale well only for a limited number of processors (up to, say, 64). In addition, the memory demands of some parallel quantum chemical algorithms can become excessive.

The parallelization of reaction dynamics was explored in some depth in the earliest days of parallel computing. Classical trajectory methods can be treated rather easily, since individual trajectories are independent. Quantum scattering methods are considerably more complicated but again can be parallelized very effectively. The scalability of both approaches to reaction dynamics is good.

Parallel approaches to MD also have received considerable attention. Parallelization is rather straightforward. Nevertheless, the scalability of traditional empirical potential-based MD is only fair; performance is beginning to fall off noticeably around 128 processors. QMD using LDA and plane-wave basis sets has been parallelized by several groups, performing well and showing good scalability.

Assuming that it is possible to realize a factor of 100 to 1,000 increase in computer power in these various methods, we can estimate what the consequences will be for the chemistry that can be attacked. We can also identify other problems that will only be exacerbated by the increase in power.

From the perspective of electronic structure, the scaling of methods is (at the most optimistic) N² to N³, so our hypothesized increase in computer power would allow systems of perhaps 10 times as many atoms to be treated (i.e., 1,000 rather than 100 atoms). While this scaling applies to QMD as well, it is likely that QMD calculations on 1,000-atom systems would run into severe problems with multiple minima. The other consequence of increased computer power is higher accuracy for existing systems. Using traditional methods, it should certainly be possible to achieve “thermochemical accuracy” (1 kcal/mol) fairly routinely for bond energies. It is not clear how increased computer power will influence the accuracy of density functional model calculations, since this will also depend on the development of better functionals.
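
The factor of 10 follows directly from the assumed scaling law: if the computational cost grows as t ∝ N³ at fixed accuracy, then a thousandfold increase in computing capability C allows

    \frac{N_{\mathrm{new}}}{N_{\mathrm{old}}} = \left(\frac{C_{\mathrm{new}}}{C_{\mathrm{old}}}\right)^{1/3} = 1000^{1/3} = 10.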

The impact of such an increase in computer power on reaction dynamics may be more limited. Quantum scattering methods are limited to only a few degrees of freedom, so the increased power may be best used to examine better potential functions or for the inclusion of smaller terms, such as nonadiabatic (Born-Oppenheimer breakdown) effects or molecular fine structure. Classical trajectory methods may benefit from more extensive sampling, but the
real impact is likely to be an increased use of integrated electronic structure and classical dynamics methods, avoiding the approximate representation of the potential energy surface, analogous to QMD.

While classical MD will also benefit from an increase in computer power, the ability to handle much larger systems will be counterbalanced to some extent by problems of global optimization. The second major challenge in MD, the time-scale problem, will not be significantly affected by an increase of two to three orders of magnitude in computer power, since a brute-force approach will still fall short by three to four orders of magnitude. This problem will require alternative physical approaches to make it tractable.

MOLECULAR HYDRODYNAMICS OF DETONATION

Background

The field of chemical detonations is at a crossroads. There are two vastly different pictures emerging of the way in which chemistry proceeds behind the initial shock wave. Both are consistent with the classical theory of detonations by Zel'dovich, von Neumann, and Döring (ZND)—namely, that following shock compression the explosive molecules react and the product molecules expand, represented by a pressure profile whose principal features are an almost instantaneous shock rise; a von Neumann spike, where reactants are heated and compressed; a reaction zone, where reactions occur accompanied by decreasing density and pressure; and a Taylor wave of the expanding product gases. The experimental picture is still shrouded in some mystery, since these rapid events at the shock front are very difficult to resolve on the subnanosecond time scale, though picosecond spectroscopy shows promise of shedding some light on these features in the next few years. It is currently the theory that is most unsettled.

There are two prevailing pictures of the reaction process. The first and most commonly held view is that the directed kinetic energy imparted by the shock compression cannot be used immediately to cause chemical reactions but rather must be fed up a chain of gradually increasing frequencies. It is well known that phonon modes whose frequencies differ significantly take a long time to come to equilibrium through anharmonic coupling. By analogy, proponents of this first picture argue that energy moves up a ladder of frequencies determined by translations, rotations, molecular torsions, and bond-bending modes, followed by bond vibrations—first weak, then intermediate, and, finally, the highest-frequency ones. This process, which relies on the concepts of equilibrium thermalization, is known as “up-pumping” and depends on “multiphonon states” and “doorway modes” to describe the gradual process of getting the kinetic energy from shock compression into thermally excited bonds.

By sharp contrast, the second view, which has been championed primarily by Anatoly Dremin of Russia, is that kinetic energy in the shock rise (he calls it the initial “overheat”) goes immediately into the strongest exothermic bonds, which then break the molecules into large fragments that then gradually break apart into the small-product molecules. The difference could not be more dramatic—gradual equilibration leading to chemical reaction versus nonequilibrium energy transfer to strong bonds and quick initial reactions, followed by gradual decomposition into products. Dremin's view is supported by evidence that there is a marked difference in the chemistry of molecules at an equilibrium state achieved by static rather than dynamic means; that is, under static compression and heating in a diamond cell apparatus, benzene (for example) can be chemically unaffected, while under shock compression to the same final state, decomposition can occur, with the degree of polymerization dependent on the duration of the shock pulse. Thus, shock chemistry is a direct consequence of the nonequilibrium nature of shock compression and is therefore distinct from equilibrium chemistry.

Between these two theoretical views of detonation, and at the same time providing an
alternative to 40 years of experimental insight, there is a new computer approach involving simulation of detonations at the molecular hydrodynamic level. The most promising development uses molecular dynamics to simulate a model diatomic explosive molecule (AB), whose interaction potential includes the inherent capability to react to form product molecules (2AB → AA + BB + energy). These simulations use reactive empirical bond order (REBO) potentials, in which chemically plausible potential surfaces are obtained by modifying the short-range attractive part of a given atom's interaction with another, depending on its local environment: if an atom has no neighbor, a bond can be formed; if it is already bonded, a third party is repelled. When a crystal of these AB molecules is shocked, all three features of the classical ZND theory of detonation are observed. In contrast to laboratory experiments, molecular dynamics simulations are carried out at the right time and distance scales to resolve these rapid events—a sharp shock rise, leading to a von Neumann spike, followed by a Taylor wave expansion. This research, including development of the REBO potentials, has been carried out at the Naval Research Laboratory.
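
The bond-order construction at the heart of such potentials can be sketched in a few lines. The functional forms and parameters below are hypothetical illustrations of the generic idea (in the spirit of Tersoff-Brenner bond-order potentials), not the Naval Research Laboratory model: the attractive part of a Morse-like pair interaction is scaled by a bond order that weakens as an atom acquires neighbors, so a saturated atom effectively repels a third party.

    import numpy as np

    D_E, A, R_E = 5.0, 2.0, 1.2   # hypothetical Morse-like parameters
    R_CUT = 1.8                   # neighbor-counting cutoff

    def pair_energy(r, b):
        """Repulsion minus bond-order-scaled attraction."""
        repulsive = D_E * np.exp(-2.0 * A * (r - R_E))
        attractive = 2.0 * D_E * np.exp(-A * (r - R_E))
        return repulsive - b * attractive

    def bond_order(i, j, pos):
        """Unity for an isolated i-j pair; reduced as atom i gains neighbors."""
        coordination = sum(1.0 for k in range(len(pos))
                           if k not in (i, j)
                           and np.linalg.norm(pos[i] - pos[k]) < R_CUT)
        return (1.0 + coordination) ** -0.5

    pos = np.array([[0.0, 0.0, 0.0],    # atom 0
                    [1.2, 0.0, 0.0],    # atom 1, bonded to atom 0
                    [0.0, 1.2, 0.0]])   # atom 2, a crowding third atom
    r01 = np.linalg.norm(pos[0] - pos[1])
    print(pair_energy(r01, 1.0))                    # isolated pair: full bond
    print(pair_energy(r01, bond_order(0, 1, pos)))  # weakened by the neighbor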

The results shed new light on the detonation process, supporting a significant feature of Dremin's picture. In particular, the diatomic bond almost immediately loses its integrity upon shock compression in the overdriven, full-detonation case, and the bond vibrational energy is not in thermal equilibrium with the translational and rotational degrees of freedom before chemical reaction occurs. Exothermic chemistry can happen very quickly, without recourse to doorway modes or energy ladders of the more conventional picture. Of course, the model is diatomic, so there are no intermediate frequencies to warm up. However, a new triatomic model resembling ozone has been tested with similar results. It is very likely that the inclusion of even more atoms in the explosive molecule will widen the reaction zone, in agreement with some experimental conclusions.

One of the hallmarks of classical hydrodynamic detonation theory, confirmed experimentally, is the existence of a failure diameter below which a detonation will not propagate. The failure diameter is closely related to the length of the reaction zone in the material. The length of the reaction zone and hence the failure diameter can vary tremendously depending on the particular explosive. For example, the reaction zone length in RDX (cyclotrimethylenetrinitramine) is several millimeters and in nitromethane is tens of micrometers, while in PETN (pentaerythritol tetranitrate) it is so small that it has yet to be resolved experimentally using state-of-the-art nanosecond probes. Recent results demonstrate a failure diameter in the model AB solid of about 10 nanometers and indicate that the detonation velocity in the model varies with the radius of the explosive in a manner consistent with the classic theory of reactive flows. This result again confirms the ability of MD to study detonations at atomic resolution while treating enough atoms for long enough periods of time to link the results of the simulations to continuum theory. These results suggest that materials such as condensed-phase ozone and nitric oxide have failure diameters as small as several nanometers.

The role of inhomogeneities in explosives can also be studied by molecular hydrodynamics. Studies have been done on the effect of so-called hot spots on the detonation process. Results indicate that the passage of a shock wave over a large void tends to heat up the system locally by spallation on one side and impact on the other. The role of crystal structure (or lack of it in the case of fluids) also is important in the question of explosive sensitivity, where there is experimental evidence of shear motion exciting chemical reaction in certain favorable packing directions. Closely related to this phenomenon is tribochemistry, where rubbing two surfaces together can cause chemical reactions to occur. Recently, tribochemical reactions have been observed in molecular dynamics simulations of friction between two diamond (111) surfaces
(terminated by hydrogen atoms and floppy hydrocarbon molecules, modeled using an REBO potential). While these are not energetic materials, it is clear that this kind of simulation can shed light on the issue of structural causes of explosive sensitivity.

Challenges for the Future

Challenges in the future involve both molecular complexity and hydrodynamic effects, which will impose larger scales in time and distance upon detonation simulations. For example, in order to see if the most energetic and stiffest bond breaks first upon shock compression—as it seems to for the simplest AB and O₃ molecular models—an REBO potential should be developed for a molecule with more vibrational degrees of freedom (e.g., a generic ABC molecule in which the AB bond is the stiffer, energetic bond). Studies of shock waves in systems of large but unreactive molecules (i.e., molecules lacking pathways to chemical decomposition) confirm the usual up-pumping picture of vibrational energy transfer; it is therefore reasonable to expect that the inclusion of chemistry will change that picture. If the reaction zone and therefore the failure diameter increase as expected, it may become necessary to use scalable parallel computing resources.

As more complex molecules are used in these simulations, there will be even greater challenges and opportunities in the area of designing realistic potentials, as well as the need for increased computer power. Another ambitious undertaking would be to see if the cellular nonplanar structures observed experimentally will also appear in detonation simulations. These can arise from defects in solids and density fluctuations in fluids and from wave interactions due to edge effects. In real systems the distance scales, as in the case of reaction zones, cover many orders of magnitude.

There are two natural spinoffs from this work in detonations that have strong implications for the Navy in the safety of explosives: (1) tribochemistry, that is, reactions at surfaces initiated by friction (work, to date, has been only exploratory in nature), and (2) fracture chemistry induced at crack tips. Both fields of study will push computer resources to the limit, especially for realistic molecular models.

In summary, our understanding of energetic materials is on the threshold of a revolution, in no small way stimulated by computer chemistry experiments at the molecular scale. As these simulations are carried out on even larger and faster computers, more and more realism can be incorporated into them, both in the interaction potentials for modeling more sophisticated molecular species and in the ability to treat more complex flows (such as detonation failure and cellular structures) by expanding the accessible time and distance scales of the simulations.

STRENGTH OF MATERIALS, DEFECTS, HIGH-TEMPERATURE MATERIALS

Background

Stainless steel is not used to bring city utilities such as water or natural gas into houses. Economics requires that the cheapest material that will perform adequately and with a reasonable amortizable lifetime be used. Structural materials must be sufficiently strong and stable, which requires understanding metallurgical problems such as fracture, fatigue, creep, oxidation, corrosion, and embrittlement, to name a few. All these phenomena are exacerbated by enhanced mass transport at elevated temperatures, leading to phase changes, particle diffusion, ablation, and even chemical reaction. The motivation for the use of these materials at high temperatures arises from the greater energy efficiency associated with the higher temperature in the thermodynamic Carnot cycle. In a more mundane sense, material applications in turbines, aircraft jet engines, and nuclear reactors all require high-temperature materials. Even boilers, pressure vessels, and pipes may be included at their temperature extremes. Clearly, all these systems are of primary importance to the Navy
and earned the subject of “high-temperature materials” a place in the Navy's critical technologies. Solutions to these problems by metallurgists involve developing alloys having desirable properties that avoid or subvert failure. Alloys are composed of combinations of several metallic and nonmetallic elements, each adding some desirable aspect or preventing some deleterious phenomenon. Mechanical failure or fracture is controlled by the motion, or rather the lack of motion, of dislocations. The ability of microscopic crystalline grains to slide over one another leads to ductility—the resistance to crack propagation—which is governed by dislocation movement; strength is added by preventing or pinning dislocation movement through the incorporation of foreign atoms or particles into the alloy. Superalloys often include over a dozen constituents to achieve this goal. Other systems include the myriad materials called stainless steels and refractory alloys based on the 4d and 5d group V and group VI transition elements.

The second important issue for high-temperature materials is stability. In the operating environments of these materials, such as gas turbines, jet engines, or nuclear reactors, there are often trace elements such as sulfur, oxygen, sodium, carbon, and hydrogen. At elevated operating temperatures, chemical attack can readily occur, leading to oxidation, hot corrosion, and embrittlement. Coatings, either thin added layers or “home grown” as in chromium oxide coatings for stainless steel, are often used. The surface chemistry of inhibitors, promoters, and catalysts is relevant to these problems.

Computational Issues and Forecast

It is a longstanding but as yet unrealized goal to use theory to aid in materials development, with the idea that theoretical calculations can be performed more quickly and less expensively than experiments. Moreover, a truly fundamental understanding of materials behavior will permit exploration of novel materials and even their design from first principles. To use microalloying to improve NiAl properties, for example, will require a fundamental understanding of the mechanisms of its deformation and fracture and the effects of the alloying on its mechanical behavior. The impact of theory on materials development has been limited, however, owing to the lack of connection between the macroscopic behavior and the theories defining the fundamental properties of materials.

It is convenient to divide the length and time scales appropriate for materials behavior into the following four computational categories: (1) quantum mechanics, (2) classical molecular dynamics of individual atoms, (3) dynamics of multiatom defects such as dislocations, and (4) continuum mechanics (where nonequilibrium defect and atomistic effects are incorporated through empirical constitutive laws). There are computational limitations in each of these areas that lead to gaps between them and prevent integration into a single comprehensive theory. Recent advances in computer hardware and algorithms to extend the size and time scales of computations in each of these areas make it possible to imagine integrating them. For example, massively parallel molecular dynamics calculations are now feasible for tens of millions of atoms, thus extending three-dimensional calculations from length scales of nanometers to tenths of micrometers. The time-scale limitation in molecular dynamics is still measured in tens of thousands of vibrational periods (i.e., tens of nanoseconds), though discussion of “reality” in molecular dynamics is dominated by details of the interatomic potential (i.e., information obtained from shorter length scales). Nevertheless, information from molecular dynamics feeds the next-higher length scale, namely, the dynamics of defects, and, ultimately, continuum mechanics. Features of the atomistic mechanisms of complex flow will make themselves felt at the continuum level, though overlap of the neighboring length scales will provide the earliest contributions to our understanding of the mechanical behavior of materials. The challenge in modeling
mechanical properties of structural materials will be to develop a series of computational tools for the relevant length and time scales and then to integrate them, thereby providing a more complete capability for characterizing the behavior of new materials. However, before we can achieve the goal of designing these new materials, we need to understand strength and fracture properties beginning at the atomistic level.

A recent report entitled “Summary Report: Computational Issues in the Mechanical Behavior of Metals and Intermetallics” (Materials Science and Engineering, Vol. A159, 1992) deals extensively with these issues. Similarly, a recent article entitled “Alloys by Design” (Physics World, November, 1992) presents an assessment of the mechanical properties in terms of simplistic arguments such as bonding-antibonding densities of states separation and directional (d-states) versus metallic bonding (sp-states). What is needed is the ability to accurately and rapidly scan parameter space (composition, structure, lattice constants) and to visualize the results in terms of charge density, densities of states, total energies, or any other parameter to extract models, correlations, and concepts. These can then be used to make predictions and to assist the metallurgist. One can, for example, calculate static 0 K lattice constants and various elastic moduli of candidate structures to correlate with melting temperatures and other physical properties of interest. To calculate temperature effects (thermally assisted motion of atoms), very accurate energy surfaces are needed for a variety of components—again, “fast accurate potentials.” The successes of the embedded atom method have shown that even modestly realistic models can yield satisfactory qualitative and sometimes quantitative results. Various steps have been taken to improve these models. Until total energies can be realistically calculated “on the fly,” simplified model and semiempirical approaches are useful for ionic systems not involving d-electrons (e.g., elastic and thermal properties of MgO at high pressures and temperatures). For d-electrons, insight may be gained by generalized tight-binding methods, first for crystalline systems and eventually for disordered systems containing defects. It may be possible to approach alloys by design if order-N scaling is achieved. New algorithms and approaches—both evolutionary and revolutionary—will be needed to achieve such a goal.
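
The structure of such “fast accurate potentials” can be illustrated with a minimal sketch of an embedded-atom-method total energy; the functional forms and parameters below are hypothetical, chosen only to exhibit the energy expression E = Σᵢ F(ρᵢ) + (1/2) Σᵢ≠ⱼ φ(rᵢⱼ), in which each atom is embedded in the electron density contributed by its neighbors.

    import numpy as np

    def rho(r):                    # density contributed by a neighbor at r
        return np.exp(-2.0 * r)

    def phi(r):                    # pairwise repulsion
        return 0.5 * np.exp(-3.0 * (r - 1.0))

    def embed(density):            # embedding energy; the square-root form
        return -np.sqrt(density)   # echoes second-moment tight-binding ideas

    def eam_energy(positions):
        n = len(positions)
        dists = np.linalg.norm(
            positions[:, None, :] - positions[None, :, :], axis=-1)
        energy = 0.0
        for i in range(n):
            neighbors = [dists[i, j] for j in range(n) if j != i]
            energy += embed(sum(rho(r) for r in neighbors))
            energy += 0.5 * sum(phi(r) for r in neighbors)
        return energy

    # Energy of a small hypothetical cluster (a unit square of atoms):
    positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
    print(eam_energy(positions))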

COMPOSITES, POLYMERS, CERAMICS

Background

Intermediate between the atomic scale of quantum mechanics and the bulk or macroscopic scale of most applications is the mesoscopic regime. There, the entities of interest consist of tens to thousands of atoms with dimensions ranging from nanometers to microns. Although the focus generally is on utilizing these mesoscopic entities as building blocks of macroscopic materials, it should be pointed out that fabrication techniques are capable of creating designed structures of this size. Consequently, there are experimental realizations, and possibly applications, of such structures. Whereas the properties and responses of atomic building blocks are limited, those of the mesoscopic entities are far more diverse and adaptable. Exploiting this wider range of properties enables a very favorable approach to developing materials with given desirable properties, which is to first design the properties of these entities and then assemble them into macroscopic materials. In general, the resulting properties will not be average properties but instead will offer the possibility of taking the best features from each of the constituents. The presence of interlocking interfaces can give greatly enhanced mechanical strength, temperature resistance, or electrical properties in the proper circumstances. Or the material may be designed to be porous as a way of producing ultrafine filters or catalytic substrates.

What are these materials? Composites are constructed from clusters, fibers, or thin layers of diverse materials. When the clusters are
inserted into a macroscopically homogeneous material, they are referred to as X matrix clusters, where X characterizes the nature of the host material. When the materials are oxides or other refractories and the clusters are of micron size, they are known as ceramics. When the building blocks are long-chain molecules, they are polymers. Clusters of nanometer size (tens or hundreds of atoms) often exhibit a strongly peaked distribution of particle size and are referred to as nanophase materials. These materials can achieve even more special properties and are discussed separately.

These complex materials already have many vital applications. For example, they are the basis of all personnel armor (flak jackets, bulletproof vests, and so on) in use today. The new lightweight performance-enhancing structural materials for aircraft, and also automobiles, are composite materials. (It is interesting to note that the Japanese are doing their developmental applications in sports equipment, whereas U.S. manufacturers are concentrating on aerospace and defense developments.) There is a very large effort to utilize ceramic parts in engines to achieve higher performance and efficiency. Specialty ceramic pistons are already in the marketplace but currently only for racing engines. Many adhesives and protective coatings also fall into this class of materials. Paper is a fiber composite. Plastics are polymers. Special magnetic multilayers and also some nanophase inclusions in metal matrices exhibit giant magnetoresistance effects. These can be used as magnetic field detectors, offering definite advantages over the standard pickup coil technology. A large consortium (that includes heavy representation from the Department of Defense) has been created to develop high-temperature fiber composites as an enabling technology for improved gas turbine engines. Many new applications can be expected soon. Advanced ceramics and composites was one of the 22 critical technologies identified by the National Critical Technologies Panel in 1991 (see Report of the National Critical Technologies Panel, U.S. Government Printing Office, Washington, D.C., 1991). This led the Department of Energy to initiate a 10-year R&D effort to develop continuous fiber-reinforced ceramic composites.

Numerous phenomena can conspire to produce the interesting properties of these complex materials. Here, only a few are listed to convey the flavor of the extreme diversity involved. The easiest to understand from our everyday experience is the concept of plying or entanglement. Strength through plying has been applied to everything from automobile tires to carpentry. The effect is the same when the plies are made of mesoscopic entities. In fact, a layperson listening to a discussion of fiber composite material design might well believe the talk was about plywood design, but in three dimensions. And while weaving is not precisely at the molecular level, when dealing with cross-linked polymers, the idea is not far from the mark. Clearly, such complex structures will have a much larger fraction of the solid involved in interfaces. Those interfaces yield structural and electronic behaviors very different from those of any bulk system. The special nature of the electron scattering in the plane of the interfaces, for example, is believed to be responsible for the observed giant magnetoresistance, which makes such systems likely candidates for improved magnetic sensors and computer disk heads. Small particles can exhibit structures, such as icosahedral packing, that cannot be accomplished geometrically in a bulk structure. Some of these are energetically quite favorable, so that very stable materials can be formed by using composite structures. On the other hand, a surrounding matrix or epitaxial layer can help stabilize a phase that exhibits a favorable property but that is only metastable in bulk. These building blocks are not so large that their quantum effects can be completely ignored. For macromolecular systems the only quantum effects of importance may be limited to molecular bonding. But there can be others. More subtly, quantum confinement can yield energy-level spacings that are significant compared to others, especially thermal energy scales, and these can modify all the transport
properties. For metallic materials these confinement effects can help select the magic numbers of atoms in the realized clusters and thus the distribution of the particle sizes observed.

Present Status and Critical Issues

The design of many structural composite materials has employed empirical models that have made effective use of computational resources. However, assessments suggest that future successful applications of composite materials will rely heavily on mechanism-based modeling. As the newer composite materials become ever more complex and nonlinear, the traditional empirical characterizations are becoming increasingly expensive and limited. A recent National Research Council report, Mathematical Research in Materials Science: Opportunities and Perspectives (National Academy Press, Washington, D.C., 1993), reviewed the effective-media developments in this area, and so the focus here is on the atomistic approaches.

Basic theoretical and computational capabilities are beginning to contribute to some useful studies. It can easily be envisioned that, with the current developments in large N (~1,000 atom) electronic structure capabilities in density functional theory, direct calculations will be available for single building block entities, an entity with surrounding environment, and even the simpler composite systems. Of particular interest will be cluster calculations that characterize how the small precipitate is influenced by the surrounding matrix and how it interacts back on that matrix, especially paying attention to the strain fields. The cases worked out using the more fundamental techniques will be useful both as examples in a statistical ensemble and as test cases for further, less fundamental techniques. Force (strength) and response function (electrical, optical, and magnetic) properties are tractable, but temperature effects in the electronic structure are suspect. The most probable approach to extend to larger systems is to utilize semiempirical tight-binding techniques. Those techniques can give some insight into operative mechanisms, but the danger is that they will most likely fail precisely where the material system becomes the most interesting. The most critical issue will be charge transfer.
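
The flavor of such semiempirical tight-binding calculations is captured by a minimal sketch: a single-orbital chain with an on-site energy and a nearest-neighbor hopping matrix element (both parameters hypothetical), diagonalized to give the one-electron levels and the band energy.

    import numpy as np

    # Hypothetical one-orbital-per-atom chain: on-site energy EPS and
    # nearest-neighbor hopping T are the fitted semiempirical parameters.
    N, EPS, T = 10, 0.0, -1.0

    H = np.zeros((N, N))
    for i in range(N):
        H[i, i] = EPS
        if i + 1 < N:
            H[i, i + 1] = H[i + 1, i] = T   # hopping matrix element

    levels = np.linalg.eigvalsh(H)

    # Band (electronic) energy at half filling, two electrons per level:
    band_energy = 2.0 * np.sum(levels[: N // 2])
    print(band_energy)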

Dynamical simulations are now capable of including millions of atoms when the interatomic forces can be characterized sufficiently simply. This is enough to include multiple instances of composite structure and thus study interactions through strain fields and so on. The number is far less, perhaps thousands, when using the somewhat more realistic tight-binding representation to characterize the interactions. Nonetheless, useful complexity can be incorporated. Such techniques will probably prove extremely useful to address a fundamental issue of ceramics: understanding how the strain field interacts with crack growth to (hopefully) suppress it. Research on fracture is an extremely active field, although generally not at the atomic level.

Future Theoretical Developments and Computing Forecast

A hundredfold increase in computer capability will, by itself, only ensure that the applications already discussed will be feasible for carefully chosen “targets of opportunity”; that is, problems will need to be chosen at least partially on the basis of feasibility rather than exclusively on the basis of scientific or practical interest. Increased computer capability will ease the restrictions but not remove them. Researchers will have to carefully abstract essential features for generic study. It can be hoped that progress toward order-N scaling will provide a significant boost in capability. However, it must be expected that such capabilities will depend on special circumstances and not have general applicability, such that problem choice will still be required. Alternately, intermediate precision approaches for electronic structure that are adequate to derive interatomic forces with less
effort should offer an alternative approach to improving problem size and thereby encompass greater reality. Both the generalized tight-binding parameterizations and the modeling of interactions such as those between deforming atomic systems that are being advanced at the Naval Research Laboratory are strong efforts in this direction. Again, these developments will not be universally applicable, but they will expand the sphere of what is possible.

The interface between basic components is an especially important factor. Fortunately, techniques are available for studying interface electronic structure that are realizable forms of the process known as an “embedding” (or “building block” or “architect”) approach. Embedding has long been a promising technique that has been realized in only a very few cases. Its basic intent is to deal only with a limited piece of the problem in detail while characterizing the rest as an environment that needs to be described in much less detail. By using Green's function matching or reduced variational freedom (Harris-Foulkes) away from an interface, it can be expected that the interface can be effectively isolated such that it can be treated with high precision. In this manner it will be possible to study realistic specific examples of interfaces. The examples for study will be established by using molecular dynamics or grand canonical Monte Carlo simulations. With such techniques, the impact of surface interactions can be calculated and characterized.

Further advances in embedding techniques would clearly have immense impact for composite materials. The most promising progress is being made where the electronic structure can be represented in atomic orbitals that can be expressed effectively as atomic charges. Thus, useful information will be made available through calculations for embedded clusters. To deal with interactions with long-range strain fields, and in the area of polymers, an exchange of ideas with molecular biologists (who face very similar problems with equally disparate needs for solutions) would be especially useful.

ALLOY PHASE DIAGRAMS

Obtaining new and useful properties by alloying dates back to the Bronze Age. New alloys are still being created to solve problems in all segments of our economy, ranging from aerospace to electronics to heavy machinery to just about any endeavor that exploits materials properties. It is also necessary to develop replacement alloys for traditional alloys because one or more ingredients are of limited abundance (a cost issue) or obtainable only from politically sensitive or unstable countries (a political issue). However, alloy development is frequently economically unattractive because the traditional methods are quite costly. Despite its importance and history, the development of new alloys is still an empirical procedure based on making incremental improvements exploiting a large body of past experience. There has been relatively little guidance from the application of fundamental principles. Thus, great leaps forward are infrequent and more often than not result from accidental observations made when exploring a different problem.

The basic information necessary for systematic alloy development is the phase diagram specifying what crystal structures will occur in which temperature range for a given mixture of the constituents. There are other issues such as internal disorder, stresses, and microstructure, but the phase diagram is the starting point. Considerable progress has been made using empirical models. It should be recognized, however, that these empirical models can be reliably used only to interpolate between known data. Extrapolation to radically different situations is risky. First-principles calculations have exhibited some initial successes and seem capable of at least providing the basis to explain trends. (This is an important capability; for example, it is crucial to know how well Gd can serve as a surrogate for Pu since experiments can be performed only on the surrogate. Less dramatic examples also abound.) However, for the actual prediction of phase diagrams, there is some question about whether the base approximations are adequate to
provide the information to the accuracy needed. An alternate hybrid scheme might be to calculate the empirical model parameters from first-principles approaches. That is far less advanced because the definition of the empirical model parameters hides detail that must be considered when approached from the more fundamental side.

Although costly, binary and ternary (two- and three-element) phase diagrams can be, and often are, determined experimentally. Most interesting modern alloys, however, are a stew of elements blended to optimize multiple properties of the material. Not only are these alloy systems hard to characterize, but it is even difficult to represent the data in an insightful way. The ability to calculate phase diagram information from first principles would aid by guiding experiments in specific cases and by suggesting critical coordinates with which to explore the data.

The state of the art in the context of alloy phase diagram computation consists of three principal approaches. All three involve model Hamiltonians of sufficient simplicity to allow computation of finite-temperature statistical mechanics.

The coherent potential approximation (CPA) replaces the ensemble of different atomic species on a lattice by a fictitious entity that scatters in an average manner consistent with the overall composition. The CPA is normally used to treat totally random alloys without local order or clustering. Attempts to incorporate local atomic relaxations, charge transfer, or clustering have just begun and remain to be fully tested. The CPA is well suited to the quantum mechanical description of the total-energy variation associated with compositional variations of chemical species on a common crystalline lattice, order-disorder, and chemical mixing energies, for example.

A second approach to the calculation of configurational energies is to use a finite set of ordered compounds, AₙBₘ, to determine the parameters in the so-called cluster expansion. This expansion is exact and fairly rapidly converging. In this approach the total energy of a particular chemical configuration is expressed as a sum over contributions associated with the various local substructures, c (pairs, triangles, tetrahedra, and so on):

    E = \sum_{c} \varepsilon(c)\, P(c), \qquad (2.1)

where P(c) is the probability of occurrence of the particular local configuration, c, in the alloy. Calculated total energies for ordered configurations (for which P(c) is known) permit Eq. 2.1 to be solved for the expansion parameters ε(c), which can then be used to describe complex disordered configurations at finite temperatures.
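
In practice the parameters ε(c) are obtained by solving, or least-squares fitting, Eq. 2.1 against calculated total energies of ordered structures. A minimal sketch, with entirely hypothetical correlation vectors P(c) and energies:

    import numpy as np

    # Rows of P are the probability/correlation vectors P(c) of known
    # ordered compounds; E holds their calculated total energies.
    # All numbers here are hypothetical illustrations.
    P = np.array([[1.0,  1.0,  1.0],     # e.g., pure A
                  [1.0, -1.0,  1.0],     # e.g., ordered AB
                  [1.0,  0.0, -1.0],     # e.g., ordered A3B
                  [1.0, -0.5,  0.25]])   # another ordered structure
    E = np.array([-1.00, -1.35, -1.10, -1.28])

    eps, *_ = np.linalg.lstsq(P, E, rcond=None)   # solve E = P @ eps

    # eps now predicts the energy of an arbitrary (e.g., disordered)
    # configuration from its own P(c) vector:
    P_disordered = np.array([1.0, 0.0, 0.0])
    print(P_disordered @ eps)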

The CPA and cluster-expansion approaches have proven quite effective for compositional variations on a common crystalline lattice, so-called coherent alloys, but leave open the much greater challenge of finite-temperature systems exhibiting general geometrical variations and concomitant elastic effects. This has been addressed by using Monte Carlo treatments of models consisting of pair- and three-body classical potentials. This approach has been applied to bulk semiconductor alloys as well as epitaxial growth in these systems. In metallic systems, attempted applications based on electronic structure schemes are still very preliminary. Monte Carlo techniques can be particularly useful for consideration of more complex alloys such as ternary and quaternary systems.
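
A minimal sketch of the Metropolis Monte Carlo idea, reduced to a binary alloy on a hypothetical one-dimensional ring with a single nearest-neighbor pair energy (real studies use three-dimensional lattices and pair plus three-body potentials); composition is conserved by swapping unlike atoms:

    import numpy as np

    rng = np.random.default_rng(0)

    N, V_AB, KT = 64, -0.1, 0.05          # sites, A-B pair energy, temperature
    spins = rng.choice([-1, 1], size=N)   # +1 = A atom, -1 = B atom

    def site_energy(s, i):
        # each unlike (A-B) nearest-neighbor pair contributes V_AB
        left, right = s[(i - 1) % N], s[(i + 1) % N]
        return V_AB * ((s[i] != left) + (s[i] != right))

    for _ in range(100_000):
        i, j = rng.integers(N, size=2)
        if spins[i] == spins[j]:
            continue                      # swap moves exchange unlike atoms
        trial = spins.copy()
        trial[i], trial[j] = trial[j], trial[i]
        dE = (site_energy(trial, i) + site_energy(trial, j)
              - site_energy(spins, i) - site_energy(spins, j))
        if dE <= 0 or rng.random() < np.exp(-dE / KT):
            spins = trial                 # Metropolis acceptance

    # With V_AB < 0, unlike neighbors are favored (ordering tendency):
    print("fraction of unlike bonds:", np.mean(spins != np.roll(spins, 1)))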

The calculation of alloy phase diagrams is an area where extensive databases are an important aspect of the problem. Given increased computational resources, this would be a respectable endeavor. Beyond this, many of the desired improvements are limited by the need for conceptual advances. Within the CPA, a trivial parallelism achieved by spreading Brillouin zone k-points across the nodes can yield a remarkable advantage. More complicated, but feasible, is the incorporation of local cluster behavior. Monte Carlo schemes should also benefit significantly from parallelism.

MAGNETIC MATERIALS

Magnetic effects in condensed matter have been, and remain, a fertile source of both intellectual and technological interest. The cuprate high-Tc superconductors illustrate both. Computationally tractable theories of magnetism have, for the most part, been based on the local-spin-density (LSD) approximation, which has proven adequate for both qualitative understanding and quantitative predictions for many magnetic systems. Unfortunately, the list of systems for which the LSD approximation is in serious error is quite lengthy. These include transition-metal oxides, the cuprate superconductors, and systems involving localized f electrons, such as rare-earth compounds. It is also unfortunate that, unlike the theory of the optical properties, there is no practical extension of the LSD approximation analogous to the GW theory of excitations. Nonetheless, calculations based on the LSD approximation have been immensely useful in elucidating the origin of magnetic effects in metallic systems. There is much more useful scientific work to be done in this context and even more engineering and materials design work.

Magnetic effects can be categorized according to the relative importance of the spin-orbit interaction. Another important classification reflects the presence or absence of localized f electrons. The following are discussed below: (1) itinerant transition-metal systems, (2) itinerant systems for which spin-orbit effects are critical, and (3) systems containing localized f electrons.

Itinerant Transition-Metal Magnets

Calculations based on the LSD approximation have been remarkably successful for itinerant transition-metal magnets; the magnetic properties of even complex compounds are amenable to quantitative prediction. Opportunities include the analysis of thermodynamic properties, where the principal line of attack has been the use of LSD-based calculations to obtain values for the parameters appearing in phenomenological models. One example of this approach is the calculation of the total energies of ferromagnetically and antiferromagnetically ordered systems for the purpose of estimating Heisenberg exchange parameters, as in the worked example below.
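
A worked example of the mapping just described, under an assumed convention: a classical Heisenberg model H = -J Σ over nearest-neighbor bonds of e_i · e_j with unit spin vectors on a bipartite lattice with z nearest neighbors, so that E_FM = -Jz/2 and E_AFM = +Jz/2 per atom and J = (E_AFM - E_FM)/z. Conventions (factors of 2, spin normalization) differ between authors, and the total energies below are hypothetical numbers:

```python
# Worked example of extracting a Heisenberg exchange parameter from
# ferromagnetic and antiferromagnetic total energies. Convention assumed:
# H = -J * sum_<ij> e_i . e_j with unit spins on a bipartite lattice with
# z nearest neighbors, so E_FM = -J*z/2 and E_AFM = +J*z/2 per atom.
# The energies below are hypothetical, not results from this report.
E_FM  = -8.532    # total energy, eV/atom, ferromagnetic order (hypothetical)
E_AFM = -8.516    # total energy, eV/atom, antiferromagnetic order (hypothetical)
z = 8             # nearest-neighbor coordination, bcc

J = (E_AFM - E_FM) / z
print(f"estimated exchange J = {1000 * J:.1f} meV")   # 2.0 meV for these inputs
```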

The LSD approximation has been particularly useful in elucidating the strong coupling between the presence of spin polarization and atomic volume. The most dramatic manifestation of this coupling is the INVAR effect (anomalously low thermal expansion), but less dramatic effects are quite common, including in the elastic properties of the elemental transition metals.

The reliability of the LSD approximation for itinerant transition-metal systems makes it a useful tool for computer-aided materials design. Technologies near the interface of science and engineering include high-magnetization alloys, magnetic multilayers, and magnetooptics.

Thermodynamics of Magnetic Order

The development, within the LSD conceptual framework, of the theory of noncollinear magnetism has offered two types of benefits. First, the theory has shown that the relatively small set of predominantly manganese-based compounds that exhibit noncollinear magnetic order at low temperatures can be understood in terms of conventional itinerant magnetism. More importantly, the theory can be used to study the energetics of spin configurations arising in conventional collinear magnets at finite temperatures, for example, the loss of long-range magnetic order at the Curie temperature; a classical-spin version of this program is sketched below. The further development of this type of analysis, for both scientific and technological objectives, represents an opportunity for this field.
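
A sketch of that finite-temperature program: Metropolis sampling of classical Heisenberg spin configurations on a simple-cubic lattice, watching the magnetization collapse near the Curie temperature (k_B T_c ≈ 1.44 J for this model). The lattice size, sweep counts, and temperatures are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Metropolis sampling of classical Heisenberg spins, H = -J * sum over
# nearest-neighbor bonds of S_i . S_j, on a simple-cubic lattice (unit
# spins). The exchange J could come from total-energy differences as in
# the previous example; all parameters here are arbitrary illustrations.
L, J = 8, 1.0
S = rng.normal(size=(L, L, L, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)

def local_field(S, i, j, k):
    """Exchange field from the six nearest neighbors of site (i, j, k)."""
    return J * (S[(i+1) % L, j, k] + S[(i-1) % L, j, k] +
                S[i, (j+1) % L, k] + S[i, (j-1) % L, k] +
                S[i, j, (k+1) % L] + S[i, j, (k-1) % L])

for T in (0.8, 1.4, 2.0):                   # k_B*T in units of J; T_c ~ 1.44 J
    for sweep in range(200):
        for _ in range(L**3):
            i, j, k = rng.integers(L, size=3)
            new = rng.normal(size=3)
            new /= np.linalg.norm(new)       # propose a random new direction
            dE = -np.dot(new - S[i, j, k], local_field(S, i, j, k))
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                S[i, j, k] = new
    m = np.linalg.norm(S.mean(axis=(0, 1, 2)))
    print(f"T/J = {T:.1f}   |m| = {m:.3f}")  # long-range order collapses above T_c
```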

Spin-Orbit Effects

Two entire classes of magnetic effects require a significant spin-orbit coupling. The first is magnetic anisotropy, alignment, and coercivity. The second is magnetooptics. Both are vitally important to various technologies.

The spatial direction of the magnetization and the energy required to alter it are the essence of magnetic recording and of permanent magnets, such as those in electric motors. Magnetooptics is the basis for many scientific probes, such as the surface magnetooptical Kerr effect, and for writable optical disk storage. In the case of magnetooptics, there is substantial evidence that the LSD approximation contains the required physics: parameter-free calculations are quite successful in predicting the Kerr rotation of modestly complex intermetallic compounds.

In the case of magnetic alignment, there is encouraging evidence that these effects are within the reach of LSD-based calculations, although the importance of orbital magnetism to these effects is not well understood. In addition, extracting the desired, but quantitatively small, total-energy differences from computational noise has proven very difficult. There has been recent progress on this front, however, and continued progress represents a near-term opportunity.

Localized f Electrons

Until recently, the analysis of rare-earth and actinide systems containing localized f electrons was thought to lie beyond the reach of LSD-based calculations. A relatively straightforward extension of the theory by Brooks, Johansson, and co-workers (M.S.S. Brooks, Physica 130B:6, 1985; O. Eriksson, M.S.S. Brooks, and B. Johansson, Physical Review B 41:7311, 1990) has proven very successful. While this advance permits the treatment of orbital magnetism and of effects related to magnetic anisotropy, it does not extend to dynamical phenomena, such as the Kondo effect. The exploitation of this advance, in the context of hard magnets for example, is an opportunity for this field.

Highly Correlated Systems

The cuprate high-Tc superconductors are perhaps the most dramatic illustration of the limitations of the LSD approximation; the transition-metal oxides are another highly visible example. Even here, however, calculations based on the LSD approximation have been quite useful. The guide they provide to the interpretation of angle-resolved photoemission has been particularly helpful, in both NiO and the high-Tc materials. The use of these calculations to estimate parameter values for more phenomenological theories also has been valuable and represents an ongoing opportunity.

Future Prospects and Opportunities

As the discussion above indicates, a theoretical extension of the LSD approximation remains an outstanding need of, as well as an opportunity for, the field. Nonetheless, the discussion also indicates that calculations based on the LSD approximation are often of considerable conceptual and practical utility. Exploitation of this theoretical framework as a guide to the design and development of new magnetic materials is particularly promising; the fact that prominent suppliers of materials-design software plan to offer commercial packages of this type is a measure of the practical opportunity. Because LSD-based calculations are parameter free, the computation of chemical trends is often particularly reliable. Finally, the application of LSD-based theory to increasingly complex systems of technological interest ranks among the most straightforward ways to exploit anticipated increases in computational power.

STRONGLY INTERACTING SYSTEMS

Background

Electron correlation effects play a critical role in certain classes of materials, such as magnets and superconductors. The ab initio treatment of electron correlation effects in real materials remains one of the most challenging tasks, both conceptually and numerically. Prototypical examples of strongly correlated systems include high-Tc superconductors, transition-metal oxides, f-electron systems, and superfluid 3He and 4He. It is widely accepted that superconductivity cannot be explained in terms of independent particles and that particle-particle interactions must form a central part of any explanation. In ordinary superconductors the Bardeen-Cooper-Schrieffer (BCS) theory produces outstanding results, and its success led to a proliferation of mean-field theories for other phase transitions. The coherence length characterizing the width of the superconductor-normal interface is large, 10² to 10⁴ times the lattice constant. This large coherence length means that fluctuations between the superconducting and normal phases are averaged over large volumes, so that a mean-field theory is valid. For all other phase transitions, the coherence length is comparable to the lattice constant, and the fluctuations around any mean-field solution are so large that mean-field theory is invalid. Accordingly, the treatment of these other phase transitions has required much more explicit many-body treatments. The example below makes the BCS coherence-length estimate concrete.
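
A numerical illustration of the weak-coupling BCS estimates behind these statements: the zero-temperature gap equation gives Δ = ħω_D / sinh(1/N(0)V), and the Pippard coherence length ξ₀ = ħv_F/(πΔ) then comes out at hundreds of lattice constants for typical elemental-superconductor parameters. The numbers below are textbook-scale assumptions, not values from this report:

```python
import numpy as np

# Weak-coupling BCS estimates. The T = 0 gap equation
#   1 = N(0)V * integral_0^{w_D} d(eps) / sqrt(eps**2 + Delta**2)
# gives Delta = w_D / sinh(1/(N(0)V)), and the Pippard coherence length
# is xi_0 = hbar * v_F / (pi * Delta). All parameter values are
# textbook-scale assumptions for an elemental superconductor.
hbar = 1.0546e-34         # J*s
kB   = 1.3807e-23         # J/K
NV   = 0.25               # dimensionless coupling N(0)V
theta_D = 300.0           # Debye temperature, K
vF   = 1.0e6              # Fermi velocity, m/s
a    = 3.0e-10            # lattice constant, m

wD = kB * theta_D                       # Debye energy, J
Delta = wD / np.sinh(1.0 / NV)          # T = 0 gap, J
xi0 = hbar * vF / (np.pi * Delta)       # coherence length, m

print(f"Delta / k_B = {Delta / kB:.1f} K")
print(f"xi_0 = {xi0:.2e} m = {xi0 / a:.0f} lattice constants")
```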

An archetype of a strongly interacting model system is the Hubbard model on a lattice, in which the on-site Coulomb interaction is comparable to the hopping energy between sites. Given the apparent simplicity of this model, it is perhaps surprising that there are no generally accepted results for it except in one dimension or, more recently, in infinite dimensions. The study of this model has engaged a large number of theoreticians using both the best numerical procedures and analytic approaches. Numerically, the growth in the number of configurations with the size of the lattice is comparable to the situation for the configuration-interaction method in atoms and molecules; the exact-diagonalization sketch below shows the smallest case. To date, the systems studied are so small that there is no agreement on the ground states or thermodynamically stable states of the Hubbard model as a function of the density of electrons. Nonetheless, the model continues to attract attention because practically every important "real" strongly interacting system has been "mapped" onto the Hubbard model, albeit with uncertain accuracy. These systems include ferromagnets and antiferromagnets, cuprate superconductors, heavy-fermion systems, and even helium. There is growing recognition that these systems may require more realistic models. Examples of the necessary complications are (1) long-range Coulomb and exchange interactions, (2) many orbitals at each site, and (3) crystal-field and spin-orbit splitting of these orbitals.
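
The smallest instance can be solved on paper: the two-site Hubbard model at half filling has a four-state Hilbert space and ground-state energy (U - √(U² + 16t²))/2. The sketch below reproduces that number; for N sites the basis grows combinatorially, which is precisely what limits lattice sizes in practice:

```python
import numpy as np

# Exact diagonalization of the two-site Hubbard model at half filling
# (one up and one down electron): hopping t, on-site repulsion U, basis
# {|ud,0>, |0,ud>, |u,d>, |d,u>}. The analytic ground-state energy is
# (U - sqrt(U**2 + 16 t**2)) / 2, reproduced numerically below.
t, U = 1.0, 4.0
H = np.array([[ U,  0, -t,  t],
              [ 0,  U, -t,  t],
              [-t, -t,  0,  0],
              [ t,  t,  0,  0]], dtype=float)

E = np.linalg.eigvalsh(H)
E0_analytic = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))
print("spectrum:       ", np.round(E, 4))
print("ground state:   ", round(E[0], 4))
print("analytic result:", round(E0_analytic, 4))
```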

Critical Issues

Failure to solve the Hubbard model has not dampened enthusiasm for it; on the contrary, interest is higher than at any point in its history, and the model has been modified and extended in a variety of ways. It is possible to integrate over the higher energies of the Hubbard model to find an effective spin interaction, J, between spins on nearest neighbors. For the copper-oxide planes of the cuprate superconductors, a model has been developed that involves not only the interaction between the spins on the coppers but also the hopping of holes (sited at the coppers in the model but, of course, extending onto the adjacent oxygens). This so-called t-J model is a flourishing subfield of its own, whether or not it has any relevance to cuprate superconductors. Another direction has been to broaden the original Hubbard model. The modifications include (1) a single level, the others having been removed to higher energies by crystal-field splitting; (2) hopping between oxygens in addition to hopping between copper and oxygen; (3) Hubbard interactions not only on the copper but also on the oxygen; and (4) a Hubbard-type interaction between adjacent coppers and oxygens. Of course, there is little chance of solving this much more complicated model with its vastly enlarged parameter space. Nonetheless, such "realistic" models have engendered considerable interest.

In some systems a coupling of electrons to phonons can simulate negative values of U in the Hubbard model. Such models tend to bind two fermions on a site to form a boson; these so-called real pairs behave very much like bosons and may then form a superfluid. Nonetheless, the bulk of interest remains in the original model. An alternate use is to apply the original Hubbard model to bosons instead of fermions. In that case the interaction U suppresses multiple occupancy of sites and, in the infinite-U limit, gives the "hard-core" boson model. Such models have been used to describe superfluid 4He. Furthermore, the addition of disorder leads to the so-called dirty-boson model, used to describe, for example, the phenomenon in which one class of cuprate superconductors becomes either superconducting or insulating as the temperature is lowered, depending on the degree of disorder.

A principal method of attack on all such Hamiltonians is the quantum Monte Carlo (QMC) method, which is based on a stochastic approach. Unfortunately, the effort to construct a probability distribution runs into a considerable obstacle in the Hubbard model: for some moves in the stochastic walk the weight is negative, precluding its interpretation as a probability. This so-called fermion sign problem has limited the application of QMC calculations, especially since it gets exponentially worse as the temperature is lowered; the toy example below shows how it destroys statistical accuracy. So far, every proposed cure for the fermion sign problem on lattice models has been either unsuccessful or so difficult to implement that it has not been tried.
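
A toy illustration of why negative weights are fatal: sampling |w| and reweighting by the sign s gives ⟨O⟩ = ⟨Os⟩/⟨s⟩, with statistical error growing like 1/⟨s⟩. The exponential decay of the average sign assumed below, ⟨s⟩ ~ exp(-βNΔf), is the generic behavior; β, Δf, and the sample count are arbitrary illustrations, not results for a specific model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sampling |w| and reweighting by the sign s: the error of the sign
# estimator is O(1/sqrt(M)) while <s> itself decays like exp(-beta*N*df),
# so beyond modest N the signal is buried in noise. The decay law and all
# constants are generic assumptions for illustration only.
beta, df, M = 2.0, 0.5, 10_000
for N in (2, 4, 8, 16):
    avg_sign = np.exp(-beta * N * df)            # assumed exponential decay
    signs = rng.choice([1.0, -1.0], size=M,
                       p=[(1 + avg_sign) / 2, (1 - avg_sign) / 2])
    est, err = signs.mean(), signs.std() / np.sqrt(M)
    print(f"N = {N:2d}   <s> = {avg_sign:.2e}   measured = {est:+.4f} +/- {err:.4f}")
```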

Another critical problem is the first-principles computation of the model parameters, such as the on-site Coulomb interaction, let alone accurate estimates of what is being omitted in such a simple model. There have been pioneering studies using both quantum chemistry methods and local-density approximations, but these have clearly demonstrated the limitations of both. The path from quantum-chemical techniques to the construction of accurate models remains an open question.

In addition to the lattice models described above, Monte Carlo methods (either variational or fixed-node diffusion) have been successfully applied to the treatment of real materials with long-range Coulomb interactions, such as covalent semiconductors and metals. Extension of these methods to highly correlated systems is currently an active area of research. Another direction is a hybrid approach combining a Hubbard-type interaction with a standard local-spin-density functional Hamiltonian; this semiempirical approach allows interpretation of spectroscopic experiments.

Computational Forecast

Monte Carlo methods are intrinsically well suited to take advantage of the unprecedented increases in computing power afforded by emerging massively parallel processing (MPP) environments. The computation time for path-integral Monte Carlo calculations scales as N³L, where N is the number of sites and L is the number of "time slices" in the computation. Going to lower temperatures increases L and necessitates larger N in order to see longer-range correlations, as the estimate below illustrates. In practice, however, the fermion sign problem has prevented large-scale application to low-temperature systems.
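
A back-of-envelope use of that N³L scaling (the constant and baseline sizes below are arbitrary): halving the temperature doubles L, and doubling the linear size of a two-dimensional lattice quadruples N, so the cost grows by a factor of 2 × 4³ = 128:

```python
# Cost model for path-integral Monte Carlo, cost = C * N**3 * L, with an
# arbitrary constant C and arbitrary baseline sizes. Halving T doubles L;
# doubling the linear size of a 2-D lattice takes N -> 4N.
def cost(N, L, C=1.0):
    return C * N**3 * L

base = cost(N=64, L=32)
print("cost ratio:", cost(N=256, L=64) / base)   # -> 128.0
```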

At present, there is essentially no agreement on any feature of the ground state or the phase diagram of even the simplest Hubbard model as a function of the concentration of electrons. At low temperatures, the ill-conditioned nature of the matrices requires extensive computation even for modest system sizes. These computational demands are reminiscent of lattice quantum chromodynamics, which has spawned both the extensive use of parallel computers and the development of special computer architectures to achieve multiteraflop speeds. The variational and fixed-node methods do not suffer from the sign problem but provide only a variational solution. In contrast with path-integral methods, these calculations scale as N³, where N is the number of electrons, and simulations with N ≈ 1,000 electrons have been performed.
