Chapter 2

Challenges in Materials Research for the Remainder of the Century

NEW MATERIALS

Background

Over the ages, new materials have been a dominant factor in driving advances in materials usage and materials technologies. Recent developments in materials science and condensed-matter physics are no exception. In the past decade, much of the progress in fundamental knowledge and technological applications in this area was related to unexpected discoveries of new materials with novel and desirable properties. Examples are numerous. Recent new materials that have generated much excitement include high-temperature copper oxide superconductors, fullerenes and fullerides, nanophase materials, superhard materials, semiconductor quantum wells and superlattices, magnetic superlattices, and other artificial structures such as quantum dots and quantum wires. Many of these materials have already made the transition from being objects of basic research interest to objects of practical application. Quantum well lasers, high-Tc superconductor devices, and magnetic superlattices with giant magnetoresistance in recording heads are examples.

In the study of these new materials, theory and computation have played an important role in unraveling their properties. The theoretical approaches range from very empirical schemes to ab initio methods that require no experimental input. In some cases, predictions of new materials were made and subsequently confirmed by measurements. Researchers are thus in the early stages of using theory and computation to “design” materials.

Present Status and Critical Issues

Understanding new materials requires fundamental knowledge of their structure; phase stability; and various structural, electronic, vibrational, and mechanical properties. These quantities are intimately related to the electronic structure of the solid. Until the 1970s, many of these properties could be studied only empirically or by using model calculations. In general, theoretical predictions for specific real materials were lacking.

That situation has changed dramatically in the past decade. In particular, there have been many important advances in electronic structure theory and algorithmic development for the study of real materials. Among these advances are highly efficient methods for local density approximation (LDA) total energy calculations, new pseudopotentials, a first-principles molecular dynamics (e.g., Car-Parrinello) approach for dynamical and thermodynamical properties, realistic tight-binding total energy schemes, a first-principles method for electronic excitation (quasi-particle) energies, and quantum Monte Carlo methods for correlated electron effects. These theoretical advances, together with dramatic improvements in computing power, have permitted, in the past few years, the computation and prediction of a broad range of properties of real materials of increasing complexity.

The structural, vibrational, mechanical, and other ground-state properties of systems containing up to several hundred atoms can now be computed using the LDA. In these calculations the development of iterative schemes, together with new soft-core pseudopotentials, has made possible the study of very large systems. Select examples of recent successes include the unraveling of the normal-state properties of high-Tc superconducting oxides (e.g., structural parameters, phonon spectra, Fermi surfaces); prediction of new phases of materials under high pressures (e.g., superconductivity in the simple hexagonal phase of Si); prediction of a superhard material (C3N4); determination of the structure and properties of surfaces, interfaces, and clusters; and calculation of the structure and properties of the fullerenes and fullerides.
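
For orientation, the quantity at the heart of these LDA methods is the density functional total energy, which is minimized with respect to the electron density n(r) and the atomic positions. In schematic form (standard notation, atomic units; textbook background rather than the formulation of any particular scheme cited here):

```latex
E[n] = T_s[n] + \int v_{\mathrm{ext}}(\mathbf{r})\, n(\mathbf{r})\, d\mathbf{r}
     + \frac{1}{2} \iint \frac{n(\mathbf{r})\, n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d\mathbf{r}\, d\mathbf{r}'
     + \int \varepsilon_{\mathrm{xc}}\bigl(n(\mathbf{r})\bigr)\, n(\mathbf{r})\, d\mathbf{r}
```

Here T_s is the kinetic energy of the noninteracting Kohn-Sham electrons, v_ext is the external (ionic or pseudo-) potential, the double integral is the Hartree energy, and the last term embodies the LDA itself: the exchange-correlation energy per electron at each point is taken to be that of a homogeneous electron gas at the local density.
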
The Car-Parrinello-type ab initio molecular dynamics approach and the less accurate but less time-consuming tight-binding molecular dynamics schemes have made possible the quantum mechanical calculation of the dynamical and thermodynamical properties of systems in the solid, liquid, and gaseous states. The first-principles quasi-particle approach based on the GW approximation (which evaluates the electron self-energy to first order in the electron Green's function G and the screened Coulomb interaction W) has been used to calculate electron excitation energies in solids that, in turn, are used for the quantitative interpretation of spectroscopic measurements. The excitation spectra of crystals, surfaces, and materials as complex as the C60 fullerites have been computed. Quantum Monte Carlo methods have yielded cohesive properties of unprecedented accuracy for covalent crystals and have provided the means to study highly correlated electron systems, such as two-dimensional electrons at semiconductor heterojunctions in a strong magnetic field.

Very accurate determination of the properties of specific materials by itself is not sufficient, however, for the general goal of materials by design. A critical issue is how one can intelligently sample the vast phase space of compositions, formed by combining different elements from across the periodic table in various proportions, to arrive at a material with specific desired properties. Even with computers several orders of magnitude more powerful than existing machines, it would not be possible to sample this phase space adequately. Together with accurate methods, there must be guiding principles (e.g., those based on structure-property relationships) in the theoretical search for new materials. The discovery of general principles regarding materials behavior is thus as important in the theoretical design of materials as are accurate computational methods.

Another important related issue is the process of going from conception to the final synthesis of a useful new material. Because of the large phase space discussed above, it is unlikely that a material of optimal desired properties will be obtained on the first try. Many iterations involving theoretical prediction, experimental synthesis, and characterization are needed. A major challenge then is to find ways to accelerate convergence in this iterative process. A case in point is the recent provocative and potentially useful prediction of a new material, carbon nitride, which rivals or exceeds the hardness of diamond. In 1989, based on an empirical idea and through first-principles total energy calculations, a new compound, C3N4, was predicted to be stable and to have a bulk modulus comparable to that of diamond. Its structural and electronic properties were predicted by using the LDA. Subsequent to the theoretical work, several groups proceeded to synthesize and characterize this possible material in the laboratory. In 1992 independent experimental evidence lending support to the theoretical prediction was obtained by three groups. This case provides a concrete example of how ideas, computations, and experimental characterization may work together in the design of materials of intrinsic scientific interest and potential utility.

Future Theoretical Developments and Computing Forecast

In the past several years much effort has been devoted to developing algorithms to extend the applicability of the above new methods to ever larger systems. The LDA Car-Parrinello-type calculations have been successfully implemented on scalable parallel machines. Recent calculations of semiconductor systems using this method have reached the size of supercells containing ~1,000 atoms.
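
A back-of-envelope estimate shows why algorithmic scaling, taken up below, matters as much as raw machine speed. The sketch assumes only that the cost of a calculation grows as N^p for N atoms; the numbers are illustrative, not benchmarks of any actual code.

```python
# Illustrative only: if cost grows as N**p, a machine that is
# `speedup` times more powerful extends the reachable system
# size N by a factor of speedup**(1/p).
def reachable_size(n_now, speedup, p):
    """Largest N affordable after a `speedup`-fold gain in power."""
    return n_now * speedup ** (1.0 / p)

n_now = 1_000              # ~1,000-atom supercells, as in the text
for p in (3, 1):           # conventional O(N^3) LDA vs. hoped-for O(N)
    n_new = reachable_size(n_now, speedup=100, p=p)
    print(f"O(N^{p}): 100x more power -> ~{n_new:,.0f} atoms")
# O(N^3): ~4,642 atoms;  O(N): ~100,000 atoms
```

Under cubic scaling, two orders of magnitude in computer power buys less than a factor of five in system size, which is why the linear-scaling methods described next are so attractive.
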

Methods for calculating free energies using quantum molecular dynamics are being developed to study melting and other phase transitions. The quantum Monte Carlo approaches are now readily amenable for use with massively parallel machines. It now appears feasible to implement the quasi-particle calculations on massively parallel machines, with an expected gain in efficiency and power (in particular with a real-space formulation) similar to that for the LDA-type calculations. Thus, the use of these methods on the new massively parallel machines will certainly greatly enhance our ability to investigate new and more complex materials.

Another exciting recent development is work on new and more efficient algorithms, such as real-space methods (e.g., wavelets, finite differences) and methods that scale better for large systems. For example, there has been recent work on circumventing the N^3 scaling limitation of LDA calculations, where N is the number of atoms in the system. Significant progress has been made by several groups in developing methods that would scale as N in these calculations. The success of these approaches would enhance our ability to compute and predict in the near future the properties of very large molecular and materials systems, including systems with perhaps tens of thousands of atoms.

It should be noted that, although the LDA for ground-state properties and the GW method for excited-state properties have given impressive results for large classes of materials, there are some systems, such as highly correlated magnetic materials, to which these ab initio methods are less applicable at this time. This is because of the necessarily approximate treatments of many-electron exchange-correlation effects in these large-scale first-principles calculations for real materials. Theoretical developments for better treatment of many-electron effects will thus be another important direction for computational materials research. At the other end of the spectrum, further improvement of tight-binding molecular dynamics and related methods will enhance our ability in the near future to examine materials phenomena that cannot be modeled with only several hundreds or thousands of atoms.

SEMICONDUCTORS

Recent technological advances ranging from pocket radios to video cassette recorders to powerful supercomputers were all made possible by our improved understanding of how to synthesize and process electronic materials (i.e., semiconductors). The development of new semiconductor technologies has played a significant role in contemporary economic competition. Semiconductor technology and related applications have played a major role in the past 50 years and will continue to do so in the foreseeable future with the development of new artificially structured semiconductor materials, such as superlattices and multiple quantum wells.

In electronic materials research, scientific and technological advances are often intimately connected. The technological need for ultrapure, well-characterized semiconductors has resulted in the development of new experimental and theoretical methods. The application of these methods has tremendously improved our understanding of semiconductors such as silicon and gallium arsenide. It may be that our understanding of the crystalline state of semiconductors exceeds that of any other material.

Often, semiconductor-related activities can be divided into (1) understanding the fundamental electronic structure and atomistic processes and (2) the modeling of fabrication, processing, and characterization of devices. The fundamental issues in semiconductor materials science research involve knowledge not only of the ideal crystalline material but also of the role of line and point defects, intrinsic and extrinsic impurities, dopants, grain boundaries, and surfaces. Much work in this area has been done, but much remains to be done. Methods for examining the electronic and structural properties are further developed than in other areas.

There are a number of approaches to understanding the chemical bond in semiconductors. At the fundamental level, one can consider ab initio methods in which the only input is the atomic number, and perhaps the atomic mass, of the constituents. This approach is the most powerful one in that no experimental information is required, and accurate techniques have been developed. For example, ab initio pseudopotentials constructed in the LDA have been used to model amorphous materials, clusters, surfaces, and liquids. The accuracy of these approaches is quite good: typically, structural and vibrational properties can be determined to within a few percent. Excitation spectra and optical properties are more complex, but these properties also can be addressed. A second fruitful approach is tight-binding, which requires input either from experimental sources or from ab initio procedures. However, it is more computationally tractable and has led to accurate descriptions of semiconducting materials and surfaces. At the other end of the spectrum are empirical methods. One empirical approach has been to use the pseudopotentials themselves as adjustable parameters. This approach is not appropriate for structural studies but can be used for the analysis of optical spectra; often, only a few Fourier coefficients of the potential are required to fit the optical spectrum of a crystal. For structural properties, empirical interatomic potentials can be used. This approach is conceptually difficult, though, since quantum mechanical interactions must be mapped onto classical interactions.

Using such an array of tools, it is almost routine to examine such properties as lattice parameters, vibrational modes, and phase stability. Given the computational tools, software, and hardware to achieve a two-orders-of-magnitude increase in both speed and memory, what new and interesting problems would become feasible? With such an increase in computational power, attention could be focused on (1) static effects, such as extended and charged defects, amorphous semiconductors, grain boundaries, incommensurate overlayers, doping, and point defects, and (2) dynamical effects, such as diffusion, ion implantation and laser annealing, temperature effects, transport phenomena, and excited states.

Even with a hundredfold increase in computing power there will remain limitations. For example, any dynamical simulation is not likely to be realistic in terms of experimental time frames. Dynamical simulations rarely use time steps longer than a few femtoseconds; with such time steps it would take years of CPU time to model a system of a few hundred atoms for one second of "real" time. Despite this limitation, it may still be useful to examine the dynamical behavior of systems on a picosecond, or nanosecond, time frame. For example, the diffusion constants of semiconductor liquids have been accurately modeled on a picosecond time frame. Nonetheless, new methods for the "intelligent" sampling of phase space should be developed; a recent example of such an approach is based on "genetic" algorithms. Another limitation is the size of the system to be examined. It is not likely that systems of more than a few hundred atoms will be routinely examined using ab initio methods.
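
To put the time-scale limitation noted above in concrete numbers, the sketch below simply counts molecular dynamics steps; the per-step CPU cost is an assumed placeholder for a few-hundred-atom ab initio calculation, not a measured figure.

```python
# Rough cost of simulating 1 s of "real" time with femtosecond steps.
timestep_fs  = 1.0                  # typical MD time step (femtoseconds)
target_s     = 1.0                  # desired simulated time (seconds)
steps_needed = target_s / (timestep_fs * 1e-15)   # = 1e15 steps

cpu_per_step = 0.1                  # assumed CPU seconds per step (placeholder)
cpu_years    = steps_needed * cpu_per_step / (3600 * 24 * 365)
print(f"{steps_needed:.1e} steps -> ~{cpu_years:.1e} CPU-years")
# 1.0e+15 steps -> ~3.2e+06 CPU-years; even one nanosecond needs 1e6 steps
```

Whatever per-step cost one assumes, the 10^15-step count is the essential obstacle, which is why picosecond-to-nanosecond phenomena are the realistic targets.
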
New algorithms that take full advantage of parallel architectures will need to be developed, along with methods that scale more efficiently with the size of the system. Also, new techniques will need to be developed to handle some fundamental issues. It would be desirable to compute the free energy of semiconductors in order to predict phase diagrams and other quantities that involve thermal properties (e.g., thermal expansion coefficients, diffusion). Current methods are in the developmental stage, and there are questions about the accuracy of local density methods for such applications.

OPTICAL PROPERTIES

Background

Of the many properties of materials, the optical properties are undoubtedly among the most useful and interesting, especially for semiconductors.

Many device applications, including semiconductor lasers and light-emitting diodes, rely on specific optical responses of a material. Linear and nonlinear optical spectroscopies are now common probes for investigating and characterizing the underlying microscopic nature of materials. With devices becoming increasingly smaller and more dependent on manmade structures, the relationship between microstructure, defects, and the optical properties of materials is of increasing importance. For photovoltaic applications the dominant issue today is the minority carrier lifetime. However, concern for more environmentally safe fabrication may drive a search for new candidate materials, with the requisite characterization of their optical properties. Interest in thermo-photovoltaics for lower-temperature terrestrial heat sources will require characterization of materials with band gaps approaching half an electron volt. Hence, the ability to calculate optical responses is crucial for understanding and employing new materials, surfaces and interfaces, nanostructures, clusters, materials with defects, amorphous materials, and materials under various extreme conditions such as high pressures.

Unlike the sharp optical spectra of atomic and molecular systems, which arise from transitions between narrow energy levels, solid-state spectra arising from electron transitions between bands are broad and relatively featureless. Unraveling these spectra for microscopic information necessarily requires extensive theoretical analysis. In addition, the electron-ion and electron-electron interactions in a solid are complex. The nonlinear optical properties are, of course, even more difficult to compute or predict because of the additional intermediate states involved.

For linear optical properties an early successful approach was the empirical pseudopotential method developed in the 1960s, which explained the optical spectra of many materials and led to a detailed description of the band structure and bonding nature of solids. This approach employs pseudopotentials with several empirical parameters. The resulting electronic structure is then used to derive response functions for comparison with optical and photoemission spectra. Similar calculations based on empirical band structures have been carried out for nonlinear spectra, but with less success. This and other similar empirical approaches require extensive experimental input and hence are less suitable for investigating new materials or systems, such as surfaces and interfaces, for which detailed experimental data on the structure and other properties are often much less available.

Nonlinear optical materials are central to laser science and science using lasers. Uses range from frequency upconversion, to ultrafast detection schemes involving the interaction of an (attenuated) original probe laser with the scattered probe, to data storage via the photorefractive effect. A nonlinear material may have to satisfy a number of criteria that dictate its usefulness. For example, not only must the second-harmonic coefficient be large, but the two dielectric constants must permit (by phase matching) a large interaction volume for second-harmonic generation. Accordingly, there is a continuing search for materials with desired properties in an ever-broadening frequency range.
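
In standard notation, the phase-matching requirement just mentioned is that the wave-vector mismatch for second-harmonic generation vanish:

```latex
\Delta k = k(2\omega) - 2\,k(\omega)
         = \frac{2\omega}{c}\,\bigl[n(2\omega) - n(\omega)\bigr] = 0
```

so efficient conversion requires n(2ω) = n(ω), usually arranged by exploiting birefringence; for nonzero Δk the useful interaction length is limited to the coherence length π/|Δk|. This is why the two dielectric constants enter the selection criteria alongside the second-harmonic coefficient itself.
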
The emergence of synthetically modulated nanostructures, such as superlattices and quantum wells, in which the chemical composition can be varied on an atomic scale, has had a large impact on the search for and discovery of novel optical materials with desirable properties that have no analogs in the bulk. Semiconductor superlattices are currently of particular technological interest because they afford the possibility of tailoring the electronic structure by controlled modifications of the growth parameters—layer thickness, alloy compositions, strain, growth orientation, and so on. These controlled modifications of the chemical and structural parameters lead to precise control of the degree of quantum confinement of the carriers and, as a result, to the tunability of the electronic and optical properties of synthetically modulated structures. Current applications in which this flexibility in tuning the electronic structure of semiconductor superlattices serves as the basis for the design of materials and devices with desirable properties are numerous.
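
A minimal illustration of this tunability is the textbook limit of an infinitely deep quantum well of width L with carrier effective mass m*; this ignores band-structure and many-body details and is an orientation aid, not a device model:

```latex
E_n = \frac{n^2 \pi^2 \hbar^2}{2\, m^{*} L^2}, \qquad n = 1, 2, 3, \ldots
```

Halving the layer thickness thus roughly quadruples the confinement energies, which is the simplest version of the thickness control described above.
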

This feature is used in the fabrication of semiconductor diode lasers, electrooptical modulators, nonlinear optical devices, new designs for long-wavelength infrared detectors based on III-V InAs/GaInSb strained-layer semiconductor superlattices, and blue-green diode lasers synthesized from wide-gap II-VI semiconductors and quantum wells such as CdZnSe/ZnSe. In addition, there has been a recent resurgence in the use of conjugated polymers for micro- and optoelectronics applications, following the discovery in 1990 that certain organic polymers could be stimulated by carrier injection to emit visible light. This constituted a fundamental breakthrough in solid-state physics that is of great potential industrial importance. The discovery has raised the possibility of new applications of polymers to electronics, particularly in flat-panel display technology.

Present Status and Critical Issues

Linear Optics

Ab initio calculation of the optical response of a solid is conceptually and computationally challenging because of many-electron effects. Even at the simple level of neglecting electron-hole (excitonic) interactions, computation of the dielectric function requires knowledge of the excited electron (quasi-particle) energies and wavefunctions. Being based on ground-state theory, standard electronic structure methods such as the local density functional formalism, which yields excellent results for structural and other ground-state properties of many solids, do not give accurate electron excitation energies. Before the mid-1980s it was not possible to determine from first principles whether a crystal such as Ge was a semiconductor or a metal—let alone the quantitative value of its optical band gap and excitation spectrum. In general, LDA calculations underestimate the band gap by 50 percent or more, whereas Hartree-Fock calculations typically overestimate it relative to experiment. The reason for these discrepancies is that exchange-correlation (self-energy) effects can significantly modify the properties of the excited electrons from those of an independent-particle picture. Optical transition energies, even at the simplest level, need to be properly computed as transitions between quasi-particle states.

First-principles calculation of the quasi-particle energies in real materials became possible in 1985 with the development of methods based on the GW approximation for calculating the electron self-energy. This approach is based on an evaluation of the self-energy operator expanded to first order in the dynamically screened (with local fields) Coulomb interaction and the electron Green's function. The method has been applied to a range of systems, including semiconductors and insulators, simple metals, surfaces, interfaces, clusters, materials under pressure, and materials as complex as the C60 fullerites.

Nonlinear Optics

Historically, new nonlinear optical materials (e.g., urea, lithium niobate, ZnGeP2) have been found empirically, and considerable time has been spent growing crystals pure enough and large enough to check that the material satisfies all the needed criteria. At the moment, most levels of theory can provide only rough estimates of the properties of a proposed nonlinear optical material. There are no truly first-principles approaches; indeed, most are semiempirical.
Such methods are good at identifying possible systematics among related materials—that is, via interpolation—but they are not good at extrapolating to new classes of materials. The few (nearly) first-principles calculations can reasonably consider only a few simple materials, and even then simplifying approximations are made.

Future Theoretical Developments and Computing Forecast

With computational effort typically several times that of an LDA band calculation, the GW quasi-particle approach at present remains the only successful ab initio method for calculating electron excitation energies in solids.
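
Schematically, and in the notation already introduced (δ is a positive infinitesimal; H_0 collects the kinetic, external, and Hartree terms), the GW self-energy and the resulting quasi-particle equation read:

```latex
\Sigma(\mathbf{r},\mathbf{r}';\omega)
  = \frac{i}{2\pi} \int d\omega'\, e^{i\omega'\delta}\,
    G(\mathbf{r},\mathbf{r}';\omega+\omega')\, W(\mathbf{r},\mathbf{r}';\omega')

\bigl[\, H_0 + \Sigma(E_{n\mathbf{k}}^{\mathrm{QP}}) \,\bigr]\, \psi_{n\mathbf{k}}
  = E_{n\mathbf{k}}^{\mathrm{QP}}\, \psi_{n\mathbf{k}}
```

The nonlocal, energy-dependent Σ plays the role that the exchange-correlation potential plays in the LDA, which is the origin of the band-gap corrections discussed in the preceding subsection.
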

Researchers are pushing forward in several different directions. One direction is continuing the applications of the full method and extending it to other systems; for example, the approach is just now being refined for application to transition metals and transition-metal oxides. Another direction is developing algorithms and codes to reduce the computational effort of the calculations, such as reformulating the present k-space formalism for the self-energy operator in real space. At present, ab initio quasi-particle calculations are basically limited to systems with fewer than about 100 atoms. Yet another direction is developing less accurate but simpler methods for approximate self-energy calculations. Some of the models tried, with limited success, include the "scissors operator" approximation and simplified local forms for the self-energy operator.

Calculation of the intensity of linear optical spectra actually requires computation of the two-particle Green's function with the proper vertex function included to take into account electron-hole interaction (excitonic) effects. This kind of calculation has yet to be carried out from first principles for real materials. Previous theoretical investigations of this subject were mainly restricted to model tight-binding Hamiltonian studies. For semiconductors such as silicon, standard random phase approximation calculations for the dielectric function yield band-edge optical absorption strengths that are typically off by more than a factor of two. Clearly, this is an important issue that needs to be addressed before quantitative prediction of optical absorption can be achieved.

Of course, the higher-order optical response of solids is even more difficult to calculate. The results are highly sensitive to the electronic excitation spectrum because of the multiple energy denominators. Current work in this area is basically at the level of using empirical band structures in an independent-particle picture or using LDA results together with the "scissors operator" approximation. However, several groups are now working toward better treatment of these quantities. In the future we can expect a much wider class of nonlinear optical materials to be produced, including, for example, polymers and metallorganic materials. Nanostructures also are a source of nonlinear optical activity. The challenge to theorists will be to devise effective means to explore the parameter space of these materials. Meeting this challenge will require not only greatly enhanced computing capability but also improved understanding of the effects of electron-electron interactions (for self-energy and excitonic effects).

SURFACES AND INTERFACES

At the heart of essentially all modern-day critical technologies is the need to describe how atomically and chemically dissimilar materials bond to each other to form solid interfaces. The central importance of understanding the behavior of solid surfaces and interfaces can be readily demonstrated by considering key industrial segments of the economy: metal-semiconductor, semiconductor-semiconductor, and semiconductor-oxide interfaces form the cornerstone of the microelectronics industry. Adhesion, friction, and wear (tribological phenomena) between solid surfaces are ubiquitous in all manufacturing processes.
The microstructural evolution of grain boundaries is central to materials performance in metallurgy. An understanding of the chemical processes occurring at surfaces is key to the development of cost-effective semiconductor etching and catalysis processes. An understanding of the structure of surfaces and how they function as catalysts offers opportunities for the design of completely new types of catalytic systems, leading to revolutionary applications. While the behavior of surfaces and interfaces often determines the performance of existing materials, one cannot overestimate the scientific and technological impact of the design of new materials based on our ability to synthesize atomically structured materials. All synthetic, atomically modulated structures involve interfaces between thin layers of chemically different substances.

Because of the dramatic effects caused by the interfaces (e.g., quantum confinement of the carriers), these atomically modulated structures exhibit novel properties—electronic, optical, magnetic, and mechanical—that have no analogs in the bulk constituents from which they are synthesized. It is the emergence of advanced crystal growth techniques that has allowed the atom-by-atom synthesis of novel materials exhibiting unique and unexpected properties. An important class of such artificial crystals is multilayer or "superlattice" structures, which are synthesized by alternately growing atomically thin layers of chemically different materials in a periodic sequence. By adjusting growth parameters such as the thickness, composition, growth axis, and sequence of the layers, it is possible to "atomically engineer" the optical, electronic, magnetic, and mechanical properties of the resulting superlattice structure. It is this capability to atomically engineer the properties of synthetically modulated materials that has led to a revolution in modern materials science by qualitatively altering our approach to materials design at the atomic scale.

Critical Issues

Surfaces and interfaces are complex, heterogeneous, low-symmetry systems. Their description by accurate quantum-mechanical methods is a challenging task because the reduced symmetry of these systems requires that large unit cells be used. Moreover, solid surfaces exhibit a much larger variety of atomic structures than their bulk counterparts. In fact, because of the different possible crystallographic orientations and the numerous metastable structural phases for a given orientation, the number of possible atomic structures is essentially infinite. The study of solid interfaces and chemisorbed species presents an even greater challenge with regard to the possible number of systems. Nevertheless, the close synergy between experimental and theoretical activities that is characteristic of surface science has allowed rapid progress in the development of general physics-based guiding principles to predict the atomic geometry and electronic structure of solid surfaces and interfaces. The advent of ultrahigh-vacuum technology and the accelerating theoretical developments in electronic structure calculations, coupled with the emergence of high-performance computing environments, have greatly enhanced our understanding of chemical bonding at solid surfaces and interfaces. Below is a discussion of critical issues germane to the theoretical description of surfaces and interfaces.

Since the mid-1970s, there have been tremendous advances in our ability to describe the ground-state properties and phase transformations of bulk materials using ab initio methods. These same ab initio electronic structure methods have now been used to determine the atomic geometry and electronic structure of clean and adsorbate-covered surfaces. Modern surface science has greatly benefited from the continuous development of powerful methods, such as the density functional method, which, together with the efficient implementation of pseudopotential and all-electron formalisms, has enabled very accurate calculations of the ground-state properties of surfaces and interfaces.
Moreover, significant conceptual developments in electronic structure theory have enabled dramatic increases in our ability to perform ab initio calculations of the static and dynamic properties of large-scale materials systems. Prominent among these developments are the advances made by Car and Parrinello in calculating structural and dynamical properties. Ab initio quasi-particle calculations based on the GW approximation have permitted the determination of the electronic excitation spectrum at surfaces, in excellent agreement with experimental spectroscopic observations. Also to be emphasized is the critical role played by empirical quantum mechanical methods, such as the tight-binding method, in the early determination of the atomic and electronic structures of clean surfaces.
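
As a toy illustration of the tight-binding idea (a pedagogical sketch, not any of the production schemes referred to here), the code below diagonalizes a nearest-neighbor Hamiltonian for a finite chain of s-like orbitals; the onsite energy and hopping parameter are free inputs that realistic work fits to experiment or to ab initio results.

```python
import numpy as np

def tb_chain_levels(n_sites, eps0=0.0, t=1.0):
    """Energy levels of an open nearest-neighbor tight-binding chain.

    H[i, i] = eps0 (onsite energy); H[i, i+1] = H[i+1, i] = -t (hopping).
    eps0 and t are illustrative parameters, not fitted values.
    """
    h = np.zeros((n_sites, n_sites))
    np.fill_diagonal(h, eps0)
    i = np.arange(n_sites - 1)
    h[i, i + 1] = h[i + 1, i] = -t
    return np.linalg.eigvalsh(h)

print(np.round(tb_chain_levels(8), 3))
# As n_sites grows, the levels fill the infinite-chain band
# eps0 - 2t*cos(ka); surfaces and defects enter by modifying rows of H.
```
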

The experimental validation of these early theoretical predictions demonstrated the predictive power of quantum mechanical electronic structure methods and laid the foundation of modern theoretical surface science.

In surface and interface science the role of computational materials physics is to complement experimental work by addressing critical issues that cannot be measured directly and to provide physical insight into observed phenomena. Foremost among these critical issues are the following:

- The nature and origin of atomic reconstructions at surfaces, interfaces, and grain boundaries;
- The electronic structure (electronic surface and interface bound states and resonances, chemisorption-induced surface states, and so on) of clean and adsorbate-covered surfaces and interfaces;
- The attachment sites and binding energies of chemisorbed atoms and molecules on reconstructed surfaces;
- The effects of steps, defects, and impurities on the physical properties of surfaces and interfaces;
- The determination of the rectifying potential (Schottky barrier) at metal-semiconductor contacts;
- The determination of energy band offsets at semiconductor heterojunctions; and
- The prediction of novel electronic, optical, magnetic, and mechanical properties of semiconductor superlattices and metallic multilayer materials.

Forecast and Impact of High-Performance Computing

As in most areas germane to theoretical and computational materials physics, the advent of high-performance computing environments will have a significant impact on surface science. In particular, the size and complexity of the systems that can be described will increase with computer power and memory. Work in this area has already begun—for example, the ab initio density functional calculation of the Si(111)-(7 × 7) surface reconstruction performed on a parallel computer using approximately 1,000 atoms. As massively parallel processor (MPP) computing environments mature from the development phase to the production phase and become available to a wider user base, it is likely that similar large-scale calculations will be performed on a routine basis, bringing tremendous benefits to the field.

Another aspect of surface and interface science that will greatly benefit from the wide availability of high-performance computing environments is the bridging of the length-scale gap by physics-based multiscale modeling, from the atomistic level (atomic geometry and electronic structure) to the continuum (elasticity, plastic deformation, and so on). Much theoretical work remains to be done in this area. As an illustration of the impact of multiscale modeling, consider the task of predicting the deformation under load, and the eventual failure, of a polycrystalline metal with incorporated atomic impurities that give rise to a wide spectrum of grain boundary strengths. Current continuum-based finite-element methods cannot be used to perform such simulations unless they are augmented to include microscopic processes involving dislocation movement. Consequently, significant effort should be expended in extracting atomic-level parameters from ab initio quantum mechanical calculations in order to augment constitutive models used in continuum-like simulations.
For the particular case of predicting the microstructural evolution of grain boundaries in polycrystalline metals, the essential atomic-level parameters are grain-boundary potentials, which are total-energy curves obtained from tensile and shear deformation of the boundary. The next section, on thin-film growth, describes strategies for integrating modeling at different length and time scales as they relate to film growth.

GROWTH OF ARTIFICIALLY ENGINEERED FILMS

Recent technological advances have led to the development of electronic and photonic devices of smaller and smaller sizes. The fabrication of these structures requires high material uniformity and interfacial homogeneity on the atomic level, currently achieved by using such techniques as molecular beam epitaxy (MBE) and chemical vapor deposition (CVD). In addition, scanning tunneling microscopy (STM) has made it possible to observe the formation and shapes of small clusters of material at the initial stages of deposition, as well as the layer-by-layer growth of a crystal that often takes place at sharply defined steps between adjacent layers of the material. These processes are governed by physics at the atomic scale and cannot be explained by traditional nucleation theory or phenomenological continuum approaches. The most useful theoretical approaches are computational in nature and involve ab initio calculations of interatomic potentials, molecular dynamics to probe the short-time dynamics, and larger-scale kinetic Monte Carlo (KMC) calculations, which are relevant to growth processes involving the deposition of many atoms.

[Figure 2.1: Schematic view of growth on a vicinal surface by MBE, showing deposition and different kinds of diffusion processes. In some cases, such as silicon, surface reconstruction can produce inequivalent steps that are alternately rough and smooth.]

The KMC simulations take as input specified rates for atomic deposition and diffusion and then use those rates to simulate the kinetic processes in which atoms are added and moved about on the growing surface (see Figure 2.1). However, the rates, currently determined from a combination of experimental and theoretical information, are not well established. While KMC has shown great promise in its ability to reproduce growth patterns observed experimentally, more detailed microscopic modeling of the relevant rates should be possible with the next generation of computers and would greatly enhance the predictive power of this technique.
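
The following is a minimal sketch of such a KMC loop for a one-dimensional solid-on-solid growth model with two event types, deposition and diffusion hops; the rates are arbitrary placeholders, since realistic rates are precisely what is hard to establish.

```python
import math
import random

L        = 50          # number of lattice sites (periodic boundary)
rate_dep = 1.0         # deposition rate per site (placeholder)
rate_hop = 5.0         # attempted-hop rate per site (placeholder)
height   = [0] * L     # column heights of the growing film
t        = 0.0

# Rejection-style KMC: impossible hops count as null events, so the
# simple exponential clock based on the total attempt rate stays valid.
r_total = (rate_dep + rate_hop) * L
for _ in range(10_000):
    t += -math.log(random.random()) / r_total     # time to next event
    if random.random() < rate_dep / (rate_dep + rate_hop):
        height[random.randrange(L)] += 1          # deposit an adatom
    else:
        i = random.randrange(L)                   # pick a column
        j = (i + random.choice((-1, 1))) % L      # pick a neighbor
        if height[i] > height[j]:                 # crude downhill rule
            height[i] -= 1
            height[j] += 1

print(f"t = {t:.2f}, mean height = {sum(height) / L:.2f}")
```

In serious work the hop rates would depend on the local neighborhood through an Arrhenius factor derived from the interatomic potentials, which is where the ab initio input enters.
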

The STM also makes possible a direct comparison between experiment and theoretical and computational modeling of the kinetic processes governing the growth of these structures. The use of this experimental technique, along with advances in computational capabilities, will improve our understanding of the kinetics of growth and our ability to fabricate smaller and better technological devices in the next decade.

Many of the problems that arise in modeling growth are similar to those encountered in the study of other nonequilibrium processes. The most interesting phenomena observed experimentally often involve behaviors that occur over a broad range of spatial and temporal scales. Realistic parameter regimes are impossible to achieve with the computers that are currently available, and, for systems that stretch the limits of current capacity, it is difficult to run a sufficient number of realizations to ensure the statistical significance of the results. This makes it difficult to relate numerical results to observed behavior or, more ambitiously, to use simulations to predict future directions for experimental study. Anticipated improvements in machine performance and parallelized codes should allow these problems to begin to be addressed. Some specific issues of interest, starting with those associated with the smallest scales, are discussed below. Integrating the modeling done at different length and time scales is the foremost challenge to a comprehensive understanding of the process of growth.

Determination of Phenomenological Potentials

From a computational viewpoint, we are still far from being able to do first-principles calculations on large (~10^5 to 10^7) numbers of static atoms. Modeling nonequilibrium systems accurately thus depends crucially on the use of accurate empirical potentials within KMC algorithms. The KMC simulations employ these potentials to set the relative rates of the possible moves (e.g., deposition, surface diffusion) that take place on each Monte Carlo step. For understanding the basic trends and general principles, a detailed knowledge of the potentials is somewhat less important, although care must be taken to properly identify the essential properties. Furthermore, to improve the predictive power of KMC and the ability of this method to provide quantitative results, much work needs to be done to improve our ability to extract these potentials from ab initio calculations or from techniques such as effective medium theory.

Modeling Growth of Submonolayer Structures

In the low-density limit, KMC can be used to model the self-assembly of small clusters of atoms deposited on a surface. Because the system sizes considered are relatively small (on the order of hundreds or thousands of atoms), it is in this intermediate regime that interaction potentials can be incorporated in the greatest detail and the quantitative predictive power of KMC may be greatest. Recent experimental progress on growth techniques for quantum wires and quantum dots should yield the next generation of smaller and faster devices. Thus, results obtained by modeling the self-assembly of small structures will have increasing technological relevance.

Systems Composed of Many Layers

Realistic simulations of the growth of many layers of atoms on a substrate pose difficult if not impossible challenges for the current generation of computers. However, with the development of faster, more powerful computers, there will be a new opportunity for numerical simulations to play a role in the development of new growth techniques, where, to date, results obtained by modeling have lagged significantly behind the experimental work.

(terminated by hydrogen atoms and floppy hydrocarbon molecules using an REBO potential). While these are not energetic materials, it is clear that this kind of simulation can shed light on the structural causes of explosive sensitivity.

Challenges for the Future

Future challenges involve both molecular complexity and hydrodynamic effects, which will impose larger scales in time and distance on detonation simulations. For example, in order to see whether the most energetic and stiffest bond breaks first upon shock compression—as it seems to for the simplest AB and O3 molecular models—an REBO potential should be developed for a molecule with more vibrational degrees of freedom (e.g., a generic ABC molecule in which the AB bond is the stiffer, energetic bond). Studies of shock waves in systems of large but unreactive molecules (i.e., lacking the pathways to chemical decomposition) confirm the usual up-pumping picture of vibrational energy; it is reasonable to imagine that inclusion of chemistry will change all that. If the reaction zone, and therefore the failure diameter, increases as expected, it may become necessary to use scalable parallel computing resources. As more complex molecules are used in these simulations, there will be even greater challenges and opportunities in designing realistic potentials, as well as a need for increased computer power.

Another ambitious undertaking would be to see whether the cellular, nonplanar structures observed experimentally will also appear in detonation simulations. These can arise from defects in solids, from density fluctuations in fluids, and from wave interactions due to edge effects. In real systems the distance scales, as in the case of reaction zones, cover many orders of magnitude. There are two natural spinoffs from this work on detonations that have strong implications for the Navy in the safety of explosives: (1) tribochemistry, that is, reactions at surfaces initiated by friction (work to date has been only exploratory in nature), and (2) fracture chemistry induced at crack tips. Both fields of study will push computer resources to the limit, especially for realistic molecular models.

In summary, our understanding of energetic materials is on the threshold of a revolution, stimulated in no small way by computer chemistry experiments at the molecular scale. As these simulations are carried out on even larger and faster computers, more and more realism can be incorporated into them, both in the interaction potentials for modeling more sophisticated molecular species and in the ability to treat more complex flows (such as detonation failure and cellular structures) by expanding the accessible time and distance scales of the simulations.

STRENGTH OF MATERIALS, DEFECTS, HIGH-TEMPERATURE MATERIALS

Background

Stainless steel is not used to bring city utilities, water, or natural gas into houses. Economics requires that the cheapest material that will perform adequately and with a reasonable amortizable lifetime be used. Structural materials must be sufficiently strong and stable, which requires understanding metallurgical problems such as fracture, fatigue, creep, oxidation, corrosion, and embrittlement, to name a few. All these phenomena are exacerbated by enhanced mass transport at elevated temperatures, leading to phase changes, particle diffusion, ablation, and even chemical reaction.

The motivation for using these materials at high temperatures arises from the greater energy efficiency associated with higher operating temperature in the thermodynamic Carnot cycle. More mundanely, applications in turbines, aircraft jet engines, and nuclear reactors all require high-temperature materials; even boilers, pressure vessels, and pipes may be included at their temperature extremes. Clearly, all these systems are of primary importance to the Navy and have earned the subject of "high-temperature materials" a place in the Navy's critical technologies.
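
The thermodynamic driver is easy to quantify: for heat taken in at T_hot and rejected at T_cold, the ideal Carnot efficiency is

```latex
\eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}
```

so, for instance, raising an operating temperature from 1200 K to 1500 K against a 300 K environment lifts the ideal efficiency from 75 to 80 percent (the temperatures here are illustrative; real cycles fall well below these bounds but follow the same trend).
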

Solutions to these problems by metallurgists involve developing alloys having desirable properties that avoid or subvert failure. Alloys are composed of combinations of several metallic and nonmetallic elements, each adding some desirable aspect or preventing some deleterious phenomenon. Mechanical failure, or fracture, is controlled by the motion, or rather the lack of motion, of dislocations. The ability of microscopic crystalline grains to slide over one another leads to ductility—the resistance to crack propagation—which is governed by dislocation movement; strength is added by preventing or pinning dislocation movement through the incorporation of foreign atoms or particles into the alloy. Superalloys often include over a dozen constituents to achieve this goal. Other systems include the myriad materials called stainless steels and the refractory alloys based on the 4d and 5d group V and group VI transition elements.

The second important issue for high-temperature materials is stability. In the operating environments of these materials, such as gas turbines, jet engines, or nuclear reactors, there are often trace elements such as sulfur, oxygen, sodium, carbon, and hydrogen. At elevated operating temperatures, chemical attack can readily occur, leading to oxidation, hot corrosion, and embrittlement. Coatings, either thin added layers or "home grown" as in the chromium oxide coatings on stainless steel, are often used. The surface chemistry of inhibitors, promoters, and catalysis is relevant to these problems.

Computational Issues and Forecast

It is a longstanding but as yet unrealized goal to use theory to aid in materials development, with the idea that theoretical calculations can be performed more quickly and less expensively than experiments. Moreover, a truly fundamental understanding of materials behavior will permit exploration of novel materials and even their design from first principles. To use microalloying to improve the properties of NiAl, for example, will require a fundamental understanding of the mechanisms of its deformation and fracture and of the effects of alloying on its mechanical behavior. The impact of theory on materials development has been limited, however, owing to the lack of connection between macroscopic behavior and the theories defining the fundamental properties of materials.

It is convenient to divide the length and time scales appropriate for materials behavior into the following four computational categories: (1) quantum mechanics, (2) classical molecular dynamics of individual atoms, (3) dynamics of multiatom defects such as dislocations, and (4) continuum mechanics (where nonequilibrium defect and atomistic effects are incorporated through empirical constitutive laws). There are computational limitations in each of these areas that lead to gaps between them and prevent integration into a single comprehensive theory. Recent advances in computer hardware and in algorithms to extend the size and time scales of computations in each of these areas make it possible to imagine integrating them. For example, massively parallel molecular dynamics calculations are now feasible for tens of millions of atoms, thus extending three-dimensional calculations from length scales of nanometers to tenths of micrometers.
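
The quoted length scale follows from simple counting, as the sketch below shows; the ~0.35 nm interatomic spacing is a generic assumption for illustration, not a value for any specific material.

```python
# Edge length of a cubic cell holding n_atoms atoms, assuming a
# generic interatomic spacing of ~0.35 nm (illustrative only).
spacing_nm = 0.35
for n_atoms in (1e7, 5e7):
    edge_um = (n_atoms ** (1 / 3)) * spacing_nm / 1000
    print(f"{n_atoms:.0e} atoms -> cube edge ~{edge_um:.2f} micrometers")
# 1e+07 atoms -> ~0.08 um;  5e+07 atoms -> ~0.13 um
```
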
The time-scale limitation in molecular dynamics is still measured in tens of thousands of vibrational periods (i.e., tens of nanoseconds), though discussion of "reality" in molecular dynamics is dominated by details of the interatomic potential (i.e., information obtained from lower length scales). Nevertheless, information from molecular dynamics feeds the next-higher length scale, namely, the dynamics of defects, and, ultimately, continuum mechanics. Features of the atomistic mechanisms of complex flow will make themselves felt at the continuum level, though overlap of neighboring length scales will provide the earliest contributions to our understanding of the mechanical behavior of materials. The challenge in modeling the mechanical properties of structural materials will be to develop a series of computational tools for the relevant length and time scales and then to integrate them, thereby providing a more complete capability for characterizing the behavior of new materials.

However, before we can achieve the goal of designing these new materials, we need to understand strength and fracture properties beginning at the atomistic level. A recent report entitled "Summary Report: Computational Issues in the Mechanical Behavior of Metals and Intermetallics" (Materials Science and Engineering, Vol. A159, 1992) deals extensively with these issues. Similarly, a recent article entitled "Alloys by Design" (Physics World, November 1992) presents an assessment of mechanical properties in terms of simple arguments such as the separation of bonding and antibonding densities of states and directional (d-state) versus metallic (sp-state) bonding. What is needed is the ability to accurately and rapidly scan parameter space (composition, structure, lattice constants) and to visualize the results in terms of charge density, densities of states, total energies, or any other parameter in order to extract models, correlations, and concepts. These can then be used to make predictions and to assist the metallurgist. One can, for example, calculate static 0 K lattice constants and various elastic moduli of candidate structures to correlate with melting temperatures and other physical properties of interest. To calculate temperature effects (thermally assisted motion of atoms), very accurate energy surfaces are needed for a variety of components—again, "fast accurate potentials." The successes of the embedded atom method have shown that even modestly realistic models can yield satisfactory qualitative, and sometimes quantitative, results, and various steps have been taken to improve these models. Until total energies can be realistically calculated "on the fly," simplified model and semiempirical approaches are useful for ionic systems not involving d-electrons (e.g., the elastic and thermal properties of MgO at high pressures and temperatures). For d-electrons, insight may be gained from generalized tight-binding methods, first for crystalline systems and eventually for disordered systems containing defects. It may be possible to approach alloys by design if order-N scaling is achieved. New algorithms and approaches—both evolutionary and revolutionary—will be needed to achieve such a goal.

COMPOSITES, POLYMERS, CERAMICS

Background

Intermediate between the atomic scale of quantum mechanics and the bulk or macroscopic scale of most applications is the mesoscopic regime. There, the entities of interest consist of tens to thousands of atoms, with dimensions ranging from nanometers to microns. Although the focus generally is on utilizing these mesoscopic entities as building blocks of macroscopic materials, it should be pointed out that fabrication techniques are capable of creating designed structures of this size. Consequently, there are experimental realizations, and possibly applications, of such structures. Whereas the properties and responses of atomic building blocks are limited, those of mesoscopic entities are far more diverse and adaptable. Exploiting this wider range of properties enables a very favorable approach to developing materials with given desirable properties: first design the properties of these entities, and then assemble them into macroscopic materials.
COMPOSITES, POLYMERS, CERAMICS

Background

Intermediate between the atomic scale of quantum mechanics and the bulk or macroscopic scale of most applications is the mesoscopic regime, where the entities of interest consist of tens to thousands of atoms with dimensions ranging from nanometers to microns. Although the focus generally is on utilizing these mesoscopic entities as building blocks of macroscopic materials, fabrication techniques are also capable of creating designed structures of this size, so there are experimental realizations, and possibly applications, of such structures. Whereas the properties and responses of atomic building blocks are limited, those of mesoscopic entities are far more diverse and adaptable. Exploiting this wider range of properties enables a very favorable approach to developing materials with given desirable properties: first design the properties of these entities, then assemble them into macroscopic materials.

In general, the resulting properties will not be simple averages of the constituents' properties but can instead combine the best features of each. The presence of interlocking interfaces can, in the proper circumstances, greatly enhance mechanical strength, temperature resistance, or electrical properties. Alternatively, the material may be designed to be porous as a way of producing ultrafine filters or catalytic substrates.

What are these materials? Composites are constructed from clusters, fibers, or thin layers of diverse materials.

When the clusters are inserted into a macroscopically homogeneous material, they are referred to as X-matrix clusters, where X characterizes the nature of the host material. When the materials are oxides or other refractories and the clusters are of micron size, they are known as ceramics. When the building blocks are long-chain molecules, they are polymers. Clusters of nanometer size (tens or hundreds of atoms) often exhibit a strongly peaked distribution of particle sizes and are referred to as nanophase materials. These materials can achieve even more special properties and are discussed separately.

These complex materials already have many vital applications. For example, they are the basis of all personnel armor (flak jackets, bulletproof vests, and so on) in use today. The new lightweight, performance-enhancing structural materials for aircraft, and also automobiles, are composite materials. (It is interesting to note that the Japanese are pursuing developmental applications in sports equipment, whereas U.S. manufacturers are concentrating on aerospace and defense.) There is a very large effort to utilize ceramic parts in engines to achieve higher performance and efficiency; specialty ceramic pistons are already in the marketplace, though currently only for racing engines. Many adhesives and protective coatings also fall into this class of materials. Paper is a fiber composite. Plastics are polymers. Special magnetic multilayers, and also some nanophase inclusions in metal matrices, exhibit giant magnetoresistance effects; these can be used as magnetic field detectors, offering definite advantages over standard pickup-coil technology. A large consortium (with heavy representation from the Department of Defense) has been created to develop high-temperature fiber composites as an enabling technology for improved gas turbine engines. Many new applications can be expected soon. Advanced ceramics and composites constituted one of the 22 critical technologies identified by the National Critical Technologies Panel in 1991 (see Report of the National Critical Technologies Panel, U.S. Government Printing Office, Washington, D.C., 1991). This led the Department of Energy to initiate a 10-year R&D effort to develop continuous fiber-reinforced ceramic composites.

Numerous phenomena can conspire to produce the interesting properties of these complex materials. Only a few are listed here to convey the flavor of the extreme diversity involved. The easiest to understand from everyday experience is the concept of plying or entanglement. Strength through plying has been applied to everything from automobile tires to carpentry, and the effect is the same when the plies are made of mesoscopic entities. In fact, a layperson listening to a discussion of fiber-composite design might well believe the talk was about plywood design, but in three dimensions. And while weaving is not precisely at the molecular level, when dealing with cross-linked polymers the idea is not far from the mark. Clearly, such complex structures will have a much larger fraction of the solid involved in interfaces, and those interfaces behave very differently, structurally and electronically, from any bulk system. The special nature of the electron scattering in the plane of the interfaces, for example, is believed to be responsible for the observed giant magnetoresistance, a likely candidate for improved magnetic sensors and computer disk heads.
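The giant magnetoresistance just mentioned is often rationalized with the textbook two-current (Mott) picture, which is not spelled out in this report but helps fix ideas: the two spin channels conduct in parallel and see different interface scattering in the aligned and anti-aligned layer configurations. A back-of-the-envelope sketch, with arbitrary illustrative resistance values rather than measured ones:

    # Two-current (Mott) model of giant magnetoresistance: spin-up and
    # spin-down electrons conduct in parallel; r_maj and r_min are the
    # per-layer resistances seen by majority- and minority-spin carriers.
    def gmr_ratio(r_maj, r_min):
        r_parallel = 2 * r_maj * r_min / (r_maj + r_min)   # aligned layers
        r_antiparallel = (r_maj + r_min) / 2               # anti-aligned layers
        return (r_antiparallel - r_parallel) / r_parallel

    print(gmr_ratio(1.0, 5.0))   # ~0.8, i.e., an 80 percent resistance change

In this picture the effect vanishes when majority and minority electrons scatter equally, consistent with its origin in spin-dependent interface scattering.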
Small particles can exhibit structures, such as icosahedra, that cannot be realized geometrically in a bulk crystal. Some of these are energetically quite favorable, so that very stable materials can be formed by using composite structures. Conversely, a surrounding matrix or epitaxial layer can help stabilize a phase that exhibits a favorable property but is only metastable in bulk.

These building blocks are not so large that their quantum effects can be completely ignored. For macromolecular systems the only quantum effects of importance may be limited to molecular bonding, but there can be others. More subtly, quantum confinement can yield energy-level spacings that are significant compared to other energy scales, especially the thermal energy, and these spacings can modify all the transport properties.
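The relevance of confinement can be estimated with the simplest possible model, an electron in an infinite cubic well, for which the gap between the two lowest levels is 3*hbar^2*pi^2/(2*m*L^2). A short illustrative calculation (the box sizes are arbitrary examples, not data from the report):

    import numpy as np

    HBAR = 1.0546e-34   # J*s
    M_E  = 9.109e-31    # kg
    K_B  = 1.381e-23    # J/K
    EV   = 1.602e-19    # J per eV

    def level_spacing_eV(box_nm):
        """Gap between the lowest two levels of an electron in a cubic
        infinite well of side box_nm: dE = 3*hbar^2*pi^2/(2*m*L^2)."""
        L = box_nm * 1e-9
        return 3 * HBAR**2 * np.pi**2 / (2 * M_E * L**2) / EV

    for size in (1.0, 2.0, 5.0, 20.0):
        print(f"L = {size:5.1f} nm: dE = {level_spacing_eV(size)*1e3:8.1f} meV "
              f"(kT at 300 K is {K_B*300/EV*1e3:.1f} meV)")

For clusters a few nanometers across, this crude estimate already puts the level spacing at or above the room-temperature thermal energy, which is why transport properties can be modified.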

For metallic materials these confinement effects can help select the magic numbers of atoms in the clusters that form and thus the distribution of particle sizes observed.

Present Status and Critical Issues

The design of many structural composite materials has employed empirical models that make effective use of computational resources. However, assessments suggest that future successful applications of composite materials will rely heavily on mechanism-based modeling: as the newer composite materials become ever more complex and nonlinear, traditional empirical characterizations are becoming increasingly expensive and limited. A recent National Research Council report, Mathematical Research in Materials Science: Opportunities and Perspectives (National Academy Press, Washington, D.C., 1993), reviewed the effective-media developments in this area, so the focus here is on atomistic approaches.

Basic theoretical and computational capabilities are beginning to contribute to some useful studies. It can easily be envisioned that, with current developments in large-N (~1,000-atom) electronic structure capabilities in density functional theory, direct calculations will become available for single building-block entities, an entity with its surrounding environment, and even the simpler composite systems. Of particular interest will be cluster calculations that characterize how a small precipitate is influenced by the surrounding matrix and how it acts back on that matrix, with particular attention to the strain fields. The cases worked out using the more fundamental techniques will be useful both as examples in a statistical ensemble and as test cases for further, less fundamental techniques. Force (strength) and response-function (electrical, optical, and magnetic) properties are tractable, but temperature effects in the electronic structure remain suspect.

The most probable route to larger systems is semiempirical tight-binding techniques. Those techniques can give some insight into operative mechanisms, but the danger is that they will most likely fail precisely where the material system becomes most interesting. The most critical issue will be charge transfer.

Dynamical simulations are now capable of including millions of atoms when the interatomic forces can be characterized sufficiently simply. This is enough to include multiple instances of composite structure and thus to study interactions through strain fields and the like. The number is far smaller, perhaps thousands of atoms, when the somewhat more realistic tight-binding representation is used to characterize the interactions. Nonetheless, useful complexity can be incorporated. Such techniques will probably prove extremely useful for addressing a fundamental issue of ceramics: understanding how the strain field interacts with crack growth so as to (hopefully) suppress it. Research on fracture is an extremely active field, although generally not at the atomic level.

Future Theoretical Developments and Computing Forecast

A hundredfold increase in computer capability will, by itself, only ensure that the applications already discussed will be feasible for carefully chosen "targets of opportunity"; that is, problems will need to be chosen at least partially on the basis of feasibility rather than exclusively on the basis of scientific or practical interest. Increased computer capability will ease the restrictions but not remove them.
Researchers will have to carefully abstract essential features for generic study. It can be hoped that progress toward order-N scaling will provide a significant boost in capability; however, such capabilities must be expected to depend on special circumstances rather than to have general applicability, so that careful problem choice will still be required. Intermediate-precision approaches to electronic structure that are adequate to derive interatomic forces with less effort offer an alternative route to larger problem sizes and thereby to greater realism.

Both the generalized tight-binding parameterizations and the modeling of interactions between deforming atomic systems being advanced at the Naval Research Laboratory are strong efforts in this direction. Again, these developments will not be universally applicable, but they will expand the sphere of what is possible.

The interface between basic components is an especially important factor. Fortunately, techniques are available for studying interface electronic structure that are realizable forms of the process known as the "embedding" (or "building block" or "architect") approach. Embedding has long been a promising technique, but it has been realized in only a very few cases. Its basic intent is to deal in detail with only a limited piece of the problem while characterizing the rest as an environment that can be described much more coarsely. By using Green's-function matching or reduced variational freedom (Harris-Foulkes) away from an interface, the interface itself can be effectively isolated and treated with high precision. In this manner it will be possible to study realistic, specific examples of interfaces. The examples for study will be established by using molecular dynamics or grand canonical Monte Carlo simulations. With such techniques, the impact of surface interactions can be calculated and characterized. Further advances in embedding techniques would clearly have immense impact for composite materials. The most promising progress is being made where the electronic structure can be represented in atomic orbitals that can be expressed effectively as atomic charges. Thus, useful information will be made available through calculations for embedded clusters. To deal with interactions with long-range strain fields, and in the area of polymers, an exchange of ideas with molecular biologists (who face very similar problems with equally disparate needs for solutions) would be especially useful.

ALLOY PHASE DIAGRAMS

Obtaining new and useful properties by alloying dates back to the Bronze Age. New alloys are still being created to solve problems in all segments of our economy, ranging from aerospace to electronics to heavy machinery to just about any endeavor that exploits materials properties. It is also necessary to develop replacements for traditional alloys when one or more ingredients are of limited abundance (a cost issue) or obtainable only from politically sensitive or unstable countries (a political issue). However, alloy development is frequently economically unattractive because the traditional methods are quite costly. Despite its importance and history, the development of new alloys is still an empirical procedure based on making incremental improvements that exploit a large body of past experience. There has been relatively little guidance from the application of fundamental principles; thus, great leaps forward are infrequent and more often than not result from accidental observations made while exploring a different problem.

The basic information necessary for systematic alloy development is the phase diagram, which specifies what crystal structures will occur in which temperature range for a given mixture of the constituents. There are other issues, such as internal disorder, stresses, and microstructure, but the phase diagram is the starting point.
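The simplest empirical model of this kind, the symmetric regular solution, already yields a miscibility-gap phase boundary in a few lines. The sketch below, in arbitrary units with an assumed demixing energy omega, locates the binodal composition at each temperature by solving df/dx = 0 for the mixing free energy f(x) = omega*x*(1-x) + kT*[x ln x + (1-x) ln(1-x)]; it is an illustration of the empirical approach discussed next, not a production tool.

    import numpy as np
    from scipy.optimize import brentq

    def binodal(T, omega, kB=1.0):
        """Boundary composition of the miscibility gap in a symmetric
        regular-solution model. By symmetry the common tangent is
        horizontal, so the x < 0.5 branch solves df/dx = 0.
        Returns None above the critical temperature Tc = omega/(2 kB)."""
        if T >= omega / (2 * kB):
            return None
        g = lambda x: omega * (1 - 2 * x) + kB * T * np.log(x / (1 - x))
        return brentq(g, 1e-9, 0.5 - 1e-9)

    omega = 1.0   # assumed demixing energy; Tc = 0.5 in these units
    for T in (0.20, 0.30, 0.40, 0.45):
        x = binodal(T, omega)
        print(f"T = {T:.2f}: two-phase region spans x in ({x:.3f}, {1-x:.3f})")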
Considerable progress has been made using empirical models of this kind. It should be recognized, however, that these empirical models can be used reliably only to interpolate between known data; extrapolation to radically different situations is risky. First-principles calculations have exhibited some initial successes and seem capable of at least providing the basis to explain trends. (This is an important capability; for example, it is crucial to know how well Gd can serve as a surrogate for Pu, since experiments can be performed only on the surrogate. Less dramatic examples also abound.) However, for the actual prediction of phase diagrams, there is some question about whether the underlying approximations are adequate to provide information of the accuracy needed.

An alternative hybrid scheme might be to calculate the empirical model parameters from first-principles approaches. That route is far less advanced, because the definition of the empirical model parameters hides detail that must be considered when approached from the more fundamental side.

Although costly, binary and ternary (two- and three-element) phase diagrams can be, and often are, determined experimentally. Most interesting modern alloys, however, are a stew of elements blended to optimize multiple properties of the material. Not only are these alloy systems hard to characterize, it is even difficult to represent the data in an insightful way. The ability to calculate phase-diagram information from first principles would help by guiding experiments in specific cases and by suggesting critical coordinates with which to explore the data.

The state of the art in alloy phase-diagram computation consists of three principal approaches, all of which involve model Hamiltonians simple enough to allow computation of finite-temperature statistical mechanics. The coherent potential approximation (CPA) replaces the ensemble of different atomic species on a lattice by a fictitious entity that scatters in an average manner consistent with the overall composition. The CPA is normally used to treat totally random alloys without local order or clustering; attempts to incorporate local atomic relaxations, charge transfer, or clustering have just begun and remain to be fully tested. The CPA is well suited to the quantum mechanical description of the total-energy variation associated with compositional variations of chemical species on a common crystalline lattice, for example, order-disorder phenomena and chemical mixing energies.

A second approach to the calculation of configurational energies is to use a finite set of ordered compounds, AnBm, to determine the parameters in the so-called cluster expansion. This expansion is exact and fairly rapidly convergent. In this approach the total energy of a particular chemical configuration is expressed as a sum over contributions associated with the various local substructures c (pairs, triangles, tetrahedra, and so on):

    E = Σ_c P(c) ε(c),    (2.1)

where P(c) is the probability of occurrence of the particular local configuration c in the alloy. Calculated total energies for ordered configurations (for which P(c) is known) permit Eq. 2.1 to be solved for the expansion parameters ε(c), which can then be used to describe complex disordered configurations at finite temperatures. (A minimal numerical illustration of this inversion appears at the end of this section.)

The CPA and cluster-expansion approaches have proven quite effective for compositional variations on a common crystalline lattice, so-called coherent alloys, but leave open the much greater challenge of finite-temperature systems exhibiting general geometrical variations and the concomitant elastic effects. This challenge has been addressed by using Monte Carlo treatments of models consisting of pair and three-body classical potentials, an approach that has been applied to bulk semiconductor alloys as well as to epitaxial growth in these systems. In metallic systems, attempted applications based on electronic structure schemes are still very preliminary. Monte Carlo techniques can be particularly useful for more complex alloys such as ternary and quaternary systems. The calculation of alloy phase diagrams is also an area where extensive databases are an important aspect of the problem.
Given increased computational resources, building such databases would be a respectable endeavor. Beyond this, many of the desired improvements are limited by the need for conceptual advances. Within the CPA, a trivial parallelism achieved by spreading Brillouin-zone k-points across the nodes can yield a remarkable advantage; more complicated, but feasible, is the incorporation of local cluster behavior. Monte Carlo schemes should also benefit significantly from parallelism.
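As promised above, here is a minimal numerical illustration of the cluster-expansion inversion of Eq. 2.1: given the correlations P(c) and total energies of a few ordered structures, the interaction parameters ε(c) follow from least squares. All numbers below are placeholders chosen for illustration, not results of actual calculations.

    import numpy as np

    # Hypothetical correlation matrix P[s, c]: probability of cluster c
    # (empty, pair, triangle) in ordered structure s, plus hypothetical
    # formation energies E[s] in eV/atom standing in for LDA results.
    P = np.array([[1.0, 0.00, 0.00],    # pure A
                  [1.0, 1.00, 1.00],    # pure B
                  [1.0, 0.50, 0.25],    # A1B1 ordering
                  [1.0, 0.25, 0.10]])   # A3B1 ordering
    E = np.array([0.0, 0.0, -0.12, -0.07])

    # Least-squares solution of E = P @ eps for the expansion parameters:
    eps, residual, rank, _ = np.linalg.lstsq(P, E, rcond=None)
    print("fitted cluster interactions:", eps)

    # The fitted eps can then score any configuration whose correlations
    # are known, e.g., a disordered state at some composition:
    new_structure = np.array([1.0, 0.40, 0.18])
    print("predicted energy:", new_structure @ eps)

In practice the structure set and the cluster basis are much larger, and choosing them well is a research problem in itself; the linear algebra, however, is exactly this simple.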

MAGNETIC MATERIALS

Magnetic effects in condensed matter have been, and remain, a fertile source of both intellectual and technological interest; the cuprate high-Tc superconductors illustrate both. Computationally tractable theories of magnetism have, for the most part, been based on the local-spin-density (LSD) approximation, which has proven adequate for both qualitative understanding and quantitative prediction for many magnetic systems. Unfortunately, the list of systems for which the LSD approximation is in serious error is quite lengthy; it includes the transition-metal oxides, the cuprate superconductors, and systems involving localized f electrons, such as rare-earth compounds. It is also unfortunate that, unlike the theory of optical properties, there is no practical extension of the LSD approximation analogous to the GW theory of excitations. Nonetheless, calculations based on the LSD approximation have been immensely useful in elucidating the origin of magnetic effects in metallic systems. There is much more useful scientific work to be done in this context, and even more engineering and materials design work.

Magnetic effects can be categorized according to the relative importance of the spin-orbit interaction; another important classification reflects the presence or absence of localized f electrons. The following are discussed below: (1) itinerant transition-metal systems, (2) itinerant systems for which spin-orbit effects are critical, and (3) systems containing localized f electrons.

Itinerant Transition-Metal Magnets

Calculations based on the LSD approximation have been remarkably successful in this context; the magnetic properties of even complex compounds are amenable to quantitative prediction. Opportunities here include the analysis of thermodynamic properties. The principal line of attack has been the use of LSD-based calculations to obtain parameter values appearing in phenomenological models. An example of this approach is the calculation of the total energies of ferromagnetically and antiferromagnetically ordered systems for the purpose of estimating Heisenberg exchange parameters. The LSD approximation has been particularly useful in elucidating the strong coupling between the presence of spin polarization and atomic volume. The most dramatic manifestation of this coupling is the INVAR effect (zero thermal expansion), but less dramatic effects are quite common, including the elastic properties of the elemental transition metals. The reliability of the LSD approximation for itinerant transition-metal systems makes it a useful tool for computer-aided materials design. Technologies near the interface of science and engineering include high-magnetization alloys, magnetic multilayers, and magnetooptics.

Thermodynamics of Magnetic Order

The development, within the LSD conceptual framework, of the theory of noncollinear magnetism has offered two types of benefits. First, the theory has shown that the relatively small set of predominantly manganese-based compounds that exhibit noncollinear magnetic order at low temperatures can be understood in terms of conventional, itinerant magnetism. More importantly, the theory can be used to study the energetics of spin configurations arising in conventional collinear magnets at finite temperatures, for example, the loss of long-range magnetic order at the Curie temperature. The further development of this type of analysis for both scientific and technological objectives represents an opportunity for this field.
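The mapping from total energies to exchange parameters described above can be stated in a few lines. The sketch below assumes a nearest-neighbor Heisenberg model, H = -J sum over bonds of S_i . S_j, on a bipartite lattice, in which case E_AFM - E_FM = z J S^2 per atom; conventions for the factor of 2 and the spin length differ between authors, and the input energies here are invented purely for illustration.

    K_B = 8.617e-5   # Boltzmann constant, eV/K

    def heisenberg_J(e_fm, e_afm, z, S):
        """Nearest-neighbor exchange from two total energies (eV/atom)
        for ferromagnetic and antiferromagnetic order on a bipartite
        lattice with z neighbors: J = (E_AFM - E_FM) / (z * S^2)."""
        return (e_afm - e_fm) / (z * S**2)

    def curie_mean_field(J, z, S):
        """Mean-field estimate of the Curie temperature,
        kB * Tc = z * J * S * (S + 1) / 3."""
        return z * J * S * (S + 1) / 3 / K_B

    # Illustrative numbers only, not from any actual LSD calculation:
    J = heisenberg_J(e_fm=-0.050, e_afm=0.000, z=8, S=1.0)
    print(f"J = {J*1e3:.2f} meV, mean-field Tc ~ {curie_mean_field(J, 8, 1.0):.0f} K")

Mean-field estimates of this kind systematically overestimate ordering temperatures, which is one motivation for the noncollinear, finite-temperature analyses discussed above.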
Spin-Orbit Effects

Two entire classes of magnetic effects require a significant spin-orbit coupling. The first comprises magnetic anisotropy, alignment, and coercivity; the second is magnetooptics. Both are vitally important to various technologies.

The spatial direction of the magnetization and the energy required to alter it are the essence of magnetic recording and of permanent magnets, such as those in electric motors. Magnetooptics is the basis for many scientific probes, such as the surface magnetooptical Kerr effect, and for writable optical disk storage. In the case of magnetooptics, there is substantial evidence that the LSD approximation contains the required physics; parameter-free calculations are quite successful in predicting the Kerr rotation of modestly complex intermetallic compounds. In the case of magnetic alignment, there is encouraging evidence that these effects are within the reach of LSD-based calculations, although the importance of orbital magnetism to these effects is not well understood. Additionally, extracting the desired, but quantitatively small, total-energy differences from the computational noise has proven very difficult. There has been recent progress on this front, however, and continued progress represents a near-term opportunity.

Localized f Electrons

Until recently, the analysis of rare-earth and actinide systems containing localized f electrons was thought to lie beyond the reach of LSD-based calculations. A relatively straightforward extension of the theory by Brooks, Johansson, and co-workers (M.S.S. Brooks, Physica B 130:6, 1985; O. Eriksson, M.S.S. Brooks, and B. Johansson, Physical Review B, Vol. 41, 7311, 1990) has proven very successful. While this advance permits the treatment of orbital magnetism and effects related to magnetic anisotropy, it does not extend to dynamical phenomena, such as the Kondo effect. The exploitation of this advance, in the context of hard magnets for example, is an opportunity for this field.

Highly Correlated Systems

The cuprate high-Tc superconductors are perhaps the most dramatic illustration of the limitations of the LSD approximation; the transition-metal oxides are another highly visible example. Even here, however, calculations based on the LSD approximation have been quite useful. The guide they provide to the interpretation of angle-resolved photoemission has been particularly valuable, both in NiO and in the high-Tc materials. The use of these calculations to estimate parameter values for more phenomenological theories has also been valuable and represents an ongoing opportunity.

Future Prospects and Opportunities

As the discussion above indicates, a theoretical extension of the LSD approximation remains an outstanding need, as well as an opportunity, of the field. Nonetheless, the discussion also indicates that calculations based on the LSD approximation are often of considerable conceptual and practical utility. Exploitation of this theoretical framework as a guide to the design and development of new magnetic materials is particularly promising; the fact that prominent suppliers of materials design software will offer commercial software of this type is a measure of the practical opportunity. Because LSD-based calculations are parameter free, the computation of chemical trends is often particularly reliable. Finally, the application of LSD-based theory to increasingly complex systems of technological interest ranks among the most straightforward ways to exploit anticipated increases in computational power.

STRONGLY INTERACTING SYSTEMS

Background

Electron correlation effects play a critical role in certain classes of materials, such as magnets and superconductors.
The ab initio treatment of electron correlation effects in real materials remains one of the most challenging tasks, both conceptually and numerically. Prototypical examples of strongly correlated systems include the high-Tc superconductors, transition-metal oxides, f-electron systems, and superfluid 3He and 4He.

It is widely accepted that superconductivity cannot be explained in terms of independent particles: particle-particle interactions must form a central part of any explanation. In ordinary superconductors the Bardeen-Cooper-Schrieffer theory produces outstanding results, and its success led to a proliferation of mean-field theories explaining other phase transitions. The coherence length characterizing the width of the superconductor-normal interface is large, 10^2 to 10^4 times the lattice constant. This large coherence length means that fluctuations between the superconducting and normal phases are averaged over large volumes, so that a mean-field theory is valid. For other phase transitions, the coherence length is comparable to the lattice spacing, and the fluctuations around any mean-field solution are so large that mean-field theory is invalid. Accordingly, the treatment of these other phase transitions has required much more explicit many-body treatments.

The archetype of a strongly interacting model system is the Hubbard model on a lattice, in which the onsite Coulomb interaction is comparable to the hopping energy between sites. Given the apparent simplicity of this model, it is perhaps surprising that there are no generally accepted results for it except in one dimension or, more recently, in infinite dimensions. The study of this model has engaged a large number of theoreticians using both the best numerical procedures and analytic approaches. Numerically, the growth in the number of configurations with the size of the lattice is comparable to the situation for the configuration-interaction method in atoms and molecules. To date, the systems studied have been so small that there is no agreement on the ground states or thermodynamically stable states of the Hubbard model as a function of the density of electrons. Nonetheless, the model continues to attract attention because practically every important "real" strongly interacting system has been "mapped" onto the Hubbard model, albeit with uncertain accuracy. These systems include ferromagnets and antiferromagnets, cuprate superconductors, heavy-fermion systems, and even helium. There is growing recognition that these systems may require more realistic models. Examples of the necessary complications are (1) long-range Coulomb and exchange interactions, (2) many orbitals at each site, and (3) crystal-field and spin-orbit splitting of these orbitals.

Critical Issues

Failure to solve the Hubbard model has not dampened enthusiasm for it; on the contrary, interest is higher than at any point in its history, and the model has been modified and extended in a variety of ways. It is possible to integrate over the higher energies of the Hubbard model to find an effective spin interaction, J, between spins on nearest neighbors. In the case of the planes in cuprate superconductors, a model has been developed that involves not only the interaction between the spins on the copper sites but also the hopping of holes (sited at the coppers in the model but of course extending onto the adjacent oxygens). This so-called t-J model is a flourishing subfield of its own, whether or not it has any relevance to cuprate superconductors.
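The difficulty with the Hubbard model is one of scale rather than formulation: it is easy to write down, and for very small lattices it is easy to solve exactly. The sketch below diagonalizes the two-site model at half filling in the Sz = 0 sector and checks the result against the known closed-form ground-state energy; the sign pattern in the matrix follows one common fermion-ordering convention.

    import numpy as np

    def two_site_hubbard(t, U):
        """Exact diagonalization of the two-site Hubbard model at half
        filling, Sz = 0 sector. Basis: |ud,0>, |0,ud>, |u,d>, |d,u>."""
        H = np.array([[ U,  0, -t,  t],
                      [ 0,  U, -t,  t],
                      [-t, -t,  0,  0],
                      [ t,  t,  0,  0]], dtype=float)
        return np.linalg.eigvalsh(H)    # eigenvalues in ascending order

    t, U = 1.0, 4.0
    levels = two_site_hubbard(t, U)
    exact  = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))   # closed-form ground state
    print(levels[0], exact)                          # both give -0.828... for t=1, U=4

The catch is that the basis grows exponentially with lattice size, which is precisely why such brute-force solutions stop at a handful of sites and why the approximate methods discussed here are needed.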
Another direction has been to broaden the original Hubbard model. The modifications include (1) a single level, the others having been removed to higher energies by crystal-field splitting; (2) hopping between oxygens in addition to hopping between copper and oxygen; (3) Hubbard interactions not only on the copper but also on the oxygen; and (4) a Hubbard-type interaction between adjacent copper and oxygen sites. Of course, there is little chance of solving this much more complicated model, with its vastly enlarged parameter space; nonetheless, such "realistic" models have engendered considerable interest. In some systems a coupling of the electrons to phonons can simulate negative values of U in the Hubbard model. Such models tend to bind two fermions on a site to form a boson, and these so-called real pairs behave very much as bosons that may then form a superfluid. Nonetheless, the bulk of the interest remains in the original, repulsive model.

An alternate use of the model is to apply the original Hubbard model to bosons instead of fermions. In that case the interaction U suppresses multiple occupancy of sites and, in the infinite-U limit, gives the "hard-core" boson model. Such models have been used to describe superfluid 4He. Furthermore, the addition of disorder leads to the so-called dirty-boson model, used to describe, for example, the phenomenon whereby one class of cuprate superconductors becomes either superconducting or insulating as the temperature is lowered, depending on the degree of disorder.

A principal method of attacking all such Hamiltonians is the quantum Monte Carlo (QMC) method, which is based on a stochastic approach. Unfortunately, the effort to construct a probability runs into a considerable obstacle in the Hubbard model: for some moves in the stochastic walk, the weight is negative, precluding a probability interpretation. This so-called fermion sign problem has limited the application of QMC calculations, especially since it gets exponentially worse as the temperature is lowered. So far, every attempt to solve the fermion sign problem for lattice models has been either unsuccessful or so difficult to implement that it has not been attempted. Another critical problem is the first-principles computation of model parameters such as the onsite Coulomb interaction, to say nothing of accurate estimates of what is omitted from so simple a model. There have been pioneering studies using both quantum chemistry methods and local-density approximations, but these have clearly demonstrated the limitations of both. The path from quantum chemical techniques to the construction of accurate models remains an open question.

In addition to the lattice models described above, Monte Carlo methods (either variational or fixed-node diffusion) have been successfully applied to real materials with long-range Coulomb interactions, such as covalent semiconductors and metals. The extension of these methods to highly correlated systems is currently an active area of research. Another direction is a hybrid approach combining a Hubbard-type interaction with a standard local-spin-density functional Hamiltonian; this semiempirical approach allows the interpretation of spectroscopic experiments.

Computational Forecast

Monte Carlo methods are intrinsically well suited to take advantage of the unprecedented increases in computing power afforded by emerging MPP environments. The computation time for path-integral Monte Carlo calculations scales as N^3 L, where N is the number of sites and L is the number of "time slices" in the computation. In going to lower temperatures, L will increase and will necessitate larger N in order to see longer-range correlations. But, in fact, the fermion sign problem has prevented large-scale application to low-temperature systems. At present, there is essentially no agreement on any features of the ground state or the phase diagram of even the simplest Hubbard model as a function of the concentration of electrons. At low temperatures, the ill-conditioned nature of the matrices requires extensive computation, with costs growing exponentially even for small systems. These potential complexities are reminiscent of the lattice quantum chromodynamics problem, which has spawned both the extensive use of parallel computers and the development of special computer architectures to achieve multiteraflop speeds.
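The exponential character of the sign problem can be made explicit with a standard heuristic: the average sign is expected to decay as exp(-beta * N * delta_f), where beta is the inverse temperature, N the system size, and delta_f a free-energy density difference between the fermionic system and its sign-free reference, so the number of samples needed at fixed statistical error grows as the inverse square of the average sign. The numerical values below are assumptions chosen purely for illustration.

    import numpy as np

    def samples_needed(beta, n_sites, delta_f, target_error=0.01):
        """Heuristic QMC cost estimate: with <sign> ~ exp(-beta*N*delta_f),
        holding the relative statistical error fixed costs roughly
        1/(error * <sign>)^2 samples. delta_f is an assumed gap."""
        avg_sign = np.exp(-beta * n_sites * delta_f)
        return (1.0 / (target_error * avg_sign)) ** 2

    for beta in (1, 2, 4, 8):
        print(f"beta = {beta}: ~{samples_needed(beta, 64, 0.05):.2e} samples")

Halving the temperature at fixed size squares the cost, which is why no plausible increase in raw computer power, by itself, resolves the problem.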
The variational and fixed-node methods do not suffer from the sign problem but provide only a variational solution. In contrast with path-integral methods, these calculations scale as N^3, where N is the number of electrons, and simulations with N ~ 1,000 electrons have been performed.
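A minimal illustration of the variational flavor of these methods, stripped down to a single particle so that it is exactly checkable: Metropolis sampling of |psi|^2 for a trial wavefunction psi = exp(-alpha x^2) applied to the one-dimensional harmonic oscillator (hbar = m = omega = 1). Real variational and fixed-node calculations differ enormously in scale, but the structure, sample, accumulate the local energy, and minimize over the variational parameter, is the same.

    import numpy as np

    rng = np.random.default_rng(0)

    def vmc_energy(alpha, n_steps=200_000, step=1.0):
        """Variational Monte Carlo for the 1D harmonic oscillator with
        trial psi = exp(-alpha x^2); the local energy is
        E_L(x) = alpha + x^2 * (1/2 - 2 alpha^2)."""
        x, total = 0.0, 0.0
        for _ in range(n_steps):
            x_new = x + step * rng.uniform(-1, 1)
            # accept with probability |psi(x_new)|^2 / |psi(x)|^2
            if rng.uniform() < np.exp(-2 * alpha * (x_new**2 - x**2)):
                x = x_new
            total += alpha + x * x * (0.5 - 2 * alpha * alpha)
        return total / n_steps

    for alpha in (0.3, 0.5, 0.7):
        print(f"alpha = {alpha}: E = {vmc_energy(alpha):.4f}")
    # The minimum, E = 0.5 at alpha = 0.5, reproduces the exact ground state,
    # where the local energy is constant and the statistical variance vanishes.

The zero-variance property visible at alpha = 0.5 is one reason variational methods remain attractive despite yielding only an upper bound on the energy.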