4 Research Tools, Methods, Infrastructure, and Facilities

In the past decade, significant advances have been made in the characterization (Section 4.1), synthesis and processing (Section 4.2), and computational (Section 4.3) capabilities available to materials researchers. These new tools have enabled previously unachievable materials insights, and this is especially true when they are used in combination: for example, in situ measurement and control of novel synthetic strategies, or advanced data analytics techniques applied simultaneously with advanced imaging diagnostics (Section 4.4). Development of these tools is a research frontier in its own right, meriting further investment. This chapter highlights a number of methodological advances and the impact they have had on the materials community. One consequence of continually improving tools is the need for infrastructure reinvestment to ensure the availability of state-of-the-art tools (Section 4.5). Novel modalities for such investment are discussed. Last, the current and emerging capabilities available at intermediate-scale facilities as well as national user facilities are highlighted (Section 4.5).

4.1 CHARACTERIZATION TOOLS

4.1.1 Electron Microscopy

Transmission electron microscopy (TEM) is a key technique in all areas of materials science because it helps reveal how a material's internal structure is determined by synthesis and processing and how it correlates with its physical
properties and performance. Imaging, diffraction, and spectroscopy can all be carried out across length scales ranging from interatomic distances to micrometers, often within a single transmission electron microscope and on the same sample.

The past decade has seen tremendous advances in instrumentation, in particular spherical and chromatic aberration correctors, monochromators, and new detectors (see Figure 4.1).

FIGURE 4.1 Improvements in the spatial resolution of light and electron microscopy. SOURCE: Reprinted by permission from Springer Nature: D.A. Muller, 2009, Structure and bonding at the atomic scale by scanning transmission electron microscopy, Nature Materials 8:263, © 2009.

One example is the continued evolution of aberration-corrected microscopes. Advanced aberration-corrected scanning transmission electron microscopes (STEMs) can now achieve 0.5 Å resolution in TEM and
STEM modes, along with 0.1 eV energy resolution.1,2 Other examples of advances afforded by spherical aberration correction include picometer precision in determining atom column positions and tremendous improvements in the quality of atomic-scale electron energy-loss and energy-dispersive X-ray spectroscopic images, the latter in combination with new large solid-angle detectors. Understanding what sets the ultimate limit in the quest to further improve instruments toward even higher spatial resolution is a topic of ongoing research in the field. Advances in monochromators now allow for energy resolution of 30 meV (or better) in electron energy loss spectroscopy, sufficient to study phonons. Faster cameras and new types of sample holders developed over the past decade provide new opportunities for in situ studies of a wide range of processes in materials. Important ongoing developments include high-speed pixel array detectors that allow for detecting scattered electrons as a function of position in the detector plane. These new detectors allow for new imaging modes (e.g., differential phase contrast to image electric polarization) and, more generally, improve the visibility of features and the interpretability of electron microscope images.

In parallel with instrumentation, electron microscopy techniques have also made substantial advances. One of the main advantages of electrons for imaging, namely their strong interaction with matter, was long thought to pose a challenge in the quantitative interpretation of image intensities. Truly quantitative interpretation of image intensities was demonstrated in the past decade, opening up quantitative analysis of atomic-resolution images not only in terms of the position, but also the content, of atomic columns in a sample. Another highly active research area is three-dimensional (3D) imaging (electron tomography)3 of crystalline samples.
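The resolution figures quoted here are ultimately tied to the electron wavelength and the usable aperture angle. As a rough illustration (the 30 mrad aperture semi-angle and the simple Rayleigh-type criterion are assumptions of this sketch, not values from the text; real instruments are limited by residual aberrations as well), the relativistic electron wavelength and a diffraction-limited resolution estimate can be computed as follows:

```python
import math

# Relativistic electron wavelength: lambda = h / sqrt(2*m*e*V*(1 + e*V/(2*m*c^2))).
# Constants in SI units (CODATA values).
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
Q_E = 1.602176634e-19   # elementary charge, C
C = 2.99792458e8        # speed of light, m/s

def electron_wavelength_pm(volts: float) -> float:
    """Relativistic de Broglie wavelength of an electron, in picometers."""
    energy = Q_E * volts
    p = math.sqrt(2.0 * M_E * energy * (1.0 + energy / (2.0 * M_E * C**2)))
    return H / p * 1e12

lam = electron_wavelength_pm(300e3)   # ~1.97 pm at 300 kV
# Rayleigh-type diffraction limit for an assumed 30 mrad aperture semi-angle.
d = 0.61 * lam / 0.030                # ~40 pm, i.e., sub-angstrom
```

At 300 kV the wavelength is only about 2 pm, so the sub-angstrom resolutions quoted above correspond to a few tens of wavelengths, which is why aberration correction rather than wavelength is the practical bottleneck.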
A number of different approaches are actively being developed in the field. Such approaches have been applied to analyze the 3D positions of atoms in nanoparticles. Tomography has also been used in diffraction contrast imaging of crystal defects and, in combination with in situ straining, has allowed for dynamic visualization of the interactions of dislocations with grain boundaries.4

1 S.J. Pennycook, 2017, The impact of STEM aberration correction on materials science, Ultramicroscopy 180(1):22-33.
2 Q.M. Ramasse, 2017, Twenty years after: How "Aberration correction in the STEM" truly placed "A synchrotron in a microscope," Ultramicroscopy 180(1):41-51.
3 E. Maire and P.J. Withers, 2014, Quantitative X-ray tomography, International Materials Reviews 59(1):1-43.
4 A. King, P. Reischig, S. Martin, J.F.B.D. Fonseca, M. Preuss, and W. Ludwig, 2010, "Grain Mapping by Diffraction Contrast Tomography: Extending the Technique to Subgrain Information," in Challenges in Materials Science and Possibilities in 3D and 4D Characterization Techniques: Proceedings of the Risø International Symposium on Materials Science, hal-00531696, Risø National Laboratory for Sustainable Energy, Technical University of Denmark.
4.1.2 Atom Probe Tomography

Current and emerging research areas in the physical and life sciences increasingly require the capacity to quantitatively measure the structure and chemistry of materials at the atomic scale. This atomic-scale information enables nanoscience research across a wide range of disciplines including materials science and engineering, fundamental physics, chemical catalysis, nanoelectronics, and structural biology.

Atom probe tomography (APT) is the only currently available material analysis technique offering extensive capabilities for simultaneous 3D imaging and chemical composition measurement at the atomic scale. It provides 3D "maps" that show the position and elemental species of tens of millions of atoms from a given volume within a material, with a spatial resolution comparable to advanced electron microscopes (around 0.1-0.3 nm in depth and 0.3-0.5 nm laterally) but with higher analytical sensitivity (<10 appm). The development of commercially available pulsed-laser atom probe systems now allows for the 3D analysis of composition and structure at atomic resolution in nonconductive systems such as ceramics, semiconductors, organics, glasses, oxide layers, and even biological materials, in addition to metals and alloys. New focused ion beam (FIB) methods enable the fabrication of tailored site-specific samples for atomic-scale microscopy with much higher throughput than was previously possible.

The latest generation of atom probe instruments now offers increased detection efficiency across a wide variety of metals, semiconductors, and insulators, increasing the fraction of atoms detected from approximately 60 percent to approximately 80 percent, and increasing the sensitivity.
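For a sense of scale, the detection-efficiency and sensitivity figures above can be turned into a back-of-the-envelope count. The specimen volume, atomic density, and minimum-count criterion below are illustrative assumptions of this sketch, not values taken from the text:

```python
# Back-of-the-envelope APT counting exercise (all inputs illustrative).
volume_nm3 = 50 * 50 * 200        # assumed analyzed volume, nm^3
density = 60.0                    # assumed atomic density, atoms/nm^3 (typical metal)
efficiency = 0.80                 # detection efficiency of current-generation instruments

atoms_in_volume = volume_nm3 * density          # ~3e7: "tens of millions of atoms"
atoms_detected = atoms_in_volume * efficiency   # ~2.4e7 ions actually recorded

# Crude sensitivity estimate: if ~10 detected solute atoms are needed to claim
# a species is present, the minimum detectable concentration is roughly:
min_counts = 10
c_min_appm = min_counts / atoms_detected * 1e6  # well under the <10 appm quoted
```

The exercise shows why detection efficiency matters so much: every percentage point of efficiency directly lowers the trace concentration that can be distinguished from counting noise in a fixed specimen volume.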
Faster and variable repetition rates dramatically increase the speed of data acquisition, and advanced laser control algorithms provide measurably improved sample yields. Atom probe experiments are inherently low throughput. The rate at which experiments can be conducted limits the outcomes that can be achieved, so improvements in the speed of acquisition and in the yield of successful data sets promise an enormous step forward. Higher detection efficiency also allows greater sensitivity per unit volume, which is extremely useful for the measurement of trace quantities of materials. This is a great help in geoscience applications such as mineral dating, and in the measurement of nanoparticles and quantum devices.

One opportunity includes the development of new detectors or detector technologies that could push atom detection limits closer to 100 percent and raise the possibility of achieving kinetic energy discrimination, which would permit deconvolution of overlapping isotopes. Another significant opportunity lies in the development of multimodal instruments incorporating either a transmission electron microscope or a scanning electron microscope (SEM) column directly into an APT system, or vice versa, to provide real-time or intermittent imaging or diffraction
data. This could allow assessment of specimen shape and crystallography during APT analysis, which could significantly increase reconstruction accuracy for complex heterogeneous materials. Additional opportunities include automation of procedures such as specimen alignment and application-specific control, which could free the user from monitoring the acquisition and encourage optimized analysis conditions as different material types or interfaces are exposed during analysis.

Currently, there is significant ongoing discussion about the development of APT standards. Such developments could help in establishing unified protocols for APT sample preparation, data collection, data reconstruction and analysis, and reporting of results worldwide. All these developments could lay the foundation for a bright future for APT as a characterization capability that can take not only materials scientists but also researchers from a variety of disciplines, including geology, biology, and solid-state materials, closer to the goal of determining the 3D composition, structure, and chemical state of a material atom by atom. Sophisticated data analysis tools are required to extend the reach of the technique beyond visualization and to extract the kind of meaningful, quantitative information that is required for materials design (e.g., for thermodynamic calculations or grain boundary engineering). Intensive research in this area promises to open up the application of this powerful technique to a wide range of scientific research areas.5

4.1.3 Scanning Probe Microscopies

Scanning probe microscopy (SPM) fundamentally relies on atomic interactions between a tip and a surface. In the two decades following its invention, advances focused on increasing spatial resolution, developing quantitative theory of sample-tip interactions, and improving the robustness of signals.
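The tip-surface interaction that SPM relies on is often introduced with a Lennard-Jones model. The sketch below is a generic textbook illustration, not a model of any specific instrument (the 12-6 form and reduced units are assumptions of this sketch); it locates the zero-force tip height and the point of maximum attraction, the features that separate contact from non-contact imaging regimes:

```python
import numpy as np

# Lennard-Jones 12-6 potential U(z) = 4*eps*((sigma/z)**12 - (sigma/z)**6) as a
# generic stand-in for the tip-sample interaction. Distances are in units of
# sigma and energies in units of eps; purely illustrative.
z = np.linspace(0.9, 3.0, 20001)
U = 4.0 * (z**-12 - z**-6)
F = -np.gradient(U, z)                        # force on the tip (positive = repulsive)

i_zero = np.where(np.diff(np.sign(F)))[0][0]  # sign change: equilibrium separation
i_pull = np.argmin(F)                         # most negative force: maximum attraction

z_eq = z[i_zero]                              # ~2**(1/6) ~ 1.12 (potential minimum)
z_pull = z[i_pull]                            # ~(26/7)**(1/6) ~ 1.24 (pull-off point)
```

Below z_eq the tip is in the steep repulsive regime exploited by contact-mode imaging; between z_eq and z_pull the attractive gradient is what non-contact and tapping modes sense.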
5 A. Devaraj, D.E. Perea, J. Liu, L.M. Gordon, T.J. Prosa, P. Parikh, D.R. Diercks, et al., 2018, Three-dimensional nanoscale characterisation of materials by atom probe tomography, International Materials Reviews 63(2):68-101.

The first wave of SPM expansion, extending the range of properties probed beyond surface structure, resulted in electric force microscopy, scanning Kelvin force microscopy, piezoresponse force microscopy, scanning capacitance microscopy, and so on. The past decade witnessed a dramatic expansion of the properties that can be probed, achieved by exploiting the frequency dependence of imposed and detected signals, achieving low detection limits, increasing scan and detection speed, and managing spatially and temporally resolved functional data sets. These advances allowed access to property functions, rather than just constants, at nanometer resolution, driving advances including the following:
• Not only capacitance and charge, but also the real and imaginary components of the dielectric function in organic, inorganic, and biological systems from impedance probes;
• Spatially resolved quantum efficiency of photo-generated charge in solar cell materials;
• Ultrasonic force detection for subsurface imaging;
• Electrochemical strain and ionic diffusion in battery materials;
• Dynamic processes, including surface diffusion and real-time nucleation and growth in phase transformations;
• Ferroelectric switching and domain wall dynamics;
• Flexoelectricity, in water or ambient environments, of organic or inorganic materials;
• Force modulation quantifying elastic moduli and energy dissipation locally;
• Nuclear magnetic resonance, spin-resolved STM, and spectroscopy of magnetic atomic particles;
• Multi-tip scanning tunneling microscopes (STMs) quantifying transport and electronic structure of nanotubes, graphene, and two-dimensional (2D) materials; and
• Switching of polaritons in 2D materials and others with near-field infrared spectroscopy.

Many of the recent SPM advances have been made routine and will continue to provide facile characterization to drive progress in materials research and application in the next decade. Some challenges on the horizon require additional advances. The expansion of in situ/in operando measurements that approach realistic conditions in terms of chemical environment, temperature, and pressure would eliminate the need to extrapolate simplified measurements to the conditions of real applications. The opportunity space in this area includes battery materials, fuel cells, corrosion, catalysis, thin film growth, and nanoelectronics fabrication. Imaging rates are continuously increasing, but they are not routinely at video rates.
As more scanning probes achieve this speed, the range of dynamic processes that can be quantified will increase.

While an optimal pathway is not yet clear, the potential implementation of quantum computation requires characterization of quantum mechanics-based behavior in a variety of settings: quantum optics at multiple frequencies, electronic transport in various materials configurations, and manipulation of matter at the atomic scale. The materials sets for this application range from graphene and other van der Waals materials, to topological insulators, to silicon qubits.

SPMs that use mechanical interactions to acquire subsurface images can produce tomographic data, taking characterization to an additional dimension.
Taking property tomography to the next level will advance understanding in thin film heterostructures, cells and biological materials, composites, and solar cells. Many SPM techniques have evolved to probe structure and multiple property functions simultaneously, creating large data sets of interconnected information. Integrating the concepts of big data and machine learning could yield unexpected insight into complex behavior in functional materials.

4.1.4 Time-Resolved, Especially Ultrafast Methods

In the past decade, significant advances in time-resolved, ultrafast methods have been achieved, with picosecond resolution routine, femtosecond resolution common, and attosecond resolution emerging. These methods enable the study of atomic-scale dynamics of materials. Ultrafast, atomic-scale dynamical motions underlie the performance of all functional materials and devices, and the ability to resolve them opens previously untapped potential to enhance materials performance and create new functionality. Such methods are available through tabletop systems, owing to advances in laser technology, as well as through an emerging number of X-ray free electron laser user facilities, beginning with the Linac Coherent Light Source (LCLS) at the Stanford Linear Accelerator Center (SLAC). Upgrades to LCLS are already under way to enhance its capabilities. More intense and coherent synchrotron sources are also becoming available, which will enable a wider variety of time-resolved experiments.

Applications of ultrafast spectroscopy have been very useful in investigating ultrafast biological processes, such as photo-induced proton and electron transfer or excitation energy transfer. These demonstrations have shown how time-resolved spectroscopic techniques provide an understanding of such processes.
All of the electron transfer steps, especially the initial ones, are ultrafast, and early femtosecond pump-probe experiments revealed the details of this process.6 Another, more recent example of the impact of ultrafast methods is in atomic and ionic diffusion, which is fundamental to the functionality, synthesis, and stability of a wide range of materials. In particular, diffusion of electroactive ions in complex electrode materials is central to the function of fuel cells, batteries, and membranes used for desalination and separations. While much is known from first-principles modeling and simulation about how ions diffuse through a lattice,7 little is known experimentally about the atomic-scale processes involved in ion diffusion.

6 W. Holzapfel, U. Finkele, W. Kaiser, D. Oesterhelt, H. Scheer, H.U. Stilz, and W. Zinth, 1990, Initial electron-transfer in the reaction center from Rhodobacter sphaeroides, Proceedings of the National Academy of Sciences U.S.A. 87(13):5168-5172.
7 G. Sai Gautam, P. Canepa, A. Abdellahi, A. Urban, R. Malik, and G. Ceder, 2015, The intercalation phase diagram of Mg in V2O5 from first-principles, Chemistry of Materials 27(10):3733-3742.

Individual ion-hopping events between adjacent interstitial sites
may approach time scales of approximately 100 fs and are associated with significant changes in the crystal strain field,8 which in turn can influence the dynamics of neighboring ions.

The large response of many complex materials to electromagnetic radiation raises the possibility of ultrafast control of materials properties through the application of short, intense photon pulses. This new field of materials research is showing promise in a number of areas, especially strongly correlated electron materials. One example is multiferroic materials, which have promising potential applications in which magnetic order is controlled by electric fields. However, the underlying physics and the ultimate speed of magnetoelectric coupling remain largely unexplored. Ultrafast resonant X-ray diffraction has revealed spin dynamics in multiferroic TbMnO3 coherently driven by an intense few-cycle terahertz light pulse tuned to resonance with an electromagnon mode.9 The results show that atomic-scale magnetic structures can be directly manipulated with the electric field of light on a subpicosecond time scale.

Applications of X rays to quantum materials research include ongoing efforts to understand high-temperature superconductivity, the recent detection and spatial mapping of spin currents using X-ray spectromicroscopy, and the direct demonstration and discovery of new electronic phases of topological quantum matter with angle-resolved photoelectron spectroscopy. Despite this important progress in understanding fundamental material physics, the direct impact of X-ray tools on quantum information technologies has been very limited to date. This is because X-ray tools presently lack the spatial resolution to probe quantum matter on the relevant length scales.
The combined spectral, spatial, and temporal sensitivity enabled by emerging high-brightness X-ray sources will dramatically change this situation. X-ray beams are currently typically 10-100 µm in size. In most cases, this is much larger than the underlying quantum coherence length, and any quantum information is averaged out. The new sources will enable powerful spectroscopic nanoprobes with few-nanometer spatial resolution. These nanoprobes will be able to measure the decoherence of wavefunctions, the influence of device morphology on emergent quantum phenomena, and the motion of quantum information at the heart of emerging quantum technologies. These experiments will investigate not only the spatial and temporal fluctuations of idealized, pure materials but also their manifestation in real-world devices.

8 A. Van der Ven, J. Bhattacharya, and A.A. Belak, 2012, Understanding Li diffusion in Li-intercalation compounds, Accounts of Chemical Research 46(5):1216-1225.
9 T. Kubacka, J.A. Johnson, M.C. Hoffmann, C. Vicario, S. De Jong, P. Beaud, S. Grübel, et al., 2014, Large-amplitude spin dynamics driven by a THz pulse in resonance with an electromagnon, Science 343(6177):1333-1336.
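A useful companion to the ion-hopping discussion in this section is the standard transition-state picture: the hop itself lasts on the order of the inverse attempt frequency (of order 100 fs), while the residence time between hops is exponentially longer. The barrier, attempt frequency, and jump distance below are illustrative assumptions, not values from the text:

```python
import math

# Arrhenius/transition-state sketch of interstitial ion hopping.
# All parameter values are illustrative.
K_B = 8.617333262e-5   # Boltzmann constant, eV/K
nu0 = 1.0e13           # attempt frequency, Hz (inverse of a ~100 fs lattice vibration)
e_a = 0.5              # migration barrier, eV
temp = 300.0           # temperature, K
a_jump = 3.0e-10       # jump distance, m

rate = nu0 * math.exp(-e_a / (K_B * temp))   # successful hops per second
residence = 1.0 / rate                       # mean wait between hops
diffusivity = rate * a_jump**2 / 6.0         # 3D random-walk estimate, m^2/s
```

With these numbers the hop rate is a few times 10^4 per second, so a ~100 fs hop is separated from the next one by tens of microseconds. That contrast between the hop duration and the residence time is precisely why catching individual hopping events requires ultrafast probes rather than time-averaged measurements.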
4.1.5 3D/4D Measurements, Including In Situ Methods

The past decade has seen tremendous growth in 3D and four-dimensional (4D) characterization capabilities that are specifically geared toward quantifying mesoscale microstructure and its response under stimuli. This growth was made possible by significant advances in computer-based control, sensing, and data acquisition, and has resulted in novel experimental toolsets and methodologies that were not possible a decade ago. These advances have enabled a move from qualitative observations to digital data sets that can be mined, filtered, searched, quantified, and stored with increased fidelity and operability.

Mesoscale 3D and 4D characterization of materials with X rays can be divided into two subfields: tomography10 and diffraction-based microscopy. The former, commonly referred to as micro-CT, involves the collection of multiple radiographs with microscale resolution, followed by computer-based reconstruction. Laboratory-based systems can readily produce 3D renderings of soft and lattice materials, but hard materials absorb X rays much more efficiently and require higher-energy sources and, in some cases, synchrotron experiments. By comparison, diffraction-based 3D X-ray microscopy involves scanning a beam across a specimen and reconstructing the polycrystalline microstructure from reciprocal lattice information. Variation in the placement of the detectors from near-field to far-field positions allows one to determine the orientation of individual voxels within the specimen, and close inspection of the far-field pattern facilitates the measurement of local elastic strains.

Serial sectioning, combined with the acquisition of optical or electron micrographs and/or orientation and chemical maps, has emerged as an alternative way of collecting and constructing 3D data sets.
FIB-SEMs allow one to shape and extract materials with submicron precision (see, e.g., Figure 4.2), but the milling rates of conventional FIBs are prohibitive for mesoscale studies. Faster techniques have emerged and now allow for 3D characterization of volumes ranging from cubic microns to fractions of cubic millimeters. For hard materials, the emergent technologies include ultrashort (femtosecond) laser ablation, plasma-source FIB (P-FIB), broad ion beam sectioning, and mechanical polishing. The maturation of diamond-knife microtome sectioning systems that operate inside the SEM has enabled the structural biology community to gather enormous 3D data sets from SEM imaging at the mesoscale, for example, sectioning of the entire brain of a larval zebrafish.11 Microtome sectioning is also emerging in the study of soft materials and metals.12

10 S.R. Stock, 2010, Recent advances in X-ray tomography applied to materials, International Materials Reviews 53(3):129-181.
11 D.G.C. Hildebrand, M. Cicconet, R.M. Torres, W. Choi, T.M. Quan, J. Moon, A.W. Wetzel, et al., 2017, Whole-brain serial-section electron microscopy in larval zebrafish, Nature 545(7654):345.
12 T. Hashimoto, G.E. Thompson, X. Zhou, and P.J. Withers, 2016, 3D imaging by serial block face scanning electron microscopy for materials science using ultramicrotomy, Ultramicroscopy 163:6-18.
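The computer-based reconstruction step behind micro-CT, mentioned above, is classically filtered back-projection. The sketch below is a minimal, self-contained illustration on a synthetic phantom (the disk phantom, geometry, and sampling choices are all assumptions of this sketch, not a production pipeline):

```python
import numpy as np

# Minimal filtered back-projection on an analytic phantom: a uniform disk of
# radius R, whose parallel-beam projection is p(s) = 2*sqrt(R^2 - s^2).
n_det, n_ang, R = 256, 180, 0.5
s = np.linspace(-1.0, 1.0, n_det)           # detector coordinate
ds = s[1] - s[0]

# The sinogram is the same at every angle because the disk is centered.
proj = 2.0 * np.sqrt(np.clip(R**2 - s**2, 0.0, None))

# Ramp filter (|frequency| weighting), zero-padded to avoid wrap-around.
n_pad = 2 * n_det
spectrum = np.fft.fft(proj, n_pad) * np.abs(np.fft.fftfreq(n_pad, d=ds))
filtered = np.fft.ifft(spectrum).real[:n_det]

# Back-project the filtered projection over all angles.
x = np.linspace(-1.0, 1.0, n_det)
X, Y = np.meshgrid(x, x)
recon = np.zeros_like(X)
for theta in np.linspace(0.0, np.pi, n_ang, endpoint=False):
    t = X * np.cos(theta) + Y * np.sin(theta)   # ray coordinate through (x, y)
    recon += np.interp(t, s, filtered)
recon *= np.pi / n_ang                          # discretize the angular integral

center = recon[n_det // 2, n_det // 2]          # inside the disk: close to 1
outside = recon[5, 5]                           # far from the disk: close to 0
```

Production codes add detector calibration, beam-hardening corrections, apodized filters, and cone-beam geometry, but the filter-then-smear structure shown here is the core of the reconstruction.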
FIGURE 4.2 An experimental FIB/SEM (focused ion beam-scanning electron microscope) tomography slice of a sample prepared by embedding silica beads in epoxy (a); the reconstructed 3D structure of the silica phase (b); air velocity field in a plane perpendicular to the inlet direction, computed for a 0.03 m/s inlet velocity and output atmospheric pressure boundary conditions (c). SOURCE: A. Rezikyan, 2016, FIB/SEM tomography of porous ceramics, Microscopy and Microanalysis 22(s3):1884-1885, reproduced with permission.

After data collection, a number of steps are needed to extract meaningful information from a 3D data set. The typical flow of data processing involves registration, reconstruction, classification, and analysis. The raw data are often misaligned and distorted, and registration of fiducials must be employed to remove these distortions and misalignments. Reconstruction involves moving from the abstraction of a series of 2D images or maps to a 3D volume of data. In the case of serial sections, whatever property was measured at the surface of the slice is generally assumed to be constant through the entire thickness of that slice. As long as the slice thickness is small compared to the changes in the measured features, this is a reasonable assumption, but it is one that the practitioner should be mindful of when considering measurements within the data. Classification involves unambiguous identification of features of interest within the volume. This can be trivial; for example, precipitates with large density differences can easily be identified by backscatter imaging or tomography, and grain boundaries can be highlighted by placing a threshold on the orientation gradient in an orientation map. But, in many cases, two regions of interest do not have contrast that is easily differentiated.
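The "trivial" end of the classification step can be sketched concretely: for a two-phase volume with strong density contrast, a single global threshold separates the phases. The synthetic volume, noise levels, and Otsu-style threshold choice below are illustrative assumptions of this sketch; real data sets need registration first and usually far more robust segmentation:

```python
import numpy as np

# Toy classification of a noisy two-phase 3D volume by a global threshold
# chosen with Otsu's method (maximize between-class variance).
rng = np.random.default_rng(0)
vol = rng.normal(0.2, 0.05, size=(64, 64, 64))                       # matrix phase
vol[20:44, 20:44, 20:44] = rng.normal(0.8, 0.05, size=(24, 24, 24))  # precipitate

def otsu_threshold(data, bins=256):
    """Return the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(data, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                  # voxels at or below each bin
    w1 = w0[-1] - w0                      # voxels above each bin
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

t = otsu_threshold(vol)
mask = vol > t                 # classified voxels
fraction = mask.mean()         # measured precipitate volume fraction (~5.3% here)
```

When the two intensity distributions overlap, as they often do in real imaging modes, no global threshold exists, and this is exactly the point at which segmentation becomes "much more difficult than expected."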
Human intelligence is extremely well optimized for pattern recognition and classification; it can accept a very high level of anomalies within an image and, through context and prior knowledge, can easily infer and identify the features of interest within a set of images. However, the computer-based methods for determining regions of interest in most imaging modes do not have this context, and the segmentation of the volume is often much more difficult than expected.

The final step is analysis of the structures, which can involve a wide variety of measurements, including size, arrangement, shape metrics, and crystallographic
orientation textures and gradients. One of the greatest difficulties in 3D data processing and analysis is the lack of well-developed software packages and tools for 3D analysis. Materials researchers must develop custom codes and pipelines. While developing custom processing tools can have certain advantages, these are counterbalanced by the current massive duplication of effort across separate groups, which is further compounded by the lack of standards for data descriptions and file formats that would make interoperable tools easier to develop. The development of the DREAM.3D software package13 has been extremely beneficial in reversing this trend. Initially developed for the analysis of 3D electron backscatter diffraction data, the platform continues to evolve to analyze multispectral data, providing a set of standards for data and processing formats and documentation.

As an example of the success of these methods, 3D data sets of polycrystalline microstructures have been obtained for a variety of aerospace aluminum, titanium, and nickel alloys, and recent in situ 4D synchrotron experiments have elucidated the importance of residual stress and the redistribution of stresses during plastic deformation.14 A compact ultra-high-temperature tensile testing instrument, fabricated for in situ X-ray microtomography using synchrotron radiation, has been used to obtain real-time X-ray microtomographic imaging of the failure mechanisms of ceramic-matrix composites under mechanical load at temperatures up to 2300°C in controlled environments.15 It should also be noted that X-ray diffraction studies of hard materials have historically been conducted at multiuser synchrotron facilities, but significantly enhanced laboratory-scale systems have emerged in recent years and hold the promise of much more widespread availability and use of this technique.
13 M. Groeber and M. Jackson, 2014, DREAM.3D: A digital representation environment for the analysis of microstructure in 3D, Integrating Materials and Manufacturing Innovation 3:5, doi:10.1186/2193-9772-3-5.
14 Various works of Carnegie Mellon University, the Air Force Research Laboratory, Los Alamos National Laboratory, and Japanese groups.
15 A. Haboub, H.A. Bale, J.R. Nasiatka, B.N. Cox, D.B. Marshall, R.O. Ritchie, and A.A. MacDowell, 2014, Tensile testing of materials at high temperatures above 1700°C with in situ synchrotron X-ray micro-tomography, Review of Scientific Instruments 85(8):083702.

At the same time, improvements in experimental tools and accompanying modeling of mechanical properties at nanoscale to micron-scale dimensions have enabled mechanical properties to be quantified at a variety of length scales down to ~100 nm, enabling the quantitative study of micro- and mesoscale unit deformation processes with unprecedented spatial precision. Similarly, a variety of techniques have been reported that allow bulk physical properties such as thermal diffusivity to be accurately measured at micrometer-scale depths. This allows more comprehensive assessments to be made of the mechanical and physical properties of surface-modified materials treated by case hardening or ion implantation/plasma
processing. In addition, these advances allow improved quantitative evaluation of the physical and mechanical properties of the near-surface regions of ion-irradiated materials as a proxy for neutron irradiation conditions that could be difficult, costly, and time-consuming (multiple-year experiments) to obtain. As an example, in situ measurements and 3D X-ray characterization of individual grains in polycrystalline bulk materials have paved the way to a better understanding of microstructural heterogeneity and localized deformation in irradiated materials. Such information is critical to the prediction of material aging and degradation in nuclear power plants and to the design of new radiation-resistant materials for next-generation nuclear reactors. For instance, researchers have studied in situ the heterogeneous deformation dynamics of neutron-irradiated bulk materials using high-energy synchrotron X rays to capture the micro- and mesoscale physics and link it with the macroscale mechanical behavior of neutron-irradiated materials of relevance to reactor design.

It is clear that volumetric characterization at the meso- to macroscale via destructive and nondestructive experimental methods has matured tremendously in the past decade, enabling workflows that provide high-fidelity microstructural information across multiple length scales in a diverse range of material systems, but there are still many barriers that have limited its utilization within the materials community to early adopters and domain specialists. For both destructive and nondestructive workflows, a sorely needed advance over the next decade is in situ data analysis during the data collection procedure. The overwhelming majority of volumetric data collection is performed asynchronously and often independently of the analysis and ultimate utilization of the information.
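One way to make collection less blind is to choose each new measurement adaptively rather than rastering. The sketch below is a deliberately simple 1D toy, not a published dynamic-sampling algorithm: the uncertainty proxy, scoring rule, and "hidden property" are all assumptions of this sketch:

```python
import numpy as np

# Toy adaptive sampling in 1D: each new measurement is placed where the current
# estimate is most uncertain, approximated by (value jump) x (gap size) between
# already-sampled points. Purely illustrative.
n = 101
xs = np.linspace(0.0, 1.0, n)
truth = np.where(xs < 0.6, 0.2, 0.9)          # hidden property with a sharp interface

sampled = {0: truth[0], n - 1: truth[n - 1]}  # seed with the two endpoints
for _ in range(14):
    idx = np.array(sorted(sampled))
    vals = np.array([sampled[i] for i in idx])
    jumps = np.abs(np.diff(vals)) + 1e-3      # small floor so flat gaps still score
    gaps = np.diff(idx)
    scores = np.where(gaps > 1, jumps * gaps, -1.0)   # a gap of 1 has no midpoint
    k = int(np.argmax(scores))
    mid = (idx[k] + idx[k + 1]) // 2
    sampled[int(mid)] = truth[mid]            # "measure" at the chosen location

# Measurements pile up around the interface near x = 0.6, while the flat
# regions are sampled only coarsely.
near_interface = sum(1 for i in sampled if abs(xs[i] - 0.6) < 0.1)
```

Even this naive rule concentrates its 16 measurements around the interface. In 3D, where collection time grows so rapidly, the payoff of steering acquisition toward informative regions is correspondingly larger, which is the motivation for the machine-learning-driven approaches discussed here.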
"Smart" data collection is needed, in which data are refined in key regions to provide additional resolution where required, or additional modalities are brought to bear to capture other attributes of the material state. Dynamic sampling approaches, where data are collected efficiently and iteratively based on prior training using machine learning methods, have appeared in the literature for 2D data collection using a single modality,16 and these methods will provide even greater benefit in 3D because of the exponential growth in collection time. Other examples include the ability to detect anomalies and other rare features during data collection using lookup tables and dictionary-based approaches, which may allow the analysis to be refined dynamically for unknown features based on prior knowledge of the expected structure. Furthermore, truly incorporating and integrating multimodal "costly" information into 3D experiments (costly in instrumentation price, acquisition time, or surface preparation requirements) can only realistically be achieved using such integrated approaches. Other needed advancements include the utilization of machine learning in the classification of 16 Charles A. Bouman, School of Electrical and Computer Engineering, Purdue University.
microstructure; the development of efficient collection methods that more directly measure properties of interest; the collection and use of more signals, such as ultrasound and contact methods (nanoindentation); continued development of larger sampled volumes to generate higher-level statistics; and closed-loop material removal for serial sectioning.

4.2 SYNTHESIS AND PROCESSING TOOLS

Given the increase in characterization tool capability and capacity over the past decade, there has been a corresponding need to advance synthesis and processing capabilities. These advanced tools not only facilitate accelerated materials discovery but also enable materials control with resolution consistent with advanced measurements. Often these synthetic advances are facilitated by advanced computational methods for predicting new materials.

4.2.1 Precision Synthesis

Full realization of the promise of precision materials synthesis (size, shape, composition, architecture, etc.) across length scales will transform materials science in a revolutionary way. Specific examples of the possibilities and power of precision synthesis include molecular engineering of catalytic materials for selective reactivity, control of electrochemical energy conversion with atomically precise materials, new biodegradable polymers whose degradation rate is controlled via sequence control, precision placement of nitrogen-vacancy center defects in diamond to create materials for quantum information, and self-assembly of peptide amphiphiles into fibrous and micellar structures with extraordinary bioactivity. These are the tip of an iceberg beginning to appear. Realization of the full "iceberg" is a surpassingly ambitious objective but an enormously promising direction in which to invest.
It means not only putting every atom in a material where you want it, but also knowing where you want to put the atoms, and why, and being able to determine whether you have really put them there. Therefore, what sounds at first hearing to be a "synthesis" challenge is actually a challenge for synthetic materials chemistry, theory, simulation, and instrumental characterization. The "across length scales" part brings several new aspects into this synthesis challenge. Researchers need to gain the understanding necessary to predict how precision control of structure at the atomic and molecular level plays out in macroscopic behavior and properties. They need to understand what level of precision is needed to achieve particular goals. Furthermore, as the Department of Energy (DOE) Basic Energy Sciences Basic Research Needs report on synthesis science points out, "The challenge of mastering hierarchy [meaning across length scales] crosscuts all classes of syntheses. In interfacial, supramolecular,
biomolecular, and hybrid matter, hierarchy is the characteristic feature that leads to function."17 This goal will require different synthesis techniques and chemical tools, all operating at different length scales. For example, covalent electronic chemistry would be used to make the building blocks, followed by noncovalent assembly of the building blocks into larger structures. Of course, such research is being done already, but without the kind of real-time instrumental monitoring and control needed to achieve optimum results. Atomic layer deposition and molecular beam epitaxy (MBE) are atomic-level equivalents of additive manufacturing (AM) at the macroscale. Hierarchical synthesis processes that have not traditionally been viewed as kinetic processes, such as self-assembly, should be considered as such; the fact that they are driven by thermodynamics does not mean they do not have kinetic trajectories (often sluggish ones). Kinetically stabilized and metastable phases may be the desired product. A wide variety of material types will be incorporated into one finished product, necessitating wider understanding or, more likely, collaboration on the part of the synthesizers, in order to master the creation of these materials.

4.2.2 3D Structures from DNA Building Blocks

DNA origami is the folding of a long strand of DNA, the scaffold, into nanoscale objects through the use of short-chain DNA oligonucleotides, the so-called staple strands. The past decade has seen significant advances in the design toolbox for building 3D structures; each development increases the number of degrees of freedom and enables construction of more intricate shapes. The first approach to 3D structures was achieved by bundling DNA helices in a honeycomb structure. Curved objects were achieved by adding or deleting strands between the helical scaffold bundles.
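The core bookkeeping behind such designs can be illustrated with a minimal sketch: a staple binds its scaffold segment as the antiparallel reverse complement. The function name and sequences below are invented for illustration and are not drawn from any published design.

```python
# Toy illustration of the central operation in DNA origami design:
# the staple that binds a scaffold segment is its reverse complement,
# because DNA strands hybridize antiparallel.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def staple_for(scaffold_segment: str) -> str:
    """Return the staple sequence that hybridizes to a scaffold segment."""
    return "".join(COMPLEMENT[base] for base in reversed(scaffold_segment))

segment = "ATGCCGTA"          # invented example sequence
print(staple_for(segment))    # -> TACGGCAT
```

Design packages automate this operation over thousands of staples while also managing crossovers between helices, which is where the real complexity of the design problem lies.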
Another approach involved the use of curved rings to enable control of the shape. From the bundled helical structures came wire-frame designs in either a grid-iron pattern or a triangular mesh. This advance has important implications for biomedical applications because structures created through these approaches offer higher resistance to cation depletion under physiological conditions. The advances in scaffold structures have been complemented by design software packages such as caDNAno, DAEDALUS, and vHelix. DNA LEGO-like bricks (Figure 4.3) were developed in the past decade.18 Each brick consists of a short single DNA strand with four domains: two head and two tail domains. Using 17 See U.S. Department of Energy, 2017, Basic Research Needs Workshop on Synthesis Science for Energy Relevant Technology, https://science.energy.gov/~/media/bes/pdf/reports/2017/BRN_SS_Rpt_web.pdf, Figure 6. 18 Y. Ke, L.L. Ong, W.M. Shih, and P. Yin, 2012, Three-dimensional structures self-assembled from DNA bricks, Science 338:1177-1183, doi: 10.1126/science.1227268.
FIGURE 4.3 LEGO-like building blocks. (a) Four domains of a single-stranded DNA brick. (b) Representation of the four possible orientations as LEGO bricks. The tail domains (1 and 4) are represented by the protruding pins and the heads by the holes in the blocks. The possible connections are between blocks 1 and 3 and between blocks 2 and 4, and the shape of the protruding part must match the shape of the hole. (c) The connection of a south and an east block to form the required 90-degree angle, and the joining of two complementary domains a and a*. (d) Connecting the blocks to form a structure. The first and fifth blocks are the same. SOURCE: From Y. Ke, L.L. Ong, W.M. Shih, and P. Yin, 2012, Three-dimensional structures self-assembled from DNA bricks, Science 338:1177-1183, doi: 10.1126/science.1227268, reprinted with permission from AAAS.
a selection of bricks from a menu of preformed motifs, it is possible to self-assemble almost arbitrarily complex 3D structures without the need for a scaffold. The origami structures have been used as templates for producing Au nanoparticles, Au nanorods, and quantum dots; as molds in which to synthesize nanoparticles; as meshes for microlithography; as biosensors; and for drug delivery. In addition, active systems, walkers, machines, and factories have already been demonstrated on origami platforms.19 One area that has not been given much attention is the possibility that DNA used as a structural material may also act in an unintended signaling fashion. Furthermore, origami has been demonstrated with long-chain RNA scaffolds, first using DNA staples and later using RNA staples.

4.2.3 2D Shape-Changing Materials

A class of reconfigurable metamaterials that has advanced considerably over the past decade is based on 2D films that fold or bend into predetermined 3D structures. These materials have potential applications in biomedical devices (e.g., self-deployable stents), energy storage (e.g., stretchable Li-ion batteries), robotics, and architecture (smart window coverings to control light reflection). Advances in the ability to fabricate 3D structures at micro- and nanometer length scales were achieved.20 These include advances in AM techniques and in inks based on metals, metal oxides, biomaterials, and biocompatible polymers. Other approaches to developing 3D nanostructures exploit bending and folding of thin plates by the action of residual stresses or capillarity effects, as well as self-actuating materials. It is also possible to generate 3D structures that respond to an external stimulus such as heat or water.
Origami (folding)- and kirigami (cutting and folding)-inspired designs have expanded the structures that can be formed and have increased the materials space to include important materials needed for future advanced technologies. An example of a mechanically guided processing scheme is shown schematically in Figure 4.4. The formation of robust 3D structures requires minimizing both the overall strain and local strain concentrations. This minimization has been achieved through the use of finite element modeling, which has shown that the length and width of the kirigami cuts are important. It turns out that longer cuts are better than shorter ones, as stress concentrations are avoided, and wider 19 H. Gu, J. Chao, S.-J. Xiao, and N.C. Seeman, 2010, A proximity-based programmable DNA nanoscale assembly line, Nature 465:202-205, doi: 10.1038/nature09026. 20 Y. Zhang, F. Zhang, Z. Yan, Q. Ma, X. Li, Y. Huang, and J.A. Rogers, 2017, Printing, folding and assembly methods for forming 3D mesostructures in advanced materials, Nature Reviews Materials 2:17019.
FIGURE 4.4 Schematic illustration of steps for fabricating three-dimensional mesostructures by controlled mechanical buckling of two-dimensional precursors formed using lithographic techniques. SOURCE: Reprinted by permission from Springer Nature: Y. Zhang, F. Zhang, Z. Yan, Q. Ma, X. Li, Y. Huang, and J.A. Rogers, 2017, Printing, folding and assembly methods for forming 3D mesostructures in advanced materials, Nature Reviews Materials 2:17019, © 2017. cuts are better than narrower ones, as the maximum strain is reduced. The minimization of strain owing to the introduction of kirigami cuts is shown in Figure 4.5 for 2D square silicon membranes.21 This use of finite element modeling is advancing the design of robust 3D structures. One application of kirigami structures has been as window blinds that control the sunlight entering a room, creating an adaptive, energy-saving structure. To control the tilt of the louvres, a lattice of linear cuts on an elastomeric sheet is augmented by notches on one side or the other of the sheet, a technique referred to as kiri-kirigami to indicate cuts on cuts.22 When stretched, the cuts open into diamond-shaped holes bounded by narrow segments that undergo out-of-plane buckling and twisting, whose directions are controlled by the placement of the notches. Importantly, in this design the direction of the twist of the joints is independent of the loading direction. The reflectance of the stretched sheet depends on the direction of the twist. This can be controlled passively through mechanical stretching or by having cuts on the front and back of the kiri-kirigami structure. Figure 4.6(a) shows a schematic illustration of the effect of loading in one and two 21 Y. Zhang, Z. Yan, K. Nan, D. Xiao, Y. Liu, H. Luan, H.
Fu, et al., 2015, A mechanically driven form of kirigami as a route to 3D mesostructures in micro/nanomembranes, Proceedings of the National Academy of Sciences U.S.A. 112(38):11757-11764. 22 Y. Tang, G. Lin, S. Yang, Y.K. Yi, and R. Kamien, 2017, Programmable kiri-kirigami metamaterials, Advanced Materials 29:1604262.
FIGURE 4.5 Finite element analysis of the strain developed on folding a two-dimensional silicon square without and with kirigami cuts. Both structures comprised silicon/polymer bilayers, and the prestrain was 80 percent. The color represents the magnitude of the strain. SOURCE: Y. Zhang, Z. Yan, K. Nan, D. Xiao, Y. Liu, H. Luan, H. Fu, et al., 2015, A mechanically driven form of kirigami as a route to 3D mesostructures in micro/nanomembranes, Proceedings of the National Academy of Sciences U.S.A. 112(38):11757-11764. FIGURE 4.6 Kiri-kirigami structure. (a) Cutting and notch pattern. (b) and (c) When stretched in different ways, controllable and repeatable local tilting can be achieved. (d) A room with window wells where incoming light creates areas in shadow and areas cast in harsh light. (e) The same room but with windows covered by a carefully controlled kirigami structure that casts an even, soft light in the room. SOURCE: Y. Tang, G. Lin, S. Yang, Y.K. Yi, and R. Kamien, 2017, Programmable kiri-kirigami metamaterials, Advanced Materials 29:1604262, © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
directions on the distortion of the unit cell. Here it can be seen that the orientation of the twist is always the same; Figure 4.6(d) and (e) show light entering a room through windows without and with the kiri-kirigami louvres. Window treatments based on this design have been tested to determine the ability of such structures to control light. These experimental realizations of origami-kirigami structures have been accompanied by sophisticated theory, an example of which focuses on the special case in which kirigami cuts are constrained by the geometry of a honeycomb and are characterized by the disclinations and dislocations they create in that lattice.23 An end product of this work is an algorithm24 for arranging kirigami cuts to produce any topographic shape. Similarly, an algorithm has been developed that yields the folding pattern to produce any polyhedral shape in the fewest folds.

4.2.4 Additive Manufacturing

Several fabrication innovations have transformed the approach to producing complex components. During the past decade, AM of metallic components has grown from a fledgling research effort into a high-visibility commercial activity, with particularly high impact for aerospace and medical implant applications. AM is the deposition of material layer by layer or point by point to fabricate complex components directly from computer-aided design models. An example is provided in Box 2.6. The materials palette for AM extends beyond metals to polymers, ceramics, composites, and biomaterials, employing a variety of different assembly techniques. Even apartment buildings have now been additively manufactured in China.25 As indicators of the increased interest in AM, the current industrial growth rate is approximately 30 percent per year,26 and the number of peer-reviewed publications per year more than quadrupled between 2006 and 2016.
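The layer-by-layer principle can be made concrete with a minimal "slicing" sketch: intersecting one triangular facet of a CAD mesh with a horizontal plane to obtain a segment of the layer contour. This is an illustrative toy under simplifying assumptions (it ignores vertices lying exactly on the plane, closed-contour assembly, and path planning), not the algorithm of any particular AM system.

```python
# Minimal sketch of the slicing step that turns a triangulated CAD model
# into layer geometry. Names and the example facet are illustrative only.

def slice_triangle(tri, z):
    """Intersect one triangle (three (x, y, z) vertices) with the plane at
    height z; return the in-plane intersection segment, or None."""
    points = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:       # this edge crosses the plane
            t = (z - z1) / (z2 - z1)      # linear interpolation parameter
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None

# A single triangular facet spanning z = 0 to z = 2.
facet = [(0.0, 0.0, 0.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0)]

layer_height = 0.5
for layer in range(1, 4):
    z = layer * layer_height
    print(z, slice_triangle(facet, z))
```

A production slicer repeats this over every facet at every layer height, chains the resulting segments into closed contours, and only then generates deposition paths.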
In pursuit of AM for mainstream applications, four goals support more rapid and widespread usage of this technology:

1. Enhancing AM component performance through materials development;
23 See L. Hardesty, 2017, "Origami Anything: New Algorithm Generates Practical Paper-Folding Patterns to Produce Any 3-D Structure," MIT News, June 21, http://news.mit.edu/2017/algorithm-origami-patterns-any-3-D-structure-0622. 24 C. Modes and M. Warner, 2016, Shape-programmable materials, Physics Today 69(1):32-38. 25 See 3ders.org, 2015, "WinSun China Builds World's First 3D Printed Villa and Tallest 3D Printed Apartment Building," January 18, http://www.3ders.org/articles/20150118-winsun-builds-world-first-3d-printed-villa-and-tallest-3d-printed-building-in-china.html. 26 D.L. Bourell, 2016, Perspectives on additive manufacturing, Annual Review of Materials Research 46:1-18.
2. Developing new methodologies for certifying additive components for use;
3. Developing integrated computational materials engineering capabilities together with high-throughput characterization techniques to accelerate the development-to-deployment cycle of AM; and
4. Developing new processes and machines with increased deposition rates, build volumes, and mechanical properties.

Materials development, certification, and integrated characterization and modeling (goals 1-3) represent an important investment opportunity because of the potential to enable disruptive change in manufacturing. AM is still a very small market, at approximately $4 billion worldwide and growing at approximately 34 percent per year (compared to $11 trillion per year in worldwide manufacturing), but it is estimated to reach $33 billion by 2023.27 AM systems are limited by the costs of materials, rates of fabrication, reliability of processes, integration with other processes, and the constraints of layer-by-layer deposition. Historically, AM has been limited to small build envelopes at low deposition rates, with a limited set of materials sold by the equipment manufacturers. Next-generation systems explore controls, hardware, feedstock condition, and software to develop new machines with high deposition rates, large build volumes, and improved properties using available low-cost feedstocks. As systems are enhanced, additional applications become possible, increasing the number of companies interested in the technology and its potential impact. New control and robotic systems are driving advances in AM that have the potential to include out-of-plane deposition, resulting in true AM and the ability to deposit multiple materials in the same machine.
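As a back-of-envelope check (not taken from the cited forecast), compounding the quoted figures is broadly consistent with the projection:

```python
# Compound-growth sanity check on the market figures quoted above.
base = 4e9      # approximate current market size, dollars
rate = 0.34     # approximate annual growth rate
years = 7       # roughly the span from 2016 to 2023

projected = base * (1 + rate) ** years
print(f"${projected / 1e9:.0f} billion")  # lands near the cited ~$33 billion
```

The simple compounding gives on the order of $30 billion, so the cited figure implies growth sustained at roughly the quoted rate over the forecast window.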
A wide range of feedstocks, including irregular particulate morphologies and other product forms, has the potential to lower the overall cost of the materials involved. Development of smart or enhanced feed mechanisms will also increase reliability and reduce overall cost. All of these enhancements are directed at transforming AM from a 1- to 2-sigma process (30 to 70 percent success rate) into a 6-sigma process (3.4 failures per million). Achieving this goal will require specific focus on identifying design rules for new processes; developing robust tool paths via advanced slicing software to minimize residual stress and part defects; advancing machine design, process monitoring, and control to improve the reliability and repeatability of the deposition process; and applying layer-by-layer inspection and adaptive process control to correct part defects during manufacturing. The improvements in AM processes and machines will proceed in parallel with 27 See Markets and Markets, "3D Printing Market by Offering (Printer, Material, Software, Service), Process (Binder Jetting, Direct Energy Deposition, Material Extrusion, Material Jetting, Powder Bed Fusion), Application, Vertical, Technology, and Geography - Global Forecast to 2024," https://www.marketsandmarkets.com/Market-Reports/3d-printing-market-1276.html, accessed May 12, 2018.
the materials developments that expand the potential uses of the technology. One example of a needed improvement is better control of surface finish.

4.2.5 Cold Gas Dynamic Spraying

Cold gas dynamic spraying, commonly referred to as "cold spray," is a solid-state material deposition process in which powder particles are sprayed at high velocity onto a substrate. The powder particles plastically deform upon impact, creating a metallurgical bond between the powder and the substrate. The process uses an accelerated gas stream (N2, He, or air) to propel particles at speeds of 300 to 1,200 m/s toward a substrate, resulting in solid-state particle consolidation and rapid buildup of material. While cold spray is fundamentally a solid-state process, it is performed over a range of temperatures that can exceed 50 percent of the melting point of the material. The advantages of cold spray are low thermal impact on the substrate, no combustion fuels or gases, no melting of the coating material, and a resultant coating with high density and moderate compressive residual stresses. More importantly, cold spray can be applied for additive repair in a field environment. As cold spray deposition technology rapidly advances, many critical and intriguing scientific questions are being uncovered and remain to be answered. The physics of a single-particle impact is still a very open question. New fundamental experiments using laser-shock acceleration combined with ultra-high-speed cameras are allowing imaging and measurement of single-particle impact dynamics. These measurements are providing crucial experimental data to support, inform, and validate the many theoretical models that have been and are being developed to describe the cold spray deposition process.
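A back-of-envelope estimate, using textbook property values for copper (an illustrative assumption, not data from the text), shows why impact heating can approach half the melting point without melting, even in the worst case where all kinetic energy is thermalized:

```python
# Upper-bound estimate of impact heating in cold spray, assuming the
# particle's kinetic energy is fully converted to heat. In reality much of
# the energy goes into plastic work spread over time and volume, so the
# actual temperature rise is lower.
v = 600.0       # particle velocity, m/s (mid-range for cold spray)
cp = 385.0      # specific heat of copper, J/(kg*K), textbook value
t_melt = 1358.0 # melting point of copper, K
t_room = 293.0  # starting temperature, K

ke_per_kg = 0.5 * v ** 2                 # J/kg, independent of particle size
dt_max = ke_per_kg / cp                  # worst-case temperature rise, K
homologous = (t_room + dt_max) / t_melt  # fraction of the melting point

print(f"KE = {ke_per_kg:.0f} J/kg, max rise = {dt_max:.0f} K, "
      f"T/Tm <= {homologous:.2f}")
```

For these inputs the worst-case homologous temperature is roughly 0.56, in line with the statement that the process can exceed 50 percent of the melting point while remaining solid state.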
The sudden impact of metallic particles also causes the grain/crystallite size to be reduced by an order of magnitude, but the mechanism by which this transformation takes place in less than 100 ns is still unknown. On this point, the community needs more quantitative information about the micro- and nanostructures generated by the cold spray process.

4.2.6 Nonequilibrium Processing

Materials made either by nature or by manufacturing are seldom the result of equilibrium processes. Among the exceptions are crystals and alloys that are thermodynamically stable and formed by slow cooling. Living systems make their wide variety of functional materials from proteins. They continuously produce these proteins through an active process using ribosomes and genetic information to assemble specified amino acids. More than half of their metabolic energy is consumed in protein production. Manufacturing processes use energy to heat,
cool, mix, stress, chemically change, and otherwise manipulate constituents into materials with desired properties that are not found in thermodynamically stable form. Material quenches and production techniques such as MBE produce metastable materials. Although researchers are growing progressively more sophisticated in using far-from-equilibrium processing, from semiconductors to AM, the underlying science is missing. Thermodynamics and statistical mechanics provide rules for averaging the well-known dynamics of classical or quantum states to obtain the macroscopic properties of materials and many-particle systems in equilibrium. There is a solid foundation in laws that are universal, easily applied, and well tested over the past two centuries. The answers result from extremizing free energies and related state functions. There are no such universal governing principles, nor a set of free-energy-like functions to be extremized, for nonequilibrium processes: in some instances dissipation is maximized, while in others it is minimized. The need for a deeper understanding of nonequilibrium phenomena is nowhere greater than in materials science. Recent advances, such as AM, present new challenges and opportunities for using and understanding nonequilibrium processes. "Laser-based AM processes generally have a complex nonequilibrium physical and chemical metallurgical nature, which is material and process dependent. The influence of material characteristics and processing conditions on metallurgical mechanisms and resultant microstructural and mechanical properties of AM processed components needs to be clarified."28 The extreme temperature, solvent, or stress gradients imposed by such processing will provide insight into far-from-equilibrium reactions and the properties of the metastable materials they produce.
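The trapping of metastable states by rapid processing can be illustrated with a small Metropolis simulation on a tilted double-well energy landscape. This is a generic pedagogical model, not a model of any specific material: a fast quench frequently freezes the system into the shallow, metastable minimum, while slow cooling usually finds the stable one.

```python
import math
import random

# Tilted double well: global minimum near x = -1.05,
# shallower metastable minimum near x = +0.92.
def V(x):
    return x**4 - 2 * x**2 + 0.5 * x

def anneal(steps, t_start, t_end, rng):
    """Metropolis walk with geometric cooling from t_start to t_end."""
    x = rng.uniform(-2.0, 2.0)
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)
        x_new = x + rng.gauss(0.0, 0.3)
        # Accept downhill moves always; uphill moves with Boltzmann weight.
        if V(x_new) < V(x) or rng.random() < math.exp((V(x) - V(x_new)) / t):
            x = x_new
    return x

rng = random.Random(1)
trials = 150
# Fast quench: almost no time at high temperature.
fast_trapped = sum(anneal(10, 2.0, 0.01, rng) > 0 for _ in range(trials))
# Slow anneal: gradual cooling lets the system escape the shallow well.
slow_trapped = sum(anneal(2000, 2.0, 0.01, rng) > 0 for _ in range(trials))
print(f"trapped in metastable well: fast {fast_trapped}/{trials}, "
      f"slow {slow_trapped}/{trials}")
```

The same qualitative competition between cooling rate and barrier crossing underlies quenched glasses and rapidly solidified AM microstructures, although real systems have vastly more degrees of freedom.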
Levitation is both a synthesis method and a characterization method because the influence of the container on the measurement is removed. Initial levitation work focused on metallic glasses and low-melting-temperature materials, creating high-entropy and nonequilibrium structures. Levitation methods have expanded to include acoustic, aerodynamic, electromagnetic, and electrostatic approaches, chosen according to the heating source and the resulting sample size. Both X-ray and neutron characterization can be applied to levitated materials. Samples fabricated by levitation are far from stable, and one must consider whether evaporation plays a role during their characterization. These techniques are helping to elucidate the structural pathways of materials from high temperature through a variety of cooling routes. Characterization of materials synthesized without a container is extremely helpful for the understanding and design of novel materials. 28 D.D. Gu, W. Meiners, K. Wissenbach, and R. Poprawe, 2012, Laser additive manufacturing of metallic components: Materials, processes and mechanisms, International Materials Reviews 57(3):133-164.
4.2.7 Single Crystal Growth

Advances in fundamental and applied materials research have been driven by the development of numerous families of crystalline materials with widely varied functionalities.29 The two main paths involve synthesis of (1) high-purity but chemically simple and abundant materials and (2) systems with complex stoichiometries/structures that feature multiple tunable characteristic energy scales. Examples in the first category include germanium, silicon, and gallium arsenide, while examples in the second category include strong rare-earth permanent magnets, high-temperature superconductors, and many other quantum materials. Despite the fundamental importance of crystal growth for many scientific and commercial purposes, it very often remains more an art or technique than a science. Furthermore, many routinely used synthesis methods have evolved only marginally during the past several decades. General categories include (1) solid-to-solid reactions, (2) liquid-to-solid reactions, and (3) vapor-to-solid reactions. Here the focus is on methods to produce bulk crystals, but the arguments below apply to thin films as well. For example, category (1) includes solid-state reaction and spark plasma sintering; (2) includes molten-flux growth, arc- and induction-furnace melting, Czochralski crystal pulling, and Bridgman crystal pulling; and (3) includes vapor transport (e.g., using iodine) and thin film techniques. For a detailed review of most such methods, see B.R. Pamplin's book, first published in 1975.30 Note that this and other detailed accounts of these synthesis methods were presented at least four decades ago, and in many ways the same methods remain the state of the art.
This emphasizes the difficulty of developing new strategies but also highlights that this is an area with significant opportunity for transformative advances. Despite their usefulness, most crystal growth methods are limited by several practical problems that can now be addressed. First among them is that the progression of events during crystal formation is seldom quantitatively understood. Instead, most processes are developed by trial and error, and even the philosophy of synthesis is guided by the qualitative experience of individuals or isolated groups, that is, through colloquial methodology. To advance beyond these limitations, it is necessary to develop routine methods that provide detailed knowledge of the processes occurring during a reaction, as well as active modeling that allows a growth to be modified in real time. There have been limited recent attempts to do this, for example, observing a crystal growth process through neutron scattering, but this field is wide open for advances. Also important is that most 29 See National Research Council, 2009, Frontiers in Crystalline Matter: From Discovery to Technology, The National Academies Press, Washington, D.C., https://doi.org/10.17226/12640. 30 B.R. Pamplin, ed., 1980, Crystal Growth, 2nd edition, Pergamon Press, Oxford, U.K., https://www.elsevier.com/books/crystal-growth/pamplin/978-0-08-025043-4.
real materials harbor defects on all length scales that are difficult to characterize and even more difficult to correct. A simple example is seen for single crystals pulled from a melt (e.g., using the Czochralski technique), where chemical gradients along the growth axis are commonplace. For quantum materials, such variations often have a large impact on electronic and magnetic properties, and a mastery of them is needed but undeveloped. An in-depth knowledge and control of the growth process would mitigate these problems and would furthermore open yet another tuning parameter for controlling a material's properties. To accomplish these goals, a multifaceted and well-funded push to develop new intersections for crystal growth within the materials research (MR) community is needed. In this scenario, crystal growth would be treated as a research area in its own right, not strictly associated with specific classes of materials or topics. Two promising directions of research are (1) methods for rapid-throughput synthesis and rapid characterization and (2) synthesis under extreme conditions (e.g., applied pressure, magnetic fields, electric fields, etc.). This research requires suites of materials analysis (e.g., neutron and X-ray scattering and microanalysis) and computational collaborations to facilitate progress, plus ways to broadly distribute information.

4.3 SIMULATION AND COMPUTATION TOOLS

The nature and benefits of computational capabilities in MR are extensive but vary dramatically depending on the material class or application. For example, the computational tools useful for the well-developed semiconductor and aerospace industries are very different from those needed for new materials, where there are still basic questions and no elaborate databases for mining or for application of artificial intelligence.
However, it is clear that computational capabilities, at both large and small scales, will continue to advance large expanses of the MR landscape.

4.3.1 Integrated Computational Materials Engineering and Materials Genome Initiatives

Two initiatives began during the past decade that aimed to accelerate the timeline from development to deployment of a material by highlighting the benefits of experiment and computation working together and the need for computational materials design at all stages of the manufacturing process.
One initiative, the Integrated Computational Materials Engineering (ICME) approach, was detailed in a National Academies study of 2008.31 The ICME approach seeks to integrate materials models across different length scales (multiscale) and computational methodologies to capture the relationships between synthesis and processing, structure, properties, and performance. The initial successes of ICME were enabled by the existence of a wealth of data on specific materials systems that had been generated over prior decades of research.32

The second initiative began in 2011, when President Barack Obama launched the Materials Genome Initiative (MGI) with the intent "to discover, manufacture, and deploy advanced materials twice as fast, at a fraction of the cost."33 Central to this vision was the equal weighting of, and integrated nature of, computational tools, digital data, and experimental tools. The latter included synthesis and processing as well as material characterization and property assessment. It was recognized that in each area there was a need to develop new tools and capabilities to advance the field and to explore the intersections among the three. The initiative recognized the importance of data and the need to develop databases and the tools to interrogate and visualize the data. This need is particularly important to the success of the initiative, as extensive and accessible databases exist for only a few material systems.
The integrated tools were intended to form a single framework spanning the seven stages of the materials development continuum (the seven stages are discovery; development; property optimization; systems design and integration; certification; manufacturing; and deployment).34

Following the direction indicated by these two initiatives has led to some notable successes.35 For example, Ford Motor Company used the ICME framework to obtain a 15-25 percent decrease in the product development time of cast aluminum powertrain products. For cast aluminum components, subtle changes in the manufacturing process or component design can lead to engine durability issues and program delays, but simulating the effect of manufacturing history on engine durability allowed Ford to avoid these costly delays.

31 National Research Council, 2008, Integrated Computational Materials Engineering: A Transformational Discipline for Improved Competitiveness and National Security, The National Academies Press, Washington, D.C., https://doi.org/10.17226/12199.
32 G.B. Olson, 2013, Genomic materials design: The ferrous frontier, Acta Materialia 61:771-781, http://dx.doi.org/10.1016/j.actamat.2012.10.045.
33 See Office of Science and Technology Policy, 2011, Materials Genome Initiative for Global Competitiveness, June, https://www.mgi.gov/sites/default/files/documents/materials_genome_initiative-final.pdf.
34 Includes sustainability and recovery.
35 See B. Obama, 2016, "The First Five Years of the Materials Genome Initiative: Accomplishments and Technical Highlights," https://www.mgi.gov/sites/default/files/documents/mgi-accomplishments-at-5-years-august-2016.pdf.
Computational methods integrated with material property databases have recently been used successfully to develop two forms of steel that were licensed to a U.S. steel producer (by QuesTek Innovations, LLC) and then deployed into demanding applications. The first alloy was for the U.S. Air Force (USAF): Ferrium S53, an ultra-high-strength and corrosion-resistant steel that eliminates toxic cadmium plating, and is now flying as safety-critical landing gear on USAF A-10, T-38, C-5, and KC-135 aircraft, and on numerous SpaceX rocket flight-critical components. The second alloy was for the U.S. Navy: Ferrium M54, an upgrade from legacy alloys, which offers more than twice the lifetime of the incumbent steel while saving $3 million in overall program costs and is now deployed on the Navy's T-45 safety-critical hook shank component. As seen in Box 2.2 in Chapter 2, the time from development to deployment was reduced from 8.5 years for Ferrium S53 (deployment in 2008) to 4 years for Ferrium M54, using only one design iteration (qualification in 2014). QuesTek has designed, also using integrated methods, a third steel: Ferrium C64, a best-in-class gear steel that allows for increased power density, fuel efficiency, and lift of military helicopters. This steel has been patented and is now available for purchase.

Other successful areas of integrated computation-experiment-data have been in new materials for batteries,36 and many other companies, such as Boeing, use integrated approaches for advanced metals and other material discovery and deployment.
An example of the acceleration of materials discovery through a combination of quantum mechanical calculations, synthesis, and experiments is the design and optimization of liquid crystal sensors.37 These sensors work on the principle of the selective displacement of liquid crystal molecules by analytes, which results in an optically detected transition of the liquid crystal. Liquid crystals are in general sensitive to ultraviolet light, poisons/pollutants, and strain. There are now many liquid crystal sensors. They hold promise as inexpensive, portable, and wearable sensors for key applications (e.g., poisonous gas detection).

One challenge regarding the MGI is that it has tended over time to become an interaction between materials modeling, computing, and data communications, leaving experiment behind. Vast databases have been created of purely computational results, without experimental validation of stability or accuracy. In addition, without experimental results, the materials cannot be easily tested or deployed for manufacture. Some of these issues are addressed further in Section 4.3.5 on databases. Another challenge is that it is difficult to maintain modeling continuity across the full range of seven stages from discovery to deployment unless the

36 See, for example, G. Ceder, "The Ceder Group," http://ceder.berkeley.edu/, and the many successes on that website.
37 H. Hu, Z. Lu, and W. Yang, 2007, Fitting molecular electrostatic potentials from quantum mechanical calculations, Journal of Chemical Theory and Computation 3(3):1004-1013.
research group is immersed in a development environment. The MGI goal would be furthered by increased university-industry interactions, which have widespread benefits, as explained elsewhere in this report.

4.3.2 Computational Materials Science and Engineering

Over the past decade, there have been significant improvements in modeling materials on multiple length scales, including quantum mechanical, atomic, mesoscale (coarse-grained or phase field), and continuum scales, in addition to statistical methods. This would also include, for example, the Landau-Lifshitz-Gilbert equation38 for magnetic materials, and progress on understanding Gilbert damping.39 These advances have been spurred on by the advances of physical science, such as the example just given, together with the vast increase in computing power over the past decade as well as the integration with experiment and data described in the last section. The area of quantum-level modeling has had perhaps the greatest advancement and opportunity for future improvements, and will be summarized first.

In a significant shift, electronic structure (i.e., density functional theory, DFT) computational software has become readily available in packages both commercial (CASTEP, VASP, WIEN2k) and open source (Quantum Espresso, Abinit).40 These packages are well documented online, and some (e.g., VASP) have well-developed user interfaces. The calculations of material properties enabled by these packages have high fidelity. They are used to predict structure-property relationships for many material types, discover new structures, enhance the interpretation of experimental data, and populate databases.
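The Landau-Lifshitz-Gilbert dynamics mentioned above can be made concrete with a minimal single-spin integrator. The following Python sketch is illustrative only: it uses a dimensionless form with unit gyromagnetic ratio and a simple explicit Euler step with renormalization, not the scheme of any particular micromagnetics package.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def llg_evolve(m, h, alpha=0.1, gamma=1.0, dt=0.01, steps=20000):
    """Integrate dm/dt = -gamma m x H - alpha*gamma m x (m x H)
    with explicit Euler steps, renormalizing |m| = 1 after each step."""
    for _ in range(steps):
        p = cross(m, h)          # precession torque, m x H
        d = cross(m, p)          # damping torque, m x (m x H)
        m = [m[i] - dt * gamma * (p[i] + alpha * d[i]) for i in range(3)]
        n = math.sqrt(sum(c * c for c in m))
        m = [c / n for c in m]   # keep the spin on the unit sphere
    return m

# A spin starting along x precesses about a field along z and, because of
# the Gilbert damping term, slowly relaxes until it points along the field.
m_final = llg_evolve([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

The precession term alone conserves the angle to the field; it is the damping term, controlled by the Gilbert parameter alpha, that drives the spin toward alignment.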
When magnetic material properties are calculated with DFT, the addition of a "Hubbard U" term can give very accurate properties of d-electron materials, and in the past decade there have been many developments in this area.41 Modern DFT packages can handle full 3D spin dependence (not just spin up or down), and including relativistic effects and spin-orbit coupling is now a matter of setting

38 For a recent review, see, for example, M. Lakshmanan, 2011, The fascinating world of the Landau-Lifshitz-Gilbert equation: An overview, Philosophical Transactions of the Royal Society A 369:1280.
39 L. Chen, S. Mankovsky, S. Wimmer, M.A.W. Schoen, H.S. Körner, M. Kronseder, D. Schuh, D. Bougeard, H. Ebert, D. Weiss, and C.H. Back, 2018, Emergence of anisotropic Gilbert damping in ultrathin Fe layers on GaAs(001), Nature Physics 14:490.
40 For the five programs, see the CASTEP website at https://www.castep.org, the Vienna Ab initio Simulation Package (VASP) website at https://www.vasp.at, the WIEN2k website at http://susi.theochem.tuwien.ac.at/, the Quantum Espresso website at https://www.quantum-espresso.org, and the ABINIT website at https://www.abinit.org, all accessed June 6, 2018.
41 B. Himmetoglu, A. Floris, S. de Gironcoli, and M. Cococcioni, 2014, Hubbard-corrected DFT energy functionals: The LDA+U description of correlated systems, International Journal of Quantum Chemistry 114:14.
parameters in the input file. DFT is challenged when there are multiple sources of magnetism, as in the f-electron materials, which often have unfilled d-electron orbitals as well (for these, an additional parameter J is often added, and even then comparison with experiment is often needed), and whenever there are many-body interactions (superconductivity, metal-insulator transitions, Kondo effects, complex oxides, etc.) that cannot be described by the single-particle states on which DFT, based on functionals of the local density, is built.

Many useful improvements of DFT have been developed in the past decade, including extensions of DFT to finite temperature, excited states, and time dependence. These often combine perturbation theory with standard methods (such as the GW approximation method) or go beyond it.42 These extensions add much calculation overhead. In addition, active work includes improvements to the exchange-correlation functionals used in DFT codes, as well as to the pseudopotentials used to represent the inner cores of atoms in programs such as Quantum Espresso and VASP.

Going beyond DFT to attempt to model the many-body physics of complex materials such as the rare earths is dynamical mean field theory (DMFT), which maps the lattice problem onto a local impurity model. The impurity model, even though it is a many-body problem, has a recognized set of solutions. The main approximation of this method is to assume that the lattice self-energy is independent of momentum, an approximation that becomes exact in the limit of infinite dimensions.
This methodology, combined with DFT, has had some notable successes, including the phase diagram of plutonium,43 and the superfluid-to-Mott-insulator phase transition in the Bose-Hubbard model.44 By combining DMFT with time-dependent DFT (TDDFT), properties that depend on the time evolution of electronic states, such as multielectron and hole bound states (excitons, trions, etc.), could be calculated. TDDFT has the advantage of being a theory of a single function of one time argument, the charge density.

Quantum Monte Carlo (QMC) is another technique for studying materials with many-body effects. In general, it is an accurate and reliable method that is trivially parallelizable and thus high-performance computing (HPC)-friendly. It is also computationally expensive, complex to apply, and thus challenging. Among the different flavors of QMC, Diffusion Monte Carlo (DMC) is the most popular. It is a stochastic method that allows direct access to the ground states, and sometimes the excited states, of a many-body system. In principle, DMC is an exact

42 S.X. Tao, X. Cao, and P.A. Bobbert, 2017, Accurate and efficient bandgap predictions of metal halide perovskites using the DFT-1/2 method: GW accuracy with DFT expense, Nature Scientific Reports 7:14386.
43 N. Lanatà, Y. Yao, C.-Z. Wang, K.-M. Ho, and G. Kotliar, 2015, Phase diagram and electronic structure of praseodymium and plutonium, Physical Review X 5:011008.
44 P. Anders, E. Gull, L. Pollet, M. Troyer, and P. Werner, 2010, Dynamical mean field solution of the Bose-Hubbard model, Physical Review Letters 105:096402.
method that maps the Schrödinger equation to a diffusion equation. However, approximations must be made for computational feasibility when modeling fermions (i.e., electrons). The most common approach is the fixed-node approximation: the nodes of the wavefunctions are kept fixed to those of the original trial wavefunction during the search for ground-state wavefunctions.

For molecular solids, quantum chemistry methods tend to be more accurate. These include configuration interaction, coupled cluster, and multireference methods. The trade-off for accuracy is that they are computationally formidable and not convenient for high-performance computing.

Moving from the subatomic scale of the quantum methods to the atomic scale, a technique widely used in chemistry for molecules, but also especially useful for surfaces of materials, is molecular dynamics. This method uses force fields generated by DFT or by other, more approximate methods based on tables of vibrational analysis of bonds. Molecular dynamics can generate the time dependence of atomic movements (for very short periods of time) at finite temperatures. A notable development of the past decade is the availability of highly modular, massively parallel, openly distributed software to carry out large-scale atomistic simulations of dynamical properties of materials with high efficiency, including thermal conductivity, mechanical deformation, response to irradiation, and many other properties. An example of a molecular dynamics software package is LAMMPS (Large-Scale Atomic/Molecular Massively Parallel Simulator),45 developed by Sandia National Laboratories. Such atomic-scale simulations have been further enabled by the availability of multiphysics (multiple interacting physical effects) reactive force fields or potentials that provide frameworks for the study of heterogeneous material systems.
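The core of a molecular dynamics code of the kind LAMMPS runs at scale, a force evaluation plus a time integrator, can be sketched in a few lines. The following illustrative Python example integrates two Lennard-Jones particles in one dimension with the velocity Verlet algorithm in reduced units; the function names and parameter values are hypothetical, chosen only to show the structure of the loop.

```python
def lj_force_energy(r):
    """Lennard-Jones potential V = 4(r**-12 - r**-6) in reduced units.
    Returns (force on the right-hand particle along +x, potential energy)."""
    inv6 = r ** -6
    inv12 = inv6 * inv6
    energy = 4.0 * (inv12 - inv6)
    force = 24.0 * (2.0 * inv12 - inv6) / r  # -dV/dr; positive = repulsive
    return force, energy

def velocity_verlet(x1, x2, v1, v2, dt=0.001, steps=5000):
    """Integrate two unit-mass LJ particles with velocity Verlet:
    half-kick, drift, recompute force, half-kick."""
    f, _ = lj_force_energy(x2 - x1)
    for _ in range(steps):
        v1 -= 0.5 * dt * f
        v2 += 0.5 * dt * f
        x1 += dt * v1
        x2 += dt * v2
        f, _ = lj_force_energy(x2 - x1)
        v1 -= 0.5 * dt * f
        v2 += 0.5 * dt * f
    return x1, x2, v1, v2

def total_energy(x1, x2, v1, v2):
    """Potential plus kinetic energy; conserved by a good integrator."""
    _, pot = lj_force_energy(x2 - x1)
    return pot + 0.5 * (v1 * v1 + v2 * v2)
```

Starting from a slightly stretched bond and zero velocity, the pair oscillates while the total energy stays essentially constant, which is the standard sanity check for any MD integrator.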
The cataloging of reactive potentials, through efforts at the National Institute of Standards and Technology (NIST) and the OpenKIM project,46 is providing important ways to track the performance and suitability of these empirical methods. Development of simple atomistic potentials with accuracies at the level of DFT and computational efficiency sufficient to undertake simulation at realistic (laboratory-level) large length and time scales, or for high-throughput calculations, is necessary. Machine learning has helped develop such potentials, as described in Section 4.3.3.

Mesoscale modeling has also seen significant advances over the past decade. One example is CALPHAD (Computer Coupling of Phase Diagrams and Thermochemistry),47 a methodology that enables the prediction of phase diagrams

45 See Sandia National Laboratories, "LAMMPS Molecular Dynamics Simulator," http://lammps.sandia.gov, accessed January 5, 2018.
46 See NIST, "Interatomic Potentials Repository," https://www.ctcms.nist.gov/potentials/, and OpenKIM, "Knowledgebase of Interatomic Models," https://openkim.org, both accessed January 5, 2018.
47 CALPHAD is a computational methodology. See, for example, Thermo-Calc Software, "Thermo-Calc," http://www.thermocalc.com/products-services/databases/the-calphad-methodology/, or a free development at S. Fries, "Open Calphad," http://www.opencalphad.com/, both accessed January 5, 2018.
and thermodynamic behavior. Phase field modeling48 simulates materials growth and mesoscale structure-property relationships, and, as a third example on the mesoscale, coarse-grained simulation methods have become much more usable for modeling molecular materials such as polymers.

Over the past decade, there has been an increasing translation of computational tools to industrial application: the boxes in Chapter 2 provide examples in alloy development and industrial processing. Continuum state-variable process models are used in manufacturing for casting, forging, rolling, vapor deposition, machining, and so on. Further, CALPHAD and the DICTRA (diffusion module of computer code for Thermo-Calc) diffusion code and method are ubiquitous and heavily supported by industry. The transition of other tools (for example, phase field and kinetic Monte Carlo) is in progress. Progress has been made in physics-based multiscale models for mechanical behavior prediction, but these have yet to be adopted by industry.

Intense effort has gone into developing and improving single- and multiscale methods that may be concurrent, hierarchical, or hybrid and that may be solved in parallel, sequentially, or in a coupled manner. These methods have enhanced both fundamental science studies of, for example, the mechanics of materials and the physics and chemistry associated with materials growth, and applied engineering efforts associated with, for example, the manufacture of material parts in industry. The ability of these methods to make full use of improvements in computer architecture varies with method type, length scales, and time scales.
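The phase field approach mentioned above can be illustrated with its simplest instance, the one-dimensional Allen-Cahn equation, phi_t = M(kappa * phi_xx - phi**3 + phi), which relaxes an order parameter toward the two phases phi = ±1 separated by a smooth interface. The Python sketch below uses explicit time stepping on a unit grid; the grid size, mobility, and gradient coefficient are illustrative choices, not values from any production phase field code.

```python
def allen_cahn_relax(phi, m=1.0, kappa=1.0, dt=0.1, steps=500):
    """Explicit Euler integration of the 1D Allen-Cahn equation
    phi_t = M * (kappa * phi_xx - (phi**3 - phi)) on a unit grid
    with mirrored (zero-flux) boundaries."""
    phi = list(phi)
    n = len(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(n):
            left = phi[i - 1] if i > 0 else phi[1]        # mirror at edges
            right = phi[i + 1] if i < n - 1 else phi[n - 2]
            lap = left - 2.0 * phi[i] + right             # phi_xx with dx = 1
            new[i] = phi[i] + dt * m * (kappa * lap - (phi[i] ** 3 - phi[i]))
        phi = new
    return phi

# A sharp step between the two phases relaxes to a smooth tanh-like interface
# while the bulk regions stay pinned at the equilibrium values -1 and +1.
profile = allen_cahn_relax([-1.0] * 25 + [1.0] * 25)
```

The double-well term pulls each point toward ±1 while the gradient (Laplacian) term penalizes sharp variation; their balance sets the interface width, which is the basic mechanism behind mesoscale microstructure evolution in phase field models.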
Modeling of time-dependent materials behavior (e.g., creep) has made great strides over the past decade, but as is the case for classical molecular dynamics simulation, as system size increases there are rapid increases in the computational demands on time and memory. Consequently, redesigning software or using combinations of methods (e.g., molecular dynamics and Monte Carlo) is important.

Other advances have been achieved through co-design of experimental and computational infrastructure, which has come to the fore in the past decade. Examples include progress in image recognition for microstructure identification, and the pairing of advances in the brightness and power of X-ray and neutron scattering sources with computational materials science, which promises to advance the field of scattering science by elevating the interrogation of data from scattering experiments.

A grand challenge in computational materials science is to design the electronic structure of materials directly from first principles: to go from desired physical and mechanical properties to structure and atomic constituents, rather than the usual other way around. Technical roadblocks in the capabilities of computational methods are being addressed in federally supported research centers, as well as in selected

48 L.Q. Chen, 2002, Phase-field models for microstructure evolution, Annual Review of Materials Research 32:113-140.
academic centers and private companies, and much progress in the coming decade is expected toward this goal.

4.3.3 Machine Learning for Materials Discovery

Over the past decade, both supervised and unsupervised machine learning algorithms have been used to calculate materials properties, explore materials compositional space, identify new structures, discover quantum phases, and identify phases and phase transitions. Although training is usually necessary, once set up, these models are able to calculate a wide range of properties, with high accuracy, at large scale, and at speeds orders of magnitude faster than conventional computational methods. Supervised machine learning algorithms that have been applied to materials include random forests, kernel ridge regression, and multilayer perceptron artificial neural networks. These methods allow the mapping of a set of features (for example, atomic positions) to output values such as material properties and performance; their goal is to map input to output.

Machine learning methods have been used to efficiently and effectively explore materials space, considering all possible structures and combinations to identify new structures as well as ones with specific properties. For example, a machine learning model based on kernel ridge regression was used to calculate, with DFT accuracy, the formation energies of the 2 million possible elpasolites (elpasolite itself is an isometric-hexoctahedral quaternary Al, F, K, and Na mineral). This method identified the most strongly bound elpasolites, proposed a new ordering of elpasolites based on energy, and identified 128 structures with 90 unique stoichiometries. This is one example among many in which machine learning was used to efficiently screen many polymorphs computationally to identify unique crystal properties (see Figure 4.7).
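Kernel ridge regression of the kind used in the elpasolite study is compact enough to sketch directly. The illustrative pure-Python version below fits a Gaussian-kernel model to a toy one-dimensional target rather than to formation energies; the data, length scale, and regularization value are invented for demonstration.

```python
import math

def rbf(x1, x2, length=0.5):
    """Gaussian (RBF) kernel between two scalar inputs."""
    return math.exp(-((x1 - x2) ** 2) / (2.0 * length ** 2))

def solve(a, b):
    """Solve a @ x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][k] - f * m[col][k] for k in range(n + 1)]
    return [m[i][n] / m[i][i] for i in range(n)]

def krr_fit(xs, ys, lam=1e-8):
    """Return dual weights alpha solving (K + lam * I) alpha = y."""
    k = [[rbf(a, b) + (lam if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    return solve(k, ys)

def krr_predict(xs, alpha, x):
    """Prediction is a kernel-weighted sum over the training points."""
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, xs))

# Toy regression: learn y = x**2 from five samples.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]
alpha = krr_fit(xs, ys)
```

The same machinery scales to materials problems by replacing the scalar input with a vector of composition or structure descriptors; the regularization parameter lam trades off fitting the training energies exactly against smoothness of the interpolant.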
Normally, DFT is used to calculate the potential energy surfaces that are the necessary inputs to molecular dynamics or Monte Carlo simulations, and the time and memory demands of these programs significantly limit the length of time a large system can be simulated. By introducing the concept of a symmetry-function value for each atom that reflects the local environment of that atom, a generalized neural network representation of DFT potential energy surfaces can be developed. This method provides the energy and forces as a function of all atomic positions for large system sizes and is several orders of magnitude faster than DFT.49 The method can be used with nonperiodic systems and can describe all types of bonding. For molecular dynamics there are now, for example, artificial neural network potentials and Gaussian approximation potentials. Support vector machines and the Spectral Neighbor Analysis Potential are extensively used. The

49 J. Behler and M. Parrinello, 2007, Generalized neural-network representation of high-dimensional potential-energy surfaces, Physical Review Letters 98:146401.
most commonly used approach utilizes traditional fully connected, feed-forward neural networks that take the arrangement of atoms as their input. The total potential energy of the system is broken into atomic energy contributions, whose local environments are described using radial and angular Gaussian symmetry functions. These potentials, which are analytical and easily integrated into broadly used molecular dynamics simulation codes such as LAMMPS, facilitate large-scale and long-time-scale simulations at affordable computational cost.

FIGURE 4.7 Elpasolite is AlNaK2F6, and its crystal structure is common among inorganic materials. Substituting other elements in the form ABC2D6 gives the large family of elpasolites. The authors used their machine learning model to predict the formation energies of all 2 × 10^6 elpasolites made up of all main-group elements up to Bi. The figure shows a heat map of the energies, with the scale in eV/atom shown on the right. In the lower left half of the figure, separated by the diagonal white line, the vertical and horizontal axes represent elements for D and C, respectively; in the upper right half of the figure, the vertical and horizontal axes represent elements for B and A, respectively, with the other two constituents in each case running over all values, giving the 2 million materials. SOURCE: F.A. Faber, A. Lindmaa, O.A. von Lilienfeld, and R. Armiento, 2016, Machine learning energies of 2 million elpasolite (ABC2D6) crystals, Physical Review Letters 117:135502, https://doi.org/10.1103/PhysRevLett.117.135502, https://creativecommons.org/licenses/by/3.0/.
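The atom-centered radial symmetry functions used as inputs to these neural network potentials are simple to write down. Below is an illustrative Python sketch of a Behler-Parrinello-style radial function, G_i = sum over neighbors j of exp(-eta * r_ij**2) * f_c(r_ij), with a cosine cutoff; the eta and cutoff values are arbitrary choices for demonstration.

```python
import math

def cutoff(r, r_c=6.0):
    """Cosine cutoff f_c(r): decays smoothly to zero at r = r_c."""
    if r >= r_c:
        return 0.0
    return 0.5 * (math.cos(math.pi * r / r_c) + 1.0)

def radial_symmetry(atoms, i, eta=0.5, r_c=6.0):
    """Radial symmetry function for atom i: a sum over neighbors j of
    exp(-eta * r_ij**2) * f_c(r_ij).  It depends only on interatomic
    distances, so it is invariant under rotation and translation."""
    xi, yi, zi = atoms[i]
    g = 0.0
    for j, (xj, yj, zj) in enumerate(atoms):
        if j == i:
            continue
        r = math.sqrt((xj - xi) ** 2 + (yj - yi) ** 2 + (zj - zi) ** 2)
        g += math.exp(-eta * r * r) * cutoff(r, r_c)
    return g

# A three-atom chain: rigidly translating the structure leaves each
# atom's descriptor unchanged, which is exactly the invariance a
# machine-learned potential needs.
chain = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
shifted = [(x + 10.0, y - 2.0, z + 7.0) for (x, y, z) in chain]
```

In practice a vector of such functions, with several eta values plus angular terms, is fed into the per-atom neural network whose outputs sum to the total potential energy.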
Unsupervised algorithms (such as principal component analysis [PCA] and deep-convolutional autoencoders) can be trained to categorize unlabeled data and can reveal patterns in high-dimensional material and chemical spaces that would otherwise be difficult to perceive.
PCA is a statistical technique for data reduction in high-dimensional spaces. It finds a linear rotation of the data such that most of the variance is captured by a small number of components in the new space. A nonlinear form of PCA (via a deep convolutional autoencoder) has been used to explore phase transitions, such as in a ferromagnetic Ising model of several thousand spins.50 Remarkably, it showed that fully connected and convolutional neural networks can identify phases and phase transitions, as well as unconventional low-order states, in a range of condensed-matter models.

Deep convolutional autoencoders belong to a relatively new class of "deep learning" algorithms based on multilayered artificial neural networks. Deep learning methods let the algorithm itself decide on the relevant features, operating directly on "raw" data. These techniques evolved from the field of computer vision. In traditional machine learning, manual feature selection is required prior to training. Deep learning techniques do not require this step (although they require a much larger training set). As an example, a deep convolutional neural network was used on the Ising model and obtained accurate energies and magnetizations for both a nearest-neighbor Hamiltonian and a long-range screened Hamiltonian. Training allowed the neural network to generalize to "never before seen examples," using observations from a limited number of configurations. Deep convolutional neural networks have been used for modeling large systems, with a method of domain decomposition into overlapping tiles for training and inference, and good results have been obtained for spin models and many-body quantum mechanical operators with DFT accuracy.
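The data-reduction step in PCA can be made concrete with a tiny example. For two-dimensional data the 2×2 covariance matrix can be diagonalized in closed form, so a complete, dependency-free Python sketch fits in a few lines; the data set is synthetic and illustrative.

```python
import math

def pca_2d(points):
    """Return (explained_ratio, leading_axis) for 2D data: the fraction of
    total variance along the first principal component, and that
    component's unit direction, via the closed-form eigendecomposition
    of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] from trace and determinant.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    # Leading eigenvector (valid when sxy != 0): (sxy, lam1 - sxx).
    vx, vy = sxy, lam1 - sxx
    norm = math.hypot(vx, vy)
    return lam1 / (lam1 + lam2), (vx / norm, vy / norm)

# Points scattered tightly around the line y = 2x: nearly all the
# variance lies along one direction, so one component suffices.
data = [(x, 2.0 * x + d) for x, d in
        zip([0.0, 1.0, 2.0, 3.0, 4.0], [0.1, -0.1, 0.05, -0.05, 0.0])]
ratio, axis = pca_2d(data)
```

In higher dimensions the same idea applies with an iterative eigensolver instead of the closed form; keeping only the leading components is what "data reduction" means here.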
Recently,51 a deep convolutional neural network model, trained so that no manual feature selection was necessary, predicted the ground-state energies of an electron in a random 2D potential to within chemical accuracy, with no analytic form for either the potential or the ground-state energy.

4.3.4 Quantum Computing as a Computational Materials Tool

Chapter 2 describes the MR that has been under way for improved qubits. This section summarizes some of the research needed for quantum computing to become a functional computational tool. Quantum computers use superposition and entanglement of quantum states to perform operations on many bits simultaneously. With the appropriate initial state preparation (in itself a research challenge to do optimally), N quantum bits could hold the information of 2^N classical bits. With enough error-corrected quantum bits (thought to be around 75-100), quantum computers could calculate properties of complex molecules well beyond the computational and memory capabilities of foreseeable classical computers.

50 J. Carrasquilla and R.G. Melko, 2017, Machine learning phases of matter, Nature Physics 13:431.
51 K. Mills, M. Spanner, and I. Tamblyn, 2017, Deep learning and the Schrödinger equation, Physical Review A 96:042113.
This will impact many fields, including medicine and agriculture. Likewise, there is promise for being able to simulate excited states and the time and temperature dependence of quantum materials, areas also beyond classical computational capabilities. Additional opportunities for quantum computing exist in the areas of optimization, cryptography, finance, and other fields.

In the present day, qubits still have a considerable rate of decoherence, which affects the accuracy and gate depth, limiting the number of consecutive operations. Error correction, routine in classical computation, is a significant challenge for quantum computation. With current qubit quality, estimates are that millions of qubits would be needed to do complete error correction for a single logical qubit. Even partial error correction could take dozens of qubits.

Nonetheless, significant progress is being made without fault-tolerant computing. There are operating quantum computers of up to around 30 qubits in superconducting systems (e.g., IBM, Google, Rigetti), and of upward of 50 qubits in trapped-ion systems (e.g., Professor C. Monroe, University of Maryland). IBM is notable for having put its quantum computer online,52 the first quantum computer ever available in this way. There are also simulators of quantum computing online that run in the cloud on classical hardware; these are limited by memory requirements to around 50 qubits. IBM was among the first to adopt this methodology; notable efforts, with large simulators, are also conducted by Microsoft. In addition, several quantum computing centers provide software kits and algorithms for using Python to submit jobs to the quantum computer. With this infrastructure, researchers are performing calculations, and papers are appearing in top journals. Some landmark examples are as follows.
In a tour de force calculation53 of three small molecules, the authors programmed around the error, an indication of how computation on real quantum computers will need to be done in the foreseeable future. Using a one-dimensional (1D) chain of trapped alkali-metal atoms54 as a 51-qubit quantum simulator, researchers observed a quantum phase transition. In a system of 53 trapped ions,55 researchers studied the nonequilibrium dynamics of the transverse-field Ising model with long-range interactions.

52 See IBM, "IBM Q," https://www.research.ibm.com/ibm-q/, accessed December 3, 2018.
53 A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J.M. Chow, and J.M. Gambetta, 2017, Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets, Nature 549:242.
54 H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A.S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M.D. Lukin, 2017, Probing many-body dynamics on a 51-atom quantum simulator, Nature 551:579.
55 J. Zhang, G. Pagano, P.W. Hess, A. Kyprianidis, P. Becker, H. Kaplan, A.V. Gorshkov, Z.-X. Gong, and C. Monroe, 2017, Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator, Nature 551:601.
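The statement that N qubits carry 2^N amplitudes is exactly what the classical cloud simulators mentioned above exploit, and why they saturate near 50 qubits. A minimal, illustrative pure-Python statevector sketch below prepares a two-qubit Bell state with a Hadamard followed by a CNOT (convention: qubit 0 is the least significant bit; the gate functions are written here from scratch, not taken from any vendor toolkit).

```python
import math

def hadamard(state, q):
    """Apply a Hadamard gate to qubit q of a 2**n-amplitude statevector."""
    out = state[:]
    for i in range(len(state)):
        if not (i >> q) & 1:              # pair basis states differing in bit q
            j = i | (1 << q)
            a, b = state[i], state[j]
            out[i] = (a + b) / math.sqrt(2.0)
            out[j] = (a - b) / math.sqrt(2.0)
    return out

def cnot(state, control, target):
    """Flip the target bit of every basis state whose control bit is 1."""
    out = state[:]
    for i in range(len(state)):
        if (i >> control) & 1:
            out[i] = state[i ^ (1 << target)]
    return out

# Two qubits means 2**2 = 4 amplitudes; start in |00>.
state = [1.0, 0.0, 0.0, 0.0]
state = hadamard(state, 0)
state = cnot(state, 0, 1)
probs = [abs(a) ** 2 for a in state]  # Bell state: weight only on |00> and |11>
```

Every added qubit doubles the amplitude list, so memory grows as 2^N; this exponential wall on classical hardware is precisely the opening for real quantum processors.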
As quantum computing technologies mature, they hold the potential to impact many materials areas. There are extensive research opportunities in modeling and algorithm development, as well as in experimental implementation, for these and other related applications.

4.3.5 Materials Databases: Achievements, Promise, and Challenges

Materials databases have been available for decades, and they have been widely accessible on the Internet. Legacy efforts from the 1990s are largely characterized by the delivery of very high quality reference materials data that was hand-curated from the peer-reviewed literature, which gives some guarantee of quality. By contrast, more modern repositories are able to meet a number of additional demands, including ease of use for nonexperts; assignment of a persistent identifier; broad accessibility; sophisticated searchability; application programming interfaces that allow machine-to-machine interactions with minimal human involvement; and the capacity to be federated across multiple instances in geographically disparate locations. Examples of general-purpose repositories include the NIST Materials Data Repository (https://materialsdata.nist.gov); the Materials Data Facility (https://materialsdatafacility.org), sponsored by the NIST center of excellence, the Center for Hierarchical Materials Design; the DOE-funded Predictive Integrated Structural Materials Science (PRISMS) Materials Commons (https://materialscommons.org); and numerous resources associated with the DOE Energy Materials Network (https://energy.gov/eere/energy-materials-network/energy-materials-network). There are also databases for specific materials types, such as catalysts, polymers, phase data, electronic properties, and more. NIST has a database of kinetic properties that are important for processing and material evolution.
Additional databases that contain a digital representation of microstructure are being developed. Some of the achievements of the past decade in using machine learning, a heavily data-driven technique, to advance materials understanding were summarized in Section 4.3.3. Materials data combined with data mining tools can also be used to search for new materials compositions, such as new thermoelectric compounds or, for AM, new aluminum alloy compositions, and to predict microstructure development based on processing conditions. More recently, researchers have been combining these tools with robotics and in situ process monitoring and characterization to build autonomous research apparatuses, such as the Autonomous Research System developed by Air Force Research Laboratory researchers to determine optimum growth conditions for carbon nanotubes. A most telling example of the rising value of materials data is the emergence of for-profit institutions that provide data services. For example, Citrine Informatics provides free data repository services for those researchers willing to share their
data with others as well as data analysis tools and services to commercial customers based on the vast wealth of materials data they are accumulating. Another vast source of data is databases of various materials and their properties created with density functional techniques. This leads to the question of the validity and verifiability of the data in databases. As more calculations of known materials are performed and stored in databases, it is becoming possible to calculate the distribution of errors as well as the average error in calculations of certain physical properties. These known errors, together with enthalpy calculations of various competing phases, allow an estimation of how likely it is that a new material calculation will give a stable material at various temperatures. Databases include the Materials Project56 of thermodynamic structure-property relationships, which is being actively used for materials selection. An active area of research, which has seen considerable growth over the past decade, is the development of unambiguous, well-defined methods of verification, validation, and uncertainty quantification of computational results. In addition to collaborating with others who are disciplinary experts, materials scientists and engineers are ensuring that new methodology development takes place hand in hand with experimental work. The need for verification, validation, and uncertainty quantification increases in complexity when methods that span multiple length and time scales are combined, even as the need for robust assessment of these quantities increases as these scales approach those associated with processing and manufacturing. In other words, the closer researchers get to "materials by design," the more important it is to have not only predictions but also engineering-level understanding of their associated errors.
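The stability-estimation logic described above can be sketched in a few lines. The error values and the assumption of normally distributed calculation errors are illustrative only, not data from any actual database:

```python
import math
import statistics

# Illustrative numbers only: signed errors (eV/atom) between calculated and
# measured formation enthalpies for a set of known compounds.
errors = [0.03, -0.05, 0.10, -0.02, 0.06, -0.08, 0.04, 0.01]

mean_err = statistics.mean(errors)
std_err = statistics.stdev(errors)

def stability_probability(energy_above_hull, mean, std):
    """Probability that the true energy above the convex hull is <= 0
    (i.e., that the predicted compound is actually stable), assuming the
    calculation error is normally distributed with the observed mean/std."""
    z = (mean - energy_above_hull) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A new candidate predicted to lie 0.02 eV/atom above the hull:
p = stability_probability(0.02, mean_err, std_err)
print(f"mean error {mean_err:+.3f}, std {std_err:.3f}, P(stable) ~ {p:.2f}")
```

The same reasoning, applied with error statistics accumulated over thousands of stored calculations, is what lets database-driven screening rank candidate compounds by their likelihood of synthesizability.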
Data-driven approaches are poised to dramatically increase the productivity of materials research, but realizing the true potential is predicated on the development of a seamless Materials Data Infrastructure (MDI) that allows for the storing, sharing, searching, analysis, and learning from data spread over multiple sites. An underpinning goal of the MDI is to create a digital thread through the material life cycle, from discovery through recycling, in which critical information is passed digitally along the life cycle in order to reduce barriers of information transfer, thus reducing the relearning currently needed at each stage. One idea that has been suggested is to apply blockchain to data storage for security, timestamping, data version control, attribution, and so on, leading to more secure, reliable databases. FAIR data principles are important in making data findable, accessible, interoperable, and reusable. To address issues around findability and accessibility, NIST has worked with the international community to create the Materials 56 See K. Persson, "The Materials Project," https://materialsproject.org/, accessed September 11, 2018.
Resource Registry, allowing organizations to advertise metadata about organizations, data collections, application programming interfaces, informational sites, and materials software, using set schemas to describe their resources, similar to those used in the astronomy community to enable the Virtual Observatory. In addition, several organizations have developed integrative or e-collaborative platforms that use web-based technologies to manage the materials data life cycle, implementing the FAIR principles. The goal of these platforms is to ease ingestion of data and metadata from experiments or simulation, and to enable manual or automated data analysis, data search, and data publication. NIST's lead in developing schemas will be very helpful in providing standards and criteria for describing and exchanging data. However, challenges remain. The prevalence of proprietary data formats used in scientific instrumentation works against these data being used by others, and in general the difficulty in extracting scientific data, and even more so the supporting metadata, from instrumentation makes data sharing a monumental and laborious challenge for individual investigators. The same lack of metadata holds for data produced by many computational programs as well. Approaches to implementing the FAIR principles will vary by discipline and will evolve toward relatively stable metadata schemas only after a considerable shakeout period. For relatively simple systems, schemas already exist; for example, the Crystallographic Information File format used by crystallography. For a general approach, NIST has developed a Materials Data Curation System, which requires each subdiscipline to define a schema that describes the type of data to be curated. Much effort still remains.
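The schema-driven curation idea can be illustrated with a minimal sketch. The field names and the hypothetical subdiscipline record below are invented for this example; real curation systems use much richer schema languages (XML Schema, JSON Schema) than a field-to-type mapping:

```python
# A minimal illustration of schema-based curation: a hypothetical
# subdiscipline schema (field name -> required type) and a validator.
SCHEMA = {
    "sample_id": str,
    "composition": str,          # e.g., "Cu50Zr50"
    "synthesis_method": str,     # e.g., "magnetron sputtering"
    "anneal_temperature_K": float,
}

def validate(record, schema):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, ftype in schema.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"wrong type for {field}: expected {ftype.__name__}")
    return problems

record = {
    "sample_id": "BMG-0042",
    "composition": "Cu50Zr50",
    "synthesis_method": "magnetron sputtering",
    "anneal_temperature_K": "623",   # wrong type: string, not float
}
print(validate(record, SCHEMA))  # reports the type error
```

Even this toy validator shows why agreed-upon schemas matter: without one, the temperature above would be silently ingested as text and become unsearchable as a numeric quantity.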
4.4 INTEGRATION OF SYNTHESIS, CHARACTERIZATION, AND MODELING While alluded to in discussions throughout the report, the integration of the different types of tools used by materials researchers presents its own set of opportunities and challenges, some of which are discussed in this section. 4.4.1 High-Throughput Screening High-throughput screening, in which thousands of experimental samples are subjected to simultaneous testing under given conditions, was first recognized as a tool in the 1980s at the GEC Hirst Research Center. Over the next 30 years, high throughput was not highlighted as a tool for materials researchers. In the past decade, more and more materials researchers are using foundational
high-throughput tools to further their own work.57,58 High-throughput efforts can be separated into discovery and optimization, each with its own risks and rewards. Discovery (primary screening) is intended to sample broad and diverse areas. Optimization (secondary screening) accelerates development of new materials. The risk in the discovery arena is a higher number of false positives and false negatives than in individual screening. In optimization, accuracy is usually sacrificed for speed, in that traditional characterization techniques do not always maintain pace with high-throughput synthesis.59 Over the past few years, screening has evolved from relatively simple materials and 1D and 2D synthesis (e.g., nanoparticles and thin films) to more and more complex materials and 3D synthesis (e.g., thick film, solution-based methods, AM, etc.). Improved spectroscopic techniques and image analysis were the first characterization methods to keep pace with materials synthesis; characterization and data analytics can still be rate limiting. A 2014 publication on high-throughput synthesis and characterization of bulk metallic glasses highlights the progress made in that area.60 More than 3,000 alloy compositions were analyzed for both glass-forming ability and thermoplastic formability, an indication of their ability to respond to strain, through a creative methodology. Wells were made in a silicon wafer substrate, and then three confocal sputtering targets were used to deposit compositions in the Cu-Y-Mg family. Freestanding membranes were created by removing the silicon from the backside. These are low glass-transition temperature (Tg) glasses, and gases can generate enough pressure at 100°C to elastically deflect the membrane in a short period of time.
The height of the final membrane yields an indication of the thermoplastic formability and can be quantified (see Figure 4.8). It is clear that high-throughput screening will be of increasing importance and that it will change how much of the MR of the future is conducted.61 57 E.B. Svedberg, "Innovation in Magnetic Data Storage Using Physical Deposition and Combinatorial Methods," in Combinatorial and High-Throughput Discovery and Optimization of Catalysts and Materials (R.A. Potyrailo and W. Maier, eds.), Critical Reviews in Combinatorial Chemistry, Vol. 1, CRC Press, Boca Raton, Fla. 58 E. Chunsheng, D. Smith, E. Svedberg, S. Khizroev, and D. Litvinov, 2006, Combinatorial synthesis of Co/Pd magnetic multilayers, Journal of Applied Physics 99:113901. 59 W.F. Maier, K. Stöwe, and S. Sieg, 2007, Combinatorial and high-throughput materials science, Angewandte Chemie International Edition 46(32). 60 S. Ding, Y. Liu, Y. Li, Z. Liu, S. Sohn, F.J. Walker, and J. Schroers, 2014, Combinatorial development of bulk metallic glasses, Nature Materials 13:494. 61 See, for example, H. Shibata, M. McAdon, R. Schroden, G. Meima, A. Chojecki, P. Catry, and B. Bardin, 2014, "Heterogeneous Catalysis High Throughput Workflow: A Case Study Involving Propane Oxidative Dehydrogenation," Chapter 4, pp. 173-196, in Modern Applications of High Throughput R&D in Heterogeneous Catalysis, Bentham e-Books, https://ebooks.benthamscience.com/.
FIGURE 4.8 Image shows the parallel blow-forming setup of compositional membranes and its realization. (a) Schematics of the parallel blow-forming setup. The relative thermoplastic formability is given by the final height of the membrane after deformation. SOURCE: Reprinted by permission from Springer Nature: S. Ding, Y. Liu, Y. Li, Z. Liu, S. Sohn, F.J. Walker, and J. Schroers, 2014, Combinatorial development of bulk metallic glasses, Nature Materials 13(5):494-500, © 2014. 4.4.2 Predictive Experimental Materials Design and Combined Experimental/Computational Analysis Accompanied by advances in first-principles calculations, molecular-dynamics simulations, machine learning, and other data analytics tools, predictive material design is fast becoming the norm, accelerating materials discovery. There is, of course, necessary caution to be applied and further advances to be made before predictive modeling attains the kind of robustness necessary for industrial applications. For example, for the new generation of accurate but still approximate calculations, what are their errors? Standard and well-understood protocols have been developed to some extent, particularly in quantum chemistry, but many new techniques have spotty testing. It would be useful to have a clear suite of experimental properties that would serve as a test-bed. These large amounts of experimental data need to be collected in a coherent way. Among the many as-yet unanswered questions are the following: How do researchers encode all the differences in processing material samples? How do researchers extract what's relevant from what's not? Can researchers predict whether a substance can be doped (solubility), and
what the effects of the doping will be? So far, multiple length scales and the need for high accuracy have made this an extremely challenging problem. This is a challenge both in terms of theoretical and computational techniques and in interfacing those efforts with experiment, since out-of-equilibrium behavior is highly dependent on the initial conditions. Many materials being used are actually metastable, which raises a separate set of questions: How can researchers predict whether a metastable material can be formed, and use that to design not only new materials but also experimental processes? Often what is computed from first principles is not quite what is measured experimentally. For example, while some techniques can compute angle-resolved photoemission spectroscopy spectra, they typically do so for systems under ideal conditions of temperature and pressure and for controlled geometry. In operando measurements, for example, confirm that materials undergo restructuring under ambient conditions. Can researchers develop computational techniques that bridge the pressure, temperature, and material gap? Despite ever-increasing fidelity of and access to experimental facilities, opportunities still exist to fine-tune experimental conditions, accelerate analysis of the data, and move toward high-throughput screening of materials (see the preceding section) by coupling the instruments to computational facilities. Advancements in this area will not only improve the ability to fully interrogate the terabytes of data that can be acquired from a single experiment in a short time period but also allow researchers to change experimental conditions so as to maximize the utility and descriptiveness of the data that are collected. The idea would be to carry out real-time computational analysis of experimental data.
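A toy sketch of such a real-time analysis loop is given below. The "measurement" and "model" functions are synthetic stand-ins chosen purely for illustration; they do not represent any real instrument interface or computational code:

```python
# Toy sketch of a real-time comparison loop: a measured quantity is compared
# against a target, and an experimental parameter is adjusted until they agree.

def measure(temperature):
    """Stand-in for an in operando measurement: a peak position that
    shifts linearly with temperature (units are arbitrary)."""
    return 500.0 + 0.04 * temperature

def feedback_loop(target, temperature, step=5.0, tol=0.2, max_iter=100):
    """Adjust the temperature until the measured value matches the target,
    within tolerance. Returns the final temperature setting."""
    for _ in range(max_iter):
        residual = measure(temperature) - target
        if abs(residual) < tol:
            return temperature
        # Move the control parameter against the sign of the residual.
        temperature -= step if residual > 0 else -step
    return temperature

t_final = feedback_loop(target=520.0, temperature=300.0)
print(round(t_final, 1))
```

In the envisioned facility, `measure` would be a live data stream from the beam line and the target would come from a computational "beam line" predicting the same observable, with both sides tweaking parameters until they converge.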
For example, while in operando measurements are being taken of a chemical reaction on a nanocatalyst surface, digitized images of the catalyst under reaction conditions, the vibrational frequencies of the reaction intermediates, and X-ray photoelectron spectroscopy data of the system could all be made available to a computational "beam line," which would calculate the same quantities for the "real" geometry and state of the catalyst. Real-time comparison of the two data sets (experimental and computed) would allow both sides to tweak conditions (parameters) until the desired result is obtained. Such a scenario is achievable, given the advances made in the past decade, discussed earlier in this chapter, in tools that can interrogate materials with atomic-scale precision and computational techniques that aim to predict material structure and dynamics under laboratory conditions. An early vision of conjugated experimental and computational analysis was proposed by the European Theoretical Spectroscopy Facility,62 which promotes standardization of computational codes, libraries, and tools to facilitate broad usage, particularly by experimentalists. These ideas could be taken further by integrating 62 See the European Theoretical Spectroscopy Facility website at https://www.etsf.eu/, accessed March 9, 2018.
such analysis-ready computational facilities to experimental beam lines. Given adequate resources, the United States could set the stage for real-time data analysis for accelerated materials discovery. 4.5 INFRASTRUCTURE AND FACILITIES Infrastructure for modeling and simulation, synthesis and processing, and property assessment and characterization is central to materials research. This section outlines the need for such facilities within universities and for large-scale national facilities. Deficiencies in current funding modes for the operation of state-of-the-art characterization facilities at universities, and for the acquisition of instrumentation that falls in the well-known funding gap between $4 million and $100 million, are highlighted.63 The impact that national user facilities, stewarded primarily by DOE and to a lesser extent by the National Science Foundation (NSF), National Institutes of Health, and other agencies, have had and continue to have on materials research is highlighted. The case for continued investment in developing new instrumentation and techniques for MR is made, both through university investment and at national user facilities. 4.5.1 Research Infrastructure The field of MR and engineering is a highly research-instrumentation-intensive discipline. Enormous demands on research infrastructure exist in all subfields, ranging from instrumentation to synthesize materials and characterize their structure and properties, to the fabrication of devices, applications, and systems. Often, a single researcher uses highly complicated instrumentation in all three of these areas, often in the span of a few days.
For example, a single researcher active in the area of electronic materials may use sophisticated instrumentation to synthesize thin films of a new electronic material, followed by characterization of the microstructure of the film by techniques such as X-ray diffraction or TEM, followed by measurements of physical properties, such as electrical resistance or magnetization, and then proceed to fabricating devices in a clean room and characterizing the devices using state-of-the-art measurement techniques. In the course of a typical project, instrumentation costing many millions of dollars is used by the student. Over the past 10 years, the ever-rising costs of acquiring and maintaining state-of-the-art research infrastructure combined with the dire lack of funding avenues for instrumentation have culminated in a situation that can only be described as 63 See the findings and recommendations of Chapters 3 and 4 of the report National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, 2006, Advanced Research Instrumentation and Facilities, The National Academies Press, Washington, D.C., https://doi.org/10.17226/11520.
a crisis for all of materials science and engineering. Most extramural research at universities is funded by federal agencies, private foundations, and industry, which do not provide funding for the instrumentation that is needed to carry out the research that they fund. While the Department of Defense (DoD) has a mechanism to fund instruments through its Defense University Research Instrumentation Program (DURIP), the funding level of typical DURIP grants is much too small to pay the cost of many of the instruments used in materials science and engineering. This leaves the NSF as the main sponsor of research instrumentation (through its Major Research Instrumentation, or MRI, program). This program is extremely competitive, and the chances of getting funded are so low that it is inadequate to support the research instrumentation needed at major research universities. As a result, the burden to support research instrumentation has been shifted largely to the universities. Today, the mechanism by which universities support research instrumentation is mainly through start-up funds of new faculty. As a result of the crisis in extramural funding for research instrumentation, start-up funds have risen enormously over the past decade, reaching levels above a million dollars for beginning assistant professors in experimental hard matter sciences. These sharply rising start-up costs are affecting the number of faculty universities can hire, in particular at those universities that do not have large endowments. Furthermore, this mechanism of funding research infrastructure is completely inadequate for sustaining forefront research in the long term. Specifically, start-up funds are typically used to buy instrumentation within the first 5 years of a faculty member's career.
If this remains the only source of equipment funding, as the trend over the past decade suggests, instrumentation is bound to become out-of-date as academic researchers reach the years that are normally the peak productivity of their careers. Such a model is unsustainable if the United States is to remain at the forefront of materials science and engineering. DOE is the major supporter of research facilities at the National Laboratories. These laboratories are excellently equipped with state-of-the-art instruments. The use of DOE-supported scattering facilities (X rays, neutrons) is an integral part of several fields of academic research in materials. Even the U.S. portion of the International Space Station (ISS) was in 2005 designated as a national laboratory, supported by NASA, conducting key materials research (see Box 4.1). However, these national facilities are no substitute for, and no practical means to address, the crisis in research infrastructure at universities, where much of the nation's forefront research in materials is carried out. In particular, most MR requires the infrastructure to be located at a researcher's institution. To give an example, a typical materials project that involves synthesis of new materials requires a constant and immediate feedback loop between synthesis, structure, and property measurements that may go through many cycles within a short period of time, that is, a span of a few days.
BOX 4.1 Materials Research Conducted on the International Space Station The International Space Station (ISS; Figure 4.1.1) is a multination facility that was assembled between 1998 and 2011 and has been continuously occupied since 2000. The ISS hosts a wide variety of laboratory facilities to enable discovery and innovation, with the U.S. portion being designated as a National Laboratory in 2005. Current ISS facilities for materials science research include the Low Gradient Furnace, Solidification and Quenching Furnace, Microgravity Science Glovebox, Pore Formation and Mobility Investigation Apparatus, Solidification Using a Baffle in Sealed Ampoules (modification to include CVD capabilities for 2D material growth is under study), Coarsening in Solid/Liquid Mixtures Apparatus, Transparent Alloys Experiment, Electromagnetic Levitator, Electrostatic Levitation Furnace, 3D Polymer Printer, and MISSE (Materials on ISS Experiment, which hosts external exposure experiments). The following are recommended new materials research facilities as described in the MaterialsLab1 Strategic Plan: Granular Materials Facility, Brazing and/or Welding Facility, Electrolysis of Molten Glasses, Diffusion Measurements, Float Zone Furnace, Biomaterials Facility, and 3D Bioprinting. The primary selection criterion for proposals submitted to the ISS National Laboratory is that the research must require the persistent microgravity of the ISS or another unique property of the space environment and pass applicable feasibility and safety requirements. Investigators wanting to learn more about opportunities in materials research on the ISS may contact either NASA's ISS Program Office or the Center for the Advancement of Science in Space.
FIGURE 4.1.1 NASA and CASIS (the Center for the Advancement of Science in Space, the managing entity of the International Space Station (ISS) U.S. National Laboratory) support a materials science program on the ISS. Investigators in the United States have access to this orbiting laboratory for conducting long-duration experiments in microgravity, where effects including convection, buoyancy-driven flow, and sedimentation are nearly negligible (see the ISS U.S. National Laboratory website at http://www.spacestationresearch.com/, accessed March 6, 2018). As an example, a project to develop better flame-retardant textiles compared combustion in Earth's gravity to that in microgravity, where diffusion-dominated flames result. The image on the right shows this difference between a candle flame on Earth and on the ISS. The relative ease of access to the external space environment from the ISS also enables a variety of materials exposure experiments important to the space technology community. SOURCE: Left: NASA, 2011, "We Can See Clearly Now: ISS Window Observational Research Facility," Earth Observatory, March 4, https://earthobservatory.nasa.gov/features/EarthKAM. Right: NASA, 2013, "Strange Flames on the ISS," June 18, https://science.nasa.gov/science-news/science-at-nasa/2013/18jun_strangeflames; courtesy of NASA. 1 NASA, 2016, "NASA Selects 16 Proposals for MaterialsLab Investigations Aboard the International Space Station," August 2, https://www.nasa.gov/feature/nasa-selects-16-proposals-for-materialslab-investigations-aboard-the-international-space.
It is not feasible to carry out this research if it requires remote facilities, and this is true for most materials research. Last, the dire situation of deteriorating research infrastructure at universities also has more direct consequences on the nation's economy. In particular, many large research universities operate open-access or shared user facilities, such as clean rooms and material characterization facilities, which are open to outside entities such as companies. In essence, the universities also function as incubators, resources, and technology transfer opportunities for small and large companies, including start-ups. 4.5.2 General Laboratory Infrastructure The equipping of an experimental laboratory within a university is usually accomplished when the faculty member first joins the university. A similar dynamic occurs at a National Laboratory or in industry, where more base infrastructure exists but the opportunities for equipment refresh are limited. Although there are opportunities at funding agencies for researchers to secure grants to acquire new equipment, the number of opportunities and the total level of funding available are limited. Similarly, at universities, there are limited resources available to replace or upgrade intermediate-scale equipment or to revitalize physical infrastructure (HVAC, laboratory modernization, etc.). A consequence of these limitations is that acquisition of everyday experimental capabilities, such as furnaces, tensile frames, lasers, and so on, is a challenge, which means these laboratories become obsolete over time. The difference is particularly striking when visiting university laboratory facilities in other countries, in which a stronger commitment to infrastructure revitalization often is evident.
4.5.3 Midscale Instrumentation/Facilities Midscale research facilities include many of the characterization, synthesis, and processing facilities discussed in earlier sections, and complement the national user facilities discussed below. Many research universities operate characterization and fabrication facilities,64 and these are supported in part by the university or on a full-cost-recovery mode from users. The acquisition cost of new instruments, while significant, has become just one component in meeting the escalating costs 64 See, for example, National Science Foundation, 2018, Science and Engineering Indicators 2018, https://www.nsf.gov/statistics/2018/nsb20181/report/sections/academic-research-and-development/highlights#infrastructure-for-academic-r-d.
of operating a user facility.65 The annual maintenance costs and the need for dedicated technical staff are becoming increasingly expensive. While federal agency initiatives, such as the DOE BES nanoscience centers and the NSF Materials Innovation Platforms, are to be commended, the loss of such facilities at universities will limit progress, especially in the area of instrument and technique development. There is a pressing need for a new national strategy on how to make new instruments available to a wide user base, meet the operational costs, and continue to stimulate creativity and development.66 Midscale funding for facilities,67 in the range between a beam line or large magnet (tens of millions of dollars) and a full facility (in the billion-dollar range), has been a recognized challenge for some time.68 The ability to study materials at extreme conditions, for example, in high magnetic fields while under extreme pressure, at very low or high temperature, with light or neutron scattering, or with scanning probes, has become an important direction of MR on a global scale and is a prime example of the midscale funding gap. The capability to produce the desired environment is often beyond the scale of what a principal investigator can afford and also not within the budget of large-scale user facilities. For example, at the MagLab, designing and building midscale projects such as the superconducting 32 tesla magnet and the series-connected hybrid 41.5 tesla magnet costs several tens of millions of dollars today, with running costs in the millions.69 Another example is the challenge of developing next-generation high-field magnets, the funding for which typically also falls in the $4 million to $100 million midscale range. Novel growth capability also falls into this category, especially when addressing integration. 65 See Chapter 3, "Instrumentation and Universities," which outlines the various costs for facilities, in National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, 2006, Advanced Research Instrumentation and Facilities, The National Academies Press, Washington, D.C., https://doi.org/10.17226/11520. 66 National Research Council, 2006, Midsize Facilities: The Infrastructure for Materials Research, The National Academies Press, Washington, D.C. 67 Johns Hopkins University, 2016, "Workshop on Midscale Instrumentation to Accelerate Progress in Quantum Materials," http://physics-astronomy.jhu.edu/miqm/. 68 See, for example, National Science and Technology Council, 1995, Final Report on Academic Research Infrastructure: A Federal Plan for Renewal, National Science and Technology Council, Washington, D.C. 69 See The National High Magnetic Field Laboratory, "Magnet Science & Technology," https://nationalmaglab.org/magnet-development/magnet-science-technology, accessed December 3, 2018.
For example, the control of interfaces of complex quantum heterostructures or devices becomes extremely critical, with the interface often dictating the properties of the heterostructures. Control of the interface will also help us to understand the wavefunctions on either side of the interface, leading to our ability to design and synthesize functional quantum structures. This ability requires
the integration, in a single system, of an advanced toolset for materials synthesis, interface and surface control, and in situ characterization. 4.5.4 Nanoscale Science Research Centers In the past decade, a number of nanoscale science research centers have emerged whose scale is beyond those of midscale facilities at research universities. On a national scale, the DOE Office of Science's Scientific User Facilities Division operates five Nanoscale Science Research Centers (NSRCs) as user facilities located at National Laboratories, and the NSF operates the National Nanotechnology Coordinated Infrastructure at 16 universities. The National Cancer Institute's Nanotechnology Characterization Laboratory is another midscale facility, as is NIST's Center for Nanoscale Science and Technology. The five NSRCs are user centers for interdisciplinary research at the nanoscale. Each center has particular expertise and capabilities in selected areas, such as synthesis and characterization of nanomaterials; catalysis; theory, modeling, and simulation; electronic materials; nanoscale photonics; soft and biological materials; imaging and spectroscopy; and nanoscale integration. This array of centers, sponsored by various agencies, has been very successful, as judged by the high oversubscription of their use and by the many important results that they have produced and reported. This leads to the conclusion that the expansion of this type of center would be a valuable asset to promote MR in the United States. Such facilities not only empower U.S. researchers but also attract valuable international exchange and collaborations. Furthermore, organized collaboration and planning among existing and new centers as to the nature and types of facilities that they acquire and maintain would also be valuable.
4.5.5 X-Ray Light Sources

From the early days of synchrotron science, it was clear that light sources would have a large impact on the field of materials science. As is well documented, that early promise has been fulfilled many times over. Further, light sources have been improving at a rapid rate: the rate of increase in the brightness of X-ray sources over the past 30 years exceeds that of Moore's law for transistors over the same period. The fulfillment of the promise, and the exciting future possibilities, of synchrotron light sources has been amply documented, for example, in these DOE reports: Next-Generation Photon Sources for Grand Challenges in Science and Energy, Report of the Workshop on Solving Science and Energy Grand Challenges with Next-Generation
Photon Sources,70 and Report of the BESAC Subcommittee on Future X-Ray Light Sources.71 This rapid improvement in brightness has driven ever more impactful uses of X rays to study the structure and function of materials. Most recently, the United States has seen the commissioning of the new National Synchrotron Light Source II (Brookhaven National Laboratory [BNL]), with brightness significantly higher than the other U.S. synchrotron light sources, and future upgrades are planned for two of the existing sources, the Advanced Photon Source Upgrade (APS-U) and the Advanced Light Source Upgrade, as shown in Figure 4.9. The accelerator lattice design for a diffraction-limited synchrotron pioneered at MAX IV in Sweden is being implemented as upgrades to existing synchrotrons or in new facilities such as the Beijing high-energy synchrotron, which will be collocated with a major supercomputing and quantum materials effort.

A principal development of the past decade has been the emergence of X-ray free electron lasers as a complement to synchrotrons, notably the LCLS and the future LCLS-II and its high-energy upgrade, LCLS-II-HE, at SLAC in the United States. New X-ray free electron lasers have been built or are under construction at Deutsches Elektronen-Synchrotron (DESY, Germany), PSI (Switzerland), CAS Shanghai (China), and elsewhere. Collectively, these new ultrabright sources will drive further advances in the techniques, enabling transformative studies of materials with nanoscale resolution while under operating conditions and on ultrafast time scales. The United States had a significant fraction of all the world-leading capabilities 20 years ago, but that lead has eroded, and today's landscape is one of intense competition from both Europe and Asia.
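The Moore's-law comparison can be made concrete with a doubling-time estimate. The sketch below is a back-of-the-envelope illustration, not an analysis from this report: the assumed overall brightness gain (twelve orders of magnitude over 30 years) and the two-year Moore's-law doubling time are labeled assumptions.

```python
import math

# Back-of-the-envelope doubling-time comparison. The overall brightness
# gain (12 orders of magnitude over 30 years) is an illustrative assumption,
# not a figure taken from this report.
years = 30.0
brightness_gain = 1e12       # assumed total gain in X-ray source brightness
moore_doubling_years = 2.0   # canonical Moore's-law doubling time

doublings = math.log2(brightness_gain)         # ~39.9 doublings
brightness_doubling_years = years / doublings  # ~0.75 years per doubling

print(f"brightness doubling time: {brightness_doubling_years:.2f} years")
print(f"Moore's-law doubling time: {moore_doubling_years:.2f} years")
```

Under these assumptions, source brightness doubles roughly every nine months, comfortably faster than the transistor-count benchmark.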
In addition to the increasing brightness of X-ray light sources, a second technological development has revolutionized the use of X rays for materials science: the use of area detectors. These detectors have enabled very rapid data collection, allowing surveys of large swathes of reciprocal space to be undertaken and new imaging modalities, such as coherent diffraction imaging, to be developed. The former has enabled, for example, tomographic studies of crystallographic grain orientation and studies of small distortions in crystal structures, while the latter has enabled, for example, studies of strain fields in nanoparticles and nanoscale semiconductors.

The broad photon energy range available (from the far infrared to hard X rays) and the intense brightness of the beams, which allows the photon beams to be tailored to specific experimental geometries and environments, make X-ray light

70 See U.S. Department of Energy, 2009, Next-Generation Photon Sources for Grand Challenges in Science and Energy: Report of the Workshop on Solving Science and Energy Grand Challenges with Next-Generation Photon Sources, https://science.energy.gov/~/media/bes/besac/pdf/Ngps_rpt.pdf.
71 See U.S. Department of Energy, 2013, "Report of the BESAC Subcommittee on Future X-Ray Light Sources," July 25, https://science.energy.gov/~/media/bes/besac/pdf/Reports/Future_Light_Sources_report_BESAC_approved_72513.pdf.
sources near-ideal probes of the structure and function of materials. The robust suite of experimental capabilities represented by APS, ALS, BNL, LCLS, and the Stanford Synchrotron Radiation Lightsource has enabled important contributions in quantum materials (superconductivity to graphene), energy storage (solid electrolyte interphase formation), self-assembly, and advanced microelectronics (extreme ultraviolet lithography and strain engineering), as well as studies in extreme environments (high pressure and high magnetic fields) described elsewhere in this report. As the field moves toward the capability to fully integrate computational materials science, synthesis, and advanced manufacturing for real-world performance, the microscopic characterization of structure and dynamics enabled by the next generation of instruments and upgraded sources will provide the crucial link to enable materials by design.

FIGURE 4.9 Time-average brightness curves for selected existing (solid lines) and future (dashed curves) U.S. and international light sources. The plot illustrates the competitiveness of the international scene in both synchrotron and free electron laser light sources. SOURCE: Deutsches Elektronen-Synchrotron (DESY), Media Database, https://media.desy.de/DESYmediabank/ConvertAssets/Peak_Brillianz.jpg, © DESY.
4.5.6 Neutrons

The past decade has seen a revitalization of the neutron sciences in the United States. A major stimulus has been the start of operations at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL), which now operates routinely at powers of over 1 MW and produces the world's highest peak flux neutron pulses for beam research on materials. The SNS first target station now operates with 19 specialized state-of-the-art instruments dedicated to MR, spanning techniques aimed at structures at the meso-, nano-, and atomic length scales and at dynamics on the micro- to picosecond time scales. At the same time, the continuous reactor sources of neutrons, including the NIST Center for Neutron Research and the High Flux Isotope Reactor at ORNL, have seen significant improvements in the availability of cold neutron instrumentation. This has greatly increased the ability to perform unique investigations of "large-scale structures," such as those found in polymers, biomaterials, and solid-state nanoscale systems, as well as measurements of low-energy dynamics with excellent resolution and signal to noise.

The large amount of data produced by modern neutron scattering instruments has in turn produced its own set of challenges, and at present researchers are starting to see early benefits of coupling high-performance computing with neutron scattering data analysis. This trend is also present for X-ray sources and microscopies: tools for integrating large volumes of data with modeling and simulation in real time to guide experiments are leading to closer coupling of experiment and theory.
The advances in neutron scattering sources and instrumentation discussed above have enabled substantial scientific advances, spanning the range from fundamental discoveries about the nature of novel matter to new materials with specific technological applications. Unconventional superconductivity has been a forefront materials problem for three decades, and within the past decade it received a major impetus with the surprising discovery of iron-based superconductors. Almost immediately, neutron diffraction was used to elucidate the magnetic structure of parent compounds of several families of iron-based materials, showing that the ordering wave vector in most cases differed from that of the older cuprates. Detailed mapping of the phase diagrams uncovered regions of phase separation and evidence for stripe order. Inelastic neutron scattering showed the existence of a resonant magnetic excitation in the superconducting materials and found a striking relationship between the transition temperature and the resonance frequency across many different types of unconventional superconductors.

The investigation of quantum materials is now a burgeoning forefront of research on solids. The field has been transformed by the developing understanding of the key role played by topology, and by the possibility that topological materials or
excitations might play an important future role in new technologies such as quantum computation. Neutron scattering has provided crucial information for much of this effort. The role of quantum fluctuations, and in particular fractionalized excitations, has been an overriding theme in the problem of quantum spin liquids. Inelastic neutron scattering has shown evidence for several different fractional excitations: spinons in Herbertsmithite,72 which is possibly an example of a Heisenberg quantum spin liquid; magnetic Majorana fermions in α-RuCl3, which is believed to be proximate to a Kitaev quantum spin liquid; and excitations that are formally equivalent to the elusive magnetic monopoles in so-called spin ice.

Evidence for topological structures in the form of a magnetic skyrmion lattice was discovered by small-angle neutron scattering, and skyrmions now represent a promising new direction for spintronics applications. Neutron diffraction measurements of magnetic structures in various multiferroic materials have shown how chirality and frustration may play a role in multiferroic phenomena.

Polarized neutron reflectivity has been an especially valuable tool for studying buried interfaces, and it has shown how exchange bias in magnetic multilayers can lead to an interfacial region that is magnetically different from the bulk surroundings. In some cases, the interface can induce ferromagnetism in an interface layer one atom thick. Studies of interfaces between topological insulators and ferromagnetic insulators showed that shallow ferromagnetic regions can be induced in the topological insulator, and such structures might enable magnetic control via the electric field for new types of technologies.
Neutrons have had a large impact in the realm of functional materials, particularly thermoelectrics, where careful measurements of phonon anharmonicities have revealed a path to producing greatly improved materials. Similarly, a deeper understanding of the requirements for improving energy storage materials has come about through a combination of conventional and operando diffraction studies of batteries and related materials. Neutron imaging has been particularly useful for understanding the inner workings of fuel cells.

The deep penetration of neutrons into most materials has enabled significant new insights into the microscopic origin of the properties of metal alloys, including the deformation and plasticity of conventional alloys and the phase transformation behavior of new high-entropy alloys. Neutrons have also been used to characterize the microstructure, and its consequent effects on mechanical properties, of additively manufactured components.

The structure of porous materials has been investigated over a large range of length scales using both diffraction and small-angle scattering. Particularly important work has been done on metal-organic frameworks (MOFs), showing how the separation of light hydrocarbons might be greatly improved. Neutrons have also

72 Herbertsmithite is a mineral with chemical formula ZnCu3(OH)6Cl2.
shed light on the potential of both MOFs and shales for carbon sequestration and hydrogen storage.

Neutron scattering is ideally suited for studies of biological systems because it is a penetrating and nondestructive probe, providing structural and dynamical information across cellular scales of length and time, spanning from the position of an individual hydrogen atom in a protein to the nano- to mesoscale structure and dynamics of functional complexes and hierarchical assemblies within a cell. At the atomic level, neutrons have provided insight into the role of critical hydrogen atoms in the catalytic mechanism of a peroxidase and in the binding of drug targets to proteins. Neutrons have been used in studies of proteins and their complexes because they can detect conformational changes and assembly/disassembly processes under near-physiological conditions. Nanoscale studies of protein-nucleic acid complexes have revealed how methylation events are used as a regulatory mechanism for ribosomal ribonucleic acid folding and have contributed to understanding of the regulatory function of cardiac myosin-binding protein C, a protein vital for maintaining regular heart function. Neutrons also reveal the organization and assembly of biological membranes and their interactions in the cell, providing insights into the structural and mechanical properties of lipid nanodomains and the mechanism of voltage gating, which can impact numerous neurological diseases as well as anesthetic action. The penetrating and nondestructive nature of neutrons has enabled studies investigating the architecture of the plant cell wall, providing new knowledge about the structure of the cellulose microfibril and its breakdown to release sugars for biofuel production.
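The hydrogen sensitivity that underlies these biological studies, and the H/D isotope contrast exploited in contrast-variation experiments, can be illustrated with a scattering-length-density (SLD) calculation. The sketch below uses standard tabulated coherent scattering lengths; the function and variable names are ours, chosen for illustration.

```python
# Hedged sketch: neutron scattering-length densities (SLDs) of H2O and D2O,
# illustrating the large H/D contrast used in contrast-variation experiments.
# Coherent scattering lengths are standard tabulated values, in femtometers.
N_A = 6.02214076e23  # Avogadro's number (1/mol)

B_COH_FM = {"H": -3.739, "D": 6.671, "O": 5.803}

def sld_inv_angstrom2(atoms, molar_mass_g, density_g_cm3):
    """SLD = (sum of scattering lengths) / (molecular volume), in 1/Angstrom^2."""
    b_sum_fm = sum(B_COH_FM[el] * n for el, n in atoms.items())
    volume_cm3 = molar_mass_g / (density_g_cm3 * N_A)  # volume per molecule
    # fm -> cm: 1e-13; then cm^-2 -> Angstrom^-2: 1e-16
    return (b_sum_fm * 1e-13 / volume_cm3) * 1e-16

sld_h2o = sld_inv_angstrom2({"H": 2, "O": 1}, 18.015, 0.997)  # ~ -0.56e-6
sld_d2o = sld_inv_angstrom2({"D": 2, "O": 1}, 20.028, 1.107)  # ~ +6.37e-6

print(f"SLD(H2O) = {sld_h2o:.2e} 1/A^2")
print(f"SLD(D2O) = {sld_d2o:.2e} 1/A^2")
```

Because H and D have opposite-sign scattering lengths, mixing light and heavy water lets experimenters tune the solvent SLD to match (and thereby hide) chosen components of a complex, which is the basis of targeted H/D contrast.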
In the emerging thematic area of biological complexity, neutrons have been used to study cellular processes in living cells, a new area of application that has only recently become feasible. This has been achieved through targeted H/D isotope contrast to reveal the formation of nanodomains in plasma membranes and to study the dynamical processes of proteins in vivo.

4.5.7 High Magnetic Field Facilities

High magnetic fields represent a continuously tunable, reversible, and intrinsically quantum and topological probe of materials. Magnetic fields in the 10 to 100 T range compete with (and thereby effectively probe) the correlation energies of quantum matter and the strong spin-orbit coupling of topological materials. Magnetic fields, by virtue of being both a thermodynamic variable and a vector quantity, can separate competing energy scales, as in quantum fluids and quantum spin liquids, and can induce new states of quantum matter (through a quantum phase transition, induced at absolute zero by varying, for example, the magnetic field), such as magnetic Bose-Einstein condensates and spin supersolids. Magnetic fields above 100 T will exceed the quantum limit of many low-carrier-density metals and sufficiently suppress the
highest temperature superconductivity to reveal the underlying quantum criticality.73 Moreover, the technological impact of high magnetic field research is increasingly significant: whereas 10 to 20 T was sufficient to reveal fundamental electronic and optoelectronic properties (carrier mass, exciton binding energy, etc.) in silicon and GaAs, analogous studies of the new generation of atomically thin 2D semiconductors (such as MoS2, phosphorene, and the transition metal dichalcogenides) will require 100 to 200 T.

When magnetic fields interact with moving charges, they probe a characteristic length scale that decreases as the inverse square root of the magnetic field. High magnetic fields of 20 T can probe spatial features comparable to a 6 nm diameter quantum dot, while fields of 80 T are necessary to shrink this length by another factor of 2. As such, the study of electronic and magnetic phenomena down to atomic dimensions necessitates pushing the limits of present magnet technology.

Three recent studies have looked into the current status of, and the potential for future developments in, high field magnet research. For more detailed information and discussion of the various technical issues, the committee refers readers to these reports.74 An important direction besides magnet development will be the integration of high fields with beam lines, as indicated by both the Committee on Opportunities in High Magnetic Field Science and MagSci. This would allow the investigation of the neutron and X-ray scattering properties of materials in high magnetic fields.75 Currently, the highest field magnet on a beam line worldwide is a 26 T system at the Helmholtz-Zentrum Berlin developed by the National High Magnetic Field Laboratory.
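The inverse-square-root scaling quoted above corresponds to the magnetic length, l_B = sqrt(ħ/(eB)), the characteristic length for a charge moving in a field. A minimal sketch reproduces the numbers in the text: roughly 6 nm at 20 T and a factor-of-2 shrinkage at 80 T.

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant (J*s)
E = 1.602_176_634e-19     # elementary charge (C)

def magnetic_length_nm(b_tesla: float) -> float:
    """Magnetic length l_B = sqrt(hbar / (e * B)), in nanometers."""
    return math.sqrt(HBAR / (E * b_tesla)) * 1e9

l20 = magnetic_length_nm(20.0)  # ~5.7 nm, comparable to a ~6 nm quantum dot
l80 = magnetic_length_nm(80.0)  # ~2.9 nm

print(f"l_B(20 T) = {l20:.2f} nm")
print(f"l_B(80 T) = {l80:.2f} nm")
print(f"ratio = {l20 / l80:.2f}")  # sqrt(80/20) = 2, as stated in the text
```

Quadrupling the field halves the probed length scale, which is why reaching atomic dimensions requires fields far beyond today's magnets.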
4.5.8 Advanced Computational Facilities

Advanced computational facilities have played a major role in promoting and facilitating the leap forward both in predictive modeling of functional materials and in developing the framework for understanding the characteristics of materials across multiple scales. These facilities, funded mostly by DOE and NSF, are home to the most sophisticated high-performance computers (HPCs), which enable computational materials scientists to carry out detailed simulations of material properties from microscopic to macroscopic length scales and from ultrafast (subfemtosecond) to standard time domains, regions not readily accessible in experiments.

Under the umbrella of its Advanced Scientific Computing Research program, DOE continues to maintain several world-class HPC centers, enabling collaborative

73 Quantum criticality is a phase transition brought on by quantum fluctuations at absolute zero.
74 National Research Council, 2013, High Magnetic Field Science and Its Applications in the United States: Current Status and Future Direction, The National Academies Press, Washington, D.C.
75 National Research Council, 2007, Condensed-Matter and Materials Physics: The Science of the World Around Us, The National Academies Press, Washington, D.C.
research in a number of fields, including materials science, at the following: the Argonne Leadership Computing Facility, the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory (LBNL), the Oak Ridge Leadership Computing Facility, and the Energy Sciences Network at LBNL. These centers serve the high-performance computing needs of scientists from DOE laboratories, academia, and industry and maintain a high global profile in cutting-edge computer hardware. They were among the first to achieve petascale (10^15 machine operations per second) capability and are now gearing up to break the exascale (10^18 machine operations per second) barrier. These computer hardware advances, coupled with a global effort in the development of refined computational codes suitable for application to challenging problems at a variety of length and time scales, have enabled simulations of complex phenomena that were unimaginable only a decade ago. The materials science community has been one of the biggest beneficiaries of these advances, given the relevance of the computational techniques used by researchers on collaborative projects initiated first under the Nanoscience and Nanotechnology Initiative and more recently under MGI. Unsurprisingly, materials science researchers are now among the dominant users (comprising greater than 18 percent of all high-performance computing users). Fruitful interactions between experimentalists and computational scientists fostered by these two programs, among others, could not have been possible without the resources made available by the advanced computational facilities.

4.6 CONCLUSION, FINDINGS, AND RECOMMENDATIONS

No element of investment in MR is more important than facilities, instrumentation, and infrastructure.
With excellent facilities and instrumentation in place, a considerable amount of important research can be done with no further investment. These tools stimulate and unleash creativity and productivity. Several findings and recommendations aim to improve our national competitive status in this domain.

Key Finding: Progress in 3D characterization, computational materials science, and advanced manufacturing and processing has enabled an increasing digitization across disciplines of materials research and has in some cases dramatically accelerated and compressed the time from discovery to inclusion in new products.

Key Recommendation: Federal agencies (including the National Science Foundation and the Department of Energy) with missions aligned with the advancement of additive manufacturing, and other modes of digitally controlled manufacturing, should by 2020 expand investments in materials
research for automated materials manufacturing. The increased investments should be across the multiple disciplines that support automated materials synthesis and manufacturing. These range from the most fundamental research to product realization, including experimental and modeling capabilities enabled by advances in computing, to achieve the aim that by 2030 the United States is the leader in the field.

Key Finding: Infrastructure at all levels, from midscale instrumentation for materials characterization, synthesis, and processing, with purchase costs of $4 million to $100 million, in universities and national laboratories, to large-scale research centers such as synchrotron light sources, free electron lasers, neutron scattering sources, high field magnets, and supercomputers, is essential for the health of the U.S. materials science enterprise. Midscale infrastructure, in particular, has been sorely neglected in recent years, and the cost of maintenance and dedicated technical staff has increased enormously.

Key Recommendation: All U.S. government agencies with interests in materials research should implement a national strategy to ensure that university research groups and national laboratories have local access to develop, and continuing support for the use of, state-of-the-art midscale instruments and laboratory infrastructure essential for the advancement of materials research. This infrastructure includes materials growth and synthesis facilities, helium liquefiers and recovery systems, cryogen-free cooling systems, and advanced measurement instruments.
The agencies should also continue support of large facilities such as those at Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Argonne National Laboratory, SLAC National Accelerator Laboratory, the National Synchrotron Light Source II (Brookhaven National Laboratory), and the National Institute of Standards and Technology, and should engage and invest in long-range planning for upgrades and replacements for existing facilities.

Finding: There is a strong need for educated end users of software in order for the approximations, limitations, and full range of use cases to be appreciated and used toward significant impact on science and engineering. This includes, in particular, methods of machine learning as applied to materials.

Recommendation: Computational materials science training should be part of a core curriculum in undergraduate and graduate training in physics, chemistry, materials science, and related fields, and this should include training in some of the larger computational software packages (and not just MATLAB programming). More than one section of
this training is recommended to be in the area of machine learning applied to materials.

Finding: Researchers are close to a new world of precision synthesis in which the positions and species of individual atoms, molecules, and defects can be controlled to produce desired properties from the nanoscale to the macroscale, in both organic and inorganic materials, from sequence-controlled polymerization to molecular beam epitaxy with in situ characterization, scanning probe microscopies, spectroscopy, and angle-resolved photoemission.

Recommendation: All agencies that fund materials research, with the National Science Foundation and Department of Energy coordinating, should support research in the area of materials precision synthesis, particularly new methods that test the limits of what is fundamentally achievable and a new understanding of whether the levels of precision that can be achieved actually result in desired or interesting properties. The supported research should clarify when and how exquisitely precise synthesis is essential to achieving new functionality in materials. A multiagency workshop in 2020 or earlier could serve to initiate and propel this line of research into the next decade.

Finding: The integration of computational control and automation with advanced characterization techniques has made it possible to build 3D data sets that represent materials digitally with greater fidelity than previously imaginable. Methodologies for 3D characterization and analysis are currently developed locally; universally agreed-upon process flows, tools, and analysis techniques are badly needed.
Recommendation: Federal agencies should invest significant resources in the creation and widespread use of autonomous experimental three-dimensional characterization and in the development of universal and widely shared computational methodologies for advanced registration, reconstruction, classification, and analysis of digital data sets.

Finding: The predictive design and fundamental understanding of the growth of crystalline materials promise enormous potential to impact materials research and technologies. Crystalline materials play a fundamentally important role in modern society and commerce.

Recommendation: The establishment of materials growth hubs that are multiagency efforts (e.g., the National Science Foundation, Department
of Energy, National Institute of Standards and Technology, and Department of Defense) is recommended, where it is recognized that agencies with different but mutually beneficial priorities can share and fund knowledge while keeping proprietary information separate. Work at these facilities would not only seek to improve established methods but also initiate new directions. These facilities would serve not only as foundries for the synthesis of materials and the development of new methods but also as homes to digital materials databases and real-world stockpiles of materials that would be available upon request.

Finding: Computer-intensive fields such as artificial intelligence, machine learning, and "big data" collection and analysis are now beginning to have a significant impact in materials science, an impact that researchers are only starting to see. To realize the full potential of this revolution, it is essential that researchers have access to the most advanced computer hardware and software.

Finding: In the future, as the scaling of microchips slows, emphasis on maximizing the speed of floating-point operations might not be the best strategy for enabling increases in computing speed. Instead, advanced data analytics, fit-for-purpose computers, and software interfaces (application programming interfaces and graphical user interfaces) might gain increased importance.

Recommendation: The Department of Energy and National Science Foundation should begin in 2020 to support a broad computing program to develop "next-generation" and "fit-for-purpose" computers. These computers should not only focus on speed but also include improved data analytics and other capabilities.
The program should also include support to create and maintain new software and software interfaces (application programming interfaces and graphical user interfaces) and ensure that the broad materials research community has access to these tools. This support for code development should go not just to centers but also to single principal investigators or small groups.

Finding: International collaboration plays an essential role in materials research, with engagements ranging from individual researchers to more formal institutional and facility partnerships. Examples include the International Space Station, CERN, SESAME, and LIGO. Advantages include pooling resources, promoting diversity in approaches for scientific progress, and scientific diplomacy.
Finding: Keeping in mind the importance of international collaboration, it is also important to note that there is a strong linkage between materials research, economic competitiveness, and national security. The U.S. leadership position has begun to erode, with major materials research initiatives and investments taking place outside the United States. Additionally, major state-of-the-art facilities such as synchrotron light sources, free electron lasers, neutron scattering sources, high field magnets, and supercomputers are needed to attract and retain top researchers. Simply put, if the United States does not maintain leadership with major state-of-the-art facilities, the erosion will only accelerate and hinder the U.S. ability to play major roles in international collaborations.

Recommendation: The U.S. government should aggressively enhance its support of large research facilities. Facility roadmaps for the funding agencies (National Science Foundation, Department of Energy, National Institute of Standards and Technology, and Department of Defense) should reflect a strategy for the coming decade of maintaining or increasing the current leadership role of the United States in major facilities for materials research, while remaining abreast of developments in other countries and seeking cooperation when mutual benefit can be found.