
Frontiers of Materials Research: A Decadal Survey (2019)



4 Research Tools, Methods, Infrastructure, and Facilities

In the past decade, significant advances have been made in the characterization (Section 4.1), synthesis and processing (Section 4.2), and computational (Section 4.3) capabilities available to materials researchers. These new tools have enabled previously unachievable materials insights, especially when they are used in combination—for example, in situ measurement and control of novel synthetic strategies, or advanced data analytics techniques used simultaneously with advanced imaging diagnostics (Section 4.4). Development of these tools is a research frontier in its own right and merits further investment. This chapter highlights a number of methodological advances and the impact they have had on the materials community. One consequence of continually improving tools is the need for infrastructure reinvestment to ensure the availability of state-of-the-art tools, and novel modalities for such investment are discussed (Section 4.5). Last, the current and emerging capabilities available at intermediate-scale facilities as well as national user facilities are highlighted (Section 4.5).

4.1 CHARACTERIZATION TOOLS

4.1.1 Electron Microscopy

Transmission electron microscopy (TEM) is a key technique in all areas of materials science because it helps reveal how a material's internal structure is determined by synthesis and processing and how that structure correlates with physical properties and performance. Imaging, diffraction, and spectroscopy can all be carried out across length scales ranging from interatomic distances to micrometers, often within a single transmission electron microscope and on the same sample. The past decade has seen tremendous advances in instrumentation—in particular, spherical and chromatic aberration correctors, monochromators, and new detectors (see Figure 4.1). One example is the continued evolution of aberration-corrected microscopes: advanced aberration-corrected scanning transmission electron microscopes (STEMs) can now achieve 0.5 Å spatial resolution in TEM and STEM modes along with 0.1 eV energy resolution.1,2 Other advances afforded by spherical aberration correction include picometer precision in determining atom-column positions, and tremendous improvements in the quality of atomic-scale electron energy-loss and energy-dispersive X-ray spectroscopic images, the latter in combination with new large-solid-angle detectors. Understanding what sets the ultimate limit in the quest to achieve even higher spatial resolution is a topic of ongoing research in the field. Advances in monochromators now allow for energy resolution of 30 meV (or better) in electron energy-loss spectroscopy, sufficient to study phonons. Faster cameras and new types of sample holders developed over the past decade provide new opportunities for in situ studies of a wide range of processes in materials.

1 S.J. Pennycook, "The impact of STEM aberration correction on materials science," Ultramicroscopy 180:22-33 (2017).
2 Q.M. Ramasse, "Twenty years after: How 'Aberration correction in the STEM' truly placed a synchrotron in a microscope," Ultramicroscopy 180:41-51 (2017).

Important ongoing developments include high-speed pixel array detectors that record the scattered electrons as a function of position in the detector plane. These new detectors enable new imaging modes (e.g., differential phase contrast to image electric polarization) and, more generally, improve the visibility of features and the interpretability of electron microscope images.

In parallel with instrumentation, electron microscopy techniques have also made substantial advances. One of the main advantages of electrons for imaging—namely, their strong interaction with matter—was long thought to pose a challenge for the quantitative interpretation of image intensities. Truly quantitative interpretation of image intensities was demonstrated in the past decade, opening up quantitative analysis of atomic-resolution images not just in terms of the position, but also the content, of atomic columns in a sample. Another highly active research area is three-dimensional (3D) imaging (electron tomography)3 of crystalline samples. A number of different approaches are actively being developed in the field, and such approaches have been applied to analyze the 3D positions of atoms in nanoparticles. Tomography has also been used in diffraction contrast imaging of crystal defects and, in combination with in situ straining, has allowed for dynamic visualization of the interactions of dislocations with grain boundaries.4

FIGURE 4.1 Improvements in the spatial resolution in light and electron microscopy. SOURCE: Reprinted by permission from Springer Nature: D.A. Muller, 2009, Structure and bonding at the atomic scale by scanning transmission electron microscopy, Nature Materials 8:263, © 2009.

3 E. Maire and P.J. Withers, "Quantitative X-ray tomography," International Materials Reviews 59(1):1-43 (2014).
4 A. King, P. Reischig, S. Martin, J.F.B.D. Fonseca, M. Preuss, et al., "Grain mapping by diffraction contrast tomography: Extending the technique to subgrain information," Risø International Symposium on Materials Science: Challenges in Materials Science and Possibilities in 3D and 4D Characterization Techniques, September 2010, Denmark, hal-00531696.
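To make the pixel-array-detector imaging modes mentioned above concrete, here is a minimal sketch, in Python with NumPy, of the center-of-mass reduction behind differential phase contrast: for every probe position in a 4D-STEM data set, the deflection of the diffraction pattern is estimated from its center of mass. The scan size, Gaussian disk model, and synthetic data are illustrative assumptions, not values from any particular instrument.

```python
import numpy as np

def center_of_mass_shifts(data4d):
    """Per-probe-position center of mass (CoM) of the diffraction pattern.

    data4d has shape (ny, nx, ky, kx): one ky-by-kx diffraction pattern per
    probe position. The CoM shift is proportional to the momentum transferred
    to the beam, which for a thin sample tracks the projected electric field
    (the basis of differential-phase-contrast imaging).
    """
    ny, nx, ky, kx = data4d.shape
    qy, qx = np.mgrid[0:ky, 0:kx]                  # detector-plane coordinates
    total = data4d.sum(axis=(2, 3))                # total counts per position
    com_y = (data4d * qy).sum(axis=(2, 3)) / total
    com_x = (data4d * qx).sum(axis=(2, 3)) / total
    # Subtract the mean CoM so that only field-induced deflections remain.
    return com_y - com_y.mean(), com_x - com_x.mean()

# Synthetic demo: an 8x8 scan of 32x32 Gaussian "diffraction disks," each
# displaced by a known random shift that the reduction should recover.
rng = np.random.default_rng(0)
ky = kx = 32
qy, qx = np.mgrid[0:ky, 0:kx]
true = rng.normal(0.0, 1.5, size=(8, 8, 2))
data = np.zeros((8, 8, ky, kx))
for iy in range(8):
    for ix in range(8):
        sy, sx = true[iy, ix]
        data[iy, ix] = np.exp(-((qy - ky/2 - sy)**2 + (qx - kx/2 - sx)**2) / 20.0)

dy, dx = center_of_mass_shifts(data)
print(np.allclose(dy, true[..., 0] - true[..., 0].mean(), atol=0.1))  # True
```

Production 4D-STEM packages (e.g., py4DSTEM or LiberTEM) add calibration, masking, and noise handling on top of this core reduction.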

4.1.2 Atom Probe Tomography

Current and emerging research areas in the physical and life sciences increasingly require the capacity to quantitatively measure the structure and chemistry of materials at the atomic scale. This atomic-scale information enables nanoscience research across a wide range of disciplines, including materials science and engineering, fundamental physics, chemical catalysis, nanoelectronics, and structural biology. Atom probe tomography (APT) is the only currently available material analysis technique offering extensive capabilities for simultaneous 3D imaging and chemical composition measurement at the atomic scale. It provides 3D "maps" that show the position and elemental species of tens of millions of atoms from a given volume within a material, with a spatial resolution comparable to that of advanced electron microscopes (around 0.1-0.3 nm in depth and 0.3-0.5 nm laterally) but with higher analytical sensitivity (<10 appm). The development of commercially available pulsed-laser atom probe systems now allows for the 3D analysis of composition and structure at atomic resolution in nonconductive systems such as ceramics, semiconductors, organics, glasses, oxide layers, and even biological materials, in addition to metals and alloys. New focused ion beam (FIB) methods enable the fabrication of tailored, site-specific samples for atomic-scale microscopy with much higher throughput than was previously possible.

The latest generation of atom probe instruments offers increased detection efficiency across a wide variety of metals, semiconductors, and insulators, increasing the fraction of atoms detected from approximately 60 percent to approximately 80 percent and thereby increasing sensitivity. Faster and variable repetition rates dramatically increase the speed of data acquisition, and advanced laser control algorithms provide measurably improved sample yields. Atom probe experiments are inherently low throughput: the rate at which experiments can be conducted limits the outcomes that can be achieved, so improvements in the speed of acquisition and the yield of successful data sets promise an enormous step forward. Higher detection efficiency also gives greater sensitivity per unit volume, which is extremely useful for the measurement of trace quantities of materials, a great help in geoscience applications such as mineral dating, and in the measurement of nanoparticles or quantum devices.

One opportunity is the development of new detectors or detector technologies that could push atom detection limits closer to 100 percent and raise the possibility of kinetic energy discrimination, which would permit deconvolution of overlapping isotopes. Another significant opportunity lies in the development of multimodal instruments incorporating either a TEM or an SEM column directly into an APT system, or vice versa, to provide real-time or intermittent imaging or diffraction data. This could allow assessment of specimen shape and crystallography during APT analysis, which could significantly increase reconstruction accuracy for complex heterogeneous materials. Additional opportunities include automation of procedures such as specimen alignment and application-specific control, which could free the user from monitoring the acquisition and encourage optimized analysis conditions as different material types or interfaces are exposed during analysis. Currently, there is significant ongoing discussion about the development of APT standards.
Such developments could help in establishing unified protocols for APT sample preparation, data collection, data reconstruction and analysis, and reporting of results worldwide. All these developments could lay the foundation for a bright future for APT as a characterization capability that can take not only materials scientists but also researchers from a variety of disciplines, including geology, biology, and solid-state materials, closer to the goal of determining the 3D composition, structure, and chemical state of a material atom by atom. Sophisticated data analysis tools are required to extend the reach of the technique beyond visualization and to extract the kind of meaningful, quantitative information that is required for materials design (e.g., for thermodynamic calculations or grain boundary engineering).
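As a concrete example of the kind of quantitative analysis step described above, the following sketch converts ion flight times to mass-to-charge ratios with the idealized time-of-flight relation m/n = 2eV(t/L)^2 and then "ranges" the spectrum into elemental identities to estimate a composition. It is written in Python with NumPy; the flight length, voltage, ranging windows, and two-element alloy are hypothetical stand-ins for real instrument parameters.

```python
import numpy as np

E = 1.602176634e-19       # elementary charge (C)
AMU = 1.66053906660e-27   # atomic mass unit (kg)

def mass_to_charge(tof_s, flight_m, voltage_v):
    """Idealized APT relation m/n = 2 e V (t/L)^2, in Da per unit charge.
    Real instruments add flight-path ("bowl") and voltage corrections."""
    return 2.0 * E * voltage_v * (tof_s / flight_m) ** 2 / AMU

def composition(mtc, ranges):
    """Assign each ion to an element by its mass-to-charge window
    ("ranging") and return atomic fractions of the ranged ions."""
    counts = {el: int(np.sum((mtc >= lo) & (mtc <= hi)))
              for el, (lo, hi) in ranges.items()}
    total = sum(counts.values())
    return {el: c / total for el, c in counts.items()}

# Sanity check: a ~529 ns flight over 0.1 m at 5 kV corresponds to a
# 27 Da singly charged ion (Al+), per the relation above.
print(round(mass_to_charge(5.29e-7, 0.1, 5000.0), 1))   # -> 27.0

# Synthetic demo: 10,000 ions from a 98/2 alloy, with detector spread.
rng = np.random.default_rng(1)
true_mass = np.where(rng.random(10_000) < 0.98, 26.98, 44.96)  # Al, Sc
mtc = true_mass + rng.normal(0.0, 0.05, true_mass.shape)
ranges = {"Al": (26.8, 27.2), "Sc": (44.7, 45.2)}              # hypothetical
print(composition(mtc, ranges))   # ~{'Al': 0.98, 'Sc': 0.02}
```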

Intensive research in this area promises to open up the application of this powerful technique to a wide range of scientific research areas.5

5 A. Devaraj, D.E. Perea, J. Liu, L.M. Gordon, T.J. Prosa, P. Parikh, D.R. Diercks, S. Meher, R.P. Kolli, Y.S. Meng, and S. Thevuthasan, International Materials Reviews 63:68 (2018).

4.1.3 Scanning Probe Microscopies

Scanning probe microscopy (SPM) fundamentally relies on atomic interactions between a tip and a surface. In the two decades following its invention, advances focused on increasing spatial resolution, developing quantitative theory of tip-sample interactions, and improving the robustness of signals. The first wave of SPM, extending the range of properties probed beyond surface structure, resulted in EFM (electric force microscopy), SKFM (scanning Kelvin force microscopy), PFM (piezoresponse force microscopy), SCM (scanning capacitance microscopy), and so on. The last decade witnessed a dramatic expansion of the properties that can be probed by exploiting the frequency dependence of imposed and detected signals, achieving low detection limits, increasing scan and detection speed, and managing spatially and temporally resolved functional data sets. These advances allow access to property functions, rather than just property constants, at nanometer resolution, driving advances including the following:

• Not only capacitance and charge, but the real and imaginary components of the dielectric function in organic, inorganic, and biological systems from impedance probes;
• Spatially resolved quantum efficiency of photogenerated charge in solar cell materials;
• Ultrasonic force detection for subsurface imaging;
• Electrochemical strain and ionic diffusion in battery materials;
• Dynamic processes, including surface diffusion and real-time nucleation and growth in phase transformations;
• Ferroelectric switching and domain wall dynamics;
• Flexoelectricity of organic or inorganic materials in water or ambient environments;
• Force modulation quantifying elastic moduli and energy dissipation locally;
• Nuclear magnetic resonance, spin-resolved STM, and spectroscopy of magnetic atomic particles;
• Multi-tip STM quantifying transport and electronic structure of nanotubes, graphene, and two-dimensional (2D) materials; and
• Switching of polaritons in 2D materials and others with near-field IR spectroscopy.

Many of the recent SPM advances have been made routine and will continue to provide facile characterization to drive progress in materials research and application in the next decade. Some challenges on the horizon require additional advances. The expansion of in situ/in operando measurements toward realistic conditions of chemical environment, temperature, and pressure would eliminate the need to extrapolate from measurement conditions to application conditions; the opportunity space in this area includes battery materials, fuel cells, corrosion, catalysis, thin film growth, and nanoelectronics fabrication. Imaging rates are continuously increasing, but they are not routinely at video rates; as more scanning probes achieve this speed, the range of dynamic processes that can be quantified will increase. While an optimal pathway is not yet clear, the potential implementation of quantum computation requires characterization of quantum mechanical behavior in a variety of settings: quantum optics at multiple frequencies, electronic transport in various materials configurations, and manipulation of matter at the atomic scale. The materials sets for this application range from graphene and other van der Waals materials, to topological insulators, to silicon qubits.

SPMs that use mechanical interactions to acquire subsurface images can produce tomographic data, taking characterization into an additional dimension. Taking property tomography to the next level will advance understanding in thin film heterostructures, cells and biological materials, composites, and solar cells. Many SPM techniques have evolved to probe structure and multiple property functions simultaneously, creating large data sets of interconnected information. Integrating the concepts of big data and machine learning could yield unexpected insight into complex behavior in functional materials.
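As an illustration of what such data mining can look like, the sketch below applies principal component analysis, via a singular value decomposition, to a synthetic spectral-imaging SPM data set in which every pixel of the scan carries a full response curve. The scan dimensions, response shapes, and two "phases" are invented for the demonstration; the point is only that unsupervised decomposition separates regions with different local responses without any labeling.

```python
import numpy as np

# Synthetic SPM spectral-imaging data: a 64x64 scan with a 100-point
# response curve (e.g., amplitude versus drive frequency) at every pixel.
rng = np.random.default_rng(2)
ny, nx, ns = 64, 64, 100
freq = np.linspace(0.0, 1.0, ns)

# Two hypothetical phases with different resonance responses, split down
# the middle of the scan, plus measurement noise.
resp_a = np.exp(-((freq - 0.4) / 0.05) ** 2)
resp_b = np.exp(-((freq - 0.6) / 0.08) ** 2)
right_half = (np.arange(nx) >= nx // 2)[None, :, None]     # (1, nx, 1) mask
data = np.where(right_half, resp_b, resp_a) + rng.normal(0, 0.05, (ny, nx, ns))

# PCA by SVD of the mean-centered (pixels x spectrum) matrix.
X = data.reshape(-1, ns)
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = (U[:, :2] * S[:2]).reshape(ny, nx, 2)   # per-pixel component weights

print(S[:4] ** 2 / (S ** 2).sum())     # first component carries the contrast
print((scores[..., 0] > 0).mean())     # ~0.5: the sign splits the two phases
```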
4.1.4 Time-Resolved, Especially Ultrafast Methods

In the last decade, significant advances in time-resolved, ultrafast methods have been achieved, with picosecond resolution routine, femtosecond resolution common, and attosecond resolution emerging. These methods enable the study of atomic-scale dynamics in materials. Ultrafast, atomic-scale dynamical motions underlie the performance of all functional materials and devices, and the ability to resolve them opens previously untapped potential to enhance materials performance and create new functionality. Such methods are available through table-top systems, owing to advances in laser technology, as well as through an emerging number of X-ray free electron laser user facilities, beginning with the Linac Coherent Light Source (LCLS) at SLAC National Accelerator Laboratory. Upgrades to LCLS are already under way to enhance its capabilities. More intense and coherent synchrotron sources are also becoming available, which will enable a wider variety of time-resolved experiments.

Applications of ultrafast spectroscopy have been especially useful for investigating ultrafast biological processes, such as photo-induced proton and electron transfer or excitation energy transfer, where time-resolved spectroscopic techniques have been central to understanding the dynamics. All the electron transfer steps, especially the initial ones, are ultrafast, and early femtosecond pump-probe experiments revealed the fine details of this process.6 Another, more recent example of the impact of ultrafast methods is in atomic and ionic diffusion, which is fundamental to the functionality, synthesis, and stability of a wide range of materials. In particular, diffusion of electroactive ions in complex electrode materials is central to the function of fuel cells, batteries, and membranes used for desalination and separations. While much is known from first-principles modeling and simulation about how ions diffuse through a lattice,7 little is known experimentally about the atomic-scale processes involved in ion diffusion. Individual ion-hopping events between adjacent interstitial sites may approach time scales of approximately 100 fs and are associated with significant changes in the crystal strain field,8 which in turn can influence the dynamics of neighboring ions.

The large response of many complex materials to electromagnetic radiation raises the possibility of ultrafast control of materials properties through the application of short, intense photon pulses. This new field of materials research is showing promise in a number of areas, including especially strongly correlated electron materials. One example is multiferroic materials, which show promising potential for applications in which magnetic order is controlled by electric fields; however, the underlying physics and ultimate speed of magnetoelectric coupling remain largely unexplored. Ultrafast resonant X-ray diffraction has revealed the spin dynamics in multiferroic TbMnO3 coherently driven by an intense few-cycle THz light pulse tuned to resonance with an electromagnon mode.9 The results show that atomic-scale magnetic structures can be directly manipulated with the electric field of light on a subpicosecond time scale.

6 W. Holzapfel et al., "Initial electron-transfer in the reaction center from Rhodobacter sphaeroides," Proceedings of the National Academy of Sciences of the United States of America 87(13):5168-5172 (1990).
7 G.S. Gautam, P. Canepa, A. Abdellahi, et al., Chem. Mater. 27:3733 (2015).
8 A. Van der Ven, J. Bhattacharya, and A.A. Belak, Acc. Chem. Res. 46:1216 (2013).
9 T. Kubacka, J.A. Johnson, M.C. Hoffmann, et al., Science 343:1333 (2014).
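A minimal example of how delay scans from such experiments are reduced: the sketch below, in Python with NumPy and SciPy, fits a single-exponential relaxation to a synthetic pump-probe trace to extract a characteristic time scale. The 1 ps decay, sampling grid, and noise level are invented, the zero of delay is assumed known, and real analyses additionally convolve the model with the instrument response.

```python
import numpy as np
from scipy.optimize import curve_fit

def transient(t, amp, tau, offset):
    """Step excitation at t = 0 followed by exponential relaxation --
    the simplest pump-probe signal model (no pulse-width convolution)."""
    return offset + amp * np.exp(-t / tau) * (t >= 0)

# Synthetic delay scan: -1 to +5 ps sampled every 50 fs, with noise.
rng = np.random.default_rng(5)
t_ps = np.linspace(-1.0, 5.0, 121)
data = transient(t_ps, 1.0, 1.0, 0.1) + rng.normal(0.0, 0.03, t_ps.size)

popt, _ = curve_fit(transient, t_ps, data, p0=(0.5, 0.5, 0.0))
amp, tau, offset = popt
print(f"fitted relaxation time: {tau:.2f} ps")   # close to the true 1.00 ps
```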

Applications of X-rays to quantum materials research include ongoing efforts to understand high-temperature superconductivity, the recent detection and spatial mapping of spin currents using X-ray spectromicroscopy, and the direct demonstration and discovery of new electronic phases of topological quantum matter with angle-resolved photoelectron spectroscopy. Despite this important progress in understanding fundamental materials physics, the direct impact of X-ray tools on quantum information technologies has been very low to date, because present X-ray tools lack the spatial resolution to probe quantum matter on the relevant length scales.

The combined spectral, spatial, and temporal sensitivity enabled by emerging high-brightness X-ray sources will dramatically change this situation. X-ray beams are currently typically 10-100 μm in size; in most cases this is much larger than the underlying quantum coherence length, and any quantum information is averaged out. The new sources will enable powerful spectroscopic nanoprobes with few-nanometer spatial resolution. These nanoprobes will be able to measure the decoherence of wavefunctions, the influence of device morphology on emergent quantum phenomena, and the motion of quantum information at the heart of emerging quantum technologies. Such experiments will investigate not only the spatial and temporal fluctuations of idealized, pure materials but also their manifestation in real-world devices.

4.1.5 3D/4D Measurements, Including In Situ Methods

The last decade has seen tremendous growth in three-dimensional (3D) and four-dimensional (4D) characterization capabilities that are specifically geared toward quantifying mesoscale microstructure and its response under stimuli. This growth was made possible by significant advances in computer-based control, sensing, and data acquisition, and it has resulted in novel experimental toolsets and methodologies that were not possible a decade ago. These advances have enabled a move from qualitative observations to digital data sets that can be mined, filtered, searched, quantified, and stored with increased fidelity and operability.

Mesoscale 3D and 4D characterization of materials with X rays can be divided into two subfields: tomography10 and diffraction-based microscopy. The former, commonly referred to as micro-CT, involves the collection of multiple radiographs with microscale resolution followed by computer-based reconstruction. Laboratory-based systems can readily produce 3D renderings of soft and lattice materials, but hard materials absorb X rays much more efficiently and require higher-energy sources and, in some cases, synchrotron experiments. By comparison, diffraction-based 3D X-ray microscopy involves scanning a beam across a specimen and reconstructing the polycrystalline microstructure from reciprocal lattice information. Varying the placement of the detectors from near-field to far-field positions allows one to determine the orientation of individual voxels within the specimen, and close inspection of the far-field pattern facilitates the measurement of local elastic strains.

10 S.R. Stock, "Recent advances in X-ray tomography applied to materials," International Materials Reviews 53(3):129-181 (2010).
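To make the micro-CT workflow concrete, the following sketch simulates parallel-beam tomography of a synthetic two-phase slice and reconstructs it by filtered backprojection. It uses Python with NumPy and scikit-image's radon/iradon transforms; the phantom geometry and 1-degree angular sampling are illustrative, and real experiments must additionally handle beam hardening, ring artifacts, and alignment.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic 2D slice: a dense precipitate inside a lighter matrix grain.
yy, xx = np.mgrid[:128, :128]
slice_img = np.zeros((128, 128))
slice_img[(yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2] = 1.0   # matrix
slice_img[(yy - 50) ** 2 + (xx - 75) ** 2 < 10 ** 2] = 2.5   # precipitate

# Simulate radiographs over 180 degrees, then filtered backprojection.
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(slice_img, theta=theta)
recon = iradon(sinogram, theta=theta, filter_name="ramp")  # recent skimage API

print(float(np.abs(recon - slice_img).mean()))   # small reconstruction error
```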

FIGURE 4.2 An experimental FIB/SEM tomography slice of a sample prepared by embedding silica beads in epoxy (a); the reconstructed 3D structure of the silica phase (b); and the air velocity field in a plane perpendicular to the inlet direction, computed for a 0.03 m/s inlet velocity and atmospheric-pressure outlet boundary conditions (c). SOURCE: A. Rezikyan, 2016, FIB/SEM tomography of porous ceramics, Microscopy and Microanalysis 22(S3):1884-1885, reproduced with permission.

Serial sectioning combined with the acquisition of optical or electron micrographs and/or orientation and chemical maps has emerged as an alternative way of collecting and constructing 3D data sets. Focused ion beam-scanning electron microscopes (FIB-SEMs) allow one to shape and extract material with submicron precision (see, e.g., Figure 4.2), but the milling rates of conventional FIBs are prohibitive for mesoscale studies. Faster techniques have emerged and now allow for 3D characterization of volumes ranging from cubic microns to fractions of cubic millimeters. For hard materials, the emergent technologies include ultrashort (femtosecond) laser ablation, plasma-source focused ion beam (P-FIB), broad ion beam sectioning, and mechanical polishing. The maturation of diamond-knife microtome sectioning systems that operate inside the SEM has enabled the structural biology community to gather enormous 3D data sets from SEM imaging at the mesoscale, for example, sectioning of the entire brain of a larval zebrafish.11 Microtome sectioning is also emerging in the study of soft materials and metals.12

After data collection, a number of steps are needed to extract meaningful information from a 3D data set. The typical flow of data processing involves registration, reconstruction, classification, and analysis. The raw data are often misaligned and distorted, and registration to fiducials must be employed to remove these distortions and misalignments. Reconstruction involves moving from the abstraction of a series of 2D images or maps to a 3D volume of data. In the case of serial sections, whatever property was measured at the surface of a slice is generally assumed to be constant through the entire thickness of that slice; as long as the slice thickness is small compared to the scale over which the measured features change, this is a reasonable assumption, but one that the practitioner should keep in mind when making measurements within the data. Classification involves unambiguous identification of features of interest within the volume. This can be trivial—for example, precipitates with large density differences can easily be identified by backscatter imaging or tomography, and grain boundaries can be highlighted by placing a threshold on the orientation gradient in an orientation map. In many cases, however, the contrast between two regions of interest is not easily differentiated. Human intelligence is extremely well optimized for pattern recognition and classification: it can tolerate a very high level of anomalies within an image and, through context and prior knowledge, can easily infer and identify the features of interest within a set of images. Computer-based methods for determining regions of interest in most imaging modes do not have this context, however, and segmentation of the volume is often much more difficult than expected.

11 Hildebrand et al., Nature, 2017.
12 Hashimoto et al., Ultramicroscopy, 2016.
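As an example of the registration step, the sketch below estimates the rigid translation between two successive section images by phase correlation, a standard first pass before finer, distortion-correcting alignment. It is plain Python/NumPy, and the random test image stands in for real micrographs.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Integer (dy, dx) translation taking img_b to img_a, estimated by
    phase correlation: the normalized cross-power spectrum of the two
    images has a delta-function peak at the relative shift."""
    F = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    F /= np.abs(F) + 1e-12                 # keep only phase information
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > img_a.shape[0] // 2:           # unwrap to signed shifts
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

# Demo: displace a random "section image" by a known offset and recover it.
rng = np.random.default_rng(3)
section = rng.random((128, 128))
shifted = np.roll(section, (3, -5), axis=(0, 1))
print(phase_correlation_shift(shifted, section))   # -> (3, -5)
```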

The final step is analysis of the reconstructed structures, which can involve a wide variety of measurements, including size, arrangement, shape metrics, and crystallographic orientation textures and gradients. One of the greatest difficulties in 3D data processing and analysis is the lack of well-developed software packages and tools for 3D analysis, so materials researchers must develop custom codes and pipelines. While developing custom processing tools can have certain advantages, they are counterbalanced by the current massive duplication of effort across separate groups, which is further compounded by the lack of standards for data descriptions and file formats that would make interoperable tools easier to develop. The development of the DREAM.3D software package13 has been extremely beneficial in reversing this trend: initially developed for the analysis of 3D electron backscatter diffraction (EBSD) data, the platform continues to evolve to analyze multispectral data, providing a set of standards for data and processing formats and documentation.

As an example of the success of these methods, 3D data sets of polycrystalline microstructures have been obtained for a variety of aerospace aluminum, titanium, and nickel alloys, and recent in situ 4D synchrotron experiments have elucidated the importance of residual stress and the redistribution of stresses during plastic deformation.14 A compact ultrahigh-temperature tensile testing instrument, fabricated for in situ X-ray microtomography using synchrotron radiation, has been used to obtain real-time X-ray microtomographic imaging of the failure mechanisms of ceramic-matrix composites (CMCs) under mechanical load at temperatures up to 2300°C in controlled environments.15 It should also be noted that X-ray diffraction studies of hard materials have historically been conducted at multiuser synchrotron facilities, but significantly enhanced laboratory-scale systems have emerged in recent years and hold the promise of much more widespread availability and use of this technique.

At the same time, improvements in experimental tools and accompanying modeling of mechanical properties at nanoscale to micron-scale dimensions have enabled mechanical properties to be quantified at a variety of length scales down to ~100 nm (see, e.g., Figure 4.3), enabling the quantitative study of micro- and mesoscale unit deformation processes with unprecedented spatial precision. Similarly, a variety of techniques have been reported that allow bulk physical properties, such as thermal diffusivity, to be accurately measured at micrometer-scale depths. This allows more comprehensive assessments of the mechanical and physical properties of surface-modified materials treated by case hardening or ion implantation/plasma processing. In addition, these advances allow improved quantitative evaluation of the physical and mechanical properties of the near-surface regions of ion-irradiated materials as a proxy for neutron irradiation conditions that could be difficult, costly, and time-consuming (multiple-year experiments) to obtain. As an example, in situ measurements and 3D X-ray characterization of individual grains in polycrystalline bulk materials have paved the way to a better understanding of microstructural heterogeneity and localized deformation in irradiated materials. Such information is critical to the prediction of material aging and degradation in nuclear power plants and to the design of new radiation-resistant materials for next-generation nuclear reactors. For instance, researchers have studied in situ the heterogeneous deformation dynamics of neutron-irradiated bulk materials using high-energy synchrotron X rays to capture the micro- and mesoscale physics and link it with the macroscale mechanical behavior of neutron-irradiated materials of relevance to reactor design.

13 M. Groeber and M. Jackson, 2014, "DREAM.3D: A digital representation environment for the analysis of microstructure in 3D," Integrating Materials and Manufacturing Innovation 3:5, doi:10.1186/2193-9772-3-5.
14 Various works of CMU, AFRL, Los Alamos, Risø, and Japanese groups.
15 Ritchie et al., Review of Scientific Instruments, 2014.
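Returning to the analysis step described above: once a volume has been classified, much of the analysis reduces to labeling connected features and computing per-feature statistics. The sketch below, in Python with NumPy and SciPy, does this for a synthetic binary second-phase volume; the voxel size and phase fraction are assumptions chosen for the demonstration.

```python
import numpy as np
from scipy import ndimage

# Synthetic segmented volume: True where a second phase was classified.
rng = np.random.default_rng(4)
vol = rng.random((64, 64, 64)) > 0.995            # sparse seed voxels
vol = ndimage.binary_dilation(vol, iterations=2)  # grow them into blobs

# Label connected components, then measure each feature.
labels, n_features = ndimage.label(vol)
voxels_per_feature = ndimage.sum(vol, labels, index=range(1, n_features + 1))

voxel_size_um = 0.5                               # assumed isotropic voxels
volumes_um3 = np.asarray(voxels_per_feature) * voxel_size_um ** 3
eq_diameter_um = (6.0 * volumes_um3 / np.pi) ** (1.0 / 3.0)  # sphere-equivalent

print(n_features, float(eq_diameter_um.mean()), float(eq_diameter_um.max()))
```

Toolkits such as DREAM.3D wrap exactly this kind of labeling and statistics pipeline, together with the data-format standards discussed above.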

FIGURE 4.3 The wide range of length scales that affect a material's performance and properties, along with several models developed to capture each length scale. While this figure is specifically for aerospace materials, the ideas conveyed are valid for most material classes, including batteries, biomaterials, and nuclear materials. SOURCE: John Sarrao, Los Alamos National Laboratory.

It is clear that volumetric characterization at the meso- to macroscale via destructive and nondestructive experimental methods has matured tremendously in the past decade, enabling workflows that provide high-fidelity microstructural information across multiple length scales in a diverse range of material systems, but many barriers still limit its utilization within the materials community to first adopters and domain specialists. For both destructive and nondestructive workflows, a sorely needed advancement over the next decade is in situ analysis of the data as they are collected. The overwhelming majority of volumetric data collection is performed asynchronously and often independently of the analysis and ultimate utilization of the information. "Smart" data collection is needed, in which data are refined in key regions to provide additional resolution where required, or additional modalities are brought to bear to provide other attributes of the materials state. Dynamic sampling approaches, in which data are collected efficiently and iteratively based on prior training using machine learning methods, have appeared in the literature for 2D data collection using a single modality,16 and these methods will provide even greater benefit in 3D because of the exponential growth in collection time; a toy version of such an adaptive acquisition loop is sketched below. Other examples include the ability to detect anomalies and other rare features during data collection using lookup tables and dictionary-based approaches, which may allow analysis to be refined dynamically for unknown features based on prior knowledge of the expected structure. Furthermore, truly incorporating and integrating multimodal "costly" information into 3D experiments—costly in instrumentation price, acquisition time, or surface preparation requirements—can only realistically be achieved using such integrated approaches. Other needed advancements include the utilization of machine learning (ML) in the classification of microstructure; the development of efficient collection methods that more directly measure properties of interest; the collection and use of more signals, such as ultrasound and contact methods (nanoindentation); the continued push toward larger volumes to generate higher-level statistics; and closed-loop material removal for serial sectioning.

16 Bouman group, 2016.
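The toy loop promised above, in Python with NumPy, is a stand-in for the trained dynamic-sampling methods cited in the text: instead of rastering, each new measurement is placed where nearby measured values disagree most, a crude uncertainty proxy. The field, measurement budget, and scoring rule are all invented for illustration.

```python
import numpy as np

def dynamic_sampling(field, n_init=50, n_total=300, seed=0):
    """Toy dynamic-sampling loop: new measurement points are placed where
    the current sparse measurement set is most 'uncertain', scored here as
    the spread among each candidate pixel's nearest measured neighbors."""
    rng = np.random.default_rng(seed)
    ny, nx = field.shape
    measured = np.full(field.shape, np.nan)
    for y, x in rng.integers(0, [ny, nx], size=(n_init, 2)):
        measured[y, x] = field[y, x]          # random seed measurements
    for _ in range(n_total - n_init):
        ys, xs = np.where(~np.isnan(measured))
        vals = measured[ys, xs]
        uy, ux = np.where(np.isnan(measured))
        # Distance from every unmeasured pixel to every measured pixel.
        d2 = (uy[:, None] - ys) ** 2 + (ux[:, None] - xs) ** 2
        nearest = np.argsort(d2, axis=1)[:, :4]
        spread = vals[nearest].std(axis=1)    # neighbor disagreement
        k = int(np.argmax(spread))            # most informative next point
        measured[uy[k], ux[k]] = field[uy[k], ux[k]]
    return measured

# Demo on a field with one sharp boundary: samples concentrate along it.
yy, xx = np.mgrid[:32, :32]
field = (xx > 16).astype(float)
m = dynamic_sampling(field)
near_edge = (~np.isnan(m[:, 15:19])).sum()
print(int(near_edge), "of", int((~np.isnan(m)).sum()), "samples near the edge")
```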

4.2 SYNTHESIS AND PROCESSING TOOLS

Given the increase in characterization tool capability and capacity over the past decade, there has been a corresponding need to advance synthesis and processing capabilities. These advanced tools not only facilitate accelerated materials discovery but also enable materials control with resolution consistent with advanced measurements. Often these synthetic advances are facilitated by advanced computational methods for predicting new materials.

4.2.1 Precision Synthesis

Full realization of the promise of precision materials synthesis (size, shape, composition, architecture, etc.) across length scales will transform materials science in a revolutionary way. Specific examples of the emerging possibilities and power of precision synthesis include molecular engineering of catalytic materials for selective reactivity, control of electrochemical energy conversion with atomically precise materials, new biodegradable polymers whose degradation rate is tuned via sequence control, precision placement of nitrogen vacancy (NV) center defects in diamond to create materials for quantum information, and self-assembly of peptide amphiphiles into fibrous and micellar structures with extraordinary bioactivity. These are the tip of an iceberg that is just beginning to appear.

Realization of the full "iceberg" is a surpassingly ambitious objective but an enormously promising direction for investment. It means not only putting every atom in a material where you want it, but also knowing where you want to put the atoms, and why, and being able to determine whether you have really put them there. Therefore, what sounds at first hearing to be a "synthesis" challenge is actually a challenge for synthetic materials chemistry, theory, simulation, and instrumental characterization. The "across length scales" part of this challenge brings several new aspects into the synthesis challenge. Researchers need to gain the understanding necessary to predict how precision control of structure at the atomic and molecular level plays out in macroscopic behavior and properties, and they need to understand what level of precision is needed to achieve particular goals. Furthermore, as the Department of Energy (DOE) Basic Energy Sciences (BES) Basic Research Needs report on synthesis science points out, "The challenge of mastering hierarchy [meaning across length scales] crosscuts all classes of syntheses. In interfacial, supramolecular, biomolecular, and hybrid matter, hierarchy is the characteristic feature that leads to function."17

This goal will require different synthesis techniques and chemical tools, all operating at different length scales. For example, covalent electronic chemistry would be used to make the building blocks, followed by noncovalent assembly of the building blocks into larger structures. Such research is being done already, but without the kind of real-time instrumental monitoring and control needed to achieve optimum results. Atomic layer deposition (ALD) and molecular beam epitaxy (MBE) are atomic-level analogues of additive manufacturing at the macroscale. Hierarchical synthesis processes that have not traditionally been viewed as kinetic processes, such as self-assembly, should be considered as such; the fact that they are driven by thermodynamics does not mean they do not have kinetic trajectories (often sluggish ones), and kinetically stabilized or metastable phases may be the desired product. A wide variety of materials types will be incorporated into one finished product, necessitating wider understanding or, more likely, collaboration on the part of the synthesizers in order to master the creation of these materials.
4.2.2 3D Structures from DNA Building Blocks

DNA origami is the folding of a long strand of DNA, the scaffold, into nanoscale objects through the use of short-chain DNA oligonucleotides, the so-called staple strands. The last decade has seen significant advances in the design toolbox used to build 3D structures; with each development, the number of degrees of freedom increases, enabling the construction of more intricate shapes. The first approach to 3D structures was achieved by bundling DNA helices in a honeycomb arrangement. Curved objects were achieved by adding or deleting strands between the helical scaffold bundles. Another approach involved the use of curved rings to enable control of the shape.

17 See https://science.energy.gov/~/media/bes/pdf/reports/2017/BRN_SS_Rpt_web.pdf, accessed June 19, 2018.

From the bundled helical structures came wire-frame designs in either a grid-iron pattern or a triangular mesh. This advance has important implications for biomedical applications because structures created through these approaches offer higher resistance to cation depletion under physiological conditions. The advances in scaffold structures have been complemented by design software packages such as caDNAno, DAEDALUS, and vHelix.

DNA LEGO-like bricks (Figure 4.4) were developed in the last decade.18 Each brick is a single short DNA strand with four binding domains: two head domains and two tail domains. Using a selection of bricks from a menu of preformed motifs, it is possible to self-assemble almost arbitrarily complex 3D structures without the need for a scaffold.

FIGURE 4.4 LEGO-like building blocks. (a) Four domains of a single-stranded DNA brick. (b) Representation of the four possible orientations as LEGO bricks. The tail domains (1 and 4) are represented by the protruding pins and the heads by the holes in the blocks. The possible connections are between blocks 1 and 3 and between blocks 2 and 4, and the shape of the protruding part must match the shape of the hole. (c) The connection of a south and an east block to form the required 90-degree angle, and the joining of two complementary domains a and a*. (d) Connecting the blocks to form a structure. The first and fifth blocks are the same. SOURCE: From Y. Ke, L.L. Ong, W.M. Shih, and P. Yin, 2012, Three-dimensional structures self-assembled from DNA bricks, Science 338:1177-1183, doi: 10.1126/science.1227268; reprinted with permission from AAAS.

18 Y. Ke, L.L. Ong, W.M. Shih, and P. Yin, "Three-dimensional structures self-assembled from DNA bricks," Science 338:1177-1183 (2012), doi: 10.1126/science.1227268.
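The binding rule that the figure illustrates can be written down compactly. In the minimal sketch below (Python; the 8-nucleotide sequences are made up), a brick is a set of four named domains, and two bricks connect when a tail domain is the Watson-Crick reverse complement of a neighbor's head domain, the rule that lets a chosen subset of bricks self-assemble into a target shape.

```python
# Toy model of the DNA-brick binding rule: each 32-nt brick carries four
# 8-nt domains (two tails, two heads), and bricks connect when a tail is
# the Watson-Crick reverse complement of a neighboring brick's head.
COMPLEMENT = str.maketrans("ATCG", "TAGC")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def binds(tail: str, head: str) -> bool:
    """Two domains hybridize if they are exact reverse complements
    (real designs also score partial and spurious matches)."""
    return revcomp(tail) == head

# Hypothetical example sequences (not from any published brick set):
brick_a = {"tail1": "ATGGCTAA", "head2": "CCGTATAG",
           "tail3": "GGCATTCA", "head4": "TTAACGGC"}
brick_b = {"head2": revcomp(brick_a["tail1"])}   # designed to receive tail1

print(binds(brick_a["tail1"], brick_b["head2"]))   # True: the bricks connect
print(binds(brick_a["tail3"], brick_b["head2"]))   # False: no match
```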

The origami structures have been used as templates for producing Au nanoparticles, Au nanorods, and quantum dots; as molds in which to synthesize nanoparticles; as meshes for microlithography; as biosensors; and for drug delivery. In addition, active systems, such as walkers, machines, and factories, have already been demonstrated on origami platforms.19 An area that has perhaps not been given much attention is the possibility that, when DNA is used as a structural material, there may be unintended potential for the DNA to behave in a signaling fashion. Furthermore, it has been demonstrated that origami is possible with long-chain RNA scaffolds, first using DNA staples and later using RNA staples.

4.2.3 2D Shape-Changing Materials

A class of reconfigurable metamaterials that has advanced considerably over the last decade is one based on two-dimensional (2D) films that fold or bend into predetermined three-dimensional (3D) structures. These materials have potential applications in biomedical devices (e.g., self-deployable stents), energy storage (e.g., stretchable Li-ion batteries), robotics, and architecture (e.g., smart window coverings to control light reflection). Advances in the ability to fabricate 3D structures at the micro- and nanoscale have been achieved,20 including advances in additive manufacturing techniques and in inks based on metals, metal oxides, biomaterials, and biocompatible polymers. Other approaches to developing 3D nanostructures exploit bending and folding of thin plates by the action of residual stresses or capillarity effects and self-actuating materials. It is also possible to generate 3D structures that respond to an external stimulus such as heat or water.

Origami (folding)- and kirigami (cutting and folding)-inspired designs have expanded the structures that can be formed and increased the materials space to include important ones needed for future advanced technologies. The processing steps of one mechanically guided scheme are shown schematically in Figure 4.5. The formation of robust 3D structures requires minimization of the overall strain as well as avoidance of strain concentrations. This minimization has been guided by finite element modeling, which has shown that the length and width of the kirigami cuts are important: longer cuts are better than shorter ones, as stress concentrations are avoided, and wider cuts are better than narrower ones, as the maximum strain is reduced. The reduction in strain owing to the introduction of kirigami cuts is shown in Figure 4.6 for 2D square silicon membranes.21 This use of finite element modeling is advancing the design of robust 3D structures.

One application of kirigami structures has been as window blinds that control the sunlight entering a room, creating an adaptive, energy-saving structure. To control the tilt of the louvres, a lattice of linear cuts on an elastomeric sheet is augmented by notches on one side or the other of the sheet, a technique referred to as kiri-kirigami to indicate cuts on cuts.22 When stretched, the cuts open into diamond-shaped holes bounded by narrow segments that undergo out-of-plane buckling and twisting, whose directions are controlled by the placement of the notches. Importantly, in this design the direction of the twist of the joints is independent of the loading direction. The reflectance of the stretched sheet depends on the direction of the twist, which can be controlled passively through mechanical stretching or by placing cuts on the front and back of the kiri-kirigami structure.

19 H. Gu, J. Chao, S.-J. Xiao, and N.C. Seeman, "A proximity-based programmable DNA nanoscale assembly line," Nature 465:202-205 (2010), doi: 10.1038/nature09026.
20 Y. Zhang, F. Zhang, Z. Yan, Q. Ma, X. Li, Y. Huang, and J.A. Rogers, "Printing, folding and assembly methods for forming 3D mesostructures in advanced materials," Nature Reviews Materials 2:17019 (2017).
21 Y. Zhang, Z. Yan, K. Nan, D. Xiao, Y. Liu, H. Luan, H. Fu, et al., "A mechanically driven form of kirigami as a route to 3D mesostructures in micro/nanomembranes," Proceedings of the National Academy of Sciences 112(38):11757-11764 (2015).
22 Y. Tang, G. Lin, S. Yang, Y.K. Yi, and R. Kamien, "Programmable kiri-kirigami metamaterials," Advanced Materials 29:1604262 (2017).

FIGURE 4.5 Schematic illustration of steps for fabricating 3D mesostructures by controlled mechanical buckling of 2D precursors formed using lithographic techniques. SOURCE: Reprinted by permission from Springer Nature: Y. Zhang, F. Zhang, Z. Yan, Q. Ma, X. Li, Y. Huang, and J.A. Rogers, 2017, Printing, folding and assembly methods for forming 3D mesostructures in advanced materials, Nature Reviews Materials 2:17019, © 2017.

FIGURE 4.6 Finite element analysis of the strain developed on folding a 2D square of silicon without and with kirigami cuts. Both structures comprised silicon/polymer bilayers, and the prestrain was 80 percent. The color represents the magnitude of the strain. SOURCE: Y. Zhang, Z. Yan, K. Nan, D. Xiao, Y. Liu, H. Luan, H. Fu, et al., 2015, A mechanically driven form of kirigami as a route to 3D mesostructures in micro/nanomembranes, Proceedings of the National Academy of Sciences 112(38):11757-11764.

Figure 4.7(a) shows a schematic illustration of the effect of loading in one and two directions on the distortion of the unit cell; the orientation of the twist is always the same. Figures 4.7(d) and 4.7(e) show images of light actually entering a room through windows without and with the kiri-kirigami louvres. Window treatments based on this design have been tested to determine the ability of such structures to control light. These experimental realizations of origami-kirigami structures have been accompanied by sophisticated theory, an example of which focuses on the special case in which kirigami cuts are constrained by the geometry of a honeycomb and are characterized by the disclinations and dislocations they create in that lattice.23 An end product of this work is an algorithm24 for the arrangement of kirigami cuts that can produce any topographic shape. Similarly, an algorithm has been developed that yields the folding pattern producing any polyhedral shape in the least number of folds. A geometric sketch of the simple linear-cut patterns underlying such designs is given below.

23 See http://news.mit.edu/2017/algorithm-origami-patterns-any-3-D-structure-0622, accessed August 6, 2018.
24 C. Modes and M. Warner, Physics Today 69(1):32-38 (2016).
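That sketch follows here, in Python with NumPy, with all dimensions arbitrary: it generates the staggered linear-cut lattice used in simple kirigami sheets, producing a segment list of the kind that could be fed to a cutter or meshed for finite element analysis like that of Figure 4.6.

```python
import numpy as np

def kirigami_cut_pattern(rows=6, cols=6, cut_len=8.0, pitch=10.0):
    """Endpoints of the staggered linear cuts of a simple kirigami sheet.

    Alternating rows are offset by half a pitch so that the material
    between cuts forms hinged ligaments when the sheet is stretched.
    Returns an array of (x0, y0, x1, y1) segments in arbitrary units.
    """
    cuts = []
    for r in range(rows):
        offset = (pitch / 2.0) if r % 2 else 0.0
        y = r * pitch
        for c in range(cols):
            x0 = c * pitch + offset
            cuts.append((x0, y, x0 + cut_len, y))
    return np.array(cuts)

pattern = kirigami_cut_pattern()
print(pattern.shape)   # (36, 4): 36 cut segments
```

Varying cut_len relative to pitch reproduces the long-versus-short-cut comparison discussed above, with longer cuts leaving narrower ligaments.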

    that can produce any topographic shape. Similarly, an algorithm has been developed that successfully yields the folding pattern to produce any polyhedral shape in the least number of folds. FIGURE 4.7 Kiri-kirigami structure. (a) Cutting and notch pattern. (b) and (c) When stretched in different ways, controllable and repeatable local tilting can be achieved. (d) A room with window wells where incoming light creates areas in shadow and areas cast in harsh light. (e) The same room but with windows covered by a carefully controlled kirigami structure that casts an even soft light in the room. SOURCE: Y. Tang, G. Lin, S. Yang, Y.K. Yi, and R. Kamien, 2017, Programmable kiri-kirigami metamaterials, Advanced Materials 29:1604262, © 2016 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim. 4.2.4 Additive Manufacturing Several fabrication innovations have transformed the approach for producing complex components. During the past decade, additive manufacturing (AM) of metallic components has transformed from a fledgling research effort to a high-visibility commercial activity, with particularly high impact for aerospace and medical implant applications. AM is the ability to deposit materials layer by layer or point by point to fabricate complex components directly from computer-aided design models. An example is the Box 2.6. The materials palette for AM extends beyond metals to polymers, ceramics, composites, and biomaterials, employing a variety of different techniques for assembly. Even apartment buildings have now been additively manufactured in China.25 As indicators of the increased interest in additive manufacturing, the current industrial growth rate is approximately 30 percent/year,26 and the number of peer-reviewed publications per year more than quadrupled between 2006 and 2016. In pursuit of AM for mainstream applications, four goals support more rapid and widespread usage of this technology: 25 See http://www.3ders.org/articles/20150118-winsun-builds-world-first-3d-printed-villa-and-tallest-3d- printed-building-in-china.html, accessed July 14, 2018. 26 D.L. Bourell, Perspectives on Additive Manufacturing, Annual Rev. of Mat. Res., Vol. 46:1-18, 2016. PREPUBLICATION COPY – SUBJECT TO FURTHER EDITORIAL CORRECTION 4-14

1. Enhancing the performance of AM components through materials development;
2. Developing new methodologies for certifying additive components for use;
3. Developing integrated computational materials engineering capabilities together with high-throughput characterization techniques to accelerate the development-to-deployment cycle of AM; and
4. Developing new processes and machines with increased deposition rates, build volumes, and mechanical properties.

Materials development, certification, and integrated characterization and modeling (goals 1-3) represent an important investment opportunity because of the potential to enable disruptive change in manufacturing. AM is still a very small market, at approximately $4 billion worldwide and growing at approximately 34 percent per year (compared to $11 trillion per year in worldwide manufacturing), but it is estimated to be valued at $33 billion by 2023.27 AM systems are limited by the costs of materials, rates of fabrication, reliability of processes, integration with other processes, and the constraints of layer-by-layer deposition. Historically, AM has been limited to small build envelopes at low deposition rates, with a limited set of materials sold by the equipment manufacturers. Next-generation systems explore controls, hardware, feedstock condition, and software to develop new machines with high deposition rates, large build volumes, and improved properties using available low-cost feedstocks. As systems are enhanced, additional applications become possible, increasing the number of companies interested in the technology and its potential impact. New control and robotic systems are driving advances in AM that have the potential to include out-of-plane deposition, resulting in true additive manufacturing and the ability to deposit multiple materials in the same machine. A wide range of feedstocks, including irregular particulate morphologies and other product forms, has the potential to lower the overall cost of the materials involved. Development of smart or enhanced feed mechanisms will also increase reliability and improve overall cost. All of these enhancements are directed at transforming AM from a 1- to 2-sigma process (30 to 70 percent success rate) into a 6-sigma process (3.4 failures per million). Achieving this goal will require specific focus on identification of design rules for new processes; development of robust tool paths via advanced slicing software to minimize residual stress and part defects; advances in machine design, process monitoring, and control to improve the reliability and repeatability of the deposition process; and layer-by-layer inspection and adaptive process control to correct part defects during manufacturing. The improvements in AM processes and machines will proceed in parallel with the materials developments that expand the potential uses of the technology. One example of a needed improvement is better control of surface finish.

4.2.5 Cold Gas Dynamic Spraying

Cold gas dynamic spraying, commonly referred to as "cold spray," is a solid-state material deposition process in which powder particles are sprayed at high velocity onto a substrate. The powder particles plastically deform upon impact, creating a metallurgical bond between the powder and the substrate. The process utilizes an accelerated gas stream (N2, He, or air) to propel particles at speeds ranging from 300 to 1,200 m/s toward a substrate, resulting in solid-state particle consolidation and rapid buildup of material.
While cold spray is a fundamentally solid-state process, it is performed over a range of temperatures that can reach more than 50 percent of the melting point of the material. The advantages of cold spray are low thermal impact to the substrate, no combustion fuels or gases, no melting of the coating material, and a resultant coating with high density and moderate compressive residual stresses. More importantly, cold spray can be applied for additive repair in a field environment.

27 See https://www.marketsandmarkets.com/Market-Reports/3d-printing-market-1276.html, accessed May 12, 2018.
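The particle velocities quoted above set the energy scale of the process. As a rough, illustrative estimate (not from the report), the specific kinetic energy v²/2 at impact can be compared with the heat needed to warm a metal toward its melting point. A minimal sketch in Python, using approximate textbook property values for aluminum as assumptions:

```python
# Back-of-the-envelope comparison of cold spray impact energy with heating
# requirements. Property values for aluminum are approximate textbook
# numbers used here only for illustration.
c_p = 900.0        # specific heat of Al, J/(kg K), approximate
T_melt = 933.0     # melting point of Al, K
T_0 = 293.0        # starting temperature, K

for v in (300.0, 600.0, 1200.0):   # representative cold spray velocities, m/s
    ke = 0.5 * v**2                # specific kinetic energy, J/kg
    dT = ke / c_p                  # temperature rise if fully thermalized
    print(f"v = {v:5.0f} m/s: KE = {ke/1000:6.1f} kJ/kg, "
          f"adiabatic T ~ {T_0 + dT:6.0f} K "
          f"({(T_0 + dT)/T_melt:.0%} of T_melt)")
# In practice only a fraction of the kinetic energy converts to heat, and
# that heat is localized at the particle-substrate interface, which is why
# deposition remains solid state outside thin interfacial regions.
```

Even this crude estimate shows why interfacial heating and severe localized deformation, rather than bulk melting, dominate the bonding process.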

As cold spray deposition technology rapidly advances, many critical and intriguing scientific questions are being uncovered and remain to be answered. The physics of a single-particle impact is still a very open question. New, fascinating, fundamental experiments using laser-shock acceleration combined with ultra-high-speed cameras are allowing the imaging and measurement of single-particle impact dynamics. These new measurements are providing crucial experimental data to support, inform, and validate the many theoretical models that have been and are being developed to describe the cold spray deposition process. The sudden impact of metallic particles also causes the grain/crystallite structure to be reduced by an order of magnitude in size, but the mechanism by which this transformation takes place in less than 100 nanoseconds is still unknown. The community still needs more quantitative information about the micro- and nanostructures generated by the cold spray process.

4.2.6 Nonequilibrium Processing

Materials made either by nature or by manufacturing are seldom the result of equilibrium processes. Among the exceptions are crystals and alloys that are thermodynamically stable and formed by slow cooling. Living systems make their wide variety of functional materials from proteins. They continuously produce these proteins through an active process using ribosomes and genetic information to assemble specified amino acids; more than half of their metabolic energy is consumed in protein production. Manufacturing processes use energy to heat, cool, mix, stress, chemically change, and otherwise manipulate constituents into materials with desired properties that are not found in thermodynamically stable form. Quenching and production techniques such as molecular beam epitaxy (MBE) produce metastable materials. Although researchers are growing progressively more sophisticated in using far-from-equilibrium processing, from semiconductors to additive manufacturing, the underlying science is missing. Thermodynamics and statistical mechanics provide rules for averaging the well-known dynamics of classical or quantum states to obtain the macroscopic properties of materials and many-particle systems in equilibrium. There is a solid foundation in laws that are universal, easily applied, and well tested over the past two centuries; the answers result from extremizing free energies and related state functions. There are no such universal governing principles or set of free-energy-like functions to be extremized for nonequilibrium processes—for example, in some instances dissipation is maximized; in others, it is minimized. The need for a deeper understanding of nonequilibrium phenomena is nowhere greater than in materials science. Recent advances, such as additive manufacturing, present new challenges and opportunities for using and understanding nonequilibrium processes: "Laser-based AM processes generally have a complex nonequilibrium physical and chemical metallurgical nature, which is material and process dependent. The influence of material characteristics and processing conditions on metallurgical mechanisms and resultant microstructural and mechanical properties of AM processed components needs to be clarified."28 The extreme temperature, solvent, or stress gradients imposed by such processing will provide insight into far-from-equilibrium reactions and the properties of the metastable materials they produce.
Levitation is both a synthesis method and a characterization method, because the influence of a container on the measurement is removed. Initial work in levitation focused on metallic glasses and on low-melting-temperature materials used to create high-entropy and nonequilibrium structures. Levitation methods have expanded to include acoustic, aerodynamic, electromagnetic, and electrostatic approaches, the choice depending on the heating source and the resulting sample size. Both X-ray and neutron characterization can be applied to levitating materials. Samples fabricated by levitation are often far from stable, and one must consider whether evaporation plays a role during their characterization. These techniques are helping to elucidate the structural pathways of materials from high temperature through a variety of cooling routes. Characterization of containerless synthesized materials is extremely helpful for the understanding and design of novel materials.

28 D.D. Gu, W. Meiners, K. Wissenbach, and R. Poprawe, 2012, Laser additive manufacturing of metallic components: Materials, processes and mechanisms, International Materials Reviews 57(3):133-164.

4.2.7 Single Crystal Growth

Advances in fundamental and applied materials research have been driven by the development of numerous different families of crystalline materials with widely varied functionalities.29 The two main paths involve synthesis of (1) high-purity but chemically simple and abundant materials and (2) systems with complex stoichiometries/structures that feature multiple tunable characteristic energy scales. Examples in the first category include germanium, silicon, and gallium arsenide, while examples in the second category include strong rare-earth permanent magnets, high-temperature superconductors, and many other quantum materials. Despite the fundamental importance of crystal growth for many different scientific and commercial purposes, it remains very often more of an art or technique than a science. Furthermore, many routinely used synthesis methods have evolved only marginally during the past several decades. General categories include (1) solid-to-solid reactions, (2) liquid-to-solid reactions, and (3) vapor-to-solid reactions. Here the focus is on methods to produce bulk crystals, but the arguments below lend themselves to thin films as well. For example, category (1) includes solid-state reaction and spark plasma sintering; (2) includes molten flux growth, arc- and induction-furnace melting, Czochralski crystal pulling, and Bridgman crystal growth; and (3) includes vapor transport (e.g., using iodine) and thin film techniques. For a detailed review of most such methods, see B.R. Pamplin's book first published in 1975.30 Note that this and other detailed accounts of these synthesis methods were already presented at least four decades ago, and in many ways the same methods remain the state of the art. This emphasizes the difficulty of developing new strategies but also highlights that this is an area with significant opportunity for transformative advances. Despite their usefulness, most crystal growth methods are limited by several practical problems that can now be addressed. First among them is that the progression of events during crystal formation is often not quantitatively understood. Instead, most processes are developed by trial and error, and even the philosophy of synthesis is guided by the qualitative experience of individuals or isolated groups—that is, through colloquial methodology. To advance beyond these limitations, it is necessary to develop routine methods that provide detailed knowledge about the processes that occur during a reaction, as well as active modeling that allows modification of a growth in real time. There have been limited recent attempts to do this—for example, observing a crystal growth process through neutron scattering—but this field is wide open for advances. Also important is that most real materials harbor defects on all length scales that are difficult to characterize and even harder to correct. A simple example is seen for single crystals that are pulled from a melt (e.g., using the Czochralski technique), where chemical gradients along the growth axis are commonplace. For quantum materials, such variations often have a large impact on electronic and magnetic properties, and a mastery of them is needed but undeveloped.
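The axial chemical gradients just mentioned have a simple textbook description: for a solute with equilibrium segregation coefficient k, the Scheil (normal freezing) equation predicts the composition profile along a crystal pulled from a finite melt. A minimal sketch, with illustrative parameter values only:

```python
import numpy as np

# Scheil profile for solute concentration along a pulled crystal:
# C_s(f) = k * C_0 * (1 - f)**(k - 1), where f is the fraction solidified.
# k and C_0 below are illustrative choices, not values from the report.
k = 0.3            # segregation coefficient (k < 1: solute rejected to melt)
C_0 = 1.0          # initial solute concentration in the melt (arbitrary units)

f = np.linspace(0.0, 0.95, 20)        # fraction solidified along the boule
C_s = k * C_0 * (1.0 - f)**(k - 1.0)  # solute content of solid formed at f

for fi, ci in zip(f[::4], C_s[::4]):
    print(f"fraction solidified {fi:.2f}: C_s = {ci:.2f}")
# For k < 1 the first-to-freeze end is solute-poor and the last-to-freeze
# end is strongly enriched -- the axial gradient described in the text.
```

Quantitative in situ monitoring would let growers compare such idealized profiles against the actual composition evolution and adjust growth conditions in real time.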
An in-depth knowledge and control of the growth process would mitigate these problems and, furthermore, would open yet another tuning parameter for controlling a material's properties. To accomplish these goals, a multifaceted and well-funded push to develop new intersections for crystal growth within the materials research community is needed. In this scenario, crystal growth would be treated as a research area of its own, not strictly associated with specific classes of materials or topics. Two promising directions of research are (1) methods for rapid throughput and rapid characterization and (2) synthesis under extreme conditions (e.g., applied pressure, magnetic fields, and electric fields). This research requires suites of materials analysis (e.g., neutron scattering, X-ray scattering, and microanalysis) and computational collaborations to facilitate progress, plus ways to broadly distribute information.

29 See https://www.nap.edu/catalog/12640/frontiers-in-crystalline-matter-from-discovery-to-technology.
30 See https://www.elsevier.com/books/crystal-growth/pamplin/978-0-08-025043-4.

4.3 SIMULATION AND COMPUTATION TOOLS

The nature and benefits of computational capabilities in materials research are extensive but vary dramatically depending on the material class or application. For example, the computational tools useful for the well-developed semiconductor and aerospace industries are very different from those needed for new materials, where there are still basic questions and no elaborate databases for mining or for the application of artificial intelligence. However, it is clear that computational capabilities, at both large and small scales, will continue to advance large expanses of the materials research landscape.

4.3.1 Integrated Computational Materials Engineering and Materials Genome Initiatives

Two initiatives begun during the past decade aimed to accelerate the timeline from development to deployment of a material by highlighting the benefits of experiment and computation working together and the need for computational materials design at all stages of the manufacturing process. One initiative, the Integrated Computational Materials Engineering (ICME) approach, was detailed in a 2008 National Academies study.31 The ICME approach seeks to integrate materials models across different length scales (multiscale) and computational methodologies to capture the relationships between synthesis and processing, structure, properties, and performance. The initial successes of ICME were enabled by the existence of a wealth of data on specific materials systems that had been generated over prior decades of research.32 The second initiative began in 2011, when President Barack Obama launched the Materials Genome Initiative (MGI) with the intent "to discover, manufacture, and deploy advanced materials twice as fast, at a fraction of the cost."33 Central to this vision was the equal weighting and integration of computational tools, digital data, and experimental tools, the latter including synthesis and processing as well as material characterization and property assessment. It was recognized that in each area there was a need to develop new tools and capabilities to advance the field and to explore the intersections among the three. The initiative recognized the importance of data and the need to develop databases and the tools to interrogate and visualize them. This need is particularly important to the success of the initiative, as extensive and accessible databases exist for only a few material systems. Importantly, the integrated tools were to form one framework across the seven stages of the materials development continuum (discovery; development; property optimization; systems design and integration; certification; manufacturing; and deployment34).

Following the direction indicated by these two initiatives has led to some notable successes.35 For example, Ford Motor Company used the ICME framework to obtain a 15-25 percent decrease in the product development time of cast aluminum powertrain products. For cast aluminum components, subtle changes in the manufacturing process or component design can lead to engine durability issues and program delays, but simulating the effect of manufacturing history on engine durability allowed Ford to avoid these costly delays. Computational methods integrated with material property databases have also recently been used to develop two forms of steel that were licensed to a U.S. steel producer by QuesTek Innovations LLC and then deployed into demanding applications.

31 National Research Council, 2008, Integrated Computational Materials Engineering: A Transformational Discipline for Improved Competitiveness and National Security, The National Academies Press, Washington, D.C., https://doi.org/10.17226/12199.
32 G.B. Olson, 2013, Acta Materialia 61:771-781, http://dx.doi.org/10.1016/j.actamat.2012.10.045.
33 See https://www.mgi.gov/sites/default/files/documents/materials_genome_initiative-final.pdf.
34 Includes sustainability and recovery.
35 See https://www.mgi.gov/sites/default/files/documents/mgi-accomplishments-at-5-years-august-2016.pdf.

The first alloy, developed for the U.S. Air Force, is Ferrium S53, an ultra-high-strength and corrosion-resistant steel that eliminates toxic cadmium plating; it is now flying as safety-critical landing gear on USAF A-10, T-38, C-5, and KC-135 aircraft and on numerous SpaceX flight-critical rocket components. The second alloy, developed for the U.S. Navy, is Ferrium M54, an upgrade from legacy alloys that offers more than twice the lifetime of the incumbent steel while saving $3 million in overall program costs; it is now deployed in the Navy's safety-critical T-45 hook shank component. As seen in Box 2.2 in Chapter 2, the time from development to deployment was reduced from 8.5 years for Ferrium S53 (deployed in 2008) to 4 years for Ferrium M54, using only one design iteration (qualified in 2014). QuesTek has also designed, using integrated methods, a third steel, Ferrium C64, a best-in-class gear steel that allows for increased power density, fuel efficiency, and lift of military helicopters. This steel has been patented and is now available for purchase. Other successful areas for integrated computation-experiment-data approaches have been new materials for batteries,36 and many other companies, such as Boeing, use integrated approaches for the discovery and deployment of advanced metals and other materials.

An example of accelerating materials discovery through a combination of quantum mechanical calculations, synthesis, and experiments is the design and optimization of liquid crystal sensors.37 These sensors work on the principle of the selective displacement of liquid crystal molecules by analytes, which results in an optically detected transition of the liquid crystal. Liquid crystals are in general sensitive to UV light, poisons/pollutants, and strain. There are now many liquid crystal sensors, and they hold promise as inexpensive, portable, and wearable sensors for key applications (e.g., poisonous gas detection).

One challenge regarding the MGI is that it has tended over time to become an interaction among materials modeling, computing, and data communications, leaving experiment behind. Vast databases have been created of purely computational results, without experimental validation of stability or accuracy. In addition, without experimental results, the materials cannot be easily tested or deployed for manufacture. Some of these issues are addressed further in Section 4.3.5 on databases. Another challenge is that it is difficult to maintain modeling continuity across the full range of seven stages from discovery to deployment unless the research group is immersed in a development environment. The MGI goal would be furthered by increased university-industry interactions, which have widespread benefits, as explained elsewhere in this report.

4.3.2 Computational Materials Science and Engineering

Over the past decade, there have been significant improvements in modeling materials on multiple length scales, including quantum mechanical, atomic, mesoscale (coarse-grained or phase field), and continuum scales, in addition to statistical methods. Examples include modeling based on the Landau-Lifshitz-Gilbert equation38 for magnetic materials and progress in understanding Gilbert damping.39 These advances have been spurred by progress in physical science, such as the example just given, together with the vast increase in computing power over the last decade and the integration with experiment and data described in the previous section.
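To make the Landau-Lifshitz-Gilbert (LLG) equation mentioned above concrete, the sketch below integrates the damped precession of a single macrospin in a static field. It is a pedagogical toy in reduced units—the parameter values are arbitrary assumptions—not a micromagnetic solver, which would evolve this equation for millions of coupled cells with a position-dependent effective field.

```python
import numpy as np

# Minimal Landau-Lifshitz-Gilbert (LLG) integrator for a single macrospin.
# gamma, alpha, H, and dt are arbitrary reduced-unit choices.
gamma = 1.0                       # gyromagnetic ratio (reduced units)
alpha = 0.1                       # Gilbert damping parameter
H = np.array([0.0, 0.0, 1.0])     # applied field along z

def llg_rhs(m, H):
    """dm/dt in the Landau-Lifshitz form equivalent to the LLG equation."""
    mxH = np.cross(m, H)
    return -gamma / (1 + alpha**2) * (mxH + alpha * np.cross(m, mxH))

m = np.array([1.0, 0.0, 0.0])     # initial magnetization along x
dt, steps = 0.01, 5000
for step in range(steps):
    m = m + dt * llg_rhs(m, H)
    m /= np.linalg.norm(m)        # keep |m| = 1, as LLG dynamics preserves

print(m)  # damped precession relaxes m toward the field direction +z
```

With damping alpha > 0, the magnetization precesses about the field while spiraling toward it; setting alpha = 0 recovers undamped precession.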
The area of quantum-level modeling has had perhaps the greatest advancement and opportunity for future improvements and is summarized first. In a significant shift, electronic structure (i.e., density functional theory, DFT) computational software has become readily available in packages both commercial (CASTEP, VASP, WIEN2k) and open source (Quantum ESPRESSO, Abinit).40 These packages are well documented online, and some (e.g., VASP) have well-developed user interfaces. The calculations of material properties enabled by these packages have high fidelity. They are used to predict structure-property relationships for many material types, discover new structures, enhance the interpretation of experimental data, and populate databases.

36 For example, http://ceder.berkeley.edu/ and the many successes on that website.
37 H. Hu, Z. Lu, and W. Yang, 2007, Fitting molecular electrostatic potentials from quantum mechanical calculations, Journal of Chemical Theory and Computation 3(3):1004-1013.
38 For a recent review, see, e.g., M. Lakshmanan, 2011, The fascinating world of the Landau-Lifshitz-Gilbert equation: An overview, Philosophical Transactions of the Royal Society A 369:1280.
39 L. Chen et al., 2018, Emergence of anisotropic Gilbert damping in ultrathin Fe layers on GaAs(001), Nature Physics 14:490.
40 For the five programs, see https://www.castep.org; https://www.vasp.at; http://susi.theochem.tuwien.ac.at/; https://www.quantum-espresso.org; and https://www.abinit.org, all accessed June 6, 2018.

When magnetic material properties are calculated with DFT, the addition of a "Hubbard U" term can give very accurate properties of d-electron materials, and in the past decade there have been many developments in this area.41 Modern DFT packages can handle full 3D spin dependence (not just spin up or down), and including relativistic effects and spin-orbit coupling is now a matter of setting parameters in the input file. DFT is challenged when there are multiple sources of magnetism, such as in the f-electron materials, which often have unfilled d-electron orbitals as well (for which an additional parameter J is often added, and even then comparison with experiment is often needed), and whenever there are many-body interactions (superconductivity, metal-insulator transitions, Kondo effects, complex oxides, etc.) not describable by the single-particle states on which DFT, built from functionals of the local density, is based. Many useful improvements of DFT have been developed in the past decade, including extensions of DFT to finite temperature, excited states, and time dependence. These often combine perturbation theory with standard methods (such as the GW method) or go beyond it.42 These extensions add considerable computational overhead. In addition, active work includes improvements to the exchange-correlation functionals used in DFT codes, as well as to the pseudopotentials used to approximate the inner cores of atoms in programs such as Quantum ESPRESSO and VASP.

Going beyond DFT to model the many-body physics of complex materials such as the rare earths is dynamical mean field theory (DMFT), which maps the lattice problem to a local impurity model. The latter, even though it is a many-body problem, has a recognized set of solutions. The main approximation of this method is to assume that the lattice self-energy is independent of momentum, an approximation that becomes exact in the limit of infinite dimensions. This methodology, combined with DFT, has had some notable successes, including the phase diagram of plutonium43 and the superfluid-to-Mott-insulator transition in the Bose-Hubbard model.44 By combining DMFT with time-dependent DFT (TDDFT), properties that depend on the time evolution of electronic states, such as multielectron and hole bound states (excitons, trions, etc.), could be calculated. TDDFT has the advantage of being a theory of a single function of one time argument, the charge density.

Quantum Monte Carlo (QMC) is another technique for studying materials with many-body effects. In general, it is an accurate and reliable method that is trivially parallelizable and thus friendly to high-performance computing (HPC). It is also computationally expensive and complex to apply, and thus challenging. Among the different flavors of QMC, diffusion Monte Carlo (DMC) is the most popular. It is a stochastic method that allows direct access to the ground state, and sometimes the excited states, of a many-body system. In principle, DMC is an exact method that maps the Schrödinger equation to a diffusion equation. However, approximations must be made for computational feasibility when modeling fermions (i.e., electrons).
The most common approach is the fixed-node approximation: the nodes of the wavefunction are kept fixed at those of the original trial wavefunction during the search for the ground-state wavefunction.

41 Burak Himmetoglu et al., 2014, Hubbard-corrected DFT energy functionals: The LDA+U description of correlated systems, International Journal of Quantum Chemistry 114:14.
42 Shu Xia Tao, 2017, Accurate and efficient bandgap predictions of metal halide perovskites using the DFT-1/2 method: GW accuracy with DFT expense, Nature Scientific Reports 7:14386.
43 N. Lanata et al., 2015, Phase diagram and electronic structure of praseodymium and plutonium, Physical Review X 5:011008.
44 Peter Anders et al., 2010, Dynamical mean field solution of the Bose-Hubbard model, Physical Review Letters 105:096402.
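To illustrate the diffusion Monte Carlo idea in its simplest form, the toy sketch below propagates a population of walkers for a single particle in a one-dimensional harmonic well, whose exact ground-state energy in reduced units is 0.5. Because this bosonic toy ground state has no nodes, no fixed-node constraint is needed; all numerical parameters are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy diffusion Monte Carlo (DMC) for one particle in a 1D harmonic well,
# V(x) = x^2 / 2 (exact ground-state energy E0 = 0.5 in reduced units).
# Real DMC adds importance sampling and, for fermions, the fixed-node
# constraint described in the text.
n_target, dt, n_steps = 2000, 0.01, 4000
x = rng.normal(size=n_target)        # initial walker positions
e_ref = 0.5                          # reference energy, adjusted on the fly
samples = []

for step in range(n_steps):
    x = x + np.sqrt(dt) * rng.normal(size=x.size)    # diffusion move
    v = 0.5 * x**2
    w = np.exp(-dt * (v - e_ref))                    # branching weights
    # Stochastically replicate or kill walkers according to their weights.
    x = np.repeat(x, (w + rng.random(size=x.size)).astype(int))
    e_ref -= 0.5 * np.log(x.size / n_target)         # population control
    if step > n_steps // 2:
        samples.append(e_ref)

print(f"DMC estimate: {np.mean(samples):.3f} (exact 0.5)")
```

The reference energy that stabilizes the walker population converges to the ground-state energy; the sign problem that forces the fixed-node approximation appears only when antisymmetric (fermionic) wavefunctions are required.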

For molecular solids, quantum chemistry methods tend to be more accurate. These include configuration interaction (CI), coupled cluster (CC), and multireference methods. The trade-off for accuracy is that they are computationally formidable and not convenient for high-performance computing.

Moving from the subatomic scale of the quantum methods to the atomic scale, a technique widely used in chemistry for molecules, but also useful for materials and especially their surfaces, is molecular dynamics (MD). This method uses force fields generated by DFT or by more approximate methods based on tabulated vibrational analyses of bonds. MD can generate the time dependence of atomic motion (for very short periods of time) at finite temperatures. A development of the past decade worth noting is the availability of highly modular, massively parallel, openly distributed software for carrying out large-scale atomistic simulations, with high efficiency, of dynamical properties of materials, including thermal conductivity, mechanical deformation, irradiation response, and many other properties. An example of an MD software package is LAMMPS (Large-Scale Atomic/Molecular Massively Parallel Simulator),45 developed by Sandia National Laboratories. Such atomic-scale simulations have been further enabled by the availability of multiphysics (multiple interacting physical effects) reactive force fields, or potentials, that provide frameworks for the study of heterogeneous material systems. The cataloging of reactive potentials, through efforts at the National Institute of Standards and Technology (NIST) and the OpenKIM project,46 is providing important ways to track the performance and suitability of these empirical methods. Development is needed of simple atomistic potentials with accuracy at the level of DFT and computational efficiency sufficient to undertake simulations at realistic (laboratory-level) length and time scales or for high-throughput calculations. Machine learning has helped develop such potentials, as described in Section 4.3.3.

Mesoscale modeling has also advanced significantly over the past decade. One example is the CALPHAD (Computer Coupling of Phase Diagrams and Thermochemistry) methodology,47 which enables the prediction of phase diagrams and thermodynamic behavior. Phase field modeling48 simulates materials growth and mesoscale structure-property relationships, and, as a third example on the mesoscale, coarse-grained simulation methods have become much more usable for modeling molecular materials such as polymers.

Over the last decade, there has been increasing translation of computational tools to industrial application; the boxes in Chapter 2 provide examples in alloy development and industrial processing. Continuum state-variable process models are used in manufacturing for casting, forging, rolling, vapor deposition, machining, and so on. Further, CALPHAD and the DICTRA diffusion code and method are ubiquitous and heavily supported by industry. The transition of other tools—for example, phase field and kinetic Monte Carlo—is in progress. Progress has been made in physics-based multiscale models for mechanical behavior prediction, but these have yet to be adopted by industry. Intense effort has gone into developing and improving single- and multiscale methods that may be concurrent, hierarchical, or hybrid and that may be solved in parallel, sequentially, or in a coupled manner.
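As a concrete (and deliberately tiny) illustration of what an MD engine does under the hood, the sketch below integrates a small Lennard-Jones cluster with the velocity-Verlet algorithm in reduced units. The cluster geometry, time step, and run length are arbitrary assumptions for illustration; production simulations of the kind described above would use LAMMPS or a similar package with neighbor lists, thermostats, and periodic boundaries.

```python
import numpy as np

# Minimal molecular dynamics: velocity-Verlet integration of a Lennard-Jones
# cluster in reduced units (epsilon = sigma = mass = 1). Illustration only.
def lj_forces(pos):
    """Return pairwise Lennard-Jones forces and total potential energy."""
    forces, energy = np.zeros_like(pos), 0.0
    n = len(pos)
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            inv6 = 1.0 / r2**3
            energy += 4.0 * (inv6**2 - inv6)
            f = 24.0 * (2.0 * inv6**2 - inv6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces, energy

# Eight atoms on the corners of a cube, slightly stretched from equilibrium.
pos = 1.2 * np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], float)
vel = np.zeros_like(pos)
dt = 0.002
f, pe = lj_forces(pos)
for step in range(2000):
    vel += 0.5 * dt * f          # half-step velocity update
    pos += dt * vel              # position update
    f, pe = lj_forces(pos)
    vel += 0.5 * dt * f          # second half-step
print(f"total energy = {0.5 * np.sum(vel**2) + pe:.4f}")  # ~conserved
```

The near-constancy of the total energy is the standard sanity check on the integrator; the O(N²) force loop here is exactly what packages such as LAMMPS replace with neighbor lists and parallel decomposition.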
These methods have enhanced both fundamental scientific studies—for example, of the mechanics of materials and of the physics and chemistry associated with materials growth—and applied engineering efforts, such as the manufacture of material parts in industry. The ability of these methods to make full use of improvements in computer architecture varies with method type, length scales, and time scales. Modeling of time-dependent material behavior (e.g., creep) has made great strides over the last decade, but, as is the case for classical molecular dynamics simulation, the computational demands on time and memory increase rapidly with system size. Consequently, redesigning software or combining methods (e.g., molecular dynamics and Monte Carlo) is important.

45 See Lammps.sandia.gov, accessed January 5, 2018.
46 See https://www.ctcms.nist.gov/potentials/ and https://openkim.org, both accessed January 5, 2018.
47 CALPHAD is a computational methodology. See, for example, www.thermocalc.com/products-services/databases/the-calphad-methodology/ or a free development at http://www.opencalphad.com/, both accessed January 5, 2018.
48 L.Q. Chen, 2002, Phase-field models for microstructure evolution, Annual Review of Materials Research 32:113-140.

Other advances have been achieved through the co-design of experimental and computational infrastructure, which has come to the fore in the past decade. Examples include progress in image recognition for microstructure identification and the coupling of parallel advances in the brightness and power of X-ray and neutron scattering sources with computational materials science, which promises to advance scattering science by elevating the interrogation of data from scattering experiments.

A grand challenge in computational materials science is to design the electronic structure of materials directly from first principles—to go from desired physical/mechanical properties to structure and atomic constituents, rather than the usual other way around. Technical roadblocks in the capabilities of computational methods are being addressed in federally supported research centers, as well as in selected academic centers and private companies, and much progress toward this goal is expected in the coming decade.

4.3.3 Machine Learning for Materials Discovery

Over the last decade, both supervised and unsupervised machine learning algorithms have been used to calculate materials properties, explore materials compositional space, identify new structures, discover quantum phases, and identify phases and phase transitions. Although training is usually necessary, once set up, these models are able to calculate a wide range of properties, with high accuracy, at large scale, and at speeds orders of magnitude faster than conventional computational methods.

Supervised machine learning algorithms that have been applied to materials include random forests, kernel ridge regression, and multilayer perceptron artificial neural networks. These methods map a set of features—for example, atomic positions—to output values such as material properties and performance; their goal is to learn the mapping from input to output. Machine learning methods have been used to efficiently and effectively explore materials space, considering all possible structures and combinations to identify new structures as well as ones with specific properties. For example, a machine learning model based on kernel ridge regression was used to calculate, with DFT accuracy, the formation energies of the 2 million possible elpasolites (elpasolite itself is an isometric-hexoctahedral quaternary mineral of Al, F, K, and Na). This method identified the most strongly bound elpasolites, proposed a new energetic ordering of the elpasolites, and identified 128 structures with 90 unique stoichiometries. This is one example among many in which machine learning was used to efficiently screen many polymorphs computationally to identify unique crystal properties (see Figure 4.8).

Normally, DFT is used to calculate the potential energy surfaces that are the necessary inputs to molecular dynamics or Monte Carlo simulations, and the time and memory demands of these programs significantly limit the length of time for which a large system can be simulated. By introducing the concept of a symmetry-function value for each atom that reflects the local environment of that atom, a generalized neural network representation of DFT potential energy surfaces can be developed.
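The sketch below illustrates the kind of atom-centered descriptor just described: radial symmetry functions of the Behler-Parrinello type, which encode each atom's local environment in a rotation- and translation-invariant fingerprint that a neural network can then map to an atomic energy. The parameter values (eta, the r_s grid, and the cutoff) are illustrative assumptions, not values from the cited work.

```python
import numpy as np

# Sketch of atom-centered radial symmetry functions of the Behler-Parrinello
# type, the inputs ("fingerprints") to neural-network potentials.
def cutoff(r, r_c=6.0):
    """Smooth cosine cutoff that takes neighbor contributions to zero at r_c."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_G2(positions, i, eta=0.5, r_s_grid=(1.0, 2.0, 3.0, 4.0)):
    """G2 fingerprint vector for atom i: sums of Gaussians over neighbors."""
    rij = np.linalg.norm(positions - positions[i], axis=1)
    rij = rij[rij > 1e-12]                  # drop the self-distance
    return np.array([np.sum(np.exp(-eta * (rij - r_s)**2) * cutoff(rij))
                     for r_s in r_s_grid])

# Toy configuration: 5 atoms; each atom gets an invariant descriptor vector.
pos = np.array([[0, 0, 0], [1.5, 0, 0], [0, 1.6, 0],
                [0, 0, 1.4], [2.8, 0.1, 0]], float)
for i in range(len(pos)):
    print(i, np.round(radial_G2(pos, i), 3))
```

In a full neural-network potential, each such fingerprint feeds a small per-atom network, and the total energy is the sum of the atomic outputs—which is what makes the representation scale to large systems.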
The resulting method provides the energy and forces as a function of all atomic positions for large system sizes and is several orders of magnitude faster than DFT.49 The method can be used with nonperiodic systems and can describe all types of bonding. For MD there are now, for example, artificial neural network potentials and Gaussian approximation potentials; support vector machines (SVMs) and the Spectral Neighbor Analysis Potential (SNAP) are also extensively used. The most common approach utilizes traditional fully connected, feed-forward neural networks that take the arrangement of atoms as their input. The total potential energy of the system is broken into atomic energy contributions, whose local environments are described using radial and angular Gaussian symmetry functions. These potentials, which are analytical and easily integrated into broadly used molecular dynamics simulation codes such as LAMMPS, facilitate large-scale and long-time-scale simulations at affordable computational cost.

49 Jörg Behler and Michele Parrinello, 2007, Generalized neural-network representation of high-dimensional potential-energy surfaces, Physical Review Letters 98:146401.

FIGURE 4.8 Elpasolite is AlNaK2F6, and its crystal structure is common among inorganic materials. Substituting other elements in the form ABC2D6 gives the large family of elpasolites. The authors used their machine learning model to predict the formation energies for all 2 million elpasolites made up of all main-group elements up to Bi. The figure shows a heat map of the energies, with the scale in eV/atom shown on the right. In the lower left half of the figure, separated by the diagonal white line, the vertical and horizontal axes represent elements for D and C, respectively; in the upper right half of the figure, the vertical and horizontal axes represent elements for B and A, respectively, with the other two constituents in each case running over all values, giving the 2 million materials. SOURCE: F.A. Faber, A. Lindmaa, O.A. von Lilienfeld, and R. Armiento, 2016, Machine learning energies of 2 million elpasolite (ABC2D6) crystals, Physical Review Letters 117:135502, https://doi.org/10.1103/PhysRevLett.117.135502, https://creativecommons.org/licenses/by/3.0/.

Unsupervised algorithms (such as principal component analysis and deep convolutional autoencoders) can be trained to categorize unlabeled data and can reveal patterns in high-dimensional material and chemical spaces that would otherwise be difficult to perceive. Principal component analysis (PCA) is a statistical technique for data reduction in high-dimensional spaces; it finds a linear rotation of the data such that most of the variation is captured by a few leading components, allowing the data to be represented in a space of lower dimension. A nonlinear analogue of PCA (via a deep convolutional autoencoder) has been used to explore phase transitions, such as in a ferromagnetic Ising model of several thousand spins.50 Remarkably, this work showed that fully connected and convolutional neural networks can identify phases and phase transitions, as well as unconventional low-order states, in a range of condensed-matter models.

50 J. Carrasquilla and R.G. Melko, 2017, Machine learning phases of matter, Nature Physics 13:431.

Deep convolutional autoencoders belong to a relatively new class of "deep learning" algorithms based on multilayered artificial neural networks. Deep learning methods let the algorithm itself decide on the relevant features, operating directly on "raw" data; these techniques evolved from the field of computer vision. In traditional machine learning, manual feature selection is required prior to training. Deep learning techniques do not require this step (although they require a much larger training set).

As an example, a deep convolutional neural network was used on the Ising model and obtained accurate energies and magnetizations for both a nearest-neighbor Hamiltonian and a long-range screened Hamiltonian. Training allowed the neural network to generalize to "never before seen examples," using observations from a limited number of configurations. Deep convolutional neural networks have also been used for modeling large systems, with a method of domain decomposition into overlapping tiles for training and inference; good results have been obtained for spin models and for many-body quantum mechanical operators with DFT accuracy. Recently,51 a deep convolutional neural network model, trained so that no manual feature selection was necessary, predicted the ground-state energies of an electron in a random 2D potential to within chemical accuracy, with no analytic form for either the potential or the ground-state energy.

4.3.4 Quantum Computing as a Computational Materials Tool

Chapter 2 describes the materials research that has been under way for improved qubits. This section summarizes some of the research needed for quantum computing to become a functional computational tool. Quantum computers use superposition and entanglement of quantum states to perform operations on many bits simultaneously. With the appropriate initial state preparation (in itself a research challenge to do optimally), N quantum bits can hold the information of 2^N classical bits. With enough error-corrected quantum bits (thought to be around 75-100), quantum computers could calculate properties of complex molecules well beyond the computational and memory capabilities of foreseeable classical computers. This will impact many fields, including medicine and agriculture. Likewise, there is promise for simulating excited states and the time and temperature dependence of quantum materials, areas also beyond classical computational capabilities. Additional opportunities for quantum computing exist in the areas of optimization, cryptography, finance, and other fields.

At present, qubits still have a considerable rate of decoherence, which affects the accuracy and gate depth, limiting the number of consecutive operations. Error correction, routine in classical computation, is a significant challenge for quantum computation. With current qubit quality, estimates are that millions of physical qubits would be needed to do complete error correction for a single logical qubit; even partial error correction could take dozens of qubits. Nonetheless, significant progress is being made without fault-tolerant computing. There are operating quantum computers of up to around 30 qubits in superconducting systems (e.g., IBM, Google, Rigetti) and upward of 50 qubits in trapped-ion systems (e.g., Professor C. Monroe, University of Maryland). IBM is notable for having put its quantum computer online,52 the first quantum computer ever made available in this way. There are also online simulators of quantum computing that run in the cloud on classical hardware; these are limited by memory requirements to around 50 qubits. IBM was among the first to adopt this methodology; notable efforts, with large simulators, are also conducted by Microsoft. In addition, several quantum computing centers provide software kits and algorithms for using Python to submit jobs to a quantum computer. With this infrastructure, researchers are performing calculations, and papers are appearing in top journals. Some landmark examples are as follows.
In a tour de force calculation53 of three small molecules, the authors programmed around the hardware errors, an indication of how computation on real quantum computers will need to be done for the foreseeable future. Using a one-dimensional (1D) chain of trapped alkali-metal atoms54 as a 51-qubit quantum simulator, researchers observed a quantum phase transition.

51 K. Mills, M. Spanner, and I. Tamblyn, 2017, Deep learning and the Schrödinger equation, Physical Review A 96:042113.
52 See https://www.research.ibm.com/ibm-q/, accessed December 3, 2018.
53 A. Kandala et al., 2017, Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets, Nature 549:242.
54 H. Bernien et al., 2017, Probing many-body dynamics on a 51-atom quantum simulator, Nature 551:579.

In a system of 53 trapped ions,55 researchers studied the nonequilibrium dynamics of the transverse-field Ising model with long-range interactions.

As quantum computing technologies mature, they hold the potential to impact many materials areas. There are extensive research opportunities in modeling and algorithm development, as well as in experimental implementation, for these and other related applications.

4.3.5 Materials Databases: Achievements, Promise, and Challenges

Materials databases have been available for decades and have been widely accessible on the Internet. Legacy efforts from the 1990s are largely characterized by the delivery of very high quality reference materials data, hand-curated from the peer-reviewed literature, which gives some guarantee of quality. By contrast, more modern repositories meet a number of additional demands, including ease of use for nonexperts; assignment of persistent identifiers; broad accessibility; sophisticated searchability; application programming interfaces that allow machine-to-machine interactions with minimal human involvement; and the capacity to be federated across multiple instances in geographically disparate locations. Examples of general-purpose repositories include the NIST Materials Data Repository (https://materialsdata.nist.gov); the Materials Data Facility (https://materialsdatafacility.org), sponsored by the NIST center of excellence, the Center for Hierarchical Materials Design; the DOE-funded PRISMS Materials Commons (https://materialscommons.org); and numerous resources associated with the DOE Energy Materials Network (https://energy.gov/eere/energy-materials-network/energy-materials-network). There are also databases for specific materials types, such as catalysts, polymers, phase data, electronic properties, and more. NIST has a database of kinetic properties that are important for processing and material evolution, and additional databases that contain digital representations of microstructure are being developed.

Some of the achievements of the past decade in using machine learning, a heavily data-driven technique, to advance materials understanding were summarized in Section 4.3.3. Other uses of materials data, combined with data mining tools, are searches for new materials compositions; such searches could target new thermoelectric compounds or, for additive manufacturing, new aluminum alloy compositions. Materials data combined with data mining can also be applied to microstructure development based on processing conditions. More recently, researchers have been combining these tools with robotics and in situ process monitoring and characterization to build autonomous research apparatuses, like the Autonomous Research System (ARES) developed by Air Force Research Laboratory (AFRL) researchers to determine optimum growth conditions for carbon nanotubes.

A most telling example of the rising value of materials data is the emergence of for-profit institutions that provide data services. For example, Citrine Informatics provides free data repository services for those researchers willing to share their data with others, as well as data analysis tools and services to commercial customers based on the vast wealth of materials data it is accumulating. Another source of vast databases is the calculation, using density functional techniques, of the properties of huge numbers of materials. This leads to the question of the validity and verifiability of the data in databases.
As more calculations of known materials are performed and stored in databases, it is becoming possible to calculate the distribution of errors, as well as the average error, in calculations of certain physical properties. These known errors, together with enthalpy calculations of the various competing phases, allow an estimation of how likely it is that a newly calculated material will be stable at various temperatures.

55 J. Zhang et al., 2017, Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator, Nature 551:601.
56 See https://materialsproject.org/.

Such databases include the Materials Project56 of thermodynamic structure-property relationships, which is being actively used for materials selection. An example of using such enthalpy calculations and data from the Materials Project is shown in Figure 4.9.

FIGURE 4.9 Over 2,000 compositions of perovskite materials, plotted as a function of their predicted catalytic ability and stability. Different symbol colors represent different perovskite structures: red = ABO3, blue = AA*BO3 or ABB*O3, and purple = AA*BB*O3. An "x" symbol denotes an insulator and an "o" a conductor. The catalytic ability estimates how well the material enhances surface exchange of oxygen in solid oxide fuel cell cathodes and is represented by the surface exchange coefficient k*, plotted on the y-axis, calculated using first-principles DFT techniques. For stability, solid oxide fuel cell operating conditions (temperature, water vapor, etc.) were applied in a multicomponent phase stability analysis via the open source Pymatgen toolkit and Materials Project data, and an energy was calculated for each material, plotted on the x-axis, with lower energy indicating greater stability. Fifty-two promising candidates were found, ready for experimental testing for activity and stability, which in turn will improve the modeling tools. SOURCE: R. Jacobs, T. Mayeshiba, J. Booske, and D. Morgan, 2018, Material discovery and design principles for stable, high activity perovskite cathodes for solid oxide fuel cells, Advanced Energy Materials 8(11):1702708.

An active area of research, which has seen considerable growth over the last decade, is the development of unambiguous, well-defined methods of verification, validation, and uncertainty quantification for computational results. In addition to collaborating with disciplinary experts, materials scientists and engineers are ensuring that new methodology development takes place hand in hand with experimental work. Verification, validation, and uncertainty quantification become more complex when methods spanning multiple length and time scales are combined, even as the need for robust assessment grows as those scales approach the ones associated with processing and manufacturing. In other words, the closer researchers get to "materials by design," the more important it is not only to have predictions but also to have an engineering-level understanding of their associated errors.

Data-driven approaches are poised to dramatically increase the productivity of materials research, but realizing the true potential is predicated on the development of a seamless Materials Data Infrastructure (MDI) that allows for storing, sharing, searching, analyzing, and learning from data spread over multiple sites. An underpinning goal of the MDI is to create a digital thread through the material life cycle, from discovery through recycling, in which critical information is passed digitally along the life cycle in order to reduce barriers to information transfer, thereby reducing the relearning currently needed at each stage. One idea that has been suggested is to apply blockchain techniques to data storage for security, timestamping, data version control, attribution, and so on, leading to more secure, reliable databases.
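As a concrete illustration of the hull-stability screening used for Figure 4.9, the sketch below uses the open-source pymatgen toolkit named in the caption to compute the energy above the convex hull for a toy chemical system. The formation energies here are invented placeholders; an actual screening would pull DFT energies for thousands of entries from the Materials Project or a similar database.

```python
from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry

# Toy energies (eV per formula unit) invented for illustration only.
entries = [
    PDEntry(Composition("Li"), 0.0),      # elemental references
    PDEntry(Composition("O2"), 0.0),
    PDEntry(Composition("Li2O"), -6.0),   # strongly bound compound
    PDEntry(Composition("Li2O2"), -6.5),  # candidate to test
]

pd = PhaseDiagram(entries)
for e in entries:
    # Energy above the convex hull (eV/atom): 0 means the entry is
    # thermodynamically stable against decomposition into the others.
    print(e.composition.reduced_formula, round(pd.get_e_above_hull(e), 3))
```

An entry with zero energy above the hull is predicted stable against decomposition into the competing phases; small positive values flag the metastable candidates discussed above.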

FAIR data principles are important in making data findable, accessible, interoperable, and reusable. To address issues around findability and accessibility, NIST has worked with the international community to create the Materials Resource Registry, which allows organizations to advertise metadata about data collections, application programming interfaces, informational sites, and materials software, using set schemas to describe their resources, similar to the approach used in the astronomy community to enable the Virtual Observatory. In addition, several organizations have developed integrative or e-collaborative platforms that use web-based technologies to manage the materials data life cycle, implementing the FAIR principles. The goal of these platforms is to ease the ingestion of data and metadata from experiments or simulation and to enable manual or automated data analysis, data search, and data publication. NIST's lead in developing schemas will be very helpful in providing standards and criteria for describing and exchanging data.

However, challenges remain. The prevalence of proprietary data formats in scientific instrumentation works against the data being used by others, and, in general, the difficulty of extracting scientific data—and even more so the supporting metadata—from instrumentation makes data sharing a monumental and laborious challenge for individual investigators. The same lack of metadata holds for data produced by many computational programs. Solutions for implementing the FAIR principles will vary by discipline and will evolve toward relatively stable metadata schemas only after a considerable shake-out period. For relatively simple systems, schemas already exist—for example, the Crystallographic Information File (CIF) format used in crystallography. For a general approach, NIST has developed a Materials Data Curation System, which requires each subdiscipline to define a schema describing the type of data to be curated. Much work remains.

4.4 INTEGRATION OF SYNTHESIS, CHARACTERIZATION, AND MODELING

While alluded to in discussions throughout the report, the integration of the different types of tools used by materials researchers presents its own set of opportunities and challenges, some of which are discussed in this section.

4.4.1 High-Throughput Screening

High-throughput screening, in which thousands of experimental samples are subjected to simultaneous testing under given conditions, was first recognized as a tool in the 1980s at the GEC Hirst Research Center. Over the next 30 years, however, high throughput was not prominent as a tool for materials researchers. In the last decade, more and more materials researchers have been using foundational high-throughput tools to further their own work.57,58 High-throughput efforts can be separated into discovery and optimization, each with its own risks and rewards. Discovery (primary screening) is intended to sample broad and diverse areas; optimization (secondary screening) accelerates the development of new materials. The risk in the discovery arena is a higher number of false positives and false negatives than in individual screening. In optimization, accuracy is usually sacrificed for speed, in that traditional characterization techniques do not always keep pace with high-throughput synthesis.59 Over the past few years, screening has evolved from relatively simple materials and 1D and 2D synthesis (e.g., nanoparticles and thin films) to more and more complex materials and 3D synthesis (e.g., thick films, solution-based methods, and additive manufacturing).

57 E.B. Svedberg, chapter in Combinatorial and High-Throughput Discovery and Optimization of Catalysts and Materials (R.A. Potyrailo and W. Maier, eds.), Critical Reviews in Combinatorial Chemistry, Vol. 1, CRC Press.
58 E. Chunsheng et al., 2006, Combinatorial synthesis of Co/Pd magnetic multilayers, Journal of Applied Physics 99:113901.
59 Maier, Stowe, and Sieg, 2007, Angewandte Chemie International Edition.

Improvements in spectroscopic techniques and image analysis have been the first characterization methods to keep pace with materials synthesis, although characterization and data analytics can still be rate limiting.

A 2014 publication on high-throughput synthesis and characterization of bulk metallic glasses highlights the progress made in that area.60 Over 3,000 alloy compositions were analyzed, through a creative methodology, for both glass-forming ability and thermoplastic formability, an indication of their ability to respond to strain. Wells were made in a silicon wafer substrate, and then three confocal sputtering targets were used to deposit compositions in the Cu-Y-Mg family. Freestanding membranes were created by removing the silicon from the backside. These are glasses with low glass-transition temperatures (Tg), and gases can generate enough pressure at 100°C to deflect the membranes in a short period of time. The final height of each membrane gives an indication of the thermoplastic formability and can be quantified (see Figure 4.10). It is clear that high-throughput screening will be of increasing importance and that it will change how much of the materials research of the future is conducted.61

FIGURE 4.10 Parallel blow-forming setup for compositional membranes and its realization. (a) Schematics of the parallel blow-forming setup. The relative thermoplastic formability is given by the final height of the membrane after deformation. SOURCE: Reprinted by permission from Springer Nature: S. Ding, Y. Liu, Y. Li, Z. Liu, S. Sohn, F.J. Walker, and J. Schroers, 2014, Combinatorial development of bulk metallic glasses, Nature Materials 13(5):494-500, © 2014.

4.4.2 Predictive Experimental Materials Design and Combined Experimental/Computational Analysis

Accompanied by advances in first-principles calculations, molecular dynamics simulations, machine learning, and other data analytics tools, predictive materials design is fast becoming the norm, accelerating materials discovery.

60 S. Ding, Y. Liu, Z. Liu, S. Sohn, F.J. Walker, and J. Schroers, 2014, Nature Materials 13:494.
61 See, for example, H. Shibata et al., 2014, Heterogeneous catalysis high throughput workflow: A case study involving propane oxidative dehydrogenation, Chapter 4, pp. 173-196, in Modern Applications of High Throughput R&D in Heterogeneous Catalysis.

4.4.2 Predictive Experimental Materials Design and Combined Experimental/Computational Analysis

    Accompanied by advances in first-principles calculations, molecular-dynamics simulations, machine learning, and other data analytics tools, predictive materials design is fast becoming the norm, accelerating materials discovery. There is, of course, necessary caution to be applied and further advances to be made before predictive modeling attains the kind of robustness necessary for industrial applications. For example, what are the errors of the new generation of accurate but still approximate calculations? Standard and well-understood testing protocols have been developed to some extent, particularly in quantum chemistry, but many newer techniques have been tested only sparsely. It would be useful to have a clear suite of experimental properties that could serve as a test bed, and the large amounts of experimental data involved would need to be collected in a coherent way. Among the many as-yet unanswered questions are the following: How do researchers encode all the differences in processing material samples? How do researchers extract what is relevant from what is not? Can researchers predict whether a substance can be doped (solubility), and what the effects of the doping will be? So far, multiple length scales and the need for high accuracy have made this an extremely challenging problem, both in terms of theoretical and computational techniques and in interfacing those efforts with experiment, since out-of-equilibrium behavior is highly dependent on the initial conditions. Many materials in use are actually metastable, which raises a separate set of questions: How can researchers predict whether a metastable material can be formed, and how can they use that knowledge to design not just new materials but also experimental processes?

    Often what is computed from first principles is not quite what is measured experimentally. For example, while some techniques can compute angle-resolved photoemission spectroscopy (ARPES) spectra, they typically do so for systems under ideal conditions of temperature and pressure and for controlled geometry. In operando measurements, for example, confirm that materials undergo restructuring under ambient conditions. Can researchers develop computational techniques that bridge the pressure, temperature, and material gap?

    Despite the ever-increasing fidelity of and access to experimental facilities, opportunities still exist to fine-tune experimental conditions, accelerate analysis of the data, and move toward high-throughput screening of materials (see the preceding section) by coupling the instruments to computational facilities. Advances in this area will not only improve the ability to fully interrogate the terabytes of data that can be acquired from a single experiment in a short time period but also make it possible to change experimental conditions so as to maximize the utility and descriptiveness of the data that are collected. The idea would be to carry out real-time computational analysis of experimental data. For example, while in operando measurements are being taken of a chemical reaction on a nanocatalyst surface, digitized images of the catalyst under reaction conditions, the vibrational frequencies of the reaction intermediates, and X-ray photoelectron spectroscopy data of the system could all be made available to a computational "beam line," which would calculate the same quantities for the "real" geometry and state of the catalyst. Real-time comparison of the two sets of data (experimental and computed) would allow both sides to tweak conditions (parameters) until the desired result is obtained.
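    A minimal sketch of such a feedback loop follows, with an inexpensive stand-in model in place of a first-principles calculation. The Lorentzian peak shape, the single refined parameter, and the shrinking search schedule are all illustrative assumptions, not part of any existing beam-line software.

```python
"""Schematic 'computational beam line' loop: compare a measured spectrum to
one computed for a trial model and adjust a parameter until they agree."""
import numpy as np

energy = np.linspace(0.0, 10.0, 500)

def computed_spectrum(peak_center):
    """Stand-in for an expensive simulation (e.g., a computed vibrational peak)."""
    return 1.0 / (1.0 + ((energy - peak_center) / 0.3) ** 2)

def measure():
    """Stand-in for the instrument stream; the 'true' peak sits at 6.2 units."""
    noise = 0.01 * np.random.default_rng(0).normal(size=energy.size)
    return computed_spectrum(6.2) + noise

measured = measure()
guess, step = 5.0, 0.5
best_err = np.inf
for _ in range(50):                       # crude coordinate search
    for trial in (guess - step, guess, guess + step):
        err = np.sum((computed_spectrum(trial) - measured) ** 2)
        if err < best_err:
            best_err, guess = err, trial
    step *= 0.7                           # shrink the search as it converges
print(f"refined peak center: {guess:.2f} (true value 6.2)")
```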
    Such a scenario is achievable, given the advances made in the past decade, discussed earlier in this chapter, in tools that can interrogate materials with atomic-scale precision and in computational techniques that aim to predict material structure and dynamics under laboratory conditions. An early vision of coupled experimental and computational analysis was proposed by the European Theoretical Spectroscopy Facility (ETSF),62 which promotes standardization of computational codes, libraries, and tools to facilitate broad usage, particularly by experimentalists. These ideas could be taken further by integrating such analysis-ready computational facilities with experimental beam lines. Given adequate resources, the United States could set the stage for real-time data analysis for accelerated materials discovery.

62 See https://www.etsf.eu/.

4.5 INFRASTRUCTURE AND FACILITIES

    Infrastructure for modeling and simulation, synthesis and processing, and property assessment and characterization is central to MR. This section outlines the need for such facilities within universities and for large-scale national facilities. Deficiencies in current funding modes for the operation of state-of-the-art characterization facilities at universities, and for the acquisition of instrumentation that falls in the well-known funding gap between $4 million and $100 million, are highlighted.63 The impact that national user facilities, stewarded primarily by DOE and to a lesser extent by the National Science Foundation (NSF), the National Institutes of Health (NIH), and other agencies, have had and continue to have on MR is also described. Finally, the case for continued investment in developing new instrumentation and techniques for MR is made, both through university investment and at national user facilities.

4.5.1 Research Infrastructure

    Materials research and engineering is a highly instrumentation-intensive discipline. Enormous demands on research infrastructure exist in all subfields, ranging from instrumentation to synthesize materials and characterize their structure and properties to the fabrication of devices, applications, and systems. A single researcher frequently uses highly complicated instrumentation in all three of these areas, often in the span of a few days. For example, a researcher active in the area of electronic materials may use sophisticated instrumentation to synthesize thin films of a new electronic material; characterize the microstructure of the film by techniques such as X-ray diffraction or transmission electron microscopy (TEM); measure physical properties, such as electrical resistance or magnetization; and then proceed to fabricating devices in a clean room and characterizing those devices using state-of-the-art measurement techniques. In the course of a typical project, a single student may use instrumentation costing many millions of dollars.

    Over the past 10 years, the ever-rising costs of acquiring and maintaining state-of-the-art research infrastructure, combined with the dire lack of funding avenues for instrumentation, have culminated in a situation that can only be described as a crisis for all of materials science and engineering. Most extramural research at universities is funded by federal agencies, private foundations, and industry, which generally do not provide funding for the instrumentation needed to carry out the research that they fund. While the Department of Defense (DOD) has a mechanism to fund instruments through its Defense University Research Instrumentation Program (DURIP), the funding level of typical DURIP grants is much too small to pay the cost of many of the instruments used in materials science and engineering. This leaves NSF as the main sponsor of research instrumentation, through its Major Research Instrumentation (MRI) program. That program is extremely competitive, and the chances of getting funded are so low that it is inadequate to support the research instrumentation needed at major research universities. As a result, the burden of supporting research instrumentation has shifted largely to the universities. Today, universities support research instrumentation mainly through the start-up funds of new faculty.
    As a result of the crisis in extramural funding for research instrumentation, start-up funds have risen enormously over the past decade, reaching levels above a million dollars for beginning assistant professors in the experimental hard matter sciences. These sharply rising start-up costs are affecting the number of faculty that universities can hire—in particular, at universities that do not have large endowments. Furthermore, this mechanism of funding research infrastructure is completely inadequate for sustaining forefront research in the long term. Specifically, start-up funds are typically used to buy instrumentation within the first five years of a faculty member's career.

63 See the findings and recommendations of Chapters 3 and 4 of National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, 2006, Advanced Research Instrumentation and Facilities, The National Academies Press, Washington, D.C., https://doi.org/10.17226/11520.

    If this remains the only source of equipment funding, as the trend over the past decade suggests, instrumentation is bound to become out of date just as academic researchers reach what are normally the peak productive years of their careers. Such a model is unsustainable if the United States is to remain at the forefront of materials science and engineering.

    The Department of Energy is the major supporter of research facilities at the National Laboratories. These laboratories are excellently equipped with state-of-the-art instruments, and the use of DOE-supported scattering facilities (X-rays, neutrons) is an integral part of several fields of academic materials research. Even the U.S. portion of the International Space Station (ISS), supported by NASA, was designated a national lab in 2005 and conducts key materials research (see Box 4.1). However, these national facilities are neither a substitute for nor a practical means of addressing the crisis in research infrastructure at universities, where much of the nation's forefront research in materials is carried out. In particular, most materials research requires the infrastructure to be located at a researcher's institution. For example, a typical project involving the synthesis of new materials requires a constant and immediate feedback loop between synthesis, structure, and property measurements that may go through many cycles within a short period of time—that is, a span of a few days. It is not feasible to carry out this research if it requires remote facilities, and this is true for most materials research.

    Last, the dire situation of deteriorating research infrastructure at universities also has more direct consequences for the nation's economy. In particular, many large research universities operate open-access or shared user facilities, such as clean rooms and materials characterization facilities, which are open to outside entities such as companies. In essence, the universities also function as incubators, resources, and technology transfer opportunities for small and large companies, including start-ups. As an example showing that this has been an issue for quite some time, Figure 4.11 shows the breakdown of users of the nanofabrication facility at the University of California, Santa Barbara (UCSB), over 10 months in 2011, by type of user. A large fraction of the 26 percent of users from small companies come from a vital start-up culture surrounding UCSB that relies on this facility. In other words, an up-to-date university research infrastructure can have many positive effects on the economy.

FIGURE 4.11 The breakdown of users of the nanofabrication facility at the University of California, Santa Barbara (UCSB), over 10 months in 2011, by type of user. SOURCE: National Nanotechnology Infrastructure Network, 2011, NNIN Annual Report: March 2011-Dec 2011, http://www.nnin.org/sites/default/files/NNIN_year_8.pdf, Image 139 e.

BOX 4.1
Materials Research Conducted on the International Space Station (ISS)

    The International Space Station (Figure 4.1.1) is a multinational facility that was assembled between 1998 and 2011 and has been continuously occupied since 2000. The ISS hosts a wide variety of laboratory facilities to enable discovery and innovation, with the U.S. portion designated as a national lab in 2005.

FIGURE 4.1.1 The International Space Station. SOURCE: Courtesy of NASA.

    NASA and CASIS (the Center for the Advancement of Science in Space, the managing entity of the ISS U.S. National Lab) support a materials science program on the Space Station. Investigators in the United States have access to this orbiting laboratory for conducting long-duration experiments in microgravity, where effects including convection, buoyancy-driven flow, and sedimentation are nearly negligible.1 As an example, a project to develop better flame-retardant textiles compared combustion in Earth's gravity to that in microgravity, where diffusion-dominated flames result (the difference between a candle flame on Earth and one on the Space Station). The relative ease of access to the external space environment from the ISS also enables a variety of materials exposure experiments important to the space technology community.

    Current ISS facilities for materials science research include the Low Gradient Furnace; Solidification and Quenching Furnace; Microgravity Science Glovebox; Pore Formation and Mobility Investigation Apparatus; Solidification Using a Baffle in Sealed Ampoules (a modification to include CVD capabilities for 2D material growth is under study); Coarsening in Solid/Liquid Mixtures Apparatus; Transparent Alloys Experiment; Electromagnetic Levitator; Electrostatic Levitation Furnace; 3D Polymer Printer; and MISSE (Materials on ISS Experiment), which hosts external exposure experiments. Recommended new materials research facilities, as described in the MaterialsLab Strategic Plan,2 include a Granular Materials Facility, a Brazing and/or Welding Facility, Electrolysis of Molten Glasses, Diffusion Measurements, a Float Zone Furnace, a Biomaterials Facility, and 3D Bioprinting.

    The primary selection criterion for proposals submitted to the ISS National Lab is that the research must require the persistent microgravity of the ISS or another unique property of the space environment and must meet applicable feasibility and safety requirements. Investigators wanting to learn more about opportunities in materials research on the ISS may contact either NASA's ISS Program Office or CASIS.

1 See http://www.spacestationresearch.com/, accessed March 6, 2018.
2 See https://www.nasa.gov/feature/nasa-selects-16-proposals-for-materialslab-investigations-aboard-the-international-space, accessed March 6, 2018.

4.5.2 General Laboratory Infrastructure

    The equipping of an experimental laboratory within a university is usually accomplished when the faculty member first joins the university. A similar dynamic occurs at a national laboratory or in industry, where more base infrastructure exists but the opportunities for equipment refresh are limited. Although there are opportunities at funding agencies for researchers to secure grants to acquire new equipment, the number of opportunities and the total level of funding available are limited. Similarly, at universities, there are limited resources available to replace or upgrade intermediate-scale equipment or to revitalize physical infrastructure (HVAC, lab modernization, etc.). A consequence of these limitations is that acquiring everyday experimental capabilities, such as furnaces, tensile frames, and lasers, is a challenge, which means these laboratories become obsolete over time. The difference is particularly striking when visiting university laboratory facilities in other countries, where a stronger commitment to infrastructure revitalization often is evident.

4.5.3 Midscale Instrumentation/Facilities

    Midscale research facilities include many of the characterization, synthesis, and processing facilities discussed in earlier sections, and they complement the national user facilities discussed below. Many research universities operate characterization and fabrication facilities,64 and these are supported in part by the university or on a full-cost-recovery basis by users. The acquisition cost of new instruments, while significant, has become just one component in meeting the escalating costs of operating a user facility.65 Annual maintenance and the need for dedicated technical staff are becoming increasingly expensive. While federal agency initiatives, such as the DOE Basic Energy Sciences (BES) nanoscience centers and the NSF Materials Innovation Platforms (MIP), are to be commended, the loss of such facilities at universities will limit progress, especially in the area of instrument and technique development. There is a pressing need for a new national strategy on how to make new instruments available to a wide user base, meet the operational costs, and continue to stimulate creativity and development.66

    Midscale funding for facilities,67 in the range between a beam line or large magnet (tens of millions of dollars) and a full facility (in the billion-dollar range), has been a recognized challenge for some time.68 The ability to study materials at extreme conditions—for example, in high magnetic fields while under extreme pressure, at very low or high temperature, with light or neutron scattering, or with scanning probes—has become an important direction of materials research on a global scale and is a prime example of the midscale funding gap. The capability to produce the desired environment is often beyond the scale of what a principal investigator (PI) can afford and also not within the budget of large-scale user facilities. For example, at the National High Magnetic Field Laboratory (MagLab), midscale projects such as the superconducting 32 T magnet and the series-connected hybrid 41.5 T magnet cost several tens of millions of dollars to design and build today, with running costs in the millions.69

64 See, for example, Science and Engineering Indicators 2018, https://www.nsf.gov/statistics/2018/nsb20181/report/sections/academic-research-and-development/highlights#infrastructure-for-academic-r-d, accessed December 3, 2018.
65 See Chapter 3, "Instrumentation and Universities," which outlines the various costs for facilities, in National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, 2006, Advanced Research Instrumentation and Facilities, The National Academies Press, Washington, D.C., https://doi.org/10.17226/11520.
66 Midsize Facilities: The Infrastructure for Materials Research, 2006, https://www.nap.edu/catalog/11336/midsize-facilities-the-infrastructure-for-materials-research.
67 Workshop Report on Midscale Instrumentation to Accelerate Progress in Quantum Materials, http://physics-astronomy.jhu.edu/miqm/.
68 See, for example, National Science and Technology Council, 1995, Final Report on Academic Research Infrastructure: A Federal Plan for Renewal, National Science and Technology Council, Washington, D.C.

    Another example is the challenge of developing next-generation high-field magnets, the funding for which typically also falls in the $4 million to $100 million midscale range. Novel growth capabilities also fall into this category, especially when addressing integration. For example, the control of interfaces in complex quantum heterostructures or devices becomes extremely critical, with the interface often dictating the properties of the heterostructure. Control of the interface will also help researchers understand the wavefunctions on either side of the interface, leading to the ability to design and synthesize functional quantum structures. This requires the integration, in a single system, of an advanced toolset for materials synthesis, interface and surface control, and in situ characterization.

4.5.4 Nanoscale Science Research Centers

    In the last decade, a number of nanoscale science research centers have emerged whose scale is beyond that of midscale facilities at research universities. On a national scale, the DOE Office of Science's Scientific User Facilities Division operates five Nanoscale Science Research Centers (NSRCs) as user facilities located at National Laboratories, and the NSF operates the National Nanotechnology Coordinated Infrastructure at 16 universities. The National Cancer Institute's Nanotechnology Characterization Laboratory is another midscale facility, as is the National Institute of Standards and Technology's Center for Nanoscale Science and Technology. The five NSRCs are user centers for interdisciplinary research at the nanoscale. Each center has particular expertise and capabilities in selected areas, such as synthesis and characterization of nanomaterials; catalysis; theory, modeling, and simulation; electronic materials; nanoscale photonics; soft and biological materials; imaging and spectroscopy; and nanoscale integration. This array of centers, sponsored by various agencies, has been very successful, as judged by the high oversubscription of their use and by the many important results that they have produced and reported. This leads to the conclusion that the expansion of this type of center would be a valuable asset to promote materials research in the United States. Such facilities not only empower U.S. researchers but also attract valuable international exchange and collaborations. Furthermore, organized collaboration and planning among existing and new centers as to the nature and types of facilities that they acquire and maintain would also be valuable.

4.5.5 X-Ray Light Sources

    From the early days of synchrotron science, it was clear that light sources would have a large impact on the field of materials science. As is well documented, that early promise has been fulfilled many times over. Further, light sources have been improving at a rapid rate—the rate of increase in brightness of X-ray sources over the last 30 years exceeds that of Moore's law for transistors in the same period.
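    Read as compound growth, that comparison is easy to make concrete. In the snippet below, the 1.5-year doubling time is the familiar Moore's-law figure, while the 1-year doubling time for brightness is a hypothetical placeholder standing in for "faster than Moore's law," not a measured property of any source.

```python
"""Illustrative arithmetic only: compound growth at two assumed doubling times."""
years = 30
moore = 2 ** (years / 1.5)        # transistor counts: ~1e6 over 30 years
brightness = 2 ** (years / 1.0)   # ~1e9 over 30 years at the assumed faster rate
print(f"transistors: x{moore:.0e}, source brightness: x{brightness:.0e}")
```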
    The fulfillment of the promise, and the exciting future possibilities, of synchrotron light sources have been amply documented, for example, in the DOE reports Next Generation Photon Sources for Grand Challenges in Science and Energy: Report of the Workshop on Solving Science and Energy Grand Challenges with Next-Generation Photon Sources70 and the Report of the BESAC Subcommittee on Future X-Ray Light Sources.71 This rapid improvement in brightness has driven ever more impactful uses of X rays to study the structure and function of materials.

69 See https://nationalmaglab.org/magnet-development/magnet-science-technology, accessed December 3, 2018.
70 See https://science.energy.gov/~/media/bes/besac/pdf/Ngps_rpt.pdf, accessed July 3, 2018.
71 See https://science.energy.gov/~/media/bes/besac/pdf/Reports/Future_Light_Sources_report_BESAC_approved_72513.pdf, accessed July 3, 2018.

    Most recently, the United States has seen the commissioning of the new National Synchrotron Light Source II (Brookhaven National Laboratory [BNL]), with brightness significantly higher than the other U.S. synchrotron light sources, and future upgrades are planned for two of the existing sources, the Advanced Photon Source Upgrade (APS-U) and the Advanced Light Source Upgrade (ALS-U), as shown in Figure 4.12. The accelerator lattice design for a diffraction-limited synchrotron, pioneered at MAX IV in Sweden, is being implemented in upgrades to existing synchrotrons and in new facilities such as the Beijing high-energy synchrotron, which will be collocated with a major supercomputing and quantum materials effort.

    A principal development of the last decade has been the emergence of X-ray free electron lasers as a complement to synchrotrons, notably the Linac Coherent Light Source (LCLS) and the future LCLS-II and its high-energy upgrade, LCLS-II-HE, at SLAC in the United States. New X-ray free electron lasers have been built or are under construction at DESY (Germany), PSI (Switzerland), CAS Shanghai (China), and elsewhere. Collectively, these new ultrabright sources will drive further advances in the techniques, enabling transformative studies of materials with nanoscale resolution, under operating conditions, and on ultrafast time scales. The United States had a significant fraction of all the world-leading capabilities 20 years ago, but that lead has eroded, and today's landscape is one of intense competition from both Europe and Asia.

    In addition to the increasing brightness of X-ray light sources, a second technological development has revolutionized the use of X-rays for materials science—the use of area detectors. Area detectors have enabled very rapid acquisition of data, allowing surveys of large swaths of reciprocal space to be undertaken and new imaging modalities, such as coherent diffraction imaging, to be developed. The former has enabled, for example, tomographic studies of crystallographic grain orientation and studies of small distortions in crystal structures, while the latter has enabled studies of strain fields in nanoparticles and nanoscale semiconductors.

FIGURE 4.12 Time-average brightness curves for selected existing (solid lines) and future (dashed curves) U.S. and international light sources. The plot illustrates the competitiveness of the international scene in both synchrotron and free electron laser light sources. SOURCE: Deutsches Elektronen-Synchrotron, Media Database, https://media.desy.de/DESYmediabank/ConvertAssets/Peak_Brillianz.jpg, © DESY.
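    Coherent diffraction imaging is possible because the lost information is recoverable: an area detector records only diffraction magnitudes, and the missing phases are reconstructed iteratively. The sketch below shows the classic error-reduction loop on synthetic data; the object, support region, and iteration count are arbitrary illustrative choices, and practical reconstructions rely on more robust variants (e.g., hybrid input-output) together with careful oversampling.

```python
"""Sketch of iterative phase retrieval (error reduction) on synthetic data."""
import numpy as np

rng = np.random.default_rng(0)

# Synthetic test object: a small bright block inside a known support region.
n = 64
obj_true = np.zeros((n, n))
obj_true[28:36, 26:38] = 1.0
support = np.zeros((n, n), dtype=bool)
support[20:44, 18:46] = True                       # generous known support

measured_mag = np.abs(np.fft.fft2(obj_true))       # detector records magnitudes only

field = measured_mag * np.exp(2j * np.pi * rng.random((n, n)))  # random phases
for _ in range(300):
    obj = np.fft.ifft2(field).real                 # back to object space
    obj = np.where(support, np.clip(obj, 0.0, None), 0.0)  # support + positivity
    field_model = np.fft.fft2(obj)                 # forward model
    field = measured_mag * np.exp(1j * np.angle(field_model))  # keep data magnitudes

# Error reduction can stagnate or converge to a twin image, so report the
# residual rather than assuming success.
err = np.linalg.norm(obj - obj_true) / np.linalg.norm(obj_true)
print(f"relative reconstruction error: {err:.3f}")
```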

    The broad photon energy range available (from the far IR to hard X ray) and the intense brightness of the beams, which allows the photon beams to be tailored to specific experimental geometries and environments, makes X-ray light sources near-ideal probes of the structure and function of materials. The robust suite of experimental capabilities represented by APS, ALS, BNL, LCLS, and the Stanford Synchrotron Radiation Lightsource (SSRL) have enabled important contributions in quantum materials (superconductivity to graphene), energy storage (solid electrolyte interphase formation), self- assembly, advanced microelectronics (EUV lithography and strain engineering), as well as studies in extreme environments (high pressure and high magnetic fields) described elsewhere in this report. As the field moves toward the capability to fully integrate computational materials science, synthesis, and advanced manufacturing for real-world performance, the microscopic characterization of structure and dynamics enabled by the next generation of instruments and upgraded sources will provide the crucial link to enable materials by design. 4.5.6 Neutrons The past decade has seen a revitalization of neutron sciences in the United States. A major stimulus has been the beginnings of operation of the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL), which now operates routinely at powers of over 1 MW and produces the world’s highest peak flux neutron pulses for beam research on materials. The SNS first target station now operates with 19 specialized state-of-the-art instruments dedicated to materials research, spanning techniques aimed at structures at the meso-, nano-, or atomic-length scales, and dynamics on the micro- to picosecond time scales. At the same time, the continuous reactor sources of neutrons, including the NIST Center for Neutron Research (NCNR) and the HFIR at Oak Ridge, have seen significant improvements in the availability of cold neutron instrumentation. This has greatly increased the ability to perform unique investigations of “large-scale structures,” such as those found in polymers, biomaterials, and solid-state nanoscale systems, as well as measurements of low-energy dynamics with excellent resolution and signal to noise. Permission Pending FIGURE 4.13 Map of peak brightness and average brightness that represent figures of merit for different classes of experiment for existing (solid) and planned (open) sources in the United States (blue), Asia (red), Australia (yellow), and Europe (green). SNS-FTS (Spallation Neutron Source—First Target Station, Oak Ridge National Laboratory, Tennessee), SNS-STS (Spallation Neutron Source—Second Target Station), ESS (European Spallation Source, Lund, Sweden), ISIS (ISIS Neutron and Muon Source, Rutherford Appleton Laboratory, Oxfordshire, UK), CSNS (China Spallation Neutron Source, Guangdon, China), HFIR (High Flux Isotop Reactor, Oak Ridge National Laboratory, Tennessee), ILL (Institut Laue- Langevin, Grenoble, France), FRM-II (Forschungs-Neutronenquelle Heinz Maier-Leibnitz -II, Garching, Germany), J-PARC (Japan Proton Accelerator Research Complex, Tokai, Japan), OPAL (Open-Pool Australian Lightwater Reactor, Sydney, Australia). SOURCE: Steve Nagler ORNL. Figure 4.13 shows a comparison of selected neutron sources worldwide along the two axes of source performance, average brightness and peak brightness for a typical thermal neutron wavelength. PREPUBLICATION COPY – SUBJECT TO FURTHER EDITORIAL CORRECTION 4-36

    Depending on the nature of the phenomena being studied and the design of the instrumentation (fixed wavelength or time-of-flight), one or the other metric is more appropriate. Oak Ridge National Laboratory's HFIR and the NCNR are competitive in terms of average brightness, while SNS offers high performance in peak brightness (it should be noted that the Japanese source J-PARC has not yet achieved its design power). Planned upgrades at SNS to increase the power to 2 MW and add a second, short-pulse, long-wavelength target station will preserve the competitive position even in the context of new facilities in Europe and China.

    The large amount of data produced by modern neutron scattering instruments has in turn produced its own set of challenges, and researchers are starting to see early benefits of coupling high-performance computing to neutron scattering data analysis. This trend is also present for X-ray sources and microscopies; tools for integrating large volumes of data with modeling and simulation in real time to guide experiments are leading to closer coupling of experiment and theory.

    The advances in neutron scattering sources and instrumentation discussed above have enabled substantial scientific advances, spanning the range from fundamental discoveries about the nature of novel matter to new materials with specific technological applications. Unconventional superconductivity has been a forefront materials problem for three decades, and within the past decade it received a major impetus with the surprising discovery of iron-based superconductors. Almost immediately, neutron diffraction was used to elucidate the magnetic structure of parent compounds of several families of iron-based materials, showing that the ordering wave vector in most cases differed from that of the older cuprates. Detailed mapping of the phase diagrams uncovered regions of phase separation and evidence for stripe order. Inelastic neutron scattering showed the existence of a resonant magnetic excitation in the superconducting materials and found a striking relationship between the transition temperature and resonant frequency across many different types of unconventional superconductors.

    The investigation of quantum materials is now a burgeoning forefront of research on solids. The field has been transformed by the developing understanding of the key role played by topology and by the possibility that topological materials or excitations might play an important future role in new technologies such as quantum computation. Neutron scattering has provided crucial information for much of this effort. The role of quantum fluctuations, and in particular fractionalized excitations, has been an overriding theme in the problem of quantum spin liquids. Inelastic neutron scattering has shown evidence for several different fractional excitations: spinons in Herbertsmithite,72 which is possibly an example of a Heisenberg quantum spin liquid; magnetic Majorana fermions in α-RuCl3, which is believed to be proximate to a Kitaev quantum spin liquid; and excitations that are formally equivalent to the elusive magnetic monopoles in so-called spin ice. Evidence for topological structures in the form of a magnetic skyrmion lattice was discovered by small-angle neutron scattering, and skyrmions now represent a promising new direction for spintronics applications.
    Neutron diffraction measurements of magnetic structures in various multiferroic materials have shown how chirality and frustration may play a role in multiferroic phenomena. Polarized neutron reflectivity has been an especially valuable tool for studying buried interfaces, and it has shown how exchange bias in magnetic multilayers can lead to an interfacial region that is magnetically different from the bulk surroundings. In some cases, the interface can induce ferromagnetism in an interface layer one atom thick. Studies of interfaces between topological insulators and ferromagnetic insulators showed that shallow ferromagnetic regions can be induced in the topological insulator, and such structures might enable magnetic control via the electric field for new types of technologies.

    Neutrons have had a large impact in the realm of functional materials, particularly thermoelectrics, where careful measurements of phonon anharmonicities have revealed a path to producing greatly improved materials. Similarly, a deeper understanding of the requirements for improvement in energy storage materials has come about through a combination of conventional and operando diffraction studies of batteries and related materials.

72 Herbertsmithite is a mineral with the chemical formula ZnCu3(OH)6Cl2.

    Neutron imaging has been particularly useful for understanding the inner workings of fuel cells. The deep penetration of neutrons into most materials has enabled significant new insights into the microscopic origin of the properties of metal alloys, including the deformation and plasticity of conventional alloys and the phase transformation behavior of new high-entropy alloys. Neutrons have also been used to characterize the microstructure, and the consequent effects on mechanical properties, of additively manufactured components. The structure of porous materials has been investigated over a large range of length scales using both diffraction and small-angle scattering. Particularly important work has been done on metal-organic frameworks (MOFs), showing how the separation of light hydrocarbons might be greatly improved. Neutrons have also shed light on the potential of both MOFs and shales for carbon sequestration and hydrogen storage.

    Neutron scattering is ideally suited for studies of biological systems because it is a penetrating and nondestructive probe, providing structural and dynamical information across cellular scales of length and time, spanning from the position of an individual hydrogen atom in a protein to the nano- to mesoscale structure and dynamics of functional complexes and hierarchical assemblies within a cell. At the atomic level, neutrons have provided insight into the role of critical H atoms in the catalytic mechanism of a peroxidase and in the binding of drug targets to proteins. Neutrons have been used in studies of proteins and their complexes because they can detect conformational changes and assembly/disassembly processes under near-physiological conditions. Nanoscale studies of protein-nucleic acid complexes have revealed how methylation events are used as a regulatory mechanism for rRNA folding and have contributed to understanding of the regulatory function of cardiac myosin-binding protein C, a protein vital for maintaining regular heart function. Neutrons also reveal the organization and assembly of biological membranes and their interactions in the cell, providing insights into the structural and mechanical properties of lipid nanodomains and the mechanism of voltage gating, which can impact numerous neurological diseases as well as anesthetic action. The penetrating and nondestructive nature of neutrons has enabled studies investigating the architecture of the plant cell wall, providing new knowledge about the structure of the cellulose microfibril and its breakdown to release sugars for biofuel production. In the emerging thematic area of biological complexity, neutrons have been used to study cellular processes in living cells, a new area of application that has only recently become feasible. This has been achieved through targeted H/D isotope contrast to reveal the formation of nanodomains in plasma membranes and to study the dynamical processes of proteins in vivo.

4.5.7 High Magnetic Field Facilities

    High magnetic fields represent a continuously tunable, reversible, and intrinsically quantum and topological probe of materials. Magnetic fields in the 10 to 100 T range compete with (and thereby effectively probe) the correlation energies of quantum matter and the strong spin-orbit coupling of topological materials.
    Magnetic fields, by virtue of being both a thermodynamic variable and a vector quantity, can separate competing energy scales, as in quantum fluids and quantum spin liquids, and can induce new states of quantum matter (through a quantum phase transition, induced at absolute zero by varying, for example, the magnetic field), such as magnetic Bose-Einstein condensates and spin supersolids. Magnetic fields above 100 T will exceed the quantum limit of many low-carrier-density metals and sufficiently suppress the highest temperature superconductivity to reveal the underlying quantum criticality.73 Moreover, the technological impact of high magnetic field research is increasingly significant: whereas 10 to 20 T was sufficient to reveal fundamental electronic and optoelectronic properties (carrier mass, exciton binding energy, etc.) in silicon and GaAs, analogous studies of the new generation of atomically thin 2D semiconductors (such as MoS2, phosphorene, and the transition metal dichalcogenides) will require 100 to 200 T. When magnetic fields interact with moving charges, they probe a characteristic length scale that decreases as the inverse square root of the magnetic field. High magnetic fields of 20 T can probe spatial features comparable to a 6 nm diameter quantum dot, while fields of 80 T are necessary to shrink this length by another factor of 2. As such, the study of electronic and magnetic phenomena down to atomic dimensions necessitates pushing the present limits of current magnet technology.

73 Quantum criticality: a phase transition brought on by quantum fluctuations at absolute zero.

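    For reference, the length scale in question is the magnetic length, and the standard formula reproduces the numbers quoted above: 5.7 nm at 20 T is comparable to a 6 nm quantum dot, and quadrupling the field to 80 T halves the length.

```latex
% Magnetic length versus field; hbar is the reduced Planck constant and
% e is the elementary charge.
\[
  \ell_B = \sqrt{\frac{\hbar}{eB}} \propto \frac{1}{\sqrt{B}}, \qquad
  \ell_B(20~\mathrm{T}) =
    \sqrt{\frac{1.055\times 10^{-34}~\mathrm{J\,s}}
               {(1.602\times 10^{-19}~\mathrm{C})\,(20~\mathrm{T})}}
    \approx 5.7~\mathrm{nm}, \qquad
  \ell_B(80~\mathrm{T}) \approx 2.9~\mathrm{nm}.
\]
```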
    Three recent studies have looked into the current status of high field magnet research and the potential for future developments. For more detailed information and discussions of the various technical issues, the committee refers to those reports.74 An important direction besides magnet development will be the integration of high fields with beam lines, as indicated by both COHMAG and MagSci. This would allow the investigation of the neutron and X-ray scattering properties of materials in high magnetic fields.75 Currently, the highest field magnet on a beam line worldwide is a 26 T system at the Helmholtz Zentrum Berlin developed by the National High Magnetic Field Laboratory (NHMFL).

4.5.8 Advanced Computational Facilities

    Advanced computational facilities have played a major role in promoting and facilitating the leap forward both in predictive modeling of functional materials and in developing the framework for understanding the characteristics of materials across multiple scales. These facilities, funded mostly by DOE and NSF, are home to the most sophisticated high-performance computers (HPCs), which enable computational materials scientists to carry out detailed simulations of material properties from microscopic to macroscopic length scales and from ultrafast (subfemtosecond) to standard time domains—regions not readily accessible in experiments. Under the umbrella of its Advanced Scientific Computing Research (ASCR) program, DOE continues to maintain several world-class high-performance computing centers, enabling collaborative research in a number of fields, including materials science, at the following: the Argonne Leadership Computing Facility (ALCF), the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL), the Oak Ridge Leadership Computing Facility (OLCF), and the Energy Sciences Network (ESnet) at LBNL. These centers serve the high-performance computational needs of scientists from DOE labs, academia, and industry and maintain a high global profile in cutting-edge computer hardware. They were among the first to achieve petascale (10^15 machine operations per second) capability and are now gearing up to break the exascale (10^18 machine operations per second) barrier. These hardware advances, coupled with a global effort to develop refined computational codes suitable for application to challenging problems at a variety of length and time scales, have enabled simulations of complex phenomena that were unimaginable only a decade ago. The materials science community has been one of the biggest beneficiaries of these advances, given the relevance of computational techniques used by researchers on collaborative projects initiated first under the National Nanotechnology Initiative (NNI) and more recently under the Materials Genome Initiative (MGI).
    Unsurprisingly, materials science researchers are now among the dominant users, comprising over 18 percent of all high-performance computing users. Fruitful interactions between experimentalists and computational scientists, fostered by these two programs among others, would not have been possible without the resources made available by the advanced computational facilities.

74 National Research Council, 2013, High Magnetic Field Science and Its Applications in the United States: Current Status and Future Direction (MagSci), The National Academies Press, Washington, D.C.
75 National Research Council, 2007, Condensed-Matter and Materials Physics: The Science of the World Around Us (COHMAG), The National Academies Press, Washington, D.C., https://doi.org/10.17226/11967.

4.6 CONCLUSION, FINDINGS, AND RECOMMENDATIONS

    No element of investment in materials research is more important than facilities, instrumentation, and infrastructure. With excellent facilities and instrumentation in place, a considerable amount of important research can be done with no further investment. These tools stimulate and unleash creativity and productivity. The following findings and recommendations aim to improve the nation's competitive status in this domain.

Key Finding: Progress in three-dimensional (3D) characterization, computational materials science, and advanced manufacturing and processing has enabled an increasing digitization across disciplines of materials research and has in some cases dramatically accelerated and compressed the time from discovery to inclusion in new products.

Key Recommendation: Federal agencies (including NSF and DOE) with missions aligned with the advancement of additive manufacturing, and other modes of digitally controlled manufacturing, should by 2020 expand investments in materials research for automated materials manufacturing. The increased investments should span the multiple disciplines that support automated materials synthesis and manufacturing, from the most fundamental research to product realization, including experimental and modeling capabilities enabled by advances in computing, with the aim that by 2030 the United States is the leader in the field.

Key Finding: Infrastructure at all levels, from midscale instrumentation for materials characterization, synthesis, and processing with purchase costs of $4 million to $100 million in universities and national laboratories to large-scale research centers such as synchrotron light sources, free electron lasers, neutron scattering sources, high field magnets, and supercomputers, is essential for the health of the U.S. materials science enterprise. Midscale infrastructure, in particular, has been sorely neglected in recent years, and the costs of maintenance and dedicated technical staff have increased enormously.

Key Recommendation: All U.S. government agencies with interests in materials research should implement a national strategy to ensure that university research groups and national laboratories have local access to develop, and continuing support for use of, state-of-the-art midscale instruments and laboratory infrastructure essential for the advancement of materials research. This infrastructure includes materials growth and synthesis facilities, helium liquefiers and recovery systems, cryogen-free cooling systems, and advanced measurement instruments. The agencies should also continue support of large facilities, such as those at Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Argonne National Laboratory, SLAC National Accelerator Laboratory, National Synchrotron Light Source II (Brookhaven National Laboratory), and the National Institute of Standards and Technology, and should engage and invest in long-range planning for upgrades and replacements of existing facilities.

Finding: There is a strong need for educated end users of software in order for the approximations, limitations, and full range of use cases to be appreciated and used toward significant impact on science and engineering. This includes, in particular, methods of machine learning as applied to materials.
Recommendation: Computational materials science training should be part of the core curriculum in undergraduate and graduate training in physics, chemistry, materials science, and related fields, and it should include training in some of the larger computational software packages (not just MATLAB programming). More than one section of this training is recommended to be in the area of machine learning applied to materials.

Finding: Researchers are close to a new world of precision synthesis in which the positions and species of individual atoms, molecules, and defects can be controlled to produce desired properties from the nano- to the macroscale, in both organic and inorganic materials, from sequence-controlled polymerization to molecular beam epitaxy with in situ characterization, scanning probe microscopies, spectroscopy, and angle-resolved photoemission.

Recommendation: All agencies that fund materials research, with NSF and DOE coordinating, should support research in the area of materials precision synthesis, particularly new methods that test the limits of what is fundamentally achievable and a new understanding of whether the levels of precision that can be achieved actually result in desired or interesting properties. The supported research should clarify when and how exquisitely precise synthesis is essential to achieving new functionality in materials. A multiagency workshop in 2020 or earlier could serve to initiate and propel this line of research into the next decade.

Finding: The integration of computational control and automation with advanced characterization techniques has made it possible to build 3D data sets that represent materials digitally with greater fidelity than previously imaginable. Methodologies for 3D characterization and analysis are currently developed locally; universally agreed-upon process flows, tools, and analysis techniques are badly needed.

Recommendation: Federal agencies should invest significant resources in the creation and widespread use of autonomous experimental 3D characterization and in the development of universal and widely shared computational methodologies for advanced registration, reconstruction, classification, and analysis of digital data sets.

Finding: The predictive design and fundamental understanding of the growth of crystalline materials promise enormous potential to advance materials research and technologies. Crystalline materials play a fundamentally important role in modern society and commerce.

Recommendation: The establishment of materials growth hubs that are multiagency efforts (e.g., NSF, DOE, NIST, DOD) is recommended, where it is recognized that agencies with different but mutually beneficial priorities can share and fund knowledge while keeping proprietary information separate. Work at these facilities would not only seek to improve established methods but also initiate new directions. These facilities would serve not only as foundries for the synthesis of materials and the development of new methods but also as homes to digital materials databases and real-world stockpiles of materials that would be available upon request.

Finding: Computer-intensive fields such as artificial intelligence, machine learning, and "big data" collection and analysis are now beginning to have a significant impact in materials science, and researchers are only starting to see the full extent of that impact. To realize the full potential of this revolution, it is essential that researchers have access to the most advanced computer hardware and software.

Finding: In the future, as the scaling of microchips slows, emphasis on maximizing the speed of floating-point operations might not be the best strategy for enabling increases in computing speed. Instead, advanced data analytics, fit-for-purpose computers, and software interfaces (APIs and GUIs) might gain increased importance.
Recommendation: DOE and NSF should begin in 2020 to support a broad computing program to develop "next-generation" and "fit-for-purpose" computers. These computers should not only focus on speed but also include improved data analytics and other capabilities.

The program should also include support to create and maintain new software and software interfaces (APIs and GUIs) and should ensure that the broad materials research (MR) community has access to these tools. This support for code development should go not just to centers but also to single principal investigators (PIs) and small groups.

Finding: International collaboration plays an essential role in materials research, with engagements ranging from individual researchers to more formal institutional and facility partnerships. Examples include the International Space Station, CERN, SESAME, and LIGO. Advantages include pooling resources, promoting diversity in approaches for scientific progress, and scientific diplomacy.

Finding: Keeping in mind the importance of international collaboration, it is also important to note that there is a strong linkage between materials research, economic competitiveness, and national security. The U.S. leadership position has begun to erode, with major materials research initiatives and investments taking place outside the United States. Additionally, major state-of-the-art facilities, such as synchrotron light sources, free electron lasers, neutron scattering sources, high field magnets, and supercomputers, are needed to attract and retain top researchers. Simply put, if the United States does not maintain leadership with major state-of-the-art facilities, the erosion will only accelerate and hinder the nation's ability to play major roles in international collaborations.

Recommendation: The U.S. government should aggressively enhance its support of large research facilities. Facility roadmaps for the funding agencies (NSF, DOE, NIST, and DOD) should reflect a strategy for the coming decade of maintaining or increasing the current leadership role of the United States in major facilities for materials research, while remaining abreast of developments in other countries and seeking cooperation when mutual benefit can be found.
