
Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2003 NAE Symposium on Frontiers of Engineering (2004)

Chapter: Fundamental Limits of Nanotechnology: How Far Down is the Bottom?

Suggested Citation:"Fundamental Limits of Nanotechnology: How Far Down is the Bottom?." National Academy of Engineering. 2004. Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2003 NAE Symposium on Frontiers of Engineering. Washington, DC: The National Academies Press. doi: 10.17226/10926.

FUNDAMENTAL LIMITS OF NANOTECHNOLOGY: HOW FAR DOWN IS THE BOTTOM?

Status, Challenges, and Frontiers of Silicon CMOS Technology

JACK HERGENROTHER
IBM T. J. Watson Research Center
Yorktown Heights, New York

For more than three decades, continued improvements in silicon (Si) transistor density, integration, switching speed and energy, and cost per electronic function have driven the $160-billion semiconductor industry, one of the most dynamic industries in the world. These faster and cheaper technologies have led to fundamental changes in the economies of the United States and other countries around the world. The exponential increase in transistor count that has occurred over the past few decades was accurately predicted by Gordon Moore in 1965. To continue to power the information technology economy, however, the Si industry must remain on the Moore's Law trajectory.

Because Moore's Law has accurately predicted the progress of Si technology over the past 38 years, it is considered a reliable method of predicting future trends. These extrapolated trends set the pace of innovation and define the nature of competition in the Si industry. Progress is now formalized each year in an annual update of the International Technology Roadmap for Semiconductors (ITRS), also known as "The Roadmap" (ITRS, 2003). Table 1 shows the status and key parameters in the 2002 update. The ITRS extends for 15 years, but there are no guarantees that the problems confronting Si technology will be solved over this period; it is simply an assessment of the requirements and technological challenges that will have to be addressed to maintain the current rate of exponential miniaturization. At the current pace, it is widely believed that the industry will reach the end of conventional scaling before the end of 2016, perhaps as early as 2010.
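The cadence behind these projections can be checked in a few lines. This is a minimal sketch using the MPU logic-density column quoted in Table 1 (values in Mtransistors/cm2); it simply verifies that the roadmap assumes a doubling of density per three-year node.

```python
# MPU logic density by technology-node year, taken from Table 1
# (2002 ITRS update), in Mtransistors/cm^2.
itrs_logic_density = {
    2001: 39, 2004: 77, 2007: 154,
    2010: 309, 2013: 617, 2016: 1235,
}

years = sorted(itrs_logic_density)
for y0, y1 in zip(years, years[1:]):
    ratio = itrs_logic_density[y1] / itrs_logic_density[y0]
    # Density is projected to roughly double every node (every 3 years).
    print(f"{y0} -> {y1}: x{ratio:.2f}")

# Equivalent compound annual growth rate sustaining the exponential:
growth = (itrs_logic_density[2016] / itrs_logic_density[2001]) ** (1 / 15)
print(f"compound growth: {growth:.3f}x per year")
```

A doubling every three years corresponds to roughly 26 percent compound growth per year, which is the pace the industry must sustain to stay on the roadmap.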
The device that supports Moore's Law, known as the metal-oxide semiconductor field-effect transistor (MOSFET), comes in both n-channel and p-channel flavors depending on whether the primary current is carried by electrons or holes, respectively. The types and placement of dopants in a MOSFET determine whether it is n-channel or p-channel. The basic structure of the nMOSFET (Figure 1) consists of a moderately p-doped channel formed near the top surface of a polished Si wafer between two heavily n-doped source and drain regions. On top of the channel is a thin insulating layer of silicon dioxide or oxynitride, which separates the heavily n-doped polysilicon gate from the channel. For gate voltages above the threshold voltage VT (typically 0.3 to 0.5 V), electrons are attracted to the gate but remain separated from it by the insulating gate oxide. These electrons form an "inversion layer" that allows a significant electron current to flow from the source to the drain. The magnitude of this "drive current" ID is a key MOSFET performance metric.

TABLE 1  2001 Status and Summary of Key Parameters from the 2002 ITRS Update

Year | Technology Node | MPU Gate Length (nm) | ASIC Gate Length (nm) | Logic Density (Mtransistors/cm2) | SRAM Density (Mtransistors/cm2) | DRAM Density (Gbit/cm2)
2001 | "130 nm" |  65 |  90 |   39 |  184 |  0.55
2004 |  "90 nm" |  37 |  53 |   77 |  393 |  1.49
2007 |  "65 nm" |  25 |  32 |  154 |  827 |  3.03
2010 |  "45 nm" |  18 |  22 |  309 | 1718 |  6.10
2013 |  "32 nm" |  13 |  16 |  617 | 3532 | 18.4
2016 |  "22 nm" |   9 |  11 | 1235 | 7208 | 37.0

Source: ITRS, 2003.

FIGURE 1  Basic structure of a traditional planar nMOSFET: a p-type channel under a thin gate oxide, an n+ polysilicon gate, and silicided n+ source and drain regions.

In digital circuits, MOSFETs essentially behave like switches, changing the current by as much as 12 orders of magnitude via the gate voltage. To enable low-power chips, MOSFETs must be excellent switches and have very small
subthreshold leakage currents (Ioff). This is the undesirable current that leaks from the source to the drain when 0 V is applied to the gate. In today's state-of-the-art MOSFETs, metal silicides, such as cobalt silicide (CoSi2) and nickel silicide (NiSi), are used to increase the electrical conductivity of the source, drain, and gate. Individual MOSFETs are electrically separated by silicon dioxide deposited in shallow trenches and connected by many levels of miniature copper wire interconnects.

nMOSFETs and pMOSFETs are combined in circuits to form complementary MOS (CMOS) technology. The general principle of traditional CMOS scaling is the reduction of the horizontal and vertical MOSFET dimensions as well as the operating voltage by the same factor (typically 0.7x per technology generation every two to three years) to provide simultaneous improvements of 2x in areal transistor density, 0.7x in switching time, and 0.5x in switching energy.

Figure 2 shows the intrinsic switching time of nMOSFETs plotted against their physical gate lengths LG. The switching time is essentially the time it takes a first transistor carrying a current ID to charge the input gate capacitance CG of an identical transistor to the supply voltage VDD.

FIGURE 2  The trend of intrinsic switching time CGVDD/ID versus physical gate length. The fastest Si MOSFETs produced to date have intrinsic switching times well under 500 fs. Source: Bohr, 2002. Reprinted with permission.

Note that, although the fastest microprocessors being produced today have clock frequencies in the range of 2 GHz (a clock period of 500 ps), the intrinsic switching time
of nMOSFETs is only about 1.5 ps, about 300 times shorter than the clock period. The difference is a consequence of several basic elements of CMOS design architecture: (1) transistors must often drive more than one input gate capacitance; (2) interconnect capacitances must also be charged; (3) typically, about 20 logic stages are cascaded and have to be switched within one clock cycle; and (4) the clock frequency must accommodate the slowest of these cascaded paths. The intrinsic switching time is the critical metric (as opposed to the inverse clock frequency) to keep in mind when comparing the performance of Si MOSFETs with alternative nanodevices intended for logic applications. It is likely that today's research devices with intrinsic switching times under 500 fs will be produced in volume before the end of the decade.

Figure 3 shows the switching energy CGVDD2, a simple metric that leaves out second-order effects. Note that the fundamental limit on switching-energy transfer during a binary switching transition is kBT ln 2 ≈ 3 × 10^-21 J (Meindl and Davis, 2000). This is roughly five orders of magnitude smaller than the switching energy of Si devices currently being manufactured, indicating that this truly fundamental limit is not an imminent concern.

FIGURE 3  The trend of switching energy CGVDD2 versus physical gate length. Si MOSFETs with switching energies of less than 30 aJ have been fabricated. Source: Bohr, 2002. Reprinted with permission.

When discussing state-of-the-art CMOS devices, one must be careful to distinguish research devices from the devices at the cutting edge of volume
manufacturing. Manufactured devices must be so finely tuned that they are able to pass a long list of stringent performance, yield, and reliability tests. Depending on the structural, material, or process innovation in question, it may take anywhere from 5 to 15 years of significant, and in some cases worldwide, investment to prepare a new research concept for volume manufacturing. State-of-the-art bulk Si (Thompson et al., 2002) and Si-on-insulator (SOI) technologies (Khare et al., 2002), which are very near manufacturing, feature gate lengths of 50 nm or smaller and physical gate oxide thicknesses of 12 Å (just a few Si-Si atomic bond lengths) and can pack a six-transistor static RAM (SRAM) cell within a 1.0 µm2 area. These devices are interconnected with as many as 10 layers of Cu wiring. The critical features are patterned in engineering marvels known as scanners that use deep ultraviolet light with a wavelength of 193 nm. The entire process is carried out on 300-mm diameter wafers in fabrication lines that cost about $3 billion each to build. Note that in recent years physical gate lengths have been shrunk faster than the technology node would suggest. This "hyperscaling" of the gate length, achieved via photolithographic enhancements that are suitable for the reduction of isolated feature sizes, as well as controllable linewidth reduction ("trimming") techniques, has accelerated performance improvements. Super-scaled bulk MOSFET research devices (in which there is not as high a premium placed on linewidth control and optimized performance) have been demonstrated with gate lengths as small as 15 nm (Hokazono et al., 2002; Yu et al., 2001). Although these devices have record-breaking intrinsic switching times and switching energies, they must still undergo significant optimization over the next five to seven years to have a shot at meeting the ITRS performance targets.
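The figures of merit discussed above (intrinsic delay CG·VDD/ID, switching energy CG·VDD2, and the kBT ln 2 floor) are simple enough to evaluate directly. A hedged sketch follows: the 0.3 fJ "manufactured device" energy is an assumed representative value chosen only to be consistent with the five-orders-of-magnitude statement, not a number quoted in the text.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Fundamental limit on energy transfer per binary switching transition.
landauer = k_B * T * math.log(2)
print(f"k_B*T*ln2 = {landauer:.2e} J")  # ~3e-21 J, as cited in the text

# Assumed representative switching energy of a manufactured device
# (hypothetical ~0.3 fJ, consistent with the "five orders" gap).
e_switch = 0.3e-15
print(f"gap to fundamental limit: {e_switch / landauer:.1e}x")

# Intrinsic delay vs. clock period: ~1.5 ps vs. 500 ps (2 GHz),
# the ~300x architectural gap explained by fanout, wires, and
# ~20 cascaded logic stages per cycle.
tau, t_clk = 1.5e-12, 500e-12
print(f"clock period / intrinsic delay = {t_clk / tau:.0f}")
```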
Showstoppers to continued scaling have been predicted for three decades, but a history of innovation has sustained Moore's Law in spite of these challenges. However, there are growing signs today that MOS transistors are beginning to reach their traditional scaling limits (ITRS, 2003; Meindl, 2001). The most notable sign is the ever-increasing subthreshold leakage current Ioff, which results from thermionic emission over the source barrier. This thermionic mechanism dictates that the exponential decay of the subthreshold current with gate voltage can be no faster than (kBT/q) ln 10, or about 60 mV per decade of current at room temperature. In practice, MOSFETs do not turn off as abruptly as suggested by this fundamental 60 mV/decade limit because of capacitive divider effects and, more important, various short-channel effects that become significant for sub-100 nm channel lengths. As gate and channel lengths continue to shrink, it is becoming increasingly difficult to design devices with proper electrostatic scalability, that is, reasonable immunity to these short-channel effects. The short-channel effects that impact Ioff fall into three categories: (1) threshold voltage roll-off, the decrease of threshold voltage with decreasing channel length; (2) drain-induced barrier lowering (DIBL), the decrease of threshold voltage with increasing drain voltage; and (3) degradation of the subthreshold swing with decreasing channel
length. All three effects are related in that they are the simple result of two-dimensional electrostatic effects, and they can dramatically increase the subthreshold leakage current. Because there is a fundamental limit to the subthreshold swing at room temperature, short-channel effects put a lower limit on the threshold voltage of the target device.

Until recently, VT has been scaled downward in proportion to the supply voltage VDD. The floor on VT has a significant impact on the MOSFET drive current, which depends on the gate overdrive VDD - VT. To maintain the drive-current performance dictated by The Roadmap, subthreshold leakage targets have been significantly relaxed. For example, in the 0.25 µm generation that entered manufacturing circa 1997, Ioff was maintained at approximately 1 nA/µm. In the 90 nm generation soon to be ready for manufacturing, Ioff values for high-performance devices are typically about 40 nA/µm. If the leakage goes much higher, it is unlikely that the power generated by chips with hundreds of millions of these transistors can be tolerated. In fact, difficulty in controlling the subthreshold leakage current has already led to a scaling bifurcation between high-performance and low-power transistors that has been formally adopted in the ITRS.

Another significant challenge to continued scaling is the rapid increase of the quantum mechanical tunneling current that passes through the gate oxide as it is progressively thinned. This gate current is rapidly approaching the size of the subthreshold leakage. Current 90 nm generation high-performance devices soon to be in manufacturing have 12 Å SiO2-based gate oxides that are within at most 2 Å of the limit imposed by gate leakage. Although this gate leakage does not destroy the functionality of digital circuits, in the absence of a solution, chip power-dissipation levels will be unacceptably high.
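The leakage arithmetic above follows directly from the 60 mV/decade limit. A minimal sketch: the ideal swing is S = (kBT/q) ln 10, and since Ioff scales as 10^(-VT/S), the 40x leakage relaxation between the 0.25 µm and 90 nm generations buys only a modest reduction in threshold voltage.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

# Ideal (thermionic-limited) subthreshold swing at room temperature.
S = (k_B * T / q) * math.log(10) * 1000  # mV/decade
print(f"ideal swing: {S:.1f} mV/decade")  # ~60 mV/decade

# With Ioff proportional to 10**(-VT/S), going from 1 nA/um to
# 40 nA/um of allowed leakage permits only this much VT reduction:
delta_VT = S * math.log10(40 / 1)  # mV
print(f"VT headroom bought by 40x leakage relaxation: {delta_VT:.0f} mV")
```

About 95 mV of threshold-voltage headroom for a 40x leakage penalty illustrates why subthreshold leakage, not the kBT ln 2 energy floor, is the binding constraint on scaling.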
Lithography, which has been one of the key enablers of scaling, has recently emerged as one of the key challenges to scaling. The Rayleigh equation defines the half-pitch resolution as R = k1(λ/NA), where k1 is a coefficient that accounts for the effectiveness of resolution enhancement techniques, λ is the wavelength of the light used, and NA is the numerical aperture of the lens. In the past, the evolution of lithography occurred primarily through the adoption of successively shorter exposure wavelengths. More recently, because of the increasing difficulty of moving to shorter wavelengths, progress in lithography has depended somewhat more heavily on resolution enhancement techniques (such as the use of phase-shift masks and customized illumination techniques) and the development of higher-NA lenses.

At the 90 nm node, 193 nm wavelength lithography is used for critical mask levels, whereas more mature 248 nm lithography is widely used for other mask levels. Although 157 nm lithography was originally targeted for introduction at the 65 nm node (mainstream volume manufacturing in 2007), significant unforeseen problems have arisen that have delayed its availability. As a result, lithographers are facing the challenge of extending 193 nm lithography to the
65 nm node. The lithography required for the 45 nm node (2010) will have to be a very mature 157 nm lithography or a next-generation lithography (NGL) technique such as extreme ultraviolet (EUV) lithography (Attwood et al., 2001) or electron-projection lithography (EPL) (Harriott, 1999). However, the step from 157 nm to 13.5 nm (EUV) is huge, because virtually all materials absorb rather than transmit light at 13.5 nm wavelengths; thus, the only option is reflection optics. With such a short wavelength, EUV allows the use of conservative k1 and NA values, but it faces significant challenges: (1) source power and photoresist sensitivity; (2) the production of defect-free reflection masks; and (3) contamination control in reflection optics.

Even if advanced lithographic techniques can be developed to pattern the required ultrafine features in photoresist, these features must be transferred reliably and with high fidelity into the materials that make up the transistors and interconnect. This will require significant advances in Si process technology, in particular in etching and film deposition. A wide range of processes are under development to meet these future needs (Agnello, 2002). Among them are self-limited growth and etch processes, such as atomic-layer deposition, that have the potential to provide atomic-level control (Ahonen et al., 1980; Suntola and Antson, 1977). In MOSFETs at the limit of scaling, it is also critical to provide precise dopant-diffusion control while also enabling the high levels of dopant activation required for low parasitic resistances.

The frontiers of research on Si technology feature a wide spectrum of work aimed at enhancing CMOS performance, solving (or at least postponing) some of the major issues, such as gate leakage, and providing options in case the traditional scaling path falters.
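The Rayleigh relation is easy to evaluate numerically. In this sketch the k1 and NA values are assumed illustrative numbers (not taken from the text); they show why 193 nm light can plausibly be stretched to the 65 nm node and why 13.5 nm EUV leaves so much margin even with conservative optics.

```python
def half_pitch(wavelength_nm, k1, na):
    """Rayleigh half-pitch resolution: R = k1 * (lambda / NA)."""
    return k1 * wavelength_nm / na

# 193 nm with aggressive resolution-enhancement techniques
# (assumed k1 ~ 0.30 and NA ~ 0.85 -- illustrative values only).
print(f"{half_pitch(193, 0.30, 0.85):.0f} nm")  # roughly 68 nm half-pitch

# EUV at 13.5 nm with deliberately conservative optics
# (assumed k1 ~ 0.5, NA ~ 0.25 -- illustrative values only).
print(f"{half_pitch(13.5, 0.5, 0.25):.0f} nm")  # 27 nm half-pitch
```

The comparison makes the trade concrete: at 193 nm, every fraction of k1 must be clawed back with phase-shift masks and customized illumination, while the tenfold wavelength reduction of EUV relaxes both k1 and NA at the cost of an entirely reflective optical train.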
One of the most active areas of research is the use of strain to enhance carrier-transport properties (Hwang et al., 2003; Rim et al., 2002a, 2002b; Thompson et al., 2002; Xiang et al., 2003). Si under biaxial tensile strain is well known to exhibit higher electron and hole mobilities than unstrained Si. Devices built with strained-Si channels have shown drive currents as much as 20 percent higher than in unstrained-Si channels (Hwang et al., 2003; Rim et al., 2002a; Thompson et al., 2002; Xiang et al., 2003). A variety of approaches have been used to provide strained-Si channels. The dominant approaches include: (1) the formation of a thin, strained-Si layer on top of a carefully engineered, relaxed SiGe graded buffer layer (Fitzgerald et al., 1992); and (2) the formation of a strained-Si layer on top of relaxed SiGe on insulator (SGOI) (Huang et al., 2001; Mizuno et al., 2001). In both cases, the Si lattice conforms to the larger lattice constant of the SiGe, which essentially pulls the Si into tensile strain. The strained-Si layer must be quite thin to prevent strain relief from occurring through dislocations. Currently, intense efforts are being made in several areas: (1) studying how much of the mobility enhancement translates to an improvement in drive current in short-channel devices; (2) reducing the defect densities of strained-Si starting materials; and (3) solving integration issues (e.g., rapid arsenic diffusion) that result from the presence of SiGe in the active area of the

device. In spite of these problems, strained Si is one of the most promising nontraditional technology options.

A much more radical method to improve carrier-transport properties involves the use of germanium itself as the channel material (Chui et al., 2002; Shang et al., 2002). This approach is driven primarily by the significantly higher bulk mobilities of Ge for electrons (2x) and holes (4x). One of the primary challenges is the lack of a stable native Ge oxide, which makes it difficult to passivate the surface of Ge. Renewed interest in Ge MOSFETs is due in large part to recent advances in the understanding of oxynitrides and high-k dielectrics (initially aimed at Si-based transistors), suggesting that these materials might be used effectively to passivate the surface of Ge. Work on Ge MOSFETs is still in a very early phase. At the time of this writing, only Ge pMOSFETs have been demonstrated, with several groups attempting to build the first Ge nMOSFETs.

Returning to the imminent challenges facing Moore's Law, one possible solution to the difficult gate-leakage problem is the introduction of high-k (high dielectric constant) gate dielectrics (Wallace and Wilk, 2002). Many different high-k materials, such as Si3N4, HfO2, ZrO2, TiO2, Ta2O5, La2O3, and Y2O3, have been studied for use as a MOSFET gate dielectric. The materials that have received the most attention recently are hafnium oxide (HfO2) (Gusev et al., 2001; Hobbs et al., 2003), hafnium silicate (HfSixOy) (Wilk et al., 2000), and nitrided hafnium silicate (HfSixOyNz) (Inumiya et al., 2003; Rotondaro et al., 2002). Because the main objective of scaling the gate oxide is to increase the capacitance density of the gate insulator, high-k dielectrics are attractive because the same capacitance density can be achieved with a physically thicker layer (and, if the new material's barrier height is sufficient, a lower gate leakage).
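The capacitance-density argument can be made concrete with the standard equivalent-oxide-thickness (EOT) relation, EOT = t_phys × (k_SiO2 / k_high-k). A sketch, assuming an illustrative permittivity of k ≈ 20 for the high-k film (reported values for HfO2 vary, so this is an assumption, not a figure from the text):

```python
# EOT of a high-k film: the SiO2 thickness (k_SiO2 ~ 3.9) that would give the
# same capacitance density as a film of physical thickness t_phys_nm and
# relative permittivity k_highk. The k = 20 used below is an assumed,
# illustrative value for an HfO2-like dielectric.

K_SIO2 = 3.9

def eot_nm(t_phys_nm: float, k_highk: float) -> float:
    """Equivalent SiO2 thickness giving the same capacitance density."""
    return t_phys_nm * K_SIO2 / k_highk

# A 3 nm high-k film with k = 20 matches the capacitance of roughly 0.6 nm
# of SiO2, while remaining physically thick enough to suppress the direct
# tunneling that dominates gate leakage in ultrathin SiO2.
print(f"EOT = {eot_nm(3.0, 20.0):.2f} nm")
```

This is exactly why high-k materials attack the gate-leakage problem: tunneling falls off exponentially with physical thickness, while capacitance depends only on EOT.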
However, after 40 years of refinement, the Si/SiO2 system has nearly ideal properties, many of which have yet to be matched by any new high-k dielectric; replacing SiO2 is not as easy as it might appear. Among these requirements are bulk and interface properties comparable to those of SiO2. The new material must also exhibit thermal stability during the temperature cycles required in Si processing, low dopant-diffusion coefficients, and sufficient reliability under constant voltage stress. Although rapid progress has been made, no group to date has demonstrated a sufficiently thin high-k dielectric that preserves the transport properties (i.e., carrier mobilities) of the Si/SiO2 system. An interesting fact about high-k gate dielectrics is that they will be required for moderate-performance, low-power applications (e.g., hand-held communication devices) before they are needed for high-performance applications (e.g., microprocessors).

Another approach to increasing the capacitance density of the doped polySi/gate dielectric/Si stack is to replace the doped polySi in the gate with a metal (Kedzierski et al., 2002; Lee et al., 2002; Ranade et al., 2002; Samavedam et al., 2002; Xiang et al., 2003). This would eliminate undesirable polysilicon-depletion effects, which typically add an effective thickness of 4 to 5 Å to the thickness of the gate oxide. With physical gate-oxide thicknesses currently in the 12 Å range,

there is a significant degradation of capacitance density. One of the challenges of metal-gate technology is that the work function of the gate material is used to set the threshold voltage of the MOSFET. For bulk MOSFETs, a suitable process must be devised that allows the formation of tunable or dual-work-function metal gates (Lee et al., 2002; Ranade et al., 2002).

In recent years, double-gate MOSFETs have received a good deal of attention, and a variety of approaches to building them have been investigated (Guarini et al., 2001; Hisamoto et al., 1998; Wong et al., 1997) (Figure 4). These devices provide several key advantages over traditional planar MOSFETs (Wong et al., 2002): (1) enhanced electrostatic scalability, because two gates are in close proximity to the current-carrying channel; (2) reduced surface electric fields, which can significantly improve the carrier mobilities and the CGVDD/ID performance; and (3) because the electrostatics are controlled with geometry (rather than with doping, as in the planar MOSFET), they can be built with undoped channels. This eliminates the problems caused by statistical fluctuations in the number of dopants in the device channel, which is becoming an increasing concern for planar MOSFETs. Double-gate MOSFETs present difficult fabrication challenges. The most significant of these is that the Si channel must be made sufficiently thin (no thicker than about 2/3 LG) and must be precisely controlled. In addition, the two gates must be aligned to each other as well as to the source-drain doping to minimize the parasitic capacitance that could otherwise negate the advantage derived from enhanced electrostatic scalability.
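The body-thickness constraint just described can be written as a one-line design check. The 2/3 factor is the rule of thumb quoted in the text; the gate lengths used below are illustrative examples, not values from the cited devices:

```python
# Rule-of-thumb electrostatic design check for a double-gate MOSFET:
# the Si body (fin) should be no thicker than about two-thirds of the
# gate length LG, as quoted in the text. Example dimensions are illustrative.

def max_body_thickness_nm(gate_length_nm: float) -> float:
    """Upper bound on Si channel thickness for good short-channel control."""
    return (2.0 / 3.0) * gate_length_nm

def body_ok(t_si_nm: float, gate_length_nm: float) -> bool:
    """True if the body is thin enough for the given gate length."""
    return t_si_nm <= max_body_thickness_nm(gate_length_nm)

# A 10 nm gate demands a fin no thicker than ~6.7 nm; a 10 nm fin fails.
print(round(max_body_thickness_nm(10.0), 1))   # 6.7
print(body_ok(6.0, 10.0), body_ok(10.0, 10.0))  # True False
```

The check makes plain why fin patterning is so demanding: the fin must be printed and trimmed to dimensions finer than the gate length itself.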
The most thoroughly studied double-gate MOSFET is known as the FinFET (Hisamoto et al., 1998; Kedzierski et al., 2001; Yu et al., 2002) and, in slightly modified forms, as the tri-gate transistor (Doyle et al., 2003) or omega-FET

FIGURE 4 Structure of a representative double-gate MOSFET.

(Park et al., 2003; Yang et al., 2002) (Figure 5). Of all the double-gate MOSFETs that have been fabricated, the FinFET is the most similar to the traditional planar MOSFET. This is a huge advantage when one considers the implications of altering the device structure that supports a $160-billion world market, in which chips must be manufactured whose billion constituent devices all work, satisfy tight electrical-performance tolerance requirements, and continue to operate reliably for 10 to 30 years. In the FinFET, a thin (10 to 30 nm) vertical fin of Si is created on top of an oxide layer. The gate of the device wraps around both sides and the top of the fin to provide excellent control of the channel potential and resistance to short-channel effects. Within the past year, aggressively scaled FinFETs with 10 nm gate lengths and intrinsic switching times (CGVDD/ID) of 300 fs have been demonstrated (Yu et al., 2002), and a functional six-transistor SRAM cell using FinFETs has been realized (Nowak et al., 2002).

In conclusion, it can be argued that one of the most significant challenges to the continuation of traditional scaling is the need for continued improvements in performance (e.g., in CGVDD/ID) amidst other scaling constraints, which are primarily manifested as limits on static-power dissipation, power that is dissipated even when transistors are not switching. In light of this fact, it is very unlikely that Moore's Law will ever hit a brick wall. More likely, it will first enter a regime in which CMOS scaling provides diminishing marginal performance benefits and, ultimately, a negative marginal performance benefit. Some engineers in the field refer to this scenario as Moore's Summit rather than as the end of Moore's Law. Clearly, there will be no single, well-defined "summit," but

FIGURE 5 Basic structure of the FinFET, the most thoroughly studied double-gate MOSFET.

many summits, depending on the application (Frank, 2002; Frank et al., 2001). Device designs for different applications are likely to continue to diverge and may possibly use different device structures and materials. Rather than a single end point for scaling, there will be a variety of end points optimized for particular applications.

REFERENCES

Agnello, P.D. 2002. Process requirements for continued scaling of CMOS: the need and prospects for atomic-level manipulation. IBM Journal of Research and Development 46(2/3): 317-338.
Ahonen, M., M. Pessa, T. Suntola. 1980. A study of ZnTe films grown on glass substrates using an atomic layer evaporation method. Thin Solid Films 65: 301-307.
Attwood, D., G. Kubiak, D. Sweeney, S. Hector, C. Gwyn. 2001. Progress and future directions of the U.S. EUV lithography program. International Microprocesses and Nanotechnology Conference, Digest of Papers 78.
Bohr, M.T. 2002. Nanotechnology goals and challenges for electronic applications. IEEE Transactions on Nanotechnology 1(1): 56-62.
Chui, C.O., K. Hyoungsuh, D. Chi, B.B. Triplett, P.C. McIntyre, K.C. Saraswat. 2002. A sub-400°C germanium MOSFET technology with high-κ dielectric and metal gate. International Electron Devices Meeting Technical Digest 437-440.
Doyle, B., B. Boyanov, S. Datta, M. Doczy, S. Hareland, B. Jin, J. Kavalieros, T. Linton, R. Rios, R. Chau. 2003. Tri-gate fully-depleted CMOS transistors: fabrication, design and layout. Symposium on VLSI Technology, Digest of Technical Papers 133-134.
Fitzgerald, E.A., Y.-H. Xie, D. Monroe, P.J. Silverman, J.M. Kuo, A.R. Kortan, F.A. Thiel, and B.E. Weir. 1992. Relaxed GexSi1-x structures for III-V integration with Si and high mobility two-dimensional electron gases in Si. Journal of Vacuum Science and Technology B 10(4): 1807-1819.
Frank, D.J. 2002. Power-constrained CMOS scaling limits.
IBM Journal of Research and Development 46(2/3): 235-344.
Frank, D.J., R.H. Dennard, E. Nowak, P.M. Solomon, Y. Taur, H.-S.P. Wong. 2001. Device scaling limits of Si MOSFETs and their application dependencies. Proceedings of the IEEE 89(3): 259-288.
Guarini, K., P.M. Solomon, Y. Zhang, K.K. Chan, E.C. Jones, G.M. Cohen, A. Krasnoperova, M. Ronay, O. Dokumaci, J.J. Bucchignano, C. Cabral Jr., C. Lavoie, V. Ku, D.C. Boyd, K.S. Petrarca, I.V. Babich, J. Treichler, P.M. Kozlowski, J.S. Newbury, C.P. D'Emic, R.M. Sicina, and H.-S.P. Wong. 2001. Triple-self-aligned, planar double-gate MOSFETs: devices and circuits. International Electron Devices Meeting Technical Digest 425-428.
Gusev, E., D. Buchanan, E. Cartier, A. Kumar, D. DiMaria, S. Guha, A. Callegari, S. Zafar, P. Jamison, D. Neumayer, M. Copel, M. Gribelyuk, H. Okorn-Schmidt, C. D'Emic, P. Kozlowski, K. Chan, N. Bojarczuk, L.-A. Ragnarsson, P. Ronsheim, K. Rim, R. Fleming, A. Mocuta, and A. Ajmera. 2001. Ultrathin high-k gate stacks for advanced CMOS devices. International Electron Devices Meeting Technical Digest 451-454.
Harriott, L.R. 1999. A new role for e-beam: electron projection. IEEE Spectrum 36(7): 41.
Hisamoto, D., W.-C. Lee, J. Kedzierski, E. Anderson, H. Takeuchi, K. Asano, T.-J. King, J. Bokor, C. Hu. 1998. A folded-channel MOSFET for deep-sub-tenth micron era. International Electron Devices Meeting Technical Digest 1032-1034.
Hobbs, C., L. Fonseca, V. Dhandapani, S. Samavedam, B. Taylor, J. Grant, L. Dip, D. Triyoso, R. Hegde, D. Gilmer, R. Garcia, D. Roan, L. Lovejoy, R. Rai, L. Hebert, H. Tseng, B. White, P. Tobin. 2003. Fermi level pinning at the polySi/metal oxide interface. Symposium on VLSI Technology, Digest of Technical Papers 9-10.

Hokazono, A., K. Ohuchi, M. Takayanagi, Y. Watanabe, S. Magoshi, Y. Kato, T. Shimizu, S. Mori, H. Oguma, T. Sasaki, H. Yoshimura, K. Miyano, N. Yasutake, H. Suto, K. Adachi, H. Fukui, T. Watanabe, N. Tamaoki, Y. Toyoshima, H. Ishiuchi. 2002. 14 nm gate length CMOSFETs utilizing low thermal budget process with poly-SiGe and Ni salicide. International Electron Devices Meeting Technical Digest 639-642.
Huang, L.J., J.O. Chu, D.F. Canaperi, C.P. D'Emic, R.M. Anderson, S.J. Koester, and H.-S. Philip Wong. 2001. SiGe-on-insulator prepared by wafer bonding and layer transfer for high-performance field-effect transistors. Applied Physics Letters 78(9): 1267-1269.
Hwang, J.R., J.H. Ho, S.M. Ting, T.P. Chen, Y.S. Hsieh, C.C. Huang, Y.Y. Chiang, H.K. Lee, A. Liu, T.M. Shen, G. Braithwaite, M. Currie, N. Gerrish, R. Hammond, A. Lochtefeld, F. Singaporewala, M. Bulsara, Q. Xiang, M.R. Lin, W.T. Shiau, Y.T. Loh, J.K. Chen, S.C. Chien, F. Wen. 2003. Performance of 70nm strained-Si CMOS devices. Symposium on VLSI Technology, Digest of Technical Papers 103-104.
Inumiya, S., K. Sekine, S. Niwa, A. Kaneko, M. Sato, T. Watanabe, H. Fukui, Y. Kamata, M. Koyama, A. Nishiyama, M. Takayanagi, K. Eguchi, Y. Tsunashima. 2003. Fabrication of HfSiON gate dielectrics by plasma oxidation and nitridation, optimized for 65nm node low power CMOS applications. Symposium on VLSI Technology, Digest of Technical Papers 17-18.
ITRS (International Technology Roadmap for Semiconductors). 2003. Available online at: <http://public.itrs.net/Home.htm>.
Kedzierski, J., D.M. Fried, E.J. Nowak, T. Kanarsky, J. Rankin, H. Hanafi, W. Natzle, D. Boyd, Y. Zhang, R. Roy, J. Newbury, C. Yu, Q. Yang, P. Saunders, C. Willets, A. Johnson, S. Cole, H. Young, N. Carpenter, D. Rakowski, B.A. Rainey, P. Cottrell, M. Ieong, and P. Wong. 2001. High-performance symmetric gate and CMOS-compatible Vt asymmetric gate FinFET devices. International Electron Devices Meeting Technical Digest 437-440.
Kedzierski, J., E. Nowak, T. Kanarsky, Y. Zhang, D. Boyd, R. Carruthers, C. Cabral, R. Amos, C. Lavoie, R. Roy, J. Newbury, E. Sullivan, J. Benedict, P. Saunders, K. Wong, D. Canaperi, M. Krishnan, K.-L. Lee, B.A. Rainey, D. Fried, P. Cottrell, H.-S.P. Wong, M. Ieong, W. Haensch. 2002. Metal-gate FinFET and fully-depleted SOI devices using total gate silicidation. International Electron Devices Meeting Technical Digest 247-250.
Khare, M., S.H. Ku, R.A. Donaton, S. Greco, C. Brodsky, X. Chen, A. Chou, R. DellaGuardia, S. Deshpande, B. Doris, S.K.H. Fung, A. Gabor, M. Gribelyuk, S. Holmes, F.F. Jamin, W.L. Lai, W.H. Lee, Y. Li, P. McFarland, R. Mo, S. Mittl, S. Narasimha, D. Nielsen, R. Purtell, W. Rausch, S. Sankaran, J. Snare, L. Tsou, A. Vayshenker, T. Wagner, D. Wehella-Gamage, E. Wu, S. Wu, W. Yan, E. Barth, R. Ferguson, P. Gilbert, D. Schepis, A. Sekiguchi, R. Goldblatt, J. Welser, K.P. Muller, P. Agnello. 2002. A high performance 90nm SOI technology with 0.992 μm2 6T-SRAM cell. International Electron Devices Meeting Technical Digest 407-410.
Lee, J.-H., H. Zhong, Y.-S. Suh, G. Heuss, J. Gurganus, B. Chen, V. Misra. 2002. Tunable work function dual metal gate technology for bulk and non-bulk CMOS. International Electron Devices Meeting Technical Digest 359-362.
Meindl, J. 2001. Special issue on limits of semiconductor technology. Proceedings of the IEEE 89(3): 223-226.
Meindl, J.D., and J.A. Davis. 2000. The fundamental limit on binary switching energy for terascale integration (TSI). IEEE Journal of Solid-State Circuits 35(10): 1515-1516.
Mizuno, T., N. Sugiyama, A. Kurobe, S.-i. Takagi. 2001. Advanced SOI p-MOSFETs with strained-Si channel on SiGe-on-insulator substrate fabricated by SIMOX technology. IEEE Transactions on Electron Devices 48(8): 1612-1618.
Nowak, E.J., B.A. Rainey, D.M. Fried, J. Kedzierski, M. Ieong, W. Leipold, J. Wright, M. Breitwisch. 2002. A functional FinFET-DGCMOS SRAM cell.
International Electron Devices Meeting Technical Digest 411-414.

Park, T., S. Choi, D.H. Lee, J.R. Yoo, B.C. Lee, J.Y. Kim, C.Q. Lee, K.K. Chi, S.H. Hong, S.J. Hyun, Y.G. Shin, J.N. Han, I.S. Park, S.H. Chung, J.T. Moon, E. Yoon, J.H. Lee. 2003. Fabrication of body-tied FinFETs (omega MOSFETs) using bulk Si wafers. Symposium on VLSI Technology, Digest of Technical Papers 135-136.
Ranade, P., Y.-K. Choi, D. Ha, A. Agarwal, M. Ameen, T.-J. King. 2002. Tunable work function molybdenum gate technology for FDSOI-CMOS. International Electron Devices Meeting Technical Digest 363-366.
Rim, K., J. Chu, H. Chen, K.A. Jenkins, T. Kanarsky, K. Lee, A. Mocuta, H. Zhu, R. Roy, J. Newbury, J. Ott, K. Petrarca, P. Mooney, D. Lacey, S. Koester, K. Chan, D. Boyd, M. Ieong, H.-S. Wong. 2002a. Characteristics and device design of sub-100 nm strained Si N- and PMOSFETs. Symposium on VLSI Technology, Digest of Technical Papers 98-99.
Rim, K., S. Narasimha, M. Longstreet, A. Mocuta, J. Cai. 2002b. Low field mobility characteristics of sub-100 nm unstrained and strained Si MOSFETs. International Electron Devices Meeting Technical Digest 43-46.
Rotondaro, A.L.P., M.R. Visokay, J.J. Chambers, A. Shanware, R. Khamankar, H. Bu, R.T. Laaksonen, L. Tsung, M. Douglas, R. Kuan, M.J. Bevan, T. Grider, J. McPherson, L. Colombo. 2002. Advanced CMOS transistors with a novel HfSiON gate dielectric. Symposium on VLSI Technology, Digest of Technical Papers 148-149.
Samavedam, S.B., L.B. La, J. Smith, S. Dakshina-Murthy, E. Luckowski, J. Schaeffer, M. Zavala, R. Martin, V. Dhandapani, D. Triyoso, H.H. Tseng, P.J. Tobin, D.C. Gilmer, C. Hobbs, W.J. Taylor, J.M. Grant, R.I. Hegde, J. Mogab, C. Thomas, P. Abramowitz, M. Moosa, J. Conner, J. Jiang, V. Arunachalam, M. Sadd, B.-Y. Nguyen, B. White. 2002. Dual-metal gate CMOS with HfO2 gate dielectric. International Electron Devices Meeting Technical Digest 433-436.
Shang, H., H. Okorn-Schmidt, K.K. Chan, M. Copel, J.A. Ott, P.M. Kozlowski, S.E.
Steen, S.A. Cordes, H.-S.P. Wong, E.C. Jones, W.E. Haensch. 2002. High mobility p-channel germanium MOSFETs with a thin Ge oxynitride gate dielectric. International Electron Devices Meeting Technical Digest 441-444.
Suntola, T., and J. Antson. A Method for Producing Compound Thin Films. U.S. Patent 4,058,430. Filed 1975, issued 1977.
Thompson, S., N. Anand, M. Armstrong, C. Auth, B. Arcot, M. Alavi, P. Bai, J. Bielefeld, R. Bigwood, J. Brandenburg, M. Buehler, S. Cea, V. Chikarmane, C. Choi, R. Frankovic, T. Ghani, G. Glass, W. Han, T. Hoffmann, M. Hussein, P. Jacob, A. Jain, C. Jan, S. Joshi, C. Kenyon, J. Klaus, S. Klopcic, J. Luce, Z. Ma, B. McIntyre, K. Mistry, A. Murthy, P. Nguyen, H. Pearson, T. Sandford, R. Schweinfurth, R. Shaheed, S. Sivakumar, M. Taylor, B. Tufts, C. Wallace, P. Wang, C. Weber, M. Bohr. 2002. A 90 nm logic technology featuring 50 nm strained Si channel transistors, 7 layers of Cu interconnects, low k ILD, and 1 μm2 SRAM cell. International Electron Devices Meeting Technical Digest 61-64.
Wallace, R.M., and G. Wilk. 2002. High-k gate dielectric materials. Materials Research Society Bulletin 27(3): 192-197.
Wilk, G.D., R.M. Wallace, and J.M. Anthony. 2000. Hafnium and zirconium silicates for advanced gate dielectrics. Journal of Applied Physics 87(1): 484-492.
Wong, H.-S.P. 2002. Beyond the conventional transistor. IBM Journal of Research and Development 46(2/3): 133-168.
Wong, H.-S.P., K. Chan, and Y. Taur. 1997. Self-aligned (top and bottom) double-gate MOSFET with a 25nm thick Si channel. International Electron Devices Meeting Technical Digest 427-430.
Xiang, Q., J.-S. Goo, J. Pan, B. Yu, S. Ahmed, J. Zhang, M.-R. Lin. 2003. Strained Si NMOS with nickel-silicide metal gate. Symposium on VLSI Technology, Digest of Technical Papers 101-102.

Yang, F.-L., H.-Y. Chen, F.-C. Chen, C.-C. Huang, C.-Y. Chang, H.-K. Chiu, C.-C. Lee, C.-C. Chen, H.-T. Huang, C.-J. Chen, H.-J. Tao, Y.-C. Yeo, M.-S. Liang, C. Hu. 2002. 25 nm CMOS omega FETs. International Electron Devices Meeting Technical Digest 255-258.
Yu, B., L. Chang, S. Ahmed, H. Wang, S. Bell, C.-Y. Yang, C. Tabery, C. Ho, Q. Xiang, T.-J. King, J. Bokor, C. Hu, M.-R. Lin, D. Kyser. 2002. FinFET scaling to 10 nm gate length. International Electron Devices Meeting Technical Digest 251-254.
Yu, B., H. Wang, A. Joshi, Q. Xiang, E. Ibok, and M.-R. Lin. 2001. 15 nm gate length planar CMOS transistor. International Electron Devices Meeting Technical Digest 937-939.

Molecular Electronics

JAMES R. HEATH
Department of Chemistry
California Institute of Technology
Pasadena, California

In molecular-electronics research, molecules are used to yield the active and passive components (switches, sensors, diodes, resistors, LEDs, etc.) of electronic circuits or integrated circuits. For certain applications, such as molecular-based memory or logic circuitry, the devices are simply molecular-based analogues of devices in more conventional silicon-based circuitry. In those cases, molecular-electronics components may have the advantages of less manufacturing complexity, lower power consumption, and easier scaling. The field of molecular electronics is evolving rapidly, and even though there are no commercial applications as yet, the science coming out of this research is spectacular.

Consider the circuits of nanowires shown in the electron micrograph in Figure 1 (Melosh et al., 2003). The smallest (100-element) crossbar in this image is patterned at a density approaching 10^12/cm^2, and the wire diameter is approximately 8 nm. With species like boron or arsenic, at a doping level of 10^18/cm^3, a similar 8-nm diameter, micrometer-long segment of silicon wire would have 20 to 30 dopant atoms; a junction of two crossed wires would contain approximately 0.1 to 0.2 dopant atoms. Thus, conventional field-effect transistors fabricated at these wiring densities might exhibit nonstatistical and perhaps unpredictable behavior. In fact, the patterning method (called superlattice nanowire pattern transfer [SNAP]) that produced the generation of patterns shown in Figure 1 can be used to prepare ultradense arrays of silicon nanowires. Thus, for the first time, researchers can interrogate the statistics of doping and other materials-type fluctuations that are expected to become important at the nanoscale.
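The dopant-count estimate above follows from straightforward geometry: volume of the wire segment times the doping concentration. A sketch, treating the wire as a uniform cylinder (a simplifying assumption: the text's exact figures depend on the assumed cross-section, so the cylinder reproduces the order of magnitude rather than the identical numbers):

```python
# Expected number of dopant atoms in a nanowire segment: cylinder volume
# times doping concentration. The uniform-cylinder geometry is a simplifying
# assumption; it reproduces the order of magnitude of the text's estimate
# (tens of dopants per micrometer of 8 nm wire, and well under one dopant
# in a single crossed-wire junction).
import math

def expected_dopants(diameter_nm: float, length_nm: float, n_cm3: float) -> float:
    radius_cm = (diameter_nm / 2.0) * 1e-7   # 1 nm = 1e-7 cm
    length_cm = length_nm * 1e-7
    volume_cm3 = math.pi * radius_cm ** 2 * length_cm
    return n_cm3 * volume_cm3

# An 8 nm diameter, 1 um long wire at 10^18 cm^-3: a few tens of dopants.
print(round(expected_dopants(8.0, 1000.0, 1e18)))
# An 8 nm x 8 nm junction holds only a fraction of a dopant on average.
print(round(expected_dopants(8.0, 8.0, 1e18), 2))
```

Because the mean junction occupancy is far below one, the actual dopant count per junction is Poisson-distributed around that mean, which is exactly the "nonstatistical and perhaps unpredictable behavior" the text warns about.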
Doping fluctuations, however, are relatively trivial compared to other problems encountered by engineers trying to scale conventional, silicon-based integrated circuits to significantly higher densities than are produced now. Power consumption (just from leakage currents through the gate oxide) is perhaps the most serious issue, but the lack of patterning techniques and high fabrication costs are also important.

FIGURE 1 A series of 100-element crossbar circuits, the prominent circuits of molecular electronics. The molecule of interest is typically sandwiched between the intersection of two crossed wires. This very simple circuit can be used even in the presence of manufacturing defects and can be fabricated at dimensions that far exceed the capabilities of the best lithographic methods. The device densities in these circuits approach 10^12/cm^2 for the smallest crossbars.

In fact, no one is seriously contemplating scaling standard electronic device concepts to molecular dimensions (Packan, 1999); alternative strategies are being pursued, although they are all in their early stages. These alternatives include molecular electronics, spintronics, quantum computing, and neural networks, all of which have so-called "killer applications." For quantum computing, it is the reduced scaling of various classes of NP-hard problems. For spintronics, it is a memory density that scales exponentially with the number of coupled spin transistors. For molecular electronics, it is vastly improved energy efficiency per bit operation, as well as continued device scaling to true molecular dimensions. For true neural networks, it is greatly increased connectivity and, therefore, a greatly increased rate of information flow through a circuit. Because molecular electronics borrows most heavily from current technologies, in the past few years it has advanced to the point that many major semiconductor-manufacturing companies, including IBM, Hewlett-Packard, and LG (Korea), are launching their own research programs in molecular electronics.
At device areas of a few tens of square nanometers, molecules have a certain fundamental attractiveness because of their size, because they represent the ultimate in terms of atomic control over physical properties, and because of the diverse properties (e.g., switching, dynamic organization, and recognition) that can be achieved through such control.

In the crossed-wire circuit shown in Figure 1 (called a crossbar circuit), the molecular component is typically sandwiched between the intersection of two crossing wires. Molecular-electronics circuits based on crossbar architectures can be used for logic, sensing, signal routing, and memory applications (Luo et al., 2002). To realize such applications, many things must be considered simultaneously: the design of the molecule; the molecule/electrode interface; electronically configurable and defect-tolerant circuit architectures; methods of bridging the nanometer-scale densities to the submicrometer densities achievable with lithography; and others (Heath and Ratner, 2003). Using a systems approach in which all of these issues are dealt with consistently and simultaneously, we have been able to fabricate and demonstrate simple molecular-electronics-based logic, memory, and sensing circuitry.

The active device elements in these circuits are molecular-mechanical complexes (Figure 2) organized at each junction within the crossbar, as shown in

FIGURE 2 Two types of molecular-mechanical complexes that have been demonstrated to work as molecular-electronic switches. At left is a [2]catenane complex, and at right is a [2]rotaxane complex. For both structures, the tetracationic cyclophane (TCP4+) ring encircles the tetrathiafulvalene (TTF) recognition unit. The lowest-energy oxidation state of either complex corresponds to removal of an electron from the TTF group. This leads to a coulombic repulsion of the TCP4+ ring so that it encircles the dioxynapthyl group.
Molecular switches based on this concept have been demonstrated to work in solution, in solid polymer matrices, immobilized on solid surfaces, and sandwiched between two electrodes. Source: Collier et al., 2002.

FIGURE 3 A computer graphic showing the molecular components in a crossbar circuit. The [2]rotaxane molecules shown here are bistable molecular-mechanical compounds. The switching mechanism involves the translation of the ring between two binding sites along the backbone of the molecule. Molecular-electronic circuits have been demonstrated to work as both random-access memory circuits and simple logic circuits.

Figure 3 (Heath and Ratner, 2003; Luo et al., 2002). The molecular structure in Figure 2 (left) is a [2]catenane that consists of two mechanically interlocked rings. One ring is a tetracationic cyclophane (TCP4+); the other is a crown-ether-type ring with two chemical recognition sites, a dioxynapthyl (DN) group and a tetrathiafulvalene (TTF) group. The structure on the right side of Figure 2 is a [2]rotaxane that consists of similar chemical motifs. Here the TCP4+ ring encircles a dumbbell-shaped structure that has both DN and TTF recognition groups. The molecules are switched via a one- or two-electron process that results in a molecular-mechanical transformation (and a significant change in the electronic structure) of the molecule. This type of molecular actuation, which can be rationally optimized through molecular design and synthesis, provides the basis for information storage or for defining the "open" and "closed" states of a switch.

Typical data from one of our molecular switches are shown in Figure 4 (we have incorporated additional data from various control molecules in this figure). For the structures shown in Figure 2, the controls include [2]catenanes with identical recognition sites (i.e., two DN groups), the dumbbell component of the [2]rotaxane structure, the TCP4+ ring, and others. One always does control experiments, of course, but controls are critical here because these devices are difficult to characterize fully, and one of the few experimental variables is molecular structure.

FIGURE 4 The hysteretic response (top) and switch cycling (bottom) of a molecular-electronic switch consisting of a bistable [2]rotaxane sandwiched between a polysilicon bottom electrode and a Ti/Al top electrode. The device dimensions are approximately 50 x 50 nm. Control data from the dumbbell component (the [2]rotaxane without the TCP4+ ring) are included.

Perhaps the most fundamental challenge facing scientists constructing solid-state molecular-electronic devices and circuits is developing an intuition for guiding the design of the molecular components. Charge transport through molecules has been known and studied for a long time, but it has traditionally been a solution-phase science (Joachim et al., 2000; Kwok and Ellenbogen, 2002; Mujica and Ratner, 2002; Nitzan, 2001; Ratner, 2002). For example, a critical component of electron-transfer theory in molecules is the solvent-reorganization coordinate. Molecular synthesis is also a solution-phase endeavor, and all of the analytical techniques (e.g., mass spectrometry, nuclear magnetic resonance (NMR), optical spectroscopy, etc.) are primarily designed to investigate the structure and dynamics of molecules in solution. When molecules are sandwiched between two electrodes, none of these theories and analytical techniques is of any use, and new concepts must be developed.

46 FRONTIERS OF ENGINEERING This situation is exacerbated because certain critical aspects of basic solid- state-device physics cannot be translated directly into the world of molecular electronics. For example, consider the following two fundamental tenets of solid-state materials. First, when two different materials are brought together, their Fermi levels align with each other. Second, if a conducting wire of length L has a measured resistance of R. then a wire of the same material and diameter, but with a length 2L, will exhibit a resistance of 2R. This is called ohmic conductance. Neither of these rules holds true for molecules. This is because molecular orbitals are spatially localized, whereas in solids the atomic orbitals form extended energy bands. When a molecule is adsorbed or otherwise at- tached to the surface of a solid, there is no straightforward way to think about the position of the molecular orbitals with respect to the Fermi level of the solid. Furthermore, if a molecule of length L yields a resistance value of R. then a similarly structured, but longer molecule could actually be a better conductor or, perhaps, a far worse conductor. There is no real ohmic conductance in molecules. Thus, the fundamental challenge is to develop an intuition of how molecules behave in solid-state settings and to use that intuition as feedback to molecular synthesis. In fact, we need a method of characterizing molecular-electronic devices that yields the type of rich information generated by modern NMR spec- troscopic methods. Perhaps the most promising approach is single-molecule, three-terminal devices in which a single molecule bridges a very narrow gap (called a break junction) between a source and a drain electrode. A third elec- trode (called the gate) provides an electric field for tuning the molecular- electronic energy levels into and out of resonance with the Fermi energies of the source and drain electrodes. 
These devices were originally developed by Park and McEuen and were then further explored by their two groups separately (Liang et al., 2002; Park et al., 2002). Figure 5 shows data from a single-molecule device containing the dumbbell component of the [2]rotaxane molecule shown in Figure 2 (Yu et al., 2003). These data show clearly that these types of device measurements represent a very high information-content analytical method. In an analogy to optical methods, the energy levels resolved in such a device span a very broad range of the spectrum, from ultraviolet to far infrared. This means that molecular orbital energies, and even low-frequency molecular vibrations, might be observable. How molecular orbitals align with the Fermi levels of electrodes, the coupling of molecular vibrations with charge transport through the molecule, and the nature of the molecule/electrode interface states are questions being addressed in various laboratories around the world. Some of the critical length scales of circuits that can now be fabricated, such as the diameter of the wires and the interwire separation distance (or pitch), are more commonly associated with biological macromolecules, such as proteins, mRNA oligonucleotides, and so on, than with electronics circuitry. In fact, one unique application of nanoscale molecular-electronics circuitry, and perhaps the

FIGURE 5 Data from a single-molecule, three-terminal device in which the dumbbell component of a [2]rotaxane molecular switch bridges a 4-nm gap separating a source and drain electrode. Voltage on a gate electrode is used to tune the molecular-energy levels into resonance with the electrode-energy levels. 5a. The conductance (dI/dV) of the molecular junction is plotted as a function of source-drain voltage (x-axis) and gate voltage (y-axis). Light colors indicate high conductance values. 5b. The current-voltage trace, drawn through the point indicated by the arrows and the dashed white line on the conductance plot. 5c. The structure of the molecule being measured in this junction. All measurements are carried out at 2 Kelvin.

application that sets this field apart from traditional electronics, is the potential construction of an electrical interface to a single biological cell (Cui et al., 2001). One of our ongoing projects is constructing such an interface designed to measure, simultaneously and in real time, thousands of molecular signatures of gene and protein expression (Heath et al., 2003; Zandonella, 2003). This type of

circuitry may eventually provide a direct connection between the worlds of molecular biology and medicine and the worlds of electrical engineering and integrated circuitry.

ACKNOWLEDGMENTS

The work described in this paper has been supported by the Defense Advanced Research Projects Agency, the MARCO Center, the Semiconductor Research Corporation, the Office of Naval Research, the U.S. Department of Energy, and the National Science Foundation. The results presented here were collected by an outstanding group of students and postdoctoral fellows at UCLA and Caltech.

REFERENCES

Collier, C.P., G. Mattersteig, E.W. Wong, Y. Luo, K. Beverly, J. Sampaio, F.M. Raymo, J.F. Stoddart, and J.R. Heath. 2000. A [2]-catenane-based solid-state electronically reconfigurable switch. Science 289(5482): 1172-1175.
Cui, Y., Q. Wei, H. Park, and C.M. Lieber. 2001. Nanowire nanosensors for highly sensitive and selective detection of biological and chemical species. Science 293(5533): 1289-1292.
Heath, J.R., M.E. Phelps, and L. Hood. 2003. Nanosystems biology. Molecular Imaging and Biology 5(5): 312-325.
Heath, J.R., and M.A. Ratner. 2003. Molecular electronics. Physics Today 56(5): 43-49.
Joachim, C., J.K. Gimzewski, and A. Aviram. 2000. Electronics using hybrid-molecular and mono-molecular devices. Nature 408(6812): 541-548.
Kwok, K.S., and J.C. Ellenbogen. 2002. Moletronics: future electronics. Materials Today 5(2): 28-37.
Liang, W., M.P. Shores, M. Bockrath, J.R. Long, and H. Park. 2002. Kondo resonance in a single-molecule transistor. Nature 417(6890): 725-729.
Luo, Y., C.P. Collier, K. Nielsen, J. Jeppesen, J. Perkins, E. DeIonno, A. Pease, J.F. Stoddart, and J.R. Heath. 2002. Two-dimensional molecular electronics circuits. ChemPhysChem 3(6): 519-525.
Melosh, N.A., A. Boukai, F. Diana, B. Gerardot, A. Badolato, P.M. Petroff, and J.R. Heath. 2003. Ultrahigh-density nanowire lattices and circuits. Science 300(5616): 112-115.
Mujica, V., and M.A.
Ratner. 2002. Molecular Conductance Junctions: A Theory and Modeling Progress Report. Pp. 10-1-10-27 in Handbook of Nanoscience, Engineering, and Technology, D. Brenner, S. Lyshevski, G. Iafrate, and W.A. Goddard III, eds. Boca Raton, Fla.: CRC Press.
Nitzan, A. 2001. Electron transmission through molecules and molecular interfaces. Annual Review of Physical Chemistry 52: 681-750.
Packan, P. 1999. Pushing the limits. Science 285(5436): 2079-2081.
Park, J., A.N. Pasupathy, J.I. Goldsmith, C. Chang, Y. Yaish, J.R. Petta, M. Rinkoski, J.P. Sethna, H.D. Abruna, P.L. McEuen, and D.C. Ralph. 2002. Coulomb blockade and the Kondo effect in single-atom transistors. Nature 417(6890): 722-725.
Ratner, M.A. 2002. Introducing molecular electronics. Materials Today 5(2): 20-27.
Yu, H., Y. Luo, H.R. Tseng, K. Beverly, J.F. Stoddart, and J.R. Heath. 2003. The molecule-electrode interface in single-molecule transistors. Angewandte Chemie 42: 5706-5711.
Zandonella, C. 2003. Cell nanotechnology: the tiny toolkit. Nature 423(6935): 10-12.

Limits of Storage in Magnetic Materials

THOMAS J. SILVA
Magnetic Technology Division
National Institute of Standards and Technology
Boulder, Colorado

The modern hard disk drive is a marvel of multidisciplinary technology. The basic concept behind the disk drive is simple enough: A magnetic solenoid (the write head) applies a localized magnetic field to the surface of a disk coated with a thin film of magnetic material (the media). When the magnetic field from the write head exceeds a threshold (the coercivity), the magnetic polarity of the media is reversed, resulting in a "bit" of recorded data. To read the data back from the media, a field sensor is passed over the same disk surface. Stray magnetic fields from the recorded "bits" alter the resistivity of the field sensor, which is measured with a current source and broadband voltage preamplifier. The disk rotates under the impetus of a precision direct-drive motor, with rotation rates of 15,000 rpm in high-performance drives. Because the head applies localized magnetic fields to write data, an actuator and armature swivel the head to access different tracks of data recorded at specific radii on the disk surface. Localization of both the write fields and the read-head sensitivity is achieved by positioning both the write head and the read-head sensor in very close proximity to the disk surface. The heads are bonded to the rear end of a "slider" that skims across the disk surface on a thin layer of air entrained by the rotating disk. Thus, the operation of a disk drive requires a delicate balance of mechanical, electrical, and magnetic properties, with all three optimized for both economy and performance.

Although the technology is conceptually trivial, in its execution it strains the limits of human imagination. Magnetic recording capacities have been increasing at an astonishing rate of 100 percent compounded annually for the last

seven years. Areal densities for commercial drives are approaching 1.6 x 10^10 bits/cm^2 (100 Gbits/in^2), with a magnetic bit area of only 10,000 square nanometers. While the actuator is reading and writing data to the disk surface, it must keep the head over tracks of data 250 nm wide with nanometer precision. The head used to read and write data from the magnetic platter flies over the disk surface with an average separation of only 10 nm at the breathtaking speed of 215 km/hr (134 mph). At the same time, the cost per gigabyte for disk storage is unrivaled in the on-line data storage sector. 160-GB drives can now be purchased for a street price of $230, an equivalent cost per megabyte of only 0.14 cents. (By comparison, the street price for flash RAM is approaching $250 for 1 GB, or 25 cents per MB, more than two orders of magnitude more expensive than the cost per MB for hard-disk-drive storage.) The economic advantage of the magnetic recording process has sustained the disk-drive industry throughout the entire period of computer commercialization. Replacing the disk drive with purely solid-state memory in the computer architecture has never been economically feasible despite the disadvantages associated with a mechanical device. The economic advantage of disk-drive architecture rests on the recording of data on a featureless surface. The media in a disk drive does not require any lithographic processing, which can be heavily leveraged to provide high-density data storage at a highly competitive price. The fabrication of the recording head, on the other hand, relies heavily on precise nanofabrication techniques, including lithography and chemomechanical polishing. Yield demands are predicated on the capability of a single functioning head to access as much as 40 gigabytes of data on one side of a disk three inches in diameter.
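A quick arithmetic check of the price gap quoted above, using only the street prices already given:

```python
# Street prices quoted in the text: $230 for a 160-GB drive,
# roughly $250 per gigabyte of flash RAM.
disk_cost_per_gb = 230 / 160   # dollars per gigabyte of disk storage
flash_cost_per_gb = 250 / 1    # dollars per gigabyte of flash

ratio = flash_cost_per_gb / disk_cost_per_gb
print(f"disk:  ${disk_cost_per_gb:.2f}/GB")
print(f"flash: ${flash_cost_per_gb:.2f}/GB")
print(f"flash-to-disk price ratio: {ratio:.0f}x")
```

The ratio comes out near 170x, consistent with the "more than two orders of magnitude" claim.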
One of the astounding dimensions associated with mechanical disk drives is the extremely thin air bearing that separates the media from the head. The reason for such small head/media spacing is quickly apparent when you consider the frequency components for magnetic fields in free space generated by an arbitrary magnetization distribution in a plane at z = 0. Because the fields in free space (z > 0) must satisfy the Laplace equation, the Fourier components of the magnetic field at spatial frequency k decay as

$H(k, z) \propto e^{-kz}$ (1)

This results in a 55-dB signal loss for every increase in spacing equal to one wavelength of the recorded bit pattern. The exponential decay of the stray fields from the recorded pattern in media mandates very small head/media spacing and very small media thickness.

Magnetic data storage depends upon hysteresis, a fundamental physical property of all ferromagnets. When a magnetic field is applied to a magnet and then turned off, the final state of the magnet is a function of the applied field amplitude and direction. Thus, the magnet "remembers" the applied field, permitting the recording of information in a nonvolatile and erasable format. The

fundamental fact that ferromagnets exhibit hysteresis is a manifestation of the spontaneous symmetry breaking that is characteristic of all phase transitions (Pathria, 1996). In the ferromagnetic phase, the collective alignment of the uncompensated electron spins lowers the free energy of the electronic system. However, the collective behavior of the spins requires that they all point in some direction. A prerequisite for ferromagnetic order is therefore the existence of a preferred direction for the order parameter to align with. However, the existence of a preferred direction in an inversion-symmetric material also implies that there must be more than one direction the magnetization tends to align with. By such symmetry arguments, one can show that the free energy of the magnetization will have an angular dependence that exhibits even-order symmetry with respect to the anisotropy axis, a symmetry axis of the magnetic material. This anisotropy energy is generally written as an energy density (Morrish, 2001):

$U_K(\theta) = K_1 \sin^2\theta + K_2 \sin^4\theta + \cdots$ (2)

The coefficients $K_n$ in the anisotropy energy expansion are the energies required to rotate the magnetization out of the easy-axis direction. This perturbation energy, acting on the ferromagnetic order parameter, imbues the ferromagnet with its hysteretic properties (Stoner and Wohlfarth, 1948). To understand the role of anisotropy in the magnetic recording process, we need only consider the lowest-order term in the anisotropy expansion of Eq. 2, also referred to as the uniaxial anisotropy energy density, $K_u$.

SUPERPARAMAGNETISM

Let us consider a system in equilibrium that is in contact with a thermal reservoir.
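As a small numerical illustration of Eq. 2 truncated at its lowest-order term: the uniaxial energy density has two degenerate minima (the easy directions) separated by a barrier of height K1, which is the structure that makes both hysteresis and thermal erasure possible:

```python
import math

K1 = 1.0  # uniaxial anisotropy energy density, arbitrary units

def U(theta):
    """Lowest-order uniaxial anisotropy energy density, U = K1*sin^2(theta)."""
    return K1 * math.sin(theta) ** 2

# Energy minima (easy directions) at theta = 0 and pi,
# separated by a barrier of height K1 at theta = pi/2.
assert U(0.0) < 1e-12 and U(math.pi) < 1e-12
assert abs(U(math.pi / 2) - K1) < 1e-12
# Even-order symmetry: U(theta) == U(-theta) == U(pi - theta)
for th in (0.3, 0.8, 1.2):
    assert abs(U(th) - U(-th)) < 1e-12
    assert abs(U(th) - U(math.pi - th)) < 1e-12
print("two degenerate easy directions, barrier height K1 =", K1)
```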
From Boltzmann statistics, we know that the relative probability that the system occupies a state of energy density $\Delta U$ above the ground-state energy density $U_H$ is given by

$\Pr(\Delta U) = \dfrac{e^{-(\Delta U + U_H)V/k_B T}}{e^{-U_H V/k_B T}} = e^{-\Delta U\,V/k_B T}$ (3)

where V is the total volume of the magnetic system under consideration. Now, let us assume that the system hops between different energy states with a characteristic time scale of $\tau_0$. If we consider that the probabilities associated with the occupation of any given energy state are given by Eq. 3, it quickly becomes apparent that the average frequency $\nu$ with which the system occupies any given energy level is given by

$\nu = \dfrac{1}{\tau_0}\,e^{-\Delta U\,V/k_B T}$ (4)

So far, we have considered thermal equilibrium processes that determine the rate at which thermal fluctuations perturb the energy of a system. Now, let us consider a magnetic system consisting of the uniform magnetization state, such as a single-domain grain in a polycrystalline magnetic film. The magnetic domain is in thermal contact with several thermal baths, including both vibrations of the crystal lattice (the phonon bath) and nonuniform spin fluctuations in the magnetization itself (the magnon bath). (We can isolate the uniform magnetization state from all the other nonuniform modes by the same considerations that lead to the Planck distribution for an ensemble of harmonic oscillators, where we consider magnetic excitations to obey just such a distribution.) The fluctuations in the uniform state manifest themselves as random variations in the magnetization orientation $\theta$ relative to the anisotropy axis. If, however, the thermal fluctuations cause the magnetic orientation to move to a position where the total magnetic energy is maximized, we assume that the magnetic state will change its ground-state configuration, so that the probability distribution for fluctuations will now be referenced to a new ground state oriented 180 degrees relative to the original magnetization direction with energy density $U_S$; the system "hops" over the energy barrier imposed by the rotational anisotropy of the magnetic energy.
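The strength of the exponential suppression can be seen with two representative barrier heights; the 10-GHz attempt rate below matches the fluctuation rate assumed later in the text:

```python
import math

ATTEMPT_RATE = 1e10  # Hz; the 10-GHz energy fluctuation rate assumed in the text

def hop_rate(barrier_over_kbt):
    """Thermally activated hop rate over a barrier of height
    Ku*V/(kB*T) = barrier_over_kbt, in the Eq. 4/5 form."""
    return ATTEMPT_RATE * math.exp(-barrier_over_kbt)

times_years = {}
for ratio in (25, 50):
    times_years[ratio] = 1.0 / hop_rate(ratio) / 3.156e7  # seconds -> years
    print(f"KuV/kBT = {ratio}: mean time between hops ~ "
          f"{times_years[ratio]:.3g} years")
```

A barrier of 25 kBT is crossed in seconds; doubling it to 50 kBT pushes the mean hop time past ten thousand years, which is why a single dimensionless ratio controls data longevity.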
The rate at which a sudden reorientation of the energy ground state (from $U_H$ to $U_S$) can occur is given by

$\nu = \dfrac{1}{\tau_0}\,e^{-K_u V/k_B T}$ (5)

We note that a system with only anisotropy energy will hop between different ground-state configurations with equal probability, so that an ensemble that starts in a uniformly magnetized state (such as a collection of grains that form a single magnetic bit in the disk media) will eventually demagnetize (i.e., $|M| \to 0$) as a result of thermal hopping over the energy barrier. Under the assumption that the magnetization fluctuations are uncorrelated, the relaxation process should follow the form for a Poisson point process (Helstrom, 1984), which has the solution

$|M|(t) = M_0\,e^{-\nu t} = M_0\,e^{-t/\tau}; \qquad \tau = \tau_0\,e^{K_u V/k_B T}$ (6)

This is the Arrhenius-Néel law for thermal relaxation of magnetic systems (Néel, 1949). It is important to consider that the solution is only appropriate if

all single-domain particles in the system have exactly the same anisotropy energy. A very elegant experimental confirmation of Eq. 6 was recently obtained using lithographically patterned arrays of identical magnetic particles, individually measured using spin-dependent tunneling methods (Rizzo et al., 2002).

A note concerning the energy fluctuation time $\tau_0$ (sometimes referred to as the "attempt" time) is in order. This parameter is one of the least understood in the context of superparamagnetic behavior, in part because direct experimental determination of this time scale has only recently been achieved (Rizzo et al., 2002). However, a rather simple theoretical understanding of nonequilibrium statistical mechanics can shed a great deal of light on the nature of the attempt time. The Onsager regression hypothesis states that the correlation time of thermal fluctuations for a system in thermal equilibrium is identical to the time scale at which the same system relaxes back into equilibrium after perturbation by an external stimulus (Chandler, 1987). For example, suppose that a sudden magnetic field pulse is applied to a magnetic system and that the ground-state orientation is rotated by an angle $\Delta\theta$. The magnetic system energy will relax into the reoriented ground state in an exponential fashion,

$U(t) - U_{\infty} = \Delta U\,e^{-t/\tau_0}$ (7)

The time scale $\tau_0$ in Eq. 7 can be thought of as the time for inelastic scattering events that decrease the net energy of the spins that comprise the magnetic system. According to Onsager's regression hypothesis, the $\tau_0$ in Eq. 7 is identical to the attempt time $\tau_0$ in Eq. 5. The relaxation time $\tau_0$ in Eq. 7 is also called the "damping" time; empirically derived values are obtained by fitting data to phenomenological models for magnetization dynamics.
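A minimal numerical check of the Poisson waiting-time picture behind the exponential relaxation of Eq. 6 (the rate nu here is an arbitrary illustrative value, not a fit to any material):

```python
import math
import random

random.seed(42)
nu = 0.5          # escape rate over the barrier, arbitrary inverse-time units
n_grains = 50000  # ensemble of identical single-domain grains

# Draw the first barrier-crossing time of each grain from a Poisson
# (exponential-waiting-time) process, as assumed above Eq. 6.
first_hop = [random.expovariate(nu) for _ in range(n_grains)]

for t in (1.0, 2.0, 4.0):
    surviving = sum(1 for th in first_hop if th > t) / n_grains
    print(f"t = {t}: fraction not yet hopped = {surviving:.3f}, "
          f"exp(-nu*t) = {math.exp(-nu * t):.3f}")
```

The surviving fraction tracks exp(-nu*t) to within sampling noise, which is all Eq. 6 asserts for an ensemble with a single, common barrier height.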
The most widely cited of these models is that of Landau and Lifshitz (1935),

$\dfrac{d\mathbf{M}}{dt} = -|\gamma|\mu_0\,(\mathbf{M}\times\mathbf{H}) - \dfrac{\alpha|\gamma|\mu_0}{M_s}\,(\mathbf{M}\times(\mathbf{M}\times\mathbf{H}))$ (8)

The first term describes the purely elastic process of spin precession, by which the spins in the ferromagnet coherently orbit along a path of constant energy. The second term is a phenomenological description of the damping process. By construction, the torque represented by the second term always points in a direction orthogonal to the spin precession and orthogonal to the magnetization. Both of these constraints ensure that the damping describes an inelastic process that conserves the net moment of the magnetic object under consideration. The step-response solution of Eq. 8 for a thin-film geometry in the limit of small-angle motion about the y-axis yields (Silva et al., 1999)

$M_x(t) \approx \Delta M_x\left(1 - \cos(\omega_0 t)\,e^{-t/\tau_0}\right)$
$M_z(t) \approx \Delta M_z\,\sin(\omega_0 t)\,e^{-t/\tau_0}$ (9)

where

$\tau_0 = \dfrac{2}{\alpha|\gamma|\mu_0 M_s}; \qquad \omega_0 = |\gamma|\mu_0\sqrt{(M_s + H_k)H_k}$ (10)

and the z-axis is normal to the film plane. For most magnetic materials, damping values are experimentally measured to lie in the range 0.1 > $\alpha$ > 0.01, and the magnetic moment for data storage media is typically on the order of $\mu_0 M_s \approx 0.5$ T. Thus, damping times for media are expected to range over 0.2 < $\tau_0$ < 2 ns. Experimental determination of damping is more easily performed with soft magnetic materials that exhibit an effective anisotropy field of less than 8 kA/m (100 Oe). Figure 1 shows free induction decay data obtained from the alloy Ni0.8Fe0.2. The data are well fitted to Eq. 9, with the measured signal proportional to the in-plane transverse magnetization $M_x$.

FIGURE 1 Free induction decay data for a 50-nm Ni0.8Fe0.2 film. The magnetization is responding to a 50-ps rise time magnetic field pulse applied transverse to the initial magnetization direction. Data were obtained with a pulsed inductive microwave magnetometer, described in detail in Silva et al. (1999). The damping time is 1.2 ns, with an equivalent damping parameter of $\alpha$ = 0.008.
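Both quantities in Eq. 10 can be evaluated directly. The sketch below assumes the standard thin-film forms tau0 = 2/(alpha*|gamma|*mu0*Ms) and omega0 = |gamma|*mu0*sqrt((Ms + Hk)*Hk); with the stated parameters it reproduces the 0.2-2 ns media damping range and a soft-sensor mode frequency near 1.3 GHz:

```python
import math

GAMMA = 1.76e11  # gyromagnetic ratio |gamma|, rad/(s*T)

def damping_time(alpha, mu0_Ms):
    """tau0 = 2/(alpha*|gamma|*mu0*Ms); mu0_Ms supplied directly in tesla."""
    return 2.0 / (alpha * GAMMA * mu0_Ms)

def fmr_freq(mu0_Ms, mu0_Hk):
    """f0 = (|gamma|/2pi)*sqrt((mu0*Ms + mu0*Hk)*mu0*Hk), in Hz."""
    return GAMMA * math.sqrt((mu0_Ms + mu0_Hk) * mu0_Hk) / (2 * math.pi)

# Recording media: mu0*Ms ~ 0.5 T, alpha between 0.01 and 0.1
t_slow = damping_time(0.01, 0.5) * 1e9  # ns
t_fast = damping_time(0.10, 0.5) * 1e9  # ns
print(f"media damping times: {t_fast:.2f} ns to {t_slow:.2f} ns")

# Soft sensor layer: mu0*Ms ~ 1 T, mu0*Hk ~ 2 mT (20 Oe)
f_sensor = fmr_freq(1.0, 0.002)
print(f"sensor FMR frequency: {f_sensor / 1e9:.2f} GHz")
```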

Upon inclusion of a statistical distribution for the energy barriers, one finds that the relaxation process is no longer exponential, but is instead logarithmic, with a logarithmic relaxation rate proportional to the width of the energy barrier distribution (Street and Woolley, 1949). To understand why this is the case, a graphical analysis is a convenient pedagogical tool. In Figure 2, we see a plot of the exponential relaxation process at room temperature for a ferromagnetic system with various energy barrier heights, ranging over 3.6 < $K_u V/k_B T$ < 37, assuming an energy fluctuation rate of 10 GHz. The time axis in Figure 2 is logarithmic. Only an order of magnitude variation in the height of the energy barrier results in a 12 order of magnitude variation in the relaxation time. At the same time, exponential relaxation looks like a step function when plotted on a logarithmic time axis. If one assumes that the distribution of energy barriers is flat, integration of the temporal relaxation process is quasi-logarithmic (i.e., the average response for a distribution of step functions in log time will be a linear function in log time). In Figure 3, we show experimental data from Rizzo et al. (2002), in which the thermal switching times $\tau$ were continuously varied by applying a longitudinal bias field antiparallel to the original direction of the magnetization in a patterned magnetic element. The data show the clear hallmarks of thermally driven reversal: exponential dependence of switching probability on applied field duration, and a continuously decreasing time constant that depends on the height of the energy barrier that prevents switching.

The transition from exponential relaxation to logarithmic relaxation in an inhomogeneous system will occur under conditions when the energy barrier is negligible. This was observed in Rizzo et al.
(1999), in an experiment where thin-film media were subjected to magnetic field pulses of varying intensity and duration. In Figure 4, the data from this experiment are reproduced. For pulses of less than 10-ns duration, the response is clearly exponential, with a time constant of several nanoseconds. For pulses of longer duration, the response is logarithmic, reflecting thermal activation with a broad distribution of energy barriers.

The dependence of hysteresis on temperature is a fundamental limit that impedes the perpetual increase in the areal density of disk drives (Lu and Charap, 1994). To understand why this is the case, we must consider the various noise sources in the magnetic recording process. There are three principal contributions to the noise in disk drives: (1) Johnson noise, (2) media noise, and (3) magnetic fluctuation noise. Johnson noise is associated with voltage fluctuations in the read sensor and preamp used to read back the magnetic data. As long as the electrical resistance of the read head is kept constant by proper use of dimension scaling rules, this noise source can be kept under control. The second noise source, media noise, stems from the spatial variation of the magnetization distribution in the media. Although the media is a thin metallic polycrystalline film of magnetic material, growth conditions for the sputter deposition are chosen to

FIGURE 2 Exponential relaxation at room temperature for energy barrier heights ranging over 3.6 < KuV/kBT < 37, plotted on a logarithmic time axis.

FIGURE 3 Probability for reversal of a lithographically patterned magnetic element, as a function of pulse duration. Different curves are for varying pulse amplitudes. Data are well fitted to exponentials, with a decreasing time constant for increasing pulse amplitude. Source: Rizzo et al., 2002. Reprinted with permission.

FIGURE 4 Magnetization reversal process for thin-film recording media. The data indicate that the reversal process is logarithmic for field pulses longer than 10 ns, and exponential for shorter duration pulses. Source: Rizzo et al., 1999. Reprinted with permission.
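The step-functions-in-log-time argument can be reproduced numerically: averaging exponential decays over a flat barrier distribution (3.6 to 37 in units of kBT, with the 10-GHz attempt rate used in the text) gives a response that loses a nearly constant fraction of the moment per decade of time:

```python
import math

ATTEMPT_RATE = 1e10  # Hz, fluctuation rate assumed in the text
barriers = [3.6 + 0.1 * i for i in range(335)]  # flat distribution, 3.6 .. 37

def mean_moment(t):
    """Ensemble-averaged moment: each barrier b relaxes as exp(-t/tau)
    with tau = exp(b)/ATTEMPT_RATE (the Arrhenius-Neel form of Eq. 6)."""
    total = 0.0
    for b in barriers:
        tau = math.exp(b) / ATTEMPT_RATE
        total += math.exp(-t / tau)
    return total / len(barriers)

# Sample on a logarithmic time grid: decay is close to linear in log(t).
times = [10.0 ** p for p in range(-8, 5)]  # 10 ns-ish to ~3 hours
m = [mean_moment(t) for t in times]
slopes = [m[i + 1] - m[i] for i in range(len(m) - 1)]
print("moment lost per decade of time:", [f"{-s:.4f}" for s in slopes])
```

Away from the edges of the barrier distribution, the loss per decade is essentially constant, which is exactly the quasi-logarithmic relaxation described above.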

minimize the exchange coupling between individual grains of the film. The exchange coupling between the spins within a ferromagnet is what leads to ferromagnetic order below the Curie temperature. Nevertheless, frustration of the exchange interaction between crystallites in a continuous magnetic film does not preclude ferromagnetic order as long as the anisotropy energy for the grains is sufficient to prevent thermal erasure, as per Eq. 6. By reducing the intergranular exchange coupling, the characteristic length scale for magnetic variations between bits is minimized at the expense of magnetic uniformity within the bit. As a first approximation, media noise scales inversely as the square root of the number of grains in a bit. Thus, to keep media noise in check, grain size must also scale with the reduced bit dimension as areal density is increased. As a rule of thumb, each bit must contain ~100 grains for a tolerable level of media noise. By Eq. 6, to prevent thermal erasure, any decrease in grain volume must be accompanied by an increase in the anisotropy energy $K_u$. An optimal energy barrier size that maintains data longevity for the 10- to 20-year lifetime of the disk drive is approximately $K_u V/k_B T \approx 50$. The bit volume at an areal density of 100 Gbit/in^2 is 5 x 10^4 nm^3 (assuming 10-nm film thickness), with an average grain volume of only 500 nm^3. To be thermally stable, the anisotropy energy density must be greater than 70 µeV/nm^3. This is equivalent to an anisotropy "field" $H_k$ of $\mu_0 H_k = 2K_u/M_s \approx 2.7$ T. The fabrication of thin magnetic films with such large crystalline anisotropies is not in itself a limitation. The Co-based alloys used for data storage exploit the large uniaxial anisotropy associated with the hexagonal close-packed (hcp) structure of Co. (Single-crystal hcp Co has a uniaxial anisotropy of 0.6 T. Alloying Co with various noble and refractory metals can enhance the anisotropy by a factor of 2 to 4 [Lu et al., 2003].)
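The grain-count bookkeeping above is easy to verify. The SNR line below simply restates the sqrt(N) scaling of media noise in decibels (an amplitude-SNR convention is assumed for illustration):

```python
import math

bit_volume_nm3 = 5e4   # bit volume at 100 Gbit/in^2 with 10-nm-thick media
grains_per_bit = 100   # rule-of-thumb grain count quoted above

grain_volume = bit_volume_nm3 / grains_per_bit
# Media noise ~ 1/sqrt(N), so amplitude SNR ~ sqrt(N); express in dB.
snr_db = 20 * math.log10(math.sqrt(grains_per_bit))
print(f"grain volume: {grain_volume:.0f} nm^3")
print(f"amplitude SNR from grain statistics: ~{snr_db:.0f} dB")
```

The 500-nm^3 grain volume matches the figure quoted in the text; halving the grain count would cost about 1.5 dB of media SNR, which is why grain size must shrink in step with the bit.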
A much more serious issue is the writing process. The coercive field required to switch the magnetization orientation in the media scales roughly as half the anisotropy field. Thus, media for 100-Gbit/in^2 recording densities mandates write fields of 1.4 T. To generate such large fields, the recording head itself must have saturation magnetization levels of approximately twice the coercivity, or $\mu_0 M_s \approx 2.7$ T. (The write head is essentially a ferrous-core electromagnet that uses the soft magnetic properties of high-moment alloys to convert weak currents of a few hundred mA into large magnetic fields.) At this point, we are now limited by the excess spin that can exist in a ferromagnet. FeCo alloys have the highest saturation magnetization levels of 2.4 T, with 2.5 Bohr magnetons per atom. Thus, exceeding densities of 100 Gbit/in^2 may be possible, but fundamental limitations associated with the generation of large recording fields will severely impede further progression of simple scaling laws.

MAGNETIC FLUCTUATION NOISE

Recently, it has been found that thermal fluctuations in the read head can also impede further improvements in areal density (Smith and Arnett, 2001;

Stutzke et al., 2003). These "soft" fluctuations also increase in magnitude as the magnetic read sensor is scaled down in size due to the fixed thermal energy per excitation mode. In this case, however, the thermal fluctuations are not detected as irreversible jumps in the ground-state configuration of the sensor, but simply as voltage fluctuations in the sensor output.

The read head in a modern disk drive uses the giant magnetoresistance (GMR) effect for magnetic multilayers. In the late 1980s, it was discovered that the resistance in an artificial superlattice of magnetic and nonmagnetic metallic films was a strong function of the relative magnetization orientation in the superlattice structure. Resistance changes on the order of several tens of percent were realized for such structures. Implementation of the new technology was very quick; today, all commercial disk drives use GMR sensors for the read-back process. For such sensors, two magnetic layers are separated by a 2- to 5-nm-thick nonmagnetic spacer of Cu. One of the magnetic layers is kept fixed through exchange biasing techniques; the other layer is free to rotate under the influence of stray magnetic fields from the recording media. The voltage produced when a current is driven through the multilayer device is given by

$V = I_0\left(R_0 + \dfrac{\Delta R}{2}\,(1 - \cos\theta)\right)$ (11)

where $I_0$ is the bias current, $R_0$ is the nonmagnetic contribution to the device resistance, $\Delta R$ is the GMR resistance magnitude for antiparallel orientation of the two layers, and $\theta$ is the relative orientation of the two layers. The GMR sensor is biased so the magnetization of the free layer is oriented perpendicular to the magnetization of the fixed layer in order to linearize the response of the device to fields from the written bit pattern:

$V = I_0\left(R_0 + \dfrac{\Delta R}{2}\,(1 + \sin\delta\theta)\right)$ (12)

where $\theta = \frac{\pi}{2} + \delta\theta$. The rotation of the magnetization by external fields is in proportion to the strength of the external field $H_0$ and the effective anisotropy of the free magnetic layer:

$\delta\theta \approx \dfrac{H_0}{H_k}$ (13)

However, the free layer can also respond to fluctuations associated with the thermal bath. To understand how strong these fluctuations can be, we must understand the nature of thermal fluctuations in ferromagnets far from the Curie temperature. In this case, fluctuations are quantized much the same as for the problem of black-body radiation. Thus, the distribution of energy among the
The rotation of the magnetization by external fields is in proportion to the strength of the external field Ho and the effective anisotropy of the free magnetic layer: ~~ Ho Hk (13) However, the free layer can also respond to fluctuations associated with the thermal bath. To understand how strong these fluctuations can be, we must understand the nature of thermal fluctuations in ferromagnets far from the Curie temperature. In this case, fluctuations are quantized much the same as for the problem of black body radiation. Thus, the distribution of energy among the

different magnetic eigenmodes (also referred to as magnons) is given by a Planck distribution. In the limit of large temperatures relative to the energy of the eigenmodes, the distribution becomes classical and the energy per eigenmode is simply $k_B T$. The number of magnons $N(\omega)$ for a given mode of frequency $\omega$ is simply

$N(\omega) \approx \dfrac{k_B T}{\hbar\omega}$ (14)

For a ferromagnetic sensor with an anisotropy of 1.6 kA/m (20 Oe) and a magnetization of 1 T, the eigenfrequency for the lowest-order mode is simply that of Eq. 10, with a characteristic frequency of 1.3 GHz. Thus, the number of excitations occupying the lowest-order mode at an operational temperature of T = 400 K is $N(\omega) \approx 7000$. The number of unpaired spins in a free layer is proportional to the volume of the magnetic film (typically 0.18 µm x 0.06 µm x 5 nm = 5.4 x 10^4 nm^3) and the magnetization density of the material. The relative fraction $n(\omega)$ of spin excitations is therefore

$n(\omega) = \dfrac{N(\omega)\,g\mu_B}{M_s V} \approx \dfrac{\gamma\,k_B T}{\omega\,M_s V}$ (15)

From linear magnon theory (Sparks, 1965), the rms amplitude $\delta\theta_{rms}$ for these thermally driven magnetic excitations is related to the magnon density by

$n(\omega) = \varepsilon^2\,\delta\theta_{rms}^2$ (16)

where $\varepsilon$ is the ellipticity for gyromagnetic precession in a thin-film geometry,

$\varepsilon = \sqrt{\dfrac{H_k}{M_s + H_k}}$ (17)

Solving for $\delta\theta_{rms}$, in the limit of small-angle fluctuations, we end up with

$\delta\theta_{rms} = \sqrt{\dfrac{\gamma\,k_B T}{\varepsilon^2\,\omega\,M_s V}}$ (18)

Assuming a magnetic volume of 5.4 x 10^4 nm^3, the fluctuations have an amplitude of $\delta\theta_{rms} \approx$ 70 degrees: a significant fraction of the sensor dynamic

range! Of course, these fluctuations are largest at the resonance frequency $\omega_0$. Away from the resonance, we use the linearized solution to Eq. 8 to determine the amplitude of this magnetic noise. The solution to Eq. 8 in the limit of $M_s >> H_k$ and small-angle motion is that of a harmonic oscillator, with a susceptibility spectrum given by

$\chi(\omega) = \chi_0\,\dfrac{\omega_0^2}{\omega_0^2 - \omega^2 - 2i(\omega/\tau_0)}$ (19)

The noise power spectrum is proportional to $|\chi(\omega)|^2$. Experimental measurements have verified that the magnetic fluctuations take the spectral shape of the ferromagnetic resonance, as shown in Figure 5.

FIGURE 5 Magnetic fluctuation noise in a GMR sensor. The resonant peak is at the ferromagnetic resonance frequency for the given temperature (upper figure) and given applied longitudinal field (lower figure). Increasing the longitudinal bias field increases the resonance frequency, in accordance with Eq. 10. Source: Stutzke et al., 2003. Reprinted with permission.
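A back-of-envelope check of the quoted numbers, assuming the mode occupation N = kBT/(hbar*omega0) and a thermal rms angle of the equipartition form sqrt(kBT/(eps*mu0*Ms*Hk*V)) with thin-film ellipticity eps = sqrt(Hk/(Ms + Hk)); the exact numerical prefactor is model dependent, so this is an order-of-magnitude sketch:

```python
import math

KB, HBAR, MU0 = 1.380649e-23, 1.0545718e-34, 4e-7 * math.pi
GAMMA = 1.76e11             # |gamma|, rad/(s*T)

T = 400.0                   # operating temperature, K
mu0_Ms, mu0_Hk = 1.0, 2e-3  # tesla: 1-T magnetization, 20-Oe anisotropy
V = 5.4e4 * 1e-27           # free-layer volume, m^3 (5.4e4 nm^3)

Ms, Hk = mu0_Ms / MU0, mu0_Hk / MU0            # back to A/m
omega0 = GAMMA * math.sqrt((mu0_Ms + mu0_Hk) * mu0_Hk)  # Eq. 10 form

# Classical magnon occupation of the lowest-order mode (Eq. 14 form)
N = KB * T / (HBAR * omega0)

# Thermal rms fluctuation angle with the ellipticity correction
eps = math.sqrt(Hk / (Ms + Hk))
theta_rms = math.sqrt(KB * T / (eps * mu0_Ms * Hk * V))

print(f"f0 = {omega0 / (2 * math.pi) / 1e9:.2f} GHz, N ~ {N:.0f}")
print(f"theta_rms ~ {math.degrees(theta_rms):.0f} degrees")
```

With these assumptions the mode frequency lands near 1.3 GHz, the occupation number in the several-thousand range, and the rms angle near 70 degrees, consistent with the values quoted in the text.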

The integral of the power spectrum is equal to the total noise power:

C ∫0∞ |χ(ω)|² dω = ⟨δθrms²⟩    (20)

Solving for the normalization constant, we obtain

C = (4αε/πω0)(δθrms²/χ0²)    (21)

The noise amplitude at low frequencies δθdc is approximately white in the limit ω << ω0, with an amplitude of

δθdc = δθrms √(4αε/πω0)    (22)

in units of radians per √Hz. Using Eq. 10 in the limit of Ms >> Hk, we end up with

δθdc = √( 4αkBT / (πγμ0² Ms^(1/2) Hk^(5/2) V) )    (23)

The rms fluctuations in the GMR signal are proportional to the integrated noise amplitude Δθ, which we approximate as the product of Eq. 23 and the square root of the electronic bandwidth Δf:

Δθ = √( 4αkBTΔf / (πγμ0² Ms^(1/2) Hk^(5/2) V) ) = 2 √( (εαΔf/πωk)(kBT/μ0MsHkV) )    (24)

where ωk = γμ0Hk is the effective gyromagnetic frequency of the anisotropy. Notice that the noise power is proportional to the ratio of the thermal energy to

the anisotropy energy. As was the case for the thermal erasure of recorded bits in thin-film media, the ratio of the anisotropy to thermal energy determines the stability of the magnetization in the presence of thermal fluctuations. Using typical parameter values (α = 0.01, Hk = 1.6 kA/m (20 Oe), Ms = 800 kA/m (800 emu/cm³), T = 400 K, Δf = 400 MHz, and V = 5.4 × 10^4 nm³), the integrated noise amplitude is Δθ ≈ 12 degrees, reducing the maximum dynamic range of the GMR sensor to 90/12 = 7.5. This is the bare minimum necessary for proper operation of the disk drive. Any further reductions in the sensor volume will result in higher noise levels because the total thermal noise power of the magnetic fluctuations is independent of sensor size, whereas the magnetostatic energy associated with the rotation of the magnetization is directly proportional to device volume. The noise level can be reduced by increasing the anisotropy Hk, but only at the expense of the read head sensitivity, thus mandating improvements in the GMR resistance ΔR. Some progress has been made along these lines, but present values are already ΔR/R ≈ 0.2. Even at ΔR/R of 100 percent, Hk can only be increased by a factor of 5. To maintain the same level of magnetic fluctuation noise, the sensor volume can be reduced by the same factor, with an effective increase in areal density by a factor of 3. However, once the technological hurdles have been surpassed to obtain large GMR signals, this improvement in density no longer scales. The maximum possible recording densities will have been achieved.

An alternative strategy is to reduce the damping parameter α for the GMR sensor material. By reducing the relaxation rate, thermal fluctuations are effectively isolated at the FMR frequency, far above the operational frequency of the drive.
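The arithmetic behind the dynamic-range and scaling claims above can be laid out explicitly. One assumption in this sketch: the quoted areal-density gain of roughly 3 is read here as coming from shrinking all three sensor dimensions equally when the volume shrinks by a factor of 5, which is one plausible interpretation of the text.

```python
# Dynamic range: a 90-degree sensor swing divided by ~12 degrees of
# integrated angle noise
dynamic_range = 90 / 12
print(f"dynamic range ~ {dynamic_range:.1f}")  # the quoted bare minimum of 7.5

# Raising the GMR ratio from dR/R ~ 0.2 to 1.0 allows Hk to grow by a
# factor of 5 at constant read-head sensitivity...
hk_factor = 1.0 / 0.2

# ...and since the noise power scales as (thermal energy)/(anisotropy energy),
# the sensor volume can shrink by the same factor at constant noise.
volume_factor = hk_factor

# If all three dimensions shrink together, the in-plane footprint (which sets
# areal density) shrinks as volume_factor^(2/3).
areal_density_factor = volume_factor ** (2 / 3)
print(f"areal density gain ~ {areal_density_factor:.1f}x")  # ~3x, as quoted
```

The point of the exercise is the ceiling it exposes: once ΔR/R reaches 100 percent, this route to higher density is exhausted.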
However, the nature of damping in ferromagnetic metals remains phenomenological; there has been little success in developing a theory for the damping process that can be used to engineer low-damping materials. Measurements of a wide variety of alloys suggest that damping parameter values of less than 0.01 are not in the offing as a result of a simple search through material parameter space. Instead, we need to understand the fundamental mechanisms that give rise to damping. Numerous mechanisms have been suggested (Abraham and Kittel, 1952; Korenman and Prange, 1972; Suhl, 1998), but to date none has demonstrated the predictive power necessary to qualify as a true physical theory for damping. This remains a serious deficiency in the theory of ferromagnetic materials that deserves more attention in the physics community.

FUTURE PROSPECTS

It appears that the use of scaling in disk-drive technology is no longer a viable means of improving drive capacity. The problem is the fundamental nature of thermal fluctuations in magnetic systems at small volumes, which limits both the bit capacity of the recording media and the read sensors used to read

back the bits from the media. Alternative technical strategies have been proposed (perpendicular magnetic recording, patterned media, heat-assisted magnetic recording), but all of these approaches offer only marginal improvements over current areal densities based on incremental advances in recording efficiency. For all practical purposes, the most cost-effective means of increasing value for the data storage consumer is the same as for other commodity items: manufacturing finesse, improved reliability, and marketing.

Nevertheless, it should be appreciated that magnetic recording technology has reached nanoscale dimensions in spite of the mechanical components that have always been considered a hindrance; bit dimensions are almost at 35 nm (length) × 180 nm (width) × 10 nm (thickness). By any measure, such numbers are nothing short of astonishing. However, moving to even smaller bit dimensions will require more than improvements in engineering. It will require a greater understanding of the fundamental physics underlying the phenomena of hysteresis and the thermodynamics of the ferromagnetic phase. Such understanding might enable the discovery of new physics that would allow for new modalities of the recording process.

It is worth the time to consider this last point in more detail. Hysteresis in a spin system, at its most elementary level, is an astonishing effect. It should be kept in mind that the switching of magnetic bits involves transitions between nearly degenerate energy levels. Recording a bit does not involve the storage of energy in the archival medium, but rather the expenditure of work to cause a thermodynamically irreversible alteration in the magnetization state of the medium.
Thus, the magnetic bit is essentially an indelible record that some fraction of the electromagnetic work provided by the recording head has been converted to heat via the spin system of the ferromagnet, resulting in a net increase in entropy, ΔS = KuV/T; the more entropy generated in the writing of a bit, the more stable the recorded mark.

Such a process stands in marked contrast to our basic understanding of spin dynamics at the quantum level. One need only consider that the moment of a ferromagnetic volume is composed of electron spins, each of which would have a very short memory time if isolated from all the other spins in the ferromagnetic system, even at low temperatures. Electron spin resonance measurements typically yield decoherence times ranging from nanoseconds to microseconds in most conductors. The ferromagnetic phase, enabled by the exchange interaction between the uncompensated electron spins, is stabilized against such relaxation processes, but at a cost. Switching the magnetization in a stable fashion requires the irreversible conversion of work into heat.

Thus, the effort to increase recording densities is essentially a struggle to find increasingly efficient means of converting the same amount of work into heat per bit, even though the size of the bits is shrinking. Current recording technology relies upon intrinsic relaxation processes induced by large applied magnetic fields. Borrowing terminology from nuclear magnetic resonance, large

Zeeman splittings induced by the recording heads drive spin reversal via longitudinal relaxation processes. The rephrasing of the recording process in the context of spin resonance techniques suggests one possible strategy for increasing recording densities. It is well known that resonant techniques (e.g., adiabatic fast passage) are far more efficient means of manipulating spin states in paramagnetic systems. Perhaps similar methods could be developed for the improved focus of electromagnetic energy to yet smaller spin volumes in a ferromagnetic medium.

REFERENCES

Abraham, E., and C. Kittel. 1952. Spin lattice relaxation in ferromagnets. Physical Review 88(5): 1200.
Chandler, D. 1987. Introduction to Modern Statistical Mechanics. New York: Oxford University Press.
Helstrom, C.W. 1984. Probability and Stochastic Processes for Engineers. New York: Macmillan.
Korenman, V., and R.E. Prange. 1972. Anomalous damping of spin waves in magnetic metals. Physical Review B 6(7): 2769-2777.
Landau, L., and E. Lifshitz. 1935. On the theory of the dispersion of magnetic permeability in ferromagnetic bodies. Physik. Z. Sowjetunion 8: 153-169.
Lu, P.-L., and S.H. Charap. 1994. Magnetic viscosity in high-density recording. Journal of Applied Physics 75(10): 5768-5770.
Lu, B., D. Weller, A. Sunder, G. Ju, X. Wu, R. Brockie, T. Nolan, C. Brucker, and R. Ranjan. 2003. High anisotropy CoCrPt(B) media for perpendicular magnetic recording. Journal of Applied Physics 93(10): 6751-6753.
Morrish, A.H. 2001. The Physical Principles of Magnetism. New York: IEEE Press.
Néel, L. 1949. Théorie du traînage magnétique des ferromagnétiques en grains fins avec applications aux terres cuites. Annales de Géophysique 5: 99-136.
Pathria, R.K. 1996. Statistical Mechanics, 2nd Ed. New York: Butterworth-Heinemann.
Rizzo, N.D., M. DeHerrera, J. Janesky, B. Engel, J. Slaughter, and S. Tehrani. 2002.
Thermally activated magnetization reversal in submicron magnetic tunnel junctions for magnetoresistive random access memory. Applied Physics Letters 80(13): 2335-2337.
Rizzo, N.D., T.J. Silva, and A.B. Kos. 1999. Relaxation times for magnetization reversal in a high coercivity magnetic thin film. Physical Review Letters 83(23): 4876-4879.
Silva, T.J., C.S. Lee, T.M. Crawford, and C.T. Rogers. 1999. Inductive measurement of ultrafast magnetization dynamics in thin-film permalloy. Journal of Applied Physics 85(11): 7849-7862.
Smith, N., and P. Arnett. 2001. White-noise magnetization fluctuations in magnetoresistive heads. Applied Physics Letters 78(10): 1448-1450.
Sparks, M. 1965. Ferromagnetic Relaxation Theory. New York: McGraw-Hill.
Stoner, E.C., and E.P. Wohlfarth. 1948. A mechanism of magnetic hysteresis in heterogeneous alloys. Philosophical Transactions of the Royal Society of London, Series A 240(826): 599-642.
Street, R.C., and J.C. Wooley. 1949. A study of magnetic viscosity. Proceedings of the Physical Society, Section A 62(9): 562-572.
Stutzke, N., S.L. Burkett, and S.E. Russek. 2003. Temperature and field dependence of high-frequency magnetic noise in spin valve devices. Applied Physics Letters 82(1): 91-93.
Suhl, H. 1998. Theory of the magnetic damping constant. IEEE Transactions on Magnetics 34(4): 1834-1838.

Thermodynamics of Nanosystems

CHRISTOPHER JARZYNSKI
Los Alamos National Laboratory
Los Alamos, New Mexico

The field of thermodynamics was born of an engineering problem. Reflections on the Motive Power of Fire, an analysis of the efficiency of steam engines published by the French engineer Sadi Carnot in 1824, led ultimately to the elucidation of the second law of thermodynamics, the unifying principle that underlies the performance of all modern engines, from diesel to turbine. Today engineers are excited about nanosystems, including machines that operate at molecular length scales. If history is any guide to the present, a grasp of thermodynamics at the nanoscale is essential for the development of this field. With this in mind, the focus of this brief talk is on the following question: If we were to construct an engine the size of a large molecule, what fundamental principles would govern its operation? To put it another way, what does thermodynamics look like at the nanometer length scale?

It is perhaps not immediately obvious that thermodynamics should "look different" at the microscopic scale of large molecules than at the macroscopic scale of car engines. Of course, at the microscopic level, quantum effects might play a significant role, but in this talk we will assume the effects are negligible. For present purposes, the difference between the macro- and the microworld boils down to this. On the scale of centimeters, it is safe to imagine that matter is composed of continuous substances (e.g., fluid or solid, rigid or elastic, etc.) with specific properties (e.g., conductivity, specific heat, etc.). On the scale of nanometers, by contrast, the essential granularity of matter (i.e., that it is made up of individual molecules and atoms) becomes impossible to deny. Once we consider systems small enough that we can distinguish individual molecules or atoms, thermal fluctuations become important.
If we take a large system, such as a rubber band, and stretch it, its response can accurately be

predicted from knowledge of the properties of the elastic material of which it is made. But imagine that we take a single RNA molecule, immerse it in water, and stretch it using laser tweezers. As a result of continual bombardment by the surrounding water molecules, the RNA molecule jiggles about in a way that is essentially random. Moreover, because the RNA is itself a very small system, these thermal motions are not negligible; the noise-to-signal ratio, so to speak, is substantial. Thus, the response of the RNA strand has a considerable element of randomness in it, unlike the predictable response of an ordinary rubber band. This naturally leads us to suspect that the laws of thermodynamics familiar in the context of large systems might have to be restated for nanosystems in a way that accounts for the possibility of sizeable thermal fluctuations.

Admittedly, machines the size of single molecules have not yet been constructed, but the issues I have raised are not merely speculative. The RNA-pulling scenario outlined above was carried out by an experimental group at the University of California, Berkeley, that measured the resulting fluctuations in stretching the RNA strand (Liphardt et al., 2002). Independently, researchers at the Australian National University in Canberra and Griffith University in Brisbane have used laser tweezers to drag microscopic beads through water, with the explicit aim of observing microscopic "violations" of the second law of thermodynamics caused by thermal fluctuations (Wang et al., 2002). In a sense, the research is happening in reverse order to the research that gave rise to nineteenth-century thermodynamics. We do not yet have a nanomachine (a twenty-first century analogue of a steam engine), but we have begun to play with its potential components, with the aim of teasing out the fundamental principles that will govern the device when (or if) we ultimately construct one.
The study of the thermal behavior of tiny systems is by no means a new field. A century ago, Einstein's and Smoluchowski's quantitative explanations of Brownian motion (the erratic movement of pollen particles first observed by the British botanist Robert Brown in 1827) not only helped clinch the atomic hypothesis, but also led to the fluctuation-dissipation theorem, a remarkably simple relationship between friction and thermal noise. In the context of the RNA experiments mentioned above, the fluctuation-dissipation theorem predicts that:

⟨Wdiss⟩ = σW² / 2kBT    (1)

where ⟨Wdiss⟩ is the average amount of work that is dissipated when we stretch the RNA; σW² = ⟨W²⟩ − ⟨W⟩² is the variance in work values; T is the temperature of the surrounding water; and kB = 1.38 × 10⁻²³ J/K is Boltzmann's constant. Here the words average and variance refer to many repetitions of the pulling experiment; because of thermal noise, the exact amount of work performed in stretching the RNA molecule differs from one repetition to the next, as illustrated in Figure 1. By dissipated work, we mean the amount by which the total work performed (during a single realization) exceeds the reversible work,

FIGURE 1 Distribution of work values for many repetitions (realizations) of a thermodynamic process involving a nanoscale system. The process might involve the stretching of a single RNA molecule or, perhaps, the compression of a tiny quantity of gas by a microscopic piston. The tail to the left of ΔF represents apparent violations of the second law of thermodynamics. The distribution p(W) satisfies the nonequilibrium work theorem (Eq. 6), which reduces to the fluctuation-dissipation theorem (Eq. 1) in the special case of near-equilibrium processes.

Wdiss ≡ W − Wrev    (2)

(As discussed below, this is equivalent to W − ΔF, where ΔF is the free energy difference between the initial and final states of the system.)

The fluctuation-dissipation theorem, along with most subsequent work in this field, pertains to systems near thermal equilibrium. (Eq. 1 will not be satisfied if we stretch the RNA too rapidly!) In the past decade or so, however, a number of predictions have emerged that claim validity even for systems far from equilibrium. These have been derived by various researchers using a variety of theoretical approaches and more recently have been verified experimentally. It is impossible to do justice to a decade of work in such a short space, but I will attempt to convey the flavor of this progress by illustrating one result on a toy model.

Consider that familiar workhorse of thermodynamic pedagogy, a container filled with a macroscopic quantity of gas closed off at one end by a piston. Suppose we place the container in thermal contact with a heat bath at a temperature T, and we hold the piston fixed in place, allowing the gas to relax to a state

of thermal equilibrium. Then we vary the position of the piston so as to compress the gas, from an initial volume VA to a final volume VB. In doing so we perform external work, W, on the gas. If we carry out this process reversibly (slowly and gently), then the gas stays in equilibrium at all times, and the work performed on it is equal to the net change in its free energy:¹

Wrev = FB − FA ≡ ΔF    (3)

If instead we carry out the process irreversibly (perhaps even rapidly and violently), the work, W, will satisfy the inequality

W > ΔF    (4)

with ΔF defined exactly as above. Eq. 4 is simply a statement of the second law of thermodynamics, when we have a single heat bath.

To understand what Eq. 3 and 4 look like for microscopic systems, imagine that we scale our system down drastically, to the point that our container is measured in tens of nanometers and contains only a handful of gas molecules. Again we place it in contact with a heat bath, and we consider a process whereby we move a tiny piston so as to compress the volume of the container from an initial volume VA to a final volume VB. (I do not mean to suggest that a realistic nanomachine will be composed of nanopistons; this familiar example is used only for illustration.) Suppose we repeat this process very many times, always manipulating the volume in precisely the same way. During each repetition, or realization, we perform work as the gas molecules bounce off the moving piston. However, because the precise motion of the molecules differs each time we carry out the process, so does the precise value of W. In effect, the thermal fluctuations of the gas molecules give rise to statistical fluctuations in the value of W from one realization to the next. After very many repetitions, we have a distribution of work values, p(W), where p(W)dW is the probability that the work during a single realization will fall within a narrow window dW around the value W.
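For the ideal-gas example, ΔF in Eq. 3 has a closed form. For N molecules compressed isothermally, the Helmholtz free energy satisfies F = −kBT ln Z with Z ∝ V^N, so

```latex
\Delta F \;=\; F_B - F_A \;=\; -N k_B T \,\ln\frac{V_B}{V_A}
\;=\; N k_B T \,\ln\frac{V_A}{V_B} \;>\; 0 \qquad (V_B < V_A),
```

which is precisely the reversible isothermal work, W_rev = −∫ P dV with P = NkBT/V, performed on the gas during a slow compression, as Eq. 3 requires.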
What is the relationship between this distribution and the free-energy difference ΔF = FB − FA between the equilibrium states corresponding to the initial and final volumes? Based on our knowledge of macroscopic thermodynamics (Eq. 3 and 4), we might guess that the average work value is no less than ΔF, i.e.,

⟨W⟩ ≡ ∫ dW p(W) W ≥ ΔF    (5)

¹Recall that the free energy associated with a state of thermal equilibrium, at temperature T, is given by F = E − TS, where E and S are, respectively, the internal energy and entropy of that state. In Eq. 3, FA and FB are associated with the equilibrium states corresponding to container volumes VA and VB.

with the equality holding only if the process is carried out reversibly. However, this inequality does not rule out the possibility that p(W) has a small "tail" extending into the region W < ΔF; see Figure 1. If we were to observe a value of W less than ΔF for a process carried out with a macroscopic piston, this would be an extraordinary event: a true violation of the second law of thermodynamics. But in a system with relatively few degrees of freedom, this would not be such a shocking occurrence. We will refer to such an event (W < ΔF in a microscopic system) as an apparent violation of the second law.

Eq. 5 is correct, but we can do better. Instead of considering the average of W over many repetitions of our thermodynamic process, let us consider the average of exp(−W/kBT) over these realizations. It turns out that this average obeys a simple and somewhat surprising equality:

∫ dW p(W) exp(−W/kBT) = exp(−ΔF/kBT), or more succinctly ⟨e^(−W/kBT)⟩ = e^(−ΔF/kBT)    (6)

This is the nonequilibrium work theorem, which remains valid no matter how gently or violently we force the piston in changing the volume from VA to VB. This result has been derived using various theoretical approaches (Crooks, 1998, 1999; Hummer and Szabo, 2001; Jarzynski, 1997a,b) and confirmed by the Berkeley RNA-pulling experiment discussed above (Liphardt et al., 2002).

Eq. 5 and 6 both make predictions about the probability distribution of work values, p(W), but the latter is a considerably stronger statement than the former. For instance, Eq. 6 immediately implies that p(W) cannot be located entirely in the region W > ΔF. (If that were the case, then e^(−W/kBT) would be less than e^(−ΔF/kBT) for every realization of the process, and there would be no way for Eq. 6 to be satisfied.) Thus, if the typical value of W is greater than ΔF, as will be the case for an irreversible process, then there must also be some realizations for which W < ΔF.

Eq. 6 not only tells us that p(W) must have a tail extending into the region W < ΔF, but it also establishes a tight bound on such apparent violations of the second law, namely,

∫_{−∞}^{ΔF − nkBT} dW p(W) ≤ e^(−n)    (7)

for any n > 0. This inequality can be stated in words: the probability that we will observe a work value that falls below ΔF by at least n units of kBT is bounded from above by e^(−n). This means that substantial violations of the second law (n >> 1) are extremely rare, as expected. (The derivation of Eq. 7 from Eq. 6 is left as an exercise for the reader.)

If perchance we change the volume of the container relatively slowly so that the gas remains close to thermal equilibrium at all times, then p(W) ought
Eq.6 not only tells us that p(W) must have a tail extending into the region W < AF, but it also establishes a tight bound on such apparent violations of the second law, namely, | dW p(W)<e _00 (7) for any n > 0. This inequality can be stated in words: the probability that we will observe a work value that falls below AF by at least n units of kBT is bounded from above by e-n. This means that substantial violations of the second law (n >> 1) are extremely rare, as expected. (The derivation of Eq.7 from Eq.6 is left as an exercise for the reader.) If perchance we change the volume of the container relatively slowly so that the gas remains close to thermal equilibrium at all times, then p(W) ought

to be a normal (Gaussian) distribution. (I will not try to justify this expectation here beyond saying that it follows from the Central Limit Theorem.) It takes a few lines of algebra to show that if p(W) is a Gaussian distribution that satisfies Eq. 6, then its mean and variance must be related by Eq. 1. Thus, although the nonequilibrium work theorem (Eq. 6) is valid regardless of whether we move the piston slowly or quickly, in the former (near-equilibrium) case it reduces to the well-known fluctuation-dissipation theorem (Eq. 1).

Eq. 6 is closely related to another result, which can be stated as follows. Suppose we perform many repetitions of the process described above, in which we change the volume of our tiny container of gas from VA to VB, according to some schedule for moving the piston. Then suppose we perform many repetitions of the reverse process, in which we move the piston according to the reverse schedule, thus changing the container volume from VB to VA. For both sets of experiments, we observe the work performed during each realization of the process, and in the end we construct the associated distributions of work values, pA→B(W) and pB→A(W). Then these two distributions satisfy

pA→B(W) = e^((W − ΔF)/kBT) pB→A(−W)    (8)

where ΔF = FB − FA as before (Crooks, 1998, 1999). This is remarkable because the response of the gas during one process, VA → VB, is physically quite different from the response during the reverse process, VB → VA. For example, when we push the piston into the gas, a modest shock wave (a region of high density) forms ahead of the piston. For the reverse case, a region of low density trails the piston as it is pulled away from the gas. Nevertheless, the two distributions are related by a simple equality, Eq. 8 above. Eq. 6 and 8 are two examples of recent results pertaining to microscopic systems away from thermal equilibrium.
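The few lines of algebra mentioned above (the Gaussian case of Eq. 6) are quick to record. For a Gaussian p(W) with mean ⟨W⟩ and variance σW², the Gaussian integral gives

```latex
\left\langle e^{-W/k_B T}\right\rangle
= \exp\!\left(-\frac{\langle W\rangle}{k_B T} + \frac{\sigma_W^2}{2 k_B^2 T^2}\right)
= e^{-\Delta F/k_B T}
\;\;\Longrightarrow\;\;
\langle W_{\mathrm{diss}}\rangle
= \langle W\rangle - \Delta F
= \frac{\sigma_W^2}{2 k_B T},
```

which is exactly the fluctuation-dissipation relation of Eq. 1.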
They make very specific predictions about the fluctuations of an observed quantity, in this case the work we perform on our system. The fluctuation theorem is a collective term for another set of such results (Evans and Searles, 2002), originally derived in the context of systems in nonequilibrium steady states and recently confirmed by the Australian bead-dragging experiment mentioned earlier (Wang et al., 2002). All of these predictions have potentially important design implications for nanomachines, for which thermal fluctuations are bound to play an important role. For instance, if we design a microscopic motor fueled by the transfer of molecules from high to low chemical potential, then it is of interest to estimate how often random fluctuations might cause this motor to run backward, by moving a molecule or two from low to high chemical potential, in flagrant (if brief) violation of the second law. Eq. 7 addresses this sort of issue. I don't want to overstate the case here, but it seems reasonable to assume that the better we understand the laws of thermodynamics at the nanoscale, the more control we will have over such systems.
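The predictions above are easy to test numerically on a toy system even simpler than the gas and piston: an overdamped Brownian particle dragged through a fluid by a harmonic trap, for which ΔF = 0 (the trap is merely translated). This is a sketch in reduced units (kBT = 1, friction = 1, trap stiffness k = 1); the discrete-time work definition below (energy change of moving the trap at fixed particle position) is one standard choice, not the only one.

```python
import math
import random

random.seed(0)

k = 1.0        # trap stiffness
dt = 0.01      # time step
steps = 100    # total drag time tau = 1
dlam = 0.01    # trap displacement per step (total displacement L = 1)
ntraj = 10000  # number of realizations

works = []
for _ in range(ntraj):
    # draw the initial position from the equilibrium (Gaussian) distribution
    x = random.gauss(0.0, 1.0 / math.sqrt(k))
    lam, W = 0.0, 0.0
    for _ in range(steps):
        # move the trap: work = energy change at fixed particle position
        W += 0.5 * k * ((x - lam - dlam) ** 2 - (x - lam) ** 2)
        lam += dlam
        # relax: Euler-Maruyama step of the overdamped Langevin equation
        x += -k * (x - lam) * dt + math.sqrt(2 * dt) * random.gauss(0.0, 1.0)
    works.append(W)

mean_W = sum(works) / ntraj
jarzynski = sum(math.exp(-w) for w in works) / ntraj  # Eq. 6 predicts exp(-dF) = 1
violations = sum(1 for w in works if w < 0) / ntraj   # the tail in Figure 1

print(f"<W>       = {mean_W:.3f}   (> dF = 0, consistent with Eq. 5)")
print(f"<exp(-W)> = {jarzynski:.3f}   (Eq. 6 predicts 1)")
print(f"P(W < dF) = {violations:.3f}   (apparent second-law violations)")
```

A sizeable fraction of realizations lands below ΔF, yet the exponential average sits at 1 and the deep tail obeys the Eq. 7 bound, illustrating how the second law survives on average while individual nanoscale realizations "violate" it.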

CONCLUSIONS

When considering the thermodynamics of systems at the scale of nanometers, fluctuations are important. If we carefully repeat an experiment many times, we will obtain noticeably different results from one repetition to the next. The take-home message of this brief talk is that although these fluctuations originate in thermal randomness, they obey some surprisingly stringent and general laws (such as Eq. 6 and 8), which are valid even far from equilibrium. Moreover, these laws are fundamentally microscopic; they cannot be guessed from our knowledge of macroscopic thermodynamics. However, the story is not yet complete. So far, we have a jumble of related predictions, as well as experimental validation, but nothing like the beautifully coherent structure of macroscopic thermodynamics. Perhaps as nanotechnology develops and the need for a better understanding of such phenomena becomes more pressing, a more complete and satisfying picture will emerge.

REFERENCES

Crooks, G.E. 1998. Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. Journal of Statistical Physics 90(5/6): 1481-1487.
Crooks, G.E. 1999. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Physical Review E 60(3): 2721-2726.
Evans, D.J., and D.J. Searles. 2002. The fluctuation theorem. Advances in Physics 51(7): 1529-1585, and many references therein.
Hummer, G., and A. Szabo. 2001. From the cover: free energy reconstruction from nonequilibrium single-molecule pulling experiments. Proceedings of the National Academy of Sciences 98(7): 3658-3661.
Jarzynski, C. 1997a. Nonequilibrium equality for free energy differences. Physical Review Letters 78(14): 2690-2693.
Jarzynski, C. 1997b. Equilibrium free-energy differences from nonequilibrium measurements: a master-equation approach. Physical Review E 56(5): 5018-5035.
Liphardt, J., S.
Dumont, S.B. Smith, I. Tinoco, and C. Bustamante. 2002. Equilibrium information from nonequilibrium measurements in an experimental test of Jarzynski's equality. Science 296(5574): 1832-1835.
Wang, G.M., E.M. Sevick, E. Mittag, D.J. Searles, and D.J. Evans. 2002. Experimental demonstration of violations of the second law of thermodynamics for small systems and short time scales. Physical Review Letters 89(5): 050601.

