This chapter discusses in more detail the recent accomplishments and directions that are expected to be taken in nuclear physics in upcoming years. Where the discussion in Chapter 1 focused on four overarching questions being addressed by the field, this chapter is separated into more traditional subfields of nuclear physics—(1) nuclear structure, whose goal is to build a coherent framework for explaining all properties of nuclei and nuclear matter and how they interact; (2) nuclear astrophysics, which explores those events and objects in the universe shaped by nuclear reactions; (3) quark-gluon plasma, which examines the state of “melted” nuclei and with that knowledge seeks to shed light on the beginnings of the universe and the nature of those quarks and gluons that are the constituent particles of nuclei; (4) hadron structure, which explores the remarkable characteristics of the strong force and the various mechanisms by which the quarks and gluons interact and result in the properties of the protons and neutrons that make up nuclei; and (5) fundamental symmetries, those areas on the edge of nuclear physics where the understandings and tools of nuclear physicists are being used to unravel limitations of the Standard Model and to provide some of the understandings upon which a new, more comprehensive Standard Model will be built.
The goal of nuclear structure research is to build a coherent framework that explains all the properties of nuclei, nuclear matter, and nuclear reactions. While extremely ambitious, this goal is no longer a dream. With the advent of new generations of exotic beam facilities, which will greatly expand the variety and intensity of rare isotopes available, new theoretical concepts, and the extreme-scale computing platforms that enable cutting-edge calculations of nuclear properties, nuclear structure physics is poised at the threshold of its most dramatic expansion of opportunities in decades.
The overarching questions guiding nuclear structure research have been expressed as two general and complementary perspectives: a microscopic view focusing on the motion of individual nucleons and their mutual interactions, and a mesoscopic one that focuses on a highly organized complex system exhibiting special symmetries, regularities, and collective behavior. Through those two perspectives, research in nuclear structure in the next decade will seek answers to a number of open questions:
- What are the limits of nuclear existence and how do nuclei at those limits live and die?
- What do regular patterns in the behavior of nuclei divulge about the nature of nuclear forces and the mechanism of nuclear binding?
- What is the nature of extended nucleonic matter?
- How can nuclear structure and reactions be described in a unified way?
New facilities and tools will help to explore the vast nuclear landscape and identify the missing ingredients in our understanding of the nucleus. A huge number of new nuclei are now available—proton rich, neutron rich, the heaviest elements, and the long chains of isotopes for many elements. Together, they comprise a vast pool from which key isotopes—designer nuclei—can be chosen because they isolate or amplify specific physics or are important for applications.
At the same time, research with intense beams of stable nuclei continues to produce innovative science, and, in the long term, discoveries at exotic beam facilities will raise new questions whose answers are accessible with stable nuclei.
Examples of the current program that offer a glimpse into future areas of inquiry are the investigation of new forms of nuclear matter such as neutron skins occurring on the surfaces of nuclei having large excesses of neutrons over protons, the ability to fabricate the superheavy elements that are predicted to exhibit unusual stability in spite of huge electrostatic repulsion, and structural studies in exotic isotopes whose properties defy current textbook paradigms.
Hand in hand with experimental developments, a qualitative change is taking
place in theoretical nuclear structure physics. With the development of new concepts, the exploitation of symbiotic collaborations with scientists in diverse fields, and advances in computing technology and numerical algorithms, theorists are progressing toward understanding the nucleus in a comprehensive and unified way.
Shell Structure: A Moving Target
The concept of nucleons moving in orbits within the nucleus under the influence of a common force gives rise to the ideas of shell structure and the resulting magic numbers. As with electron orbits in an atom, nucleonic orbits bunch together in energy, forming shells, and nuclei having filled nucleonic shells (nuclear “noble gases”) are exceptionally well bound. The numbers of nucleons needed to fill each successive shell are called the magic numbers: The traditional ones are 2, 8, 20, 28, 50, 82, and 126 (some of these are exemplified in Figure 2.1). Thus a nucleus such as lead-208, with 82 protons and 126 neutrons, is doubly “magic.” The concept of magic numbers in turn introduces the idea of valence nucleons—those beyond a magic number. Thus, in considering the structure of a nucleus like lead-210, one can, to a good approximation, consider only the last two valence neutrons rather than all 210 nucleons. When proposed in the late 1940s, this was a revolutionary concept: How could individual nucleons, which fill most of the nuclear volume, orbit so freely without generating an absolute chaos of collisions? The Pauli exclusion principle is now understood to play a key role here, and the resulting model of nucleonic orbits has become the template used for over half a century to view nuclear structure.
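The shell-model bookkeeping described above can be sketched in a few lines of code. This is purely illustrative and not part of the report; the helper names are invented for the example.

```python
# Illustrative sketch, not from the report: the shell-model
# bookkeeping of magic numbers, doubly magic nuclei, and valence
# nucleons.  Helper names are invented for this example.
MAGIC = [2, 8, 20, 28, 50, 82, 126]

def is_magic(n):
    return n in MAGIC

def is_doubly_magic(Z, N):
    return is_magic(Z) and is_magic(N)

def valence(n):
    """Nucleons beyond the last filled (magic) shell."""
    filled = max((m for m in MAGIC if m <= n), default=0)
    return n - filled

# Lead-208 (82 protons, 126 neutrons) and tin-132 (50 protons,
# 82 neutrons) are doubly magic:
assert is_doubly_magic(82, 126) and is_doubly_magic(50, 82)
# Lead-210 has just two valence neutrons beyond the N = 126 shell:
assert valence(128) == 2
```

The counting is exactly the simplification the text describes: for lead-210, only the two nucleons beyond the nearest closed shell need to be considered.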
One experimental hallmark of nuclear structure is the behavior of the first excited state with angular momentum 2 and positive parity in even-even nuclei. This state, usually the lowest energy excitation in such nuclei, is a bellwether of structure. Its excitation energy takes on high values at magic numbers and low values as the number of valence nucleons increases and collective behavior emerges. The picture of nuclear shells leads to the beautiful regularities and simple repeated patterns, illustrated in Figure 1.2 and seen here in the energies of the 2+ states shown at the top of Figure 2.2. The concept of magic numbers was forged from data based on stable or near-stable nuclei. Recently, however, the traditional magic numbers underwent major revisions as previously unavailable species became accessible. The shell structure known from stable nuclei is no longer viewed as an immutable construct but instead is seen as an evolving moving target. Indeed the elucidation of changing shell structure is one of the triumphs of recent experiments in nuclear structure at exotic beam facilities worldwide. For example, experiments
at Michigan State University (MSU) in the United States and at the Gesellschaft für Schwerionenforschung (GSI) have shown that in the very neutron-rich isotope oxygen-24, with 8 protons and twice as many neutrons, N = 16 is, in fact, a new magic number.
One of the most interesting regions exhibiting the fragility of magic numbers is nuclei with 12 to 20 protons and 18 to 30 neutrons. The experimental evidence is exemplified in the lower portion of Figure 2.2 by the energies of the first excited 2+ states in this region. The figure shows the disappearance of neutron number N = 20 as a magic number in magnesium while it persists for neighboring elements.
Similarly, N = 28 loses its magic character for silicon, sulfur, and argon, while calcium, which is also magic in protons, retains its doubly magic character at N = 28.
There are at least three factors leading to such changes in shell structure: changes in how nucleons interact with each other as the proton-neutron asymmetry varies, the influence of scattering and decay states near the isotopic limits of nuclear existence (the “drip lines”), and the increasing role of many-body effects in weakly bound nuclei where correlations determine the mere existence of the nucleus. This new perspective on shell structure affects many facets of nuclear structure, from the existence of short-lived light nuclei, to the emergence of collectivity, to the stability of the superheavy elements.
Recent studies of calcium, nickel, and tin isotopes using techniques such as Coulomb excitation and light-ion single-nucleon transfer reactions, both near traditional magic numbers and along extended isotopic chains, are beginning to answer questions about effective internucleon forces in the presence of large neutron excess, the relevance of the detailed shell-model template in the presence of weak binding, and the nature of nuclear collective motion. Excellent tests of the nuclear shell model have been offered by recent studies of the tin (Sn) isotopes. The element tin has a magic number (50) of protons, and its short-lived isotopes tin-100 and tin-132, with 50 and 82 neutrons, respectively, are expected to be rare examples of new doubly magic heavy nuclei. Unique data in the tin-132 region (see Figure 2.3) show that tin-132 indeed behaves as a good doubly magic nucleus. Other experiments providing data around tin-100, in particular the first structural information on tin-101, have led to theoretical surprises. Further tests of shell structure and interactions in the heaviest elements are discussed below.
It is expected that the shell model will undergo sensitive tests in the region of superheavy nuclei, whose very existence hinges on a dynamical competition between short-range nuclear attraction and huge long-range Coulomb repulsion. Interestingly, a similar interplay takes place in the low-density, neutron-rich matter found in the crusts of neutron stars, where “Coulomb frustration” produces rich and complex collective structures, discussed later in this chapter in “Nuclear Astrophysics.” Figure 2.4 shows the calculated shell energy—that is, the quantum enhancement in nuclear binding due to the presence of nucleonic shells. The nuclei from the tin region are excellent examples of the shell-model paradigm: the magic nuclei with Z = 50, N = 50, and N = 82 have the largest shell energies, and the associated closed shells provide exceptional stability. In superheavy nuclei, the density of single-particle energy levels is fairly large, so even small energy shifts can rearrange the level ordering; as a result, the regions of enhanced shell stabilization near N = 184 are expected to be fairly broad. That is, the notion of magic numbers and the energy gaps associated with them becomes fluid there.
Another dimension in studies of shells in nuclei has been opened by precision studies, at the Thomas Jefferson National Accelerator Facility (JLAB) and at the
Japanese High Energy Accelerator Research Organization (KEK), of hypernuclei—nuclei that contain at least one hyperon, a strange baryon, in addition to nucleons. By adding a hyperon, nuclear physicists can explore inner regions of nuclei that are impossible to study with protons and neutrons, which must obey the constraints imposed by the Pauli principle. The experimental work goes hand in hand with advanced theoretical calculations of hyperon-nucleon and hyperon-hyperon interactions, with the ultimate goal being a comprehensive understanding of all baryon-baryon interactions.
Exploring and Understanding the Limits of Nuclear Existence
An important challenge is to delineate the proton and neutron drip lines—the limits of proton and neutron numbers at which nuclei are no longer bound by
the strong force and nuclear existence ends—as far into the nuclear chart as possible (see Figure 2.3 [top]). For example, experiments at MSU have produced the heaviest magnesium and aluminum isotopes accessible to date and have shown that magnesium-40, aluminum-42, and possibly aluminum-43 exist. Nuclei near the drip lines are very weakly bound quantum systems, often with extremely large spatial sizes. In recent years, experiments at Argonne National Laboratory (ANL), TRIUMF, Grand Accélérateur National d’Ions Lourds (GANIL), GSI, the European Organization for Nuclear Research (CERN), and Rikagaku Kenkyūjo (RIKEN) using high-precision laser spectroscopy have determined the charge radii of the halo nuclei helium-6, helium-8, beryllium-11, and lithium-11 with an accuracy of 1 percent through the determination of isotope shifts of atomic electronic levels. With the advanced-generation Facility for Rare Isotope Beams (FRIB) it should be possible to extend such studies and to delineate most of the drip line up to mass 100 using the high-power beams available and the highly efficient and selective FRIB fragment separators.
Drip line nuclei often exhibit exotic decay modes. An example is the extremely proton-rich nucleus iron-45 that decays by beta decay or by ejecting two protons from its ground state. Another example of exotic decay modes, proton-rich nuclei exhibiting “superallowed” beta decays, is discussed in “Fundamental Symmetries,” later in this chapter. Moving toward the drip lines, the coupling between different nuclear states, via a continuum of unbound states, becomes systematically more important, eventually playing a dominant role in determining structure. Such systems where both bound and unbound states exist and interact are called “open” quantum systems.
Many aspects of nuclei at the limits of the nuclear landscape are generic and are currently explored in other open systems: molecules in strong external fields, quantum dots and wires and other solid-state microdevices, crystals in laser fields, and microwave cavities. Radioactive nuclear beam experimentation will answer questions pertaining to all open quantum systems: What are their properties around the lowest energies, where the reactions become energetically allowed (reaction thresholds)? What is the origin of states in nuclei, which resemble groupings of nucleons into well-defined clusters, especially those of astrophysical importance? What should be the most important steps in developing the theory that will treat nuclear structure and reactions consistently?
The Heaviest Elements
What are the heaviest nuclei that can exist? Is there an island of very long-lived nuclei in the N-Z plane? What are the chemical properties of superheavy atoms? These questions present challenges to both experiment and theory. As discussed earlier, the repulsive electrostatic Coulomb force between protons grows so much
in those nuclei with large proton number that they would not be bound except for subtle quantum effects. Theory predicts that stability will increase with the addition of neutrons in these systems as one approaches N = 184 (see Figure 2.5), but there is no consensus about the precise location of the projected island of long-lived superheavy elements and their lifetimes (some are predicted to have lifetimes as long as 10⁵ to 10⁷ years).
By using actinide targets and rare stable beams, such as calcium-48, elements up to Z = 118 have been produced and observed. The discovery of a nucleus with Z = 117, with a target of berkelium-249, is a case in point as well as an excellent example of international cooperation in nuclear physics (Box 2.1). Not only did this work discover a new element but new information obtained on the half lives of several nuclei in its decay path provided experimental support for the existence of the long-predicted island of stability in superheavy nuclei. Further incremental progress approaching Z = 118 and beyond is possible, but it requires new actinide targets beyond berkelium, and intense beams of rare stable isotopes such as titanium-50. However, there is a range of options for synthesizing heavy elements with exotic beams. By using neutron-rich radioactive targets and beams a highly excited system can be formed, which would decay into the superheavy ground state via evaporation of the excess neutrons. An area of related importance is the further study of the spectroscopy of the heaviest nuclei possible using reaccelerated beams and large acceptance spectrometers, looking at alpha-decay and gamma-ray spectroscopy up to at least Z = 106.
Neutron-rich matter is at the heart of many fascinating questions in nuclear physics and astrophysics: What are the phases and equations of state of nuclear and neutron matter? What are the properties of short-lived neutron-rich nuclei through which the chemical elements around us were created? What is the structure of neutron stars, and what determines their electromagnetic, neutrino, and gravitational-wave radiations? To explain the nature of neutron-rich matter across a range of densities, an interdisciplinary approach is essential in order to integrate laboratory experiments with astrophysical theory, nuclear theory, condensed matter theory, atomic physics, computational science, and electromagnetic and gravitational-wave astronomy. Figure 2.6 summarizes such linkages in this interdisciplinary endeavor.
In heavy neutron-rich nuclei, the excess of neutrons predominantly collects at the nuclear surface creating a skin, a region of weakly bound neutron matter. The presence of a skin can lead to curious collective excitations, for example, “pygmy resonances,” characterized by the motion of the partially decoupled neutron skin against the remainder of the nucleus. Such modes could alter neutron capture cross sections important to r-process nucleosynthesis (discussed further in “Nuclear
Astrophysics,” later in this chapter). One of the main science drivers of FRIB is to study a range of nuclei with neutron skins several times thicker than is currently possible. Studies of high-frequency nuclear oscillations (giant resonances) and intermediate-energy nuclear reactions will help pin down the equation of state of nuclear matter.
Another insight is being provided by electron scattering experiments. The Lead Radius Experiment (PREX) at JLAB uses a faint signal arising from parity violation by the weak interaction to measure the radius of the neutron distribution in lead-208. This measurement should have broad implications for nuclear structure, astrophysics, and low-energy tests of the Standard Model. Precise data from PREX would provide constraints on the neutron pressure in neutron stars at subnuclear densities. Important insights also come from experiments with cold Fermi atoms, which can be tuned to probe strongly interacting fluids that are very similar to the low-density neutron matter found in the crusts of neutron stars (see Box 2.2).
Rather than tackling the nuclear problem from the femtoscopic perspective of nucleon motions and interactions, one can focus on a complementary view of the atomic nucleus as a mesoscopic system characterized by shapes, oscillations, and rotations and described by symmetries applicable to the nucleus as a whole. In this way, properties and regularities, which might not be explicit in a description in terms of individual nucleons, are highlighted, providing insights that can inform microscopic understanding. Such a perspective focuses on identifying what nuclei do and what that tells us about their structure, while the femtoscopic approach is essential to understanding why they do it.
The mesoscopic approach is motivated by the recognition of, and search for, regularities and simple patterns in nuclei that signal the appearance of many-body symmetries and associated emergent collective behavior. Despite the fact that the number of protons and neutrons in heavy nuclei is rather small, the emergent collectivity they show is similar to other complex systems exhibiting self-organization, such as those studied by condensed matter and atomic physicists, quantum chemists, and materials scientists. While few if any nuclei will exhibit idealized symmetries exactly, such a conceptual framework provides important benchmarks. In this perspective, an important goal is to determine the experimental signatures that spotlight these patterns and the interactions responsible for them. Already, research with exotic nuclei is showing the breakdown of traditional patterns (see discussion of Figure 2.2) and new ways of seeing the emergence of collective phenomena in both light and heavy nuclei.
U.S. and Russian Scientists Collaborate to
Create a New Chemical Element, 117
A team of U.S. and Russian physicists has created a new element with atomic number Z = 117, filling in a gap in chemistry’s periodic table. The new superheavy element, born in a Russian accelerator laboratory at the Joint Institute for Nuclear Research (JINR) in Dubna, required coordinated collaborative efforts between four institutions in the United States and two in Russia and more than 2 years to achieve, highlighting what international cooperation can accomplish. The identification of element 117 among the products of the berkelium-249 + calcium-48 reaction occurred in late 2009, and the results were published in April 2010.1 Production of the berkelium-249 target material, with a short half-life of T½ = 320 days, required an intense neutron irradiation at the High Flux Isotope Reactor (HFIR) of the Oak Ridge National Laboratory (ORNL) and chemical separation from other reactor-produced products including californium-252, again at ORNL, followed by target fabrication in Dimitrovgrad, Russia, and six months of accelerator bombardment with an intense calcium-48 beam at Dubna, Russia—a continual intercontinental race against radioactive decay. Analysis of the experimental data was performed independently at Dubna and Lawrence Livermore National Laboratory, providing nearly round-the-clock data analysis by virtue of the 11- to 12-hour time difference between Russia and California. Six atoms of element 117—five of ²⁹³117 and one of ²⁹⁴117—were observed, and 11 new nuclides were discovered in the decay products of those two new Z = 117 isotopes (Figure 2.1.1). The measured half-lives of the new superheavy nuclei were observed to increase with larger neutron number. This work represents an experimental verification of the existence of the predicted island of enhanced stability. Scientists and students at Vanderbilt University and the University of Nevada also contributed to this successful experiment.
1 Yu. Ts. Oganessian, F.Sh. Abdullin, P.D. Bailey, et al. 2010. Synthesis of a new element with atomic number Z = 117. Physical Review Letters 104: 142502.
Nuclear Masses and Radii
The binding of nucleons in the nucleus contains integral information on the interactions that each nucleon is subjected to in the nuclear environment. Differences in nuclear masses and nuclear radii give information on the binding of individual nucleons, on the onset of structural changes, and on specific interactions. Examples of recent measurements of charge radii in light halo nuclei were discussed above. With exotic beams and devices such as Penning and atomic traps, storage rings, and laser spectroscopy the masses and radii of long sequences of exotic isotopes are becoming available, extending our knowledge of how nuclear
structure evolves with nucleon number. Figure 2.7 (left) shows the sensitivity of separation energies to nuclear structure. The inset displays the energy required to remove the last two neutrons from the nucleus. These energies have sharp drops after magic numbers but approximately linear behavior in between. Subtracting an average linear behavior therefore magnifies structural changes as seen in the color-coded contours in the two-dimensional plot in the proton-neutron plane.
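The quantity in the inset, the two-neutron separation energy, is S₂ₙ(Z, N) = B(Z, N) − B(Z, N − 2). A hedged sketch of the smooth baseline can be built from the semi-empirical (liquid-drop) mass formula; the coefficients below are typical textbook values, not numbers from this report. Because the liquid drop contains no shell effects, it reproduces only the approximately linear trend between magic numbers, and the sharp drops discussed above appear in measured masses as deviations from such a baseline.

```python
import math

# Illustrative sketch, not from the report: two-neutron separation
# energies from the semi-empirical (liquid-drop) mass formula, using
# typical textbook coefficients in MeV.  The liquid drop contains no
# shell effects, so it gives only the smooth, nearly linear baseline.
aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, N):
    A = Z + N
    if Z % 2 == 0 and N % 2 == 0:       # even-even: extra pairing binding
        delta = aP / math.sqrt(A)
    elif Z % 2 == 1 and N % 2 == 1:     # odd-odd: pairing penalty
        delta = -aP / math.sqrt(A)
    else:
        delta = 0.0
    return (aV * A - aS * A ** (2 / 3)
            - aC * Z * (Z - 1) / A ** (1 / 3)
            - aA * (A - 2 * Z) ** 2 / A + delta)

def s2n(Z, N):
    """Two-neutron separation energy, S_2n = B(Z, N) - B(Z, N - 2)."""
    return binding_energy(Z, N) - binding_energy(Z, N - 2)

# Baseline along the tin (Z = 50) isotopic chain: a slow, nearly
# linear fall-off with increasing neutron number.
baseline = [round(s2n(50, N), 2) for N in range(70, 86, 2)]
print(baseline)
```

Subtracting such a smooth reference from measured separation energies is what magnifies the structural changes seen in the color-coded contours of Figure 2.7.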
Changes in nuclear properties as a function of nucleon number can signal quantum phase transitions between regions characterized by different symmetries. Although the behavior of such transitions is muted in finite nuclear systems, experimental studies have provided evidence for their existence and tested simple theoretical schemes for nuclei at the critical points. Theoretical studies that model nuclear shape variations in the limit of large valence nucleon number have shown how the phase-transitional character of large systems evolves toward the muted remnant of this behavior seen in finite nuclei; they have also helped to identify empirical signatures of first- and second-order phase transitions that have been used to classify the phase transitions in, for example, the A ~ 150 and A ~ 134 mass regions.
The extra binding gained in the shape transition region near N = 90 is evident in the brown shaded area in Figure 2.7 (left). Representative spectroscopic data showing the ratio R4/2 of energies of the lowest 4+ and 2+ excitations are given for the same A ~ 150 region in Figure 2.7 (right). The phase transition is signaled by the concave-to-convex change of pattern between N = 88 and N = 90 associated with a breakdown of a subshell gap at Z = 64.
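The benchmark values behind R4/2 are simple: a rigid rotor has E(J) proportional to J(J + 1), giving R4/2 = E(4+)/E(2+) = 20/6, about 3.33, while a harmonic vibrator has energies proportional to phonon number (one phonon for the 2+ state, two for the 4+ state), giving R4/2 = 2. A jump from near 2 toward 3.33 across N = 88 to N = 90 is the transition signature. A minimal sketch of these two limits, illustrative only:

```python
# Illustrative sketch, not from the report: the two textbook limits of
# the R4/2 observable.  Function names are invented for this example.
def rotor_energy(J, kappa=1.0):
    """Rigid-rotor level energy, E(J) = kappa * J * (J + 1)."""
    return kappa * J * (J + 1)

def vibrator_energy(n_phonons, hbar_omega=1.0):
    """Harmonic-vibrator energy: proportional to the phonon count
    (one phonon for the 2+ state, two for the 4+ state)."""
    return hbar_omega * n_phonons

def r42(e4, e2):
    return e4 / e2

print(round(r42(rotor_energy(4), rotor_energy(2)), 2))        # rotor limit
print(round(r42(vibrator_energy(2), vibrator_energy(1)), 2))  # vibrator limit
```

Measured R4/2 values between these limits quantify where a given nucleus sits along the vibrator-to-rotor evolution.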
Probing Nuclear Shapes by Rapid Rotation
Gamma-ray spectroscopy is a basic tool for studying nuclear structure, shapes, and their changes—both from the energies and decay paths of excited nuclear states and by measuring nuclear level lifetimes from Doppler effects. Recently, a great diversity of phenomena has been discovered as increasingly sensitive instrumentation reveals unexpected behavior in our quest to observe higher excitation energies and angular momentum states in nuclei.
Figure 2.8 illustrates this progress for the rare earth nucleus erbium-158. The future of gamma-ray spectroscopy is brighter than ever with the development of the next generation of detector systems comprising a highly segmented shell of germanium detectors covering a complete sphere around a source using the new technology known as “gamma-ray tracking.” Such systems will have a sensitivity or resolving power about 100 times better than present-day systems. Since gamma-ray spectroscopy is one of the most powerful experimental approaches to unraveling
Intersections of Dense Nuclear Physics with
Cold Atoms and Neutron Stars
Nuclear systems—from atomic nuclei to the matter in neutron stars to the matter formed in ultrarelativistic heavy ion collisions—are complex many-particle systems that exhibit a great range of collective behavior such as superfluidity. This facet of nuclear systems, shared with matter studied by condensed matter physicists, atomic physicists, quantum chemists, and materials scientists, has opened up splendid opportunities for productive and valuable cross-fertilization among these fields. Of growing importance is the intersection of nuclear physics and ultracold atomic gases.
Atomic gas clouds allow physicists to control experimental conditions such as particle densities and interaction strengths, a control intrinsically unavailable to nuclear physicists. Such control has inspired nuclear physicists to develop more unified pictures of nuclear matter, beyond the constraints of laboratory nuclear systems, and to see commonalities with atomic systems. The experimental flexibility of cold atom systems makes them ideal to explore exotic phases and quantum dynamics in these strongly paired Fermi systems.
The quark-gluon plasmas in ultrarelativistic heavy ion collisions are the hottest materials one can produce in the laboratory, with temperatures of trillions of degrees. On the other hand, clouds of ultracold trapped atoms are the coldest systems in the universe, reaching temperatures as low as one billionth of a degree above absolute zero.1 Nonetheless, despite this difference in temperatures and energies, the two systems share significant physical connections, enabling cross-fertilization between high-energy nuclear physics and ultracold atomic physics. As discussed in “Exploring Quark-Gluon Plasma,” later in this chapter, both systems, when strongly interacting, have the smallest viscosities (compared with their entropy, or degree of disorder) of any system in the universe. The transition observed in strongly interacting cold fermionic atom clouds from paired superfluid states, analogous to superconducting electrons in a metal, to Bose-Einstein condensate (BEC) states of molecules consisting of two fermionic atoms, captures certain aspects of the transition from a quark-gluon plasma to ordinary hadronic matter made of neutrons, protons, and mesons.
Superfluid pairing in low-density strongly interacting fermionic atomic systems is very similar to that pairing in low-density neutron matter in neutron stars. Figure 2.2.1 compares the predicted energy of a low-density cloud of cold superfluid neutrons with that of cold atomic fermions as the density increases, and shows how the two systems behave in common. Although the energy scales are vastly different, the attractive interactions between fermions in both systems produce extremely large superfluid pairing gaps, on the order of one-third to one-half the Fermi energy, and in this sense these systems are the highest temperature superfluids known. Experiments in cold atoms (illustrated in the inset of Figure 2.2.1) can measure the energies and superfluid pairing gaps of cold fermions from weak to strong coupling, and provide sensitive tests of theories used to compute the properties of matter in the exterior of neutron stars, large neutron-rich nuclei, and quark matter. These properties are key to understanding the limits of stability and pairing in neutron-rich nuclei and the cooling of neutron stars.
One can also study analogues of nuclear and quark-gluon plasma states with cold atoms. Simple examples include the binding of fermionic atoms in three distinct (hyperfine) states, as in lithium-6, analogous to the three colors of quarks, into three-atom molecules, the analogues of nucleons; or the binding of bosonic atoms with fermionic atoms into molecules. One can also exploit similarities of the tensor interaction between nucleons to the magnetic interaction between atoms with large magnetic dipole moments, e.g., dysprosium, to make analogues of the pion-condensed states proposed in dense neutron star matter. Strongly interacting ultracold atomic plasmas also present unusual opportunities to study the dynamics of strongly interacting quark-gluon plasmas. Further examples include the formation and interaction of vortices and possible exotic superfluid
phases of matter. Future experiments with optical traps will allow one to study the properties of the inhomogeneous matter that exists in the crust of neutron stars. And, strongly interacting clouds of atoms with differing densities of up and down spins, as can be engineered in optical traps, share some common features with strongly interacting quark matter with differing densities of up, down, and strange quarks. In both contexts, superfluid pairing gaps that are modulated in space in a periodic pattern may develop, yielding a superfluid and crystalline phase of matter, hints of which may have been seen in very recent cold atom experiments.
1 National Research Council, 2007, Controlling the Quantum World, Washington, D.C.: The National Academies Press.
the structure of nuclei, these new, highly sensitive arrays will greatly enhance, for example, the discovery potential of FRIB, which will produce key nuclei—crucial for understanding new structural phenomena of the types discussed on these pages—but often in very small amounts. These prospects are supported by the advances already obtained with current-generation instruments.
New Facets of Nucleonic Pairing
Nucleonic superfluidity plays a large role in nuclear structure. A generic feature of superfluidity is that elementary particles called fermions (such as protons or neutrons) combine to form correlated pairs (Cooper pairs) that are bosons and exhibit very different behavior and interactions than their constituent particles. In loosely bound nuclei, pairing may be the decisive factor for stability against particle decay. A striking example is the unbound nature of odd-neutron helium nuclei while their even-neutron neighbors are bound. Nucleonic pairing is also important for the structure of the neutron star crust. Because the number of nucleons can be controlled experimentally, nuclei far from stability offer new opportunities to study pairing. For instance, it has been suggested that, in neutron-rich nuclei, neutron pairs (di-neutrons) are well localized in the skin region. In heavier nuclei with similar neutron and proton numbers, pairing carried by deuteron-like proton-neutron pairs with nonzero angular momentum is expected. Such yet-unobserved correlations are believed to profoundly impact nuclear binding in nuclei with approximately equal numbers of protons and neutrons (N ~ Z nuclei), to influence isospin symmetry and beta decay, and to modify the equation of state of dilute symmetric nuclear matter. Pairing can be probed with a variety of nuclear reactions that add or remove pairs of nucleons. These reactions can be studied in inverse kinematics (experimental conditions in which the usual roles of target and projectile are interchanged) with a variety of exotic nuclear beams with intensities greater than 10³ particles per second. Because of finite-size effects and different polarization effects in nuclei and nuclear matter, a theoretical challenge will be to relate experiments on nucleonic superfluidity in finite nuclei to pairing fields in neutron stars (see Box 2.2).
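The inverse kinematics mentioned above can be illustrated with the nonrelativistic expression for the energy available in the center of mass. This sketch uses invented function names and round mass numbers, and it neglects relativistic corrections; it is an illustration, not a statement of how any particular experiment is analyzed.

```python
# Illustrative sketch, not from the report: why inverse kinematics
# (heavy exotic beam on a light target) reaches the same physics as
# normal kinematics.  Nonrelativistic approximation; names invented.
def e_cm_fixed_target(m_beam, m_target, t_beam):
    """Kinetic energy available in the center of mass (same units as
    t_beam) for a beam of mass number m_beam with kinetic energy
    t_beam hitting a stationary target of mass number m_target."""
    return t_beam * m_target / (m_beam + m_target)

# Normal kinematics: a 10 MeV deuteron (A = 2) on a tin-132 target
# (hypothetical, since tin-132 is far too short-lived to be a target).
normal = e_cm_fixed_target(2, 132, 10.0)

# Inverse kinematics: a tin-132 beam on a deuteron target at the same
# energy per nucleon (5 MeV/u, i.e., 660 MeV total kinetic energy).
inverse = e_cm_fixed_target(132, 2, 660.0)

# The same center-of-mass energy is available either way, which is why
# reactions on short-lived nuclei can be studied by swapping the usual
# roles of target and projectile.
print(round(normal, 3), round(inverse, 3))
```

In this approximation the two configurations give identical center-of-mass energies at equal energy per nucleon, which is the essential point of the target-projectile interchange.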
A new twist on the pairing story has been provided by studies at the Brookhaven National Laboratory (BNL) and JLAB. These studies precisely probed nuclear interactions on short distance scales, showing that energetic protons are about 20 times more likely to pair up with energetic neutrons than with other protons in the nucleus when nucleons overlap (see Figure 2.9). As discussed earlier, in studies of pair correlations at lower energies, such proton-neutron predominance has not been observed. This can be traced back to variations in the nuclear interaction when changing the relative distance between the two nucleons.
An understanding of the properties of atomic nuclei is essential for a complete nuclear theory, for an explanation of element formation and properties of stars, and for present and future energy and defense and security applications. Nuclear theorists strive for a comprehensive, unified description of all nuclei, a portrait of the nuclear landscape that incorporates all nuclear properties and forces and can deliver maximum predictive power with well-quantified uncertainties. Such a framework would allow for more accurate predictions of the nuclear processes that cannot be measured in the laboratory, from the creation of new elements in exploding stars to the reactions occurring in cores of nuclear reactors. Developing such a theory requires theoretical and experimental investigations of rare isotopes, new theoretical concepts, and extreme-scale computing, all carried out in partnership with applied mathematicians and computer scientists (see Box 2.3).
There is a well-delineated path toward such a description at the nucleonic level across the nuclear chart that merges three approaches: (1) ab initio, (2) configuration-interaction (CI), and (3) nuclear density functional theory (DFT). Ab initio methods use basic interactions among nucleons to fully solve the nuclear
High-Performance Computing in Nuclear Physics
One of the trends in science today is the increasingly important role played by computational science. Yesterday’s terascale computers, capable of a trillion calculations per second, are being replaced by petascale computers, which are a thousand times faster, and scientists are even now working toward exascale computers, which will be a thousand times faster again (at a million trillion calculations per second). All of this computing power will provide an unprecedented opportunity for nuclear science (see Figure 2.3.1). Scientific computing, including modeling and simulation, has become crucial for research problems that are insoluble by traditional theoretical and experimental approaches, too hazardous to study in the laboratory, too time-consuming, or too expensive to solve.
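The tera-to-peta-to-exa progression above is simply successive factors of a thousand, which a few lines of arithmetic make concrete. The workload size below is a hypothetical illustration, not a benchmark of any actual nuclear physics code:

```python
# Illustrative arithmetic only: how long a fixed, hypothetical workload
# would take at each computing scale, assuming perfect sustained
# performance (a strong simplification for real machines).
SCALES = {
    "terascale": 1e12,  # floating-point operations per second
    "petascale": 1e15,
    "exascale": 1e18,
}

def time_to_solution(total_ops: float, ops_per_second: float) -> float:
    """Wall-clock seconds to finish a workload at a given sustained rate."""
    return total_ops / ops_per_second

WORKLOAD = 1e21  # hypothetical simulation cost in floating-point operations
for name, rate in SCALES.items():
    days = time_to_solution(WORKLOAD, rate) / 86400
    print(f"{name}: {days:.3f} days")
```

A calculation that would occupy a terascale machine for decades finishes in days at the petascale and in minutes at the exascale, which is why each step changes what problems are even worth formulating.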
High-performance computing provides answers to questions that neither experiment nor analytic theory can address. As such, it becomes a third leg supporting the field of nuclear physics. Nuclear physicists perform comprehensive simulations of strongly interacting matter in the laboratory and in the cosmos. These calculations are based on the most accurate input, the most reliable theoretical approaches, the most advanced algorithms, and extensive computational resources. Until recently, working with petascale resources was hard to imagine, and even at the present time such an ambitious endeavor is beyond what a single researcher or a traditional research group can carry out. To this end, collaborative software environments have been created under the DOE’s Scientific Discovery Through Advanced Computing (SciDAC) program, where distributed resources and expertise are combined to address complex questions and solve key problems.1 In each partnership, mathematicians and computer scientists are collaborating with nuclear physicists to remove barriers to progress in nuclear structure and reactions, QCD, stellar explosions, accelerator science, and computational infrastructure. Computational resources required for these calculations are currently obtained from a combination of dedicated hardware facilities at national laboratories and universities, and from national leadership-class supercomputing facilities.
Although significant advances have been achieved in computer hardware as well as in the algorithms used in today’s computations, the forefront computational challenges in nuclear physics require resources that can only be achieved in national supercomputing centers or by dedicated special-purpose machines. Collaborative frameworks such as SciDAC will need to continue in order to prepare for, and to fully utilize, computing resources beyond the petascale when they become available to nuclear physicists. As the nature of the computers will be quite different from that of today’s computers, the codes and algorithms will need to evolve accordingly. Given the scale of the computational facilities, it is clear that one should view these numerical efforts like experiments in their style of operation. Currently, the nuclear physics community can efficiently use between 1 and 10 petaflops of sustained computing power; hence a staged evolution to the exascale seems appropriate.
In summary, the field of nuclear physics is poised to be transformed through the deployment of extreme-scale computing resources. Such resources will provide nuclear physics with unprecedented predictive capabilities that are needed for the systematic exploration of fundamental aspects of nature that are manifested in the structure and interactions of nuclei and hadronic matter. Future high-performance computing resources will generate enhancements to the nuclear physics program that cannot be imagined today.
many-body problem. Deriving internucleon interactions from quantum chromodynamics (QCD) is a fundamental problem that bridges hadron physics and nuclear structure. While excellent progress has been made in this domain (see the section “The Strong Force and the Internal Structure of Neutrons and Protons”), the lattice calculations have not yet been done with pions as light as those in nature. Meanwhile, QCD-inspired interactions derived within the framework of effective field theory and precise phenomenological forces carefully adjusted to scattering data are commonly used in nuclear structure and reaction calculations. Ab initio techniques have been extended to mass A = 14 and also can be applied to medium-mass doubly magic systems. Configuration-interaction methods adopt the notion of a nuclear potential, which the nucleons themselves both create and move in. This approach has promise up through the region of mid-mass nuclei and heavy near-magic systems. The nuclear DFT focuses on nucleon densities and currents instead of on the particles themselves and is applicable throughout the nuclear chart. The road map for this effort involves the extension of ab initio approaches all the way to medium-heavy nuclei, the development of configuration-interaction approaches in a variety of model spaces, and the quest for a nuclear density functional for all nuclei up to the heaviest elements (see Figure 2.10). Special, related challenges are the description of the role of the continuum in weakly bound nuclei and the development of microscopic reaction theory that is integrated with improved structure models.
The nuclear many-body problem is of broad intrinsic interest. The phenomena that arise—shell structure, superfluidity, collective motion, phase transitions—and their connections with many-body symmetries, are also fundamental to fields such as atomic physics, condensed matter physics, and quantum chemistry. Although
the interactions of nuclear physics differ from the electromagnetic interactions that dominate chemistry, materials, and biological molecules, the theoretical methods and many of the computational techniques are shared. Figure 2.10 gives selected examples of many-body calculations.1
The aim of nuclear astrophysics is to understand those nuclear reactions that shape much of the nature of the visible universe. Nuclear fusion is the engine of stars; it produces the energy that stabilizes them against gravitational collapse and makes them shine. Spectacular stellar explosions such as novae, X-ray bursts, and type Ia supernovae are powered by nuclear reactions. While the main energy source of core collapse supernovae and long gamma-ray bursts is gravity, nuclear physics triggers the explosion. Neutron stars are giant nuclei in space, and short gamma-ray bursts are likely created when such gigantic nuclei collide. And last but not least, the planets of the solar system, their moons, asteroids, and life on Earth—all owe their existence to the heavy nuclei produced by nuclear reactions throughout the history of our galaxy and dispersed by stellar winds and explosions.
Among the open questions that will guide nuclear astrophysics in the coming decade are these:
- How did the elements come into existence?
- What makes stars explode as supernovae, novae, or X-ray bursts?
- What is the nature of neutron stars?
- What can neutrinos tell us about stars?
Answering these questions requires understanding intricate structural details of thousands of stable and unstable nuclei, and so draws on much of the work described in the preceding section on nuclear structure. This can be seen in Figure 2.11, which illustrates the principal nuclear processes that shape the visible universe. Each step of each process depends on the nature of that particular nucleus. As an example, a small change of just 10 percent in the energy of a single excited state of one particular nucleus, the famous Hoyle state in carbon-12, would make heavy elements, planets, and life as we know it disappear.
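The sensitivity attributed to the Hoyle state can be made plausible with a back-of-envelope sketch: for a narrow resonance, the thermonuclear rate is governed by a Boltzmann factor exp(-E_r/kT). The resonance energy and core temperature below are approximate textbook values, so the result is an order-of-magnitude illustration only:

```python
import math

# Narrow-resonance reaction rates scale as exp(-E_r / kT). Approximate
# values: the Hoyle state lies ~380 keV above the three-alpha threshold,
# and kT ~ 8.6 keV in a helium-burning core at ~1e8 K.
E_R_KEV = 380.0  # resonance energy above threshold (approximate)
KT_KEV = 8.6     # thermal energy in the stellar core (approximate)

def boltzmann_factor(e_r_kev: float, kt_kev: float) -> float:
    """Exponential factor governing a narrow-resonance thermonuclear rate."""
    return math.exp(-e_r_kev / kt_kev)

# A 10 percent upward shift in the resonance energy suppresses the rate
# by a large factor, illustrating the fine-tuning discussed in the text.
ratio = boltzmann_factor(1.10 * E_R_KEV, KT_KEV) / boltzmann_factor(E_R_KEV, KT_KEV)
print(f"rate changes by a factor of {ratio:.3f}")
```

Because the resonance energy is tens of times larger than the thermal energy, even a modest shift moves the rate by orders of magnitude, with drastic consequences for carbon production.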
Unraveling the nuclear physics of the cosmos, therefore, requires a broad range of experimental and theoretical approaches. In the last decade, ever more sensitive laboratory measurements of low-energy nuclear reactions enabled precise solar models revealing a deficit of solar neutrinos detected on Earth. Knowledge of this
1 Portions of this paragraph are adapted from Department of Energy, 2007, Computing Atomic Nuclei, SciDAC Review 6:42.
deficit of solar neutrinos combined with the results of advanced neutrino detectors led scientists to the discovery that neutrinos have mass (as discussed in more detail later in this chapter under “Fundamental Symmetries”) and confirmed the accuracy of solar models. Laboratory precision measurements also revealed that the nuclear reactions that burn hydrogen in massive stars via the carbon-nitrogen-oxygen (CNO) cycle proceed much more slowly than had been anticipated, changing the predictions for the lifetimes of stars. A few key isotopes in the reaction sequence of the rapid neutron capture process (r-process) responsible for the origin of heavy elements in nature have now been produced by rare isotope facilities. Advanced experimental techniques also enabled measurements of the nuclear properties that characterize their role in the r-process, despite short lifetimes and small production quantities. The same sensitive techniques enabled precision mass and decay measurements of the majority of the extremely neutron-deficient rare
isotopes in the rapid proton capture process powering X-ray bursts. The results explain the existence of two classes of X-ray bursts, short and long bursts. In addition, a new rare class of X-ray bursts, so-called superbursts, was discovered, and nuclear physics provided the likely explanation, a deep carbon explosion. New multidimensional core collapse supernova models included much more realistic weak interaction physics and nuclear matter properties owing to new results from laboratory experiments and nuclear theory calculations. Contrary to earlier work, some of these supernova models do now explode, although many questions about the explosion mechanism remain. In these supernova explosion models, a new type of nuclear process producing heavy elements, the so-called neutrino-proton (νp) process, was found. The discovery of the most massive neutron star to date has eliminated many theoretical predictions about the nature of nuclear matter.
Future nuclear astrophysics efforts are emerging along two frontiers: (1) the study of unstable isotopes that exist in large quantities inside neutron stars and are copiously produced in stellar explosions but difficult to make in laboratories and (2) the determination of extremely slow nuclear reaction rates, which are important for the understanding of stars. Enabled by technical advances, dramatic progress is expected in the coming decade at both frontiers. The FRIB facility in the United States will, together with other rare isotope laboratories around the world, provide unprecedented access in the laboratory to the same unstable isotopes that play crucial roles in cosmic events. And a new generation of high-intensity stable beam accelerators to be located deep underground, as has been proposed for the United States, will enable the measurement of extremely slow stellar nuclear reactions without disturbance from cosmic radiation.2
A precision frontier also has emerged in the area of measuring neutron-induced reaction rates using neutron beams. Work is needed at this frontier not only on understanding the origin of those elements produced by neutron capture reactions, but also on applications of nuclear science that depend on neutron capture processes. These applications include the design of novel nuclear reactors and stockpile stewardship, as discussed in Chapter 3.
Nuclear theory is of special importance for nuclear astrophysics for many reasons:
- The extreme densities and temperatures encountered inside stars alter the properties of nuclei compared to what is measured in terrestrial laboratories.
2 Such a facility would also facilitate research in fundamental symmetries, as discussed later in this chapter under “Fundamental Symmetries,” as well as in NRC, 2012, An Assessment of the Science Proposed for the Deep Underground Science and Engineering Laboratory (DUSEL), Washington, D.C.: The National Academies Press.
Nuclear theory is needed to calculate the necessary corrections, such as thermal excitations and electron screening.
- In some astrophysical environments such as the r-process or the interiors of neutron stars, extremely rare isotopes exist that cannot be produced in sufficient quantities to fully characterize their properties even with the most powerful rare isotope facilities on the horizon. Experimental data on rare isotopes are needed to advance nuclear theory models, which can then be used to predict the remaining data still out of reach of experiments.
- Many astrophysical reaction rates cannot be measured directly because the rates are too small and the beams too weak. Indirect techniques, where a faster surrogate reaction is used to constrain the slow astrophysical reaction, require reliable reaction theory. In addition, nuclear theory is needed to calculate reaction rates where no experimental information exists.
- Dense nuclear matter can be produced in the laboratory for short times, but can only be observed indirectly from the resulting particle emission. A significant theory effort is necessary to interpret laboratory reaction measurements, and experimental constraints must be used to advance the reliability of the nuclear matter equation of state needed in many astrophysical scenarios.
Progress in nuclear astrophysics must also go hand in hand with progress in astrophysics and observational astronomy. Astronomical observations of the manifestations of nuclear processes in the cosmos provide the link between laboratory and nature. The last decade has seen extraordinary progress in astronomy, with high-precision observations of the composition of very old stars at the largest telescopes on Earth and in space and with surveys scanning hundreds of thousands of candidate stars to find the targets. A new generation of X-ray space telescopes has opened up a novel era in the understanding of phenomena related to neutron stars. Gamma-ray observatories detected the decays of rare isotopes in space, ejected by stellar explosions. Neutrino telescopes provided neutrino images of the sun and had earlier registered neutrinos from a nearby supernova. In the coming decade this progress is bound to continue. Ongoing large-scale surveys to search for old stars will pan out fully only in the coming decade, and a new generation of larger ground-based telescopes will enable detailed spectroscopy on many of the resulting targets. Existing X-ray observatories will be complemented with new facilities that push observations toward harder X-rays and possibly gamma-rays and will provide new data on neutron stars and stellar explosions. New-generation gravitational wave detectors are expected to detect signals from supernovae and neutron stars for the first time. Neutrino observatories are ready, and with a little bit of luck they might observe a galactic supernova, an achievement that would revolutionize our understanding of such an event. And a new thrust in astronomy toward wide-field
and high-repetition surveys is expected to shed new light on supernovae and to lead to the discovery of new, possibly nuclear-powered, transient astrophysical phenomena.
Astronomy, astrophysical modeling, and nuclear physics need to work together to achieve progress in nuclear astrophysics. Communication across field boundaries, coordination of interdisciplinary research, and exchange of data are essential for these fields to jointly address the open questions. The Joint Institute for Nuclear Astrophysics, funded by the Physics Frontiers Center Initiative of the National Science Foundation (NSF), has been critical in forming and maintaining a unique worldwide platform to foster such interdisciplinary collaboration between the different nuclear astrophysics communities.
Finally, it will be important to strengthen efforts to coordinate research across field boundaries, to form broad interdisciplinary research networks that integrate the wide range of required expertise, and to facilitate the exchange of data and information between astrophysics and nuclear physics, and between experiment, observations, and theory. Such interdisciplinary research networks are also needed to attract and educate the next generation of nuclear astrophysicists, who, with emerging new facilities in nuclear physics, astrophysics, and high-performance computing, are likely to make transformational advances in our understanding of the cosmos.
The complex composition of our world—some 288 stable or long-lived isotopes of 83 elements—is the result of an extended chemical evolution process that started with the big bang and was followed by billions of years of nuclear processing in numerous stars and stellar explosions (see Figure 2.12). The steady buildup of heavier elements in stars by the successive fusion of hydrogen, helium, carbon, oxygen, neon, and silicon marks the beginning of a new round in the ongoing cycle of nucleosynthesis. The freshly synthesized elements are ejected by stellar winds or supernova explosions and then mixed with interstellar gas and dust from which a new generation of stars is born to repeat the cycle.
Nuclear physics provides the underlying blueprint for this chemical evolution by determining the composition of new elements generated in each astrophysical event. Observations of rare iron-poor, hence old, stars reveal the composition of the early, chemically primitive galaxy and provide a “fossil record” of chemical evolution. By deciphering the structure of the nuclei involved and by advancing observations, we can trace our chemical history back, step by step, perhaps all the way to the very first supernovae that illuminated the universe. This “nuclear archeology” will advance our understanding of the early universe, of the formation of our galaxy, and also of the future of the universe.
The Eve of Chemical Evolution: How Did the First Stars Burn?
How were the first heavy elements created by the potentially extremely massive stars formed after the big bang? The pattern of the elements ejected in their deaths might still be observable today in the most iron-poor stars of the galaxy, survivors of an early second generation of stars. Candidate stars with an iron content a few hundred thousand times lower than that of the sun have been found (see Figure 2.13). Comparing the signatures of these elements with predictions from theoretical models of first stars requires a quantitative knowledge of the nuclear reaction sequences generating these elements. This opens up an observational window into the properties of first stars that is complementary to the planned, very difficult direct observations with future infrared telescopes. The reward might be not only a deeper understanding of the beginnings of chemical evolution in our galaxy but also clues about the nature of the early universe and the formation of structure in the cosmos.
Stars: What Elements Are Formed from the Cauldrons of the Cosmos?
Stars are the nuclear furnaces that forge many of the chemical elements in nature. The composition of the material that stars eject into space depends sensitively on the rate at which the various nuclear fusion reactions occur in their interior. While the reaction sequences have been identified, many reaction rates are still not known accurately, limiting predictions of element formation and stellar evolution. A prominent example is the rate of helium capture on carbon. With a few exceptions, which mark major milestones in nuclear astrophysics, a direct experimental determination of the low-energy stellar fusion rates has not yet been possible. Some of these pioneering measurements have been enabled by experiments in the low background environments of laboratories deep underground. Models of stars therefore employ uncertain theoretical nuclear reaction rates, mostly derived by extrapolating experimental data obtained at higher energies or by indirect methods.
Addressing this problem will remain a formidable challenge in the coming decade. Advances in experimental techniques such as high-intensity stable beam accelerators in underground laboratories, intense rare isotope beams, and advanced detection and target systems will be needed (see Figure 2.14). On the theoretical
side, ab initio calculations of nuclear reactions and models that account for cluster structures in nuclei are particularly promising guides for predicting reaction rates at the energies nuclei have in stars. Theory also needs to address the impact of electrons, which always accompany nuclei and modify reaction rates differently in a laboratory target and in stellar plasma.
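The extrapolation problem described above is conventionally handled with the astrophysical S-factor, which factors the steep Coulomb-barrier energy dependence out of the cross section so that the remaining slowly varying quantity can be extrapolated down to stellar energies. The sketch below uses the standard parameterization 2*pi*eta = 31.29 Z1 Z2 sqrt(mu/E), with E in keV and the reduced mass mu in atomic mass units; the cross-section value is made up for illustration and is not evaluated data:

```python
import math

# Sketch of the S-factor extrapolation (values are illustrative, not
# evaluated data). sigma(E) = S(E)/E * exp(-2*pi*eta), so S(E) varies
# slowly and can be extrapolated from laboratory to stellar energies.
def sommerfeld_exponent(z1: int, z2: int, mu_amu: float, e_kev: float) -> float:
    """2*pi*eta for a charged-particle reaction (E in keV, mu in amu)."""
    return 31.29 * z1 * z2 * math.sqrt(mu_amu / e_kev)

def s_factor(sigma_barn: float, z1: int, z2: int, mu_amu: float, e_kev: float) -> float:
    """Convert a measured cross section (barn) to an S-factor (barn*keV)."""
    return sigma_barn * e_kev * math.exp(sommerfeld_exponent(z1, z2, mu_amu, e_kev))

def sigma_from_s(s_barn_kev: float, z1: int, z2: int, mu_amu: float, e_kev: float) -> float:
    """Predict the cross section at another energy assuming a constant S."""
    return s_barn_kev / e_kev * math.exp(-sommerfeld_exponent(z1, z2, mu_amu, e_kev))

# Hypothetical alpha + carbon-12 kinematics (Z1=2, Z2=6, mu = 48/16 amu)
# with a made-up cross section "measured" at 1200 keV:
MU_AMU = 4.0 * 12.0 / 16.0
s = s_factor(1e-12, 2, 6, MU_AMU, 1200.0)
print(f"extrapolated cross section at 300 keV: {sigma_from_s(s, 2, 6, MU_AMU, 300.0):.2e} barn")
```

The many-order-of-magnitude drop between the laboratory energy and the stellar energy is exactly why direct measurements are so hard and why any slow energy dependence left in S(E), for instance from cluster structures, matters so much for the extrapolation.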
The Alchemist’s Dream: How Are Gold, Platinum, and Uranium Created in Nature?
A large gap in our understanding of the chemical evolution of our galaxy surrounds the origin of the elements heavier than iron, such as gold, platinum, or uranium, which comprise more than half of the elements in the periodic table. A
slow neutron capture process (s-process) in red giant stars is thought to produce about half of these elements, ending with the production of lead and bismuth. The other half, including the heaviest elements found on Earth, such as uranium and thorium, require an astrophysical environment with an extraordinary density of neutrons. While such an environment has not been identified with certainty, theory predicts that under such conditions, captures of neutrons are very fast, enabling the synthesis of heavy elements beyond bismuth. During the brief duration of this rapid neutron capture process (r-process), exotic short-lived nuclei with extreme excesses of neutrons come into existence as part of the ensuing chain of nuclear reactions. Most of these exotic nuclei have never been made in the laboratory. This will change with the advent of next-generation rare isotope beam facilities like FRIB, which will allow experimental nuclear physicists to produce such nuclei and to determine their properties. The goal is to finally understand how and where nature produces precious metals like gold and platinum and heavy elements like thorium and uranium. Physics questions concerning the neutron-induced processes that constitute the r-process are closely related to neutron-driven applications such as nuclear reactors.
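The defining requirement of the r-process, that neutron captures vastly outpace beta decays, can be sketched with order-of-magnitude numbers. All values below are illustrative choices, not evaluated nuclear data:

```python
# Order-of-magnitude sketch: the r-process requires neutron captures to be
# far faster than beta decays. All numbers are illustrative choices.
N_NEUTRON = 1e24     # neutrons per cm^3, often quoted for r-process conditions
SIGMA_V = 1e-17      # <sigma*v> capture rate coefficient in cm^3/s (rough)
BETA_HALFLIFE = 0.1  # seconds, a typical scale for very neutron-rich nuclei

# Mean time between successive neutron captures on one nucleus:
tau_capture = 1.0 / (N_NEUTRON * SIGMA_V)
print(f"capture timescale ~ {tau_capture:.1e} s")
print(f"captures per beta-decay half-life ~ {BETA_HALFLIFE / tau_capture:.1e}")
```

With captures occurring about a million times faster than beta decays, the reaction flow is driven far out to extreme neutron excess before decaying back, which is why the participating nuclei are so exotic.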
Although the ultimate goal—namely, to identify the astrophysical site of the r-process—has not been reached yet, progress in nuclear physics and astrophysics has been made in the past decade toward unraveling the origin of the r-process elements. Existing radioactive beam facilities have provided experimental data on some of the key nuclei participating in the r-process. Important recent milestones include the half-life measurement of nickel-78 (see Figure 2.15), high-precision ion
trap mass measurements of zinc-80, and constraints on the neutron capture rate on tin-132. These data provide guidance for theoretical models, which are used to predict the properties of the many nuclei out of current experimental reach. This has led to recognition of the importance of forbidden beta decay transitions and of the direct mechanism in neutron capture, and to a more realistic description of nuclear fission.
A variety of astrophysical models have been developed that might provide the conditions necessary for an r-process and eject sufficient amounts of matter into space to account for the observed element abundances. The most promising ones involve core collapse supernovae and the merging of two neutron stars. In a breakthrough, observations of the surface composition of iron-poor stars have opened an unprecedented window into the gradual enrichment of the early galaxy with r-process elements. These stars preserve the composition of the early, chemically less evolved galaxy at the time and location of their formation. The observations tell us that r-process events must have started very early in the evolution of the universe, and that they generate a very robust and characteristic pattern for the abundance of elements throughout the history of the galaxy.
Progress in nuclear physics is needed to connect advances in observations and theoretical astrophysics. In addition to new facilities, the data-driven advances expected in nuclear theory will allow predicting the properties of the nuclei that remain out of reach experimentally and quantifying the errors of such extrapolations. This will reduce the uncertainty in astrophysical models related to nuclear physics to the point where various astrophysical assumptions can be rigorously tested against observations, enabling a data-driven approach to solving the r-process puzzle. New approaches in astrophysics are also needed because none of the existing models achieves the conditions and event frequencies inferred from observations for the r-process. Future large-scale astronomical surveys, followed by high-resolution spectroscopy with the largest telescopes available, need to increase the sample of iron-poor stars formed in r-process-rich environments in the early galaxy to provide statistically relevant information on the frequency of r-process events and the nuclear abundance patterns they produce. Detections of the traces of nearby supernovae in Earth’s geological record might also provide clues on the r-process site, and future gamma-ray observatories might be able to detect or at least delimit the radioactive isotopes produced by a supernova r-process.
Dust Grains from Space: Can They Reveal the Secrets of Stellar Cores?
The slow neutron capture process is known to occur in red giant stars. But how does matter flow in the deep interiors of stars to generate the necessary free neutrons, and how have these processes changed over the history of chemical evolution? Progress has been achieved in the past decade by analyzing presolar
grains—small dust grains that formed in the envelope of a red giant star and travelled through space to be finally incorporated into solar system meteorites. Analyzing the composition of these messengers from space, and comparing them with s-process models that include precise neutron capture rates for stable isotopes measured in an experimental tour de force over many decades, has now led to constraints on the flow of matter in the deep interiors of stars and the dependence of neutron capture rates on galactic age.
In the coming decade experimental data of similar quality need to be obtained for lighter isotopes just slightly heavier than iron, and for so-called branch points. Branch points are unstable nuclei where, depending on nuclear properties, temperature, and neutron density, the reaction sequence splits, producing different isotopes. Once the nuclear properties are experimentally determined, the observed isotopic abundances can be used to infer temperature and neutron density deep inside red giant stars. This is applied nuclear physics par excellence! However, to measure neutron captures on these unstable nuclei, radioactive beam facilities will have to work in concert with neutron beam facilities, where radioactive samples can be quickly irradiated to measure neutron capture rates. Where this is not possible, experimenters and theorists will have to develop new indirect techniques to extract the relevant information from other types of nuclear reactions. The reactions producing neutrons for the s-process are also very uncertain and need to be measured in the coming decades at energies that are closer to the astrophysical conditions than has been possible so far. New radiation detection techniques as well as new high-intensity low-energy accelerators placed in underground facilities to shield experiments from background induced by cosmic rays provide a path forward.
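The branch-point logic described above can be sketched quantitatively: the fraction of the reaction flow that proceeds by neutron capture fixes the ratio of the capture rate to the beta-decay rate, and hence the neutron density. The half-life, rate coefficient, and branching fraction below are hypothetical placeholders, not measured values:

```python
import math

# Sketch of inferring the s-process neutron density from a branch point:
# neutron capture (rate lambda_n = n_n * <sigma v>) competes with beta
# decay (rate lambda_beta), and the branching fraction
# f = lambda_n / (lambda_n + lambda_beta) can be inverted for n_n.
# All numbers below are hypothetical placeholders, not measured values.
def neutron_density(f_capture: float, sigma_v_cm3_s: float, halflife_s: float) -> float:
    """Neutron density (cm^-3) implied by a branch-point capture fraction."""
    lam_beta = math.log(2.0) / halflife_s
    lam_n = f_capture / (1.0 - f_capture) * lam_beta
    return lam_n / sigma_v_cm3_s

# Hypothetical branch point: 30% of the flow captures a neutron, the
# half-life is about one year, and <sigma v> ~ 3e-17 cm^3/s.
n_n = neutron_density(0.30, 3e-17, 3.15e7)
print(f"inferred neutron density ~ {n_n:.1e} cm^-3")
```

This is why the branch-point program needs both pieces of laboratory data at once: the beta-decay half-life and the neutron capture rate coefficient on an unstable nucleus.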
Blasting Earth with Radioactivity: What Is the Origin of Iron-60?
Neutron captures in stars also produce a long-lived radioactive iron isotope, iron-60, which is ejected in supernova explosions and decays with a half-life of a few million years. Isotopic anomalies found in the solar system indicate that iron-60 was present in the early solar system, and its decay heat might have contributed significantly to planetary melting. Using sensitive nuclear physics techniques, iron-60 has also been discovered in deep sea sediments and on the surface of the moon, possibly indicating an interaction of the solar system with a nearby supernova 2 to 3 million years ago. And the decay radiation of iron-60 has now been detected by gamma-ray telescopes in space. Thus understanding the origin of iron-60 holds the key to learning about conditions inside supernovae, the frequency of supernovae, the possible impacts of a nearby supernova on biological evolution, and the formation of the solar system and planetary systems in general. Developing that understanding requires knowing the efficiency with which nuclear reactions can produce and destroy iron-60 in a given stellar model. Progress has been
made on this front by measuring the half-lives and rates of neutron captures on nuclei in the vicinity of iron-60, but the data are still very uncertain. In addition, it has been shown that iron-60 production is sensitive to the rates of various other nuclear reactions governing the evolution of stars, such as the triple-alpha process or alpha capture on carbon. Despite decades of effort to measure these rates, the uncertainties surrounding them still prevent a precise prediction of the composition of elements produced in stars.
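The chronometer role of iron-60 mentioned above comes down to simple decay arithmetic. The half-life used below, roughly 2.6 million years, is the currently adopted value; earlier measurements gave about 1.5 million years, which is part of why the half-life itself had to be remeasured:

```python
# Decay arithmetic for the iron-60 chronometer. The adopted half-life is
# roughly 2.6 million years (earlier measurements gave ~1.5 Myr).
HALF_LIFE_MYR = 2.6

def surviving_fraction(age_myr: float, half_life_myr: float = HALF_LIFE_MYR) -> float:
    """Fraction of an initial iron-60 inventory remaining after age_myr."""
    return 2.0 ** (-age_myr / half_life_myr)

# A deposit from a supernova 2 to 3 million years ago retains roughly half
# of its original iron-60, which is what makes the isotope detectable in
# deep-sea sediments and lunar samples today.
for age in (1.0, 2.5, 10.0):
    print(f"after {age:4.1f} Myr: {surviving_fraction(age):.2f} of the iron-60 remains")
```

The same arithmetic shows why iron-60 is such a selective probe: a deposit much older than about 10 million years has essentially nothing left to detect.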
Are There Additional New Processes in the Universe Creating Heavy Elements?
The prevalent view of the origin of the elements heavier than iron and nickel has been that they are made by three distinct processes: the s-process, the p-process, and the r-process. Observations of the composition of old stars show that this traditional picture is not complete, as there must be at least one additional nucleosynthesis process producing elements heavier than iron but lighter than most r-process elements in the early galaxy: a so-called light element primary process. The nature of this process remains an open question. At the same time, theory has predicted that an unexpected new process producing heavy elements occurs in core-collapse supernovae. During a few seconds of the explosion, hot matter is ejected from the surroundings of the newly born neutron star in the center of the supernova. This matter has a completely unexpected and counterintuitive property: It has more protons than neutrons, caused by interactions with the overwhelming fluence of neutrinos accompanying the explosion. Upon reaching colder temperatures after ejection, nuclei can be formed by combining protons and neutrons. The excess protons can then be captured, together with additional neutrons created by proton-antineutrino collisions, to produce heavy elements, a process dubbed the νp-process. In the coming decade it will have to be determined whether the νp-process and the light element primary process are the same, what their contributions to the chemical evolution of the galaxy are, and what the underlying nuclear physics is. The νp-process involves extremely neutron-deficient rare isotopes, which need to be studied at rare isotope facilities.
Massive stars end their lives in a violent supernova explosion triggered by the collapse of their cores under their own weight. Core-collapse-induced supernovae can be brighter than billions of stars, and the associated neutrino burst is among the most powerful events in the universe. Such supernovae play a central role in astrophysics. They create and eject most of the elements necessary for life (see Figure 2.16). They are a major energy source driving the evolution of the galaxy by triggering the formation of new stars. And the compact remnants that they leave
behind—neutron stars and black holes—are the seats of numerous astrophysical phenomena. Yet it is still not fully understood what makes supernovae explode.
How Does Stellar Collapse Trigger a Supernova Explosion?
Nuclear physics plays a central role in core-collapse supernovae. The collapse of the star’s core is powered by gravity but initiated and controlled by the rate at which nuclei are able to capture electrons. As the collapse progresses, nuclei in the center become very densely packed, forming a core of nuclear matter that weighs about half of the sun’s mass. The repulsive force between the densely packed nuclei halts the collapse and enables an explosion. What happens next is not clear. While the collapsed core stores enough energy to power the supernova, a still poorly understood mechanism is needed to transfer that energy to the outer layers of the star and expel them violently.
Solving the supernova puzzle has been a formidable interdisciplinary challenge for many decades, involving (magneto-) hydrodynamics, nuclear physics, particle physics, computer science, and relativity. A complication is that the exploding material moves in turbulent patterns in all directions. This requires multidimensional simulations that push the fastest computers to their limits and beyond.
An important achievement in the last decade was the development of two-dimensional realistic simulations that include the flow and interactions of neutrinos. Such simulations succeeded in predicting explosions of lighter massive stars, albeit not always with the observed features. These explosions were in most cases achieved with the so-called delayed explosion mechanism, where the explosion energy is provided by the strong flux of neutrinos emerging from the compressed, hot stellar core that ultimately becomes a neutron star.
In the coming decade, realistic three-dimensional simulations are anticipated. They will likely tell us whether the simplest neutrino-driven mechanism is sufficient to cause stars to explode and which roles turbulence, rotation, and magnetic fields might play, provided that the underlying nuclear physics is reliably known.
Many of the important rates at which nuclei capture electrons have been significantly improved in recent years by employing modern nuclear structure models. For stable nuclei these rates have been validated successfully using nuclear charge exchange reactions. Such reactions, where an accelerated nucleus interacts with a target in such a way that a proton is exchanged with a neutron, or vice versa, can probe the same nuclear properties that also determine the capture of an electron via the weak interaction. Similar measurements for the many unstable nuclei, which dominate core composition during collapse, will only become possible once rare isotope facilities like FRIB are operational.
The amount of pressure generated by compressing the collapsing core in a
supernova to very high density is at the heart of the explosion mechanism. This pressure is determined by the equation of state for nuclear matter at extremely high densities. Current models differ substantially in their predictions, thus producing uncertainties in supernova simulations—for example, concerning the robustness of the explosions obtained. Laboratory measurements and neutron star observations, together with progress in nuclear structure theory, have the potential to significantly reduce these uncertainties in the coming decade.
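The role the equation of state plays here can be caricatured with a toy polytrope, P proportional to rho raised to a stiffness exponent Γ. The sketch below is purely illustrative; the saturation density is an approximate literature value, and the Γ values are arbitrary assumptions, not parameters drawn from any actual supernova simulation:

```python
RHO_SAT = 2.7e17             # approximate nuclear saturation density, kg/m^3

def polytropic_pressure(rho, p_sat, gamma):
    """Toy polytrope P = p_sat * (rho / rho_sat)**gamma, normalized at saturation."""
    return p_sat * (rho / RHO_SAT) ** gamma

# Doubling the density raises the pressure by a factor of 2**gamma, so a
# "stiff" equation of state (larger gamma) resists compression much more
# strongly than a "soft" one -- directly affecting whether a simulated
# core bounce is vigorous enough to help drive an explosion.
soft = polytropic_pressure(2.0 * RHO_SAT, 1.0, 2.0)   # 4x the saturation pressure
stiff = polytropic_pressure(2.0 * RHO_SAT, 1.0, 3.0)  # 8x the saturation pressure
```

The spread between such parameterizations is one way to visualize the model differences the text describes.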
In the delayed explosion mechanism, the efficiency of energy transfer by neutrinos depends sensitively on the energy spectra of the various neutrino species (flavors), which in turn depend on the weak interactions of neutrinos with the surrounding medium and on neutrino oscillations. In the last decade the understanding of neutrino interactions and their implementation in supernova models has improved dramatically. A recent example is the realization that flavor oscillations induced by interactions of neutrinos with other neutrinos do matter. Neutrino interactions with exotic phases of nuclear matter during the collapse, for example with “nuclear pasta” (described in more detail in the subsection “Neutron Stars”), might also play a role, as does inelastic neutrino scattering on nuclei, which affects neutrino energy spectra.
Observables that directly inform us about the processes in the deep interior of a supernova are difficult to obtain. A nuclear-physics-based diagnostic is radioactive titanium-44, which decays with a half-life of 60 years and whose decay radiation can be detected with gamma-ray observatories (see Figure 2.17), making a detailed
understanding of the relevant rare isotope reaction sequences necessary. The future detection of neutrinos or gravitational waves from a nearby supernova might well provide the data needed to answer the question of the explosion mechanism, though with a galactic supernova rate of a few per century one might have to wait for some time. Nevertheless, supernova models must be ready to interpret such observations.
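The 60-year half-life quoted above sets the window over which titanium-44 remains a usable diagnostic; a minimal sketch of the decay law, where the half-life is the only physics input and the sample ages are illustrative:

```python
def surviving_fraction(t_years, half_life_years=60.0):
    """Fraction of an initial titanium-44 sample remaining after t_years,
    from the exponential decay law N(t) = N0 * 2**(-t / t_half)."""
    return 2.0 ** (-t_years / half_life_years)

# Illustrative supernova remnant ages; after one half-life (60 yr),
# half of the original 44Ti is left to produce decay radiation.
for age in (60, 120, 300):
    print(f"after {age:3d} yr: {surviving_fraction(age):.4f} of the 44Ti remains")
```

The rapid fall-off after a few half-lives is why only relatively young remnants are promising targets for gamma-ray observatories.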
Thermonuclear explosions of stars can be observed in the cosmos on a daily basis and contribute much to the variability of the night sky as viewed with large telescopes. Even the relatively feeble novae explode with the equivalent of a trillion gigatons of TNT and can often be viewed with the naked eye. The most powerful explosions of this type, thermonuclear supernovae, observationally classified as type Ia, outshine entire galaxies and serve astronomers as distance markers out to the edge of the universe.
Thermonuclear energy is often released in reactions with rare isotopes that do not exist on Earth but are produced under the extreme temperatures and densities arising during the explosion.
X-Ray Bursts: Can They Be Used as Probes of Accreting Neutron Stars?
X-ray bursts are the most common known thermonuclear explosions of astrophysical origin. A thin layer of hydrogen and helium accumulates on the surface of a compact neutron star via mass transfer from an orbiting companion star. Typically about once a day, the layer explodes, producing a bright, easily observable burst of X-rays lasting tens of seconds or minutes. In the burning zone of the X-ray burst, hydrogen and helium are completely converted into 10^16 tons of rare isotopes, generating the energy for the explosion. Most of the resulting ashes accumulate on the neutron star surface, where they decay.
The last decade has seen major advances in the understanding of such events. Accelerator facilities that produce rare isotopes have allowed us to measure most of the lifetimes and masses of the very neutron-deficient isotopes produced in the explosions. This has led to important constraints on some of the reactions that generate the burst’s energy. Extensive observations with space-based X-ray observatories have discovered rare superbursts, which have been explained theoretically as the reignition of residual carbon in the ashes of the regular X-ray bursts. Yet, many puzzles remain, such as the origin of residual carbon in burst ashes, the nature of multipeaked bursts, occasionally observed very short burst intervals, and light curve anomalies. A particular challenge is to understand bursts well enough to actually extract information about the underlying neutron star, and to predict
the composition of the ashes that accumulate on the neutron star surface over time and affect many neutron star observations (see the subsection “Neutron Stars”). While a vast amount of observational data is being collected by Earth-orbiting X-ray observatories, the knowledge of many nuclear fusion reactions needed to interpret the observations is still sparse owing to the limited beam intensities of existing rare isotope facilities for the very neutron-deficient isotopes made in X-ray bursts (see Figure 2.18). This is expected to change in the next decade with the advent of a new generation of rare isotope beam facilities where most of the relevant reactions can be measured.
Novae—Are They Sources of Cosmic Radioactivity?
Novae are an astrophysical phenomenon known since ancient times, when bright new stars would suddenly appear in the night sky only to fade again after a few months. They are now understood to be explosions of a thin hydrogen and helium layer accumulated on a compact white dwarf star via mass transfer from an orbiting companion star. Unlike X-ray bursts, novae emit copious visible light; also unlike X-ray bursts, they eject their nuclear ashes into space, possibly explaining the significant amounts of carbon-13 and nitrogen-15 isotopes found today on Earth.
In the last decade, measurements were made of many of the important nuclear rates for novae in a concerted effort at various rare isotope and stable beam facilities. It was observed that some novae produce unexpectedly large amounts of heavier elements such as sulfur. Unfortunately the rates of a few key reactions that produce such heavier elements in novae are still unknown, preventing a quantitative interpretation of these observations. The reaction rates that are particularly difficult to measure involve rare isotopes and will challenge rare isotope experimentation in the coming decade. The situation is similar for the production of radioactive fluorine-18, sodium-22, and aluminum-26 in novae. The gamma-radiation from the decay of these isotopes in nova ejecta might become detectable with next-generation gamma-ray observatories. Alternatively, the presence of these rare isotopes might be revealed in spectral line shifts detected by infrared telescopes.
What Triggers Thermonuclear Supernovae?
Thermonuclear supernovae are the most powerful thermonuclear explosions in the cosmos. They are believed to consume an entire white dwarf star as fuel, dispersing the ashes into space. Their observed brightness is largely powered by the radioactive decay of nickel-56 into, ultimately, iron-56. This makes thermonuclear supernovae one of the main sources (next to core-collapse supernovae) of iron in the universe. The empirical relationship found between light curve shape and absolute brightness enables astronomers to calibrate thermonuclear supernovae and to use the observed brightness as an indicator of their distance, forming a measuring stick out to the edges of the universe. Indeed, as the 2011 Nobel prize in physics recognized, supernova measurements are at the heart of the new cosmological paradigm of an accelerating universe composed mainly of dark energy.
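The radioactive chain mentioned above, nickel-56 decaying to cobalt-56 and then to iron-56, is a textbook two-step decay, and the shape of a supernova light curve follows directly from it. A sketch using the standard Bateman solution; the half-lives (about 6.1 days for nickel-56 and 77 days for cobalt-56) are the only numerical assumptions here:

```python
import math

T_NI, T_CO = 6.1, 77.0          # approximate half-lives in days
L_NI = math.log(2.0) / T_NI     # decay constants, 1/day
L_CO = math.log(2.0) / T_CO

def abundances(t, n0=1.0):
    """Bateman solution for Ni-56 -> Co-56 -> Fe-56, starting from n0 of Ni-56."""
    ni = n0 * math.exp(-L_NI * t)
    co = n0 * L_NI / (L_CO - L_NI) * (math.exp(-L_NI * t) - math.exp(-L_CO * t))
    fe = n0 - ni - co            # everything else has reached stable iron-56
    return ni, co, fe

# ~100 days after the explosion essentially no Ni-56 remains, and the
# slower Co-56 decay powers the declining tail of the light curve.
ni, co, fe = abundances(100.0)
```

The early light curve thus decays on the nickel timescale and the late tail on the cobalt timescale, which is exactly the behavior astronomers fit to type Ia observations.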
Yet, what triggers the explosion of the white dwarf star and how the explosion rips through the star, producing the observable distribution and composition of the ejecta, is still unclear. The nuclear fusion reactions that power the explosion are also not understood. The rates of these reactions have to be inferred from data at higher energies where cross sections are sufficiently high for measurements with current techniques. Recent experimental and theoretical progress indicates surprisingly large uncertainties in this approach. On the one hand there are hints of an unexpected reduction of fusion probabilities at very low stellar energies, while on the other hand there is speculation about large enhancements due to unknown resonances.
The challenge for the next decade is to push the sensitivity of nuclear physics experiments to enable reliable estimates of these fusion rates. Advances in our understanding of the rate of electron capture by nuclei and advances in the multidimensional modeling of the explosion itself will allow us to explore theoretically the dependence of supernova features on their stellar environment.
Thermonuclear supernovae convert roughly half of their stellar mass into radioactive nuclei. Detecting the decay radiation from these radioactive nuclei would require a next-generation gamma-ray telescope. Owing to the great penetrating power of gamma-rays, this would be a unique opportunity to probe velocity distributions of matter deep inside the explosion. Coupled with the large amount of observational data expected from large-scale surveys of transient phenomena in the next decade, such a telescope would offer the opportunity to validate the various possible progenitor scenarios, yielding the basic understanding of thermonuclear supernovae needed to quantitatively access systematic errors in the measurements of cosmological distances.
In no other area is the overlap between nuclear physics, astrophysics, and condensed matter physics stronger than in neutron stars (see Boxes 2.2 and 2.4 and Figures 2.6 and 2.19). These are gigantic nuclei, somewhat heavier than the sun, but with a radius of about 10 km and an average density much above that of normal nuclei. Some 100 million neutron stars move around in our galaxy alone. Neutron stars can be studied with telescopes rather than with accelerators and detectors, offering the unique opportunity to understand cold nuclear matter on a macroscopic scale.
The equation of state for cold nuclear matter, the relationship between pressure and density, is key to our understanding of neutron stars. This relationship depends on the properties of very neutron-rich nuclei in the outer crust and on the possible existence of exotic types of matter, such as nuclear pasta—nuclear matter intermediate between regular nuclei and essentially homogeneous neutron-rich nuclear matter—and quarks or condensates of particles such as kaons or pions, which might exist in the center of neutron stars. Hyperons, particles that unlike neutrons and protons include strange quarks, may also exist in neutron stars, although their role is poorly understood owing to insufficient knowledge of their interactions with neutrons and protons.
What Are the Size, Weight, and Temperature of Neutron Stars?
Measuring the basic properties of neutron stars like mass, radius, and cooling rates is one way to constrain the nature of nuclear matter. Progress in astronomy in the last decade has yielded a range of neutron star masses from timing observations of pulsars. Based on current data, neutron star masses vary, lying between 1.2 and 1.97 times the mass of the sun. The unambiguous detection of a single neutron star with a mass nearly twice that of the sun weeds out various theories of dense
nuclear matter. Astronomers continue to search for massive neutron stars, and any discovery of a still heavier neutron star would rule out more theories.
Radius measurements have proven to be more difficult. One approach is to observe the cooling surfaces of neutron stars that were heated by decades of X-ray bursts, but the interpretation of these data remains challenging. New attempts for the simultaneous determination of mass and radius have been carried out using data from soft X-ray burst spectra, which depend on both mass and radius. In the coming decade, long-term observations of changes in pulsations from particular double pulsar systems will offer the opportunity to measure the moment of inertia of a neutron star and will give information on its mass and radius.
Another observable through which neutron stars can be studied is their rate of cooling—neutron stars are born hot and cool over their lifetime. The rate of cooling via the emission of neutrinos depends sensitively on the interior composition. Observations that combine temperature measurements with age estimates and models of neutron star atmospheres can yield information about cooling timescales and would constrain models of the cooling processes. An impressive example is the recent discovery of rapid cooling of the neutron star in the Cas A supernova remnant. Remarkably, this neutron star has cooled significantly in just 10 years. This is perhaps the signature of the neutrons in the neutron star forming a superfluid.
The only way (besides neutrino detection) to peek into the deep interior of a neutron star is through gravitational waves. It is hoped that gravitational waves from the collisions between neutron stars, or between a neutron star and a black hole, will be detected at a rate of at least a few per year in the coming decade with ground-based detectors such as LIGO and VIRGO. The data gained would provide additional constraints on neutron star radii. Future gravitational wave observations of neutron star mergers, giant flares, and continuous wave emission from spinning neutron stars have the potential to directly probe the properties of matter in the interior of the star.
How Do Rare Isotope Crusts Shape Neutron Star Observations?
The crust of a neutron star is a relatively thin rigid layer with a thickness of a few hundred meters (see Figure 2.19). The outer crust is essentially composed of nuclei in the form of very neutron-rich rare isotopes. The inner crust contains more exotic forms of matter, such as superfluid neutrons and possibly nuclear pasta, the term used to describe nuclear matter forming into rods and sheets instead of the “drops” typical of nuclei.
The crust and its response to external influences can be observed directly. One example of such an external influence is the occasionally observed “giant flares” that occur on the surface of highly magnetized neutron stars and are energetic enough to directly impact Earth’s ionosphere over galactic distances. Oscillations
observed during these flares are interpreted as starquakes. Much as earthquakes are used to probe the composition of Earth’s crust, attempts have been made to use these starquakes to probe neutron star crusts. Similarly, pulsar glitches—sudden changes in the rotation of the neutron star crust, which can be detected with radio telescopes that observe the radio beam emitted by the rapidly spinning neutron star—provide insights into the structure of the crust and evidence for the superfluidity of neutrons. Together these studies have opened up the field of neutron star seismology.
Another example of an external influence is a neutron star that collects a steady flow of gas from an orbiting companion star. This process, together with the various types of thermonuclear bursts that occur in the accumulated gas layer, heats the crust over years or decades. Occasionally this flow of matter gets disrupted. In some cases, modern X-ray observatories have then been able to observe the cooling of the freshly heated crust over many years. Since over time the released heat comes from deeper and deeper layers, the cooling rate contains information about
the nature of such layers. Attempts have been made to interpret observed cooling rate changes as indicating that neutrons in the crust are in a superfluid state.
Accumulating matter on the crust of a neutron star from a companion can also induce reactions of rare isotopes that result in density variations, and it can form magnetically confined mountains. Because of the rapid spins of typical neutron stars, both effects are predicted to cause potentially detectable gravitational wave radiation.
The actual crust composition of such mass-accumulating neutron stars depends on the ashes from thermonuclear explosions on the surface, on the rate of electron capture by nuclei, on the properties of the neutron-rich nuclei produced by these captures, and on the rate of a special class of fusion reactions that occur at high density. Fundamental insight into these questions can be gained with the next generation of rare isotope accelerator facilities, which will be able to produce many of the rare isotopes in neutron star crusts.
Can “Neutron Star Matter” Be Studied in the Laboratory?
Unfortunately it is not possible to create neutron star matter—cold and extremely neutron-rich nuclear matter—in the laboratory. It turns out, however, that heavy nuclei exhibit a sizeable outer layer that consists predominantly of neutrons—a neutron “skin”—which can be studied in experiments (see “Nuclear Structure” section). The study of hypernuclei (short-lived nuclei where some protons or neutrons are replaced by hyperons) can also provide useful information. These approaches probe neutron star matter at or below the density of matter in the nucleus.
However, deep inside a neutron star much higher densities are reached. Nuclear matter at these higher densities can also be produced in the laboratory, albeit for short times, during the collision of two heavy nuclei. While the dense nuclear matter created by collisions has much higher temperatures than in neutron stars, the key question of how the equation of state of nuclear matter depends on the degree of neutron excess, and how this dependency changes with density, can still be addressed through such collisions. The approach is to identify signatures in the reaction products that probe proton-neutron asymmetry and study them for collisions of nuclei with varying amounts of neutrons. Rare isotope beams will probe these dependencies over a wide range of neutron-proton asymmetry.
Neutrinos hardly interact with matter. For this reason they can easily escape from deep interiors of stars like our Sun, from supernovae, and from the interior of our own planet, Earth. The observation of stellar neutrinos constitutes a unique
opportunity to look deep into stars and probe extreme astrophysical environments that cannot be simulated in laboratory experiments. However, the very same property that turns neutrinos into messengers from stellar interiors, their extremely small probability of interacting with matter, also makes them extremely hard to detect. Neutrino observatories on Earth therefore require extraordinarily large detectors in underground sites. In recent decades such detectors have operated with spectacular success, initiating the field of neutrino astronomy. The importance of nuclear physics in addressing fundamental questions about the properties of neutrinos is discussed in detail later in this chapter under “Fundamental Symmetries.” At this point we focus on how neutrino observations can be used to shed light on open questions in nuclear astrophysics.
Can the Sun Be Used as a Calibrated Neutrino Source?
The solution of the solar neutrino problem through neutrino detection and through precision laboratory measurements of the nuclear reaction rates powering the sun is a triumph of nuclear astrophysics (see the section “Fundamental Symmetries”). The goal now is to improve our knowledge of solar hydrogen burning to a level that effectively turns the sun into a calibrated neutrino source. This will require knowledge of the rates of nuclear fusion in the sun to an accuracy of a few percent, which will require major advances in the experimental determination of these rates—for example, through accelerators in underground laboratories. Earthbound neutrino detectors with special sensitivities to neutrinos from different solar reactions are in place or in the planning stage and will be capable of performing accurate neutrino spectroscopy. This will further refine our knowledge of the fundamental parameters by which the different neutrino types mix in nature, and at the same time it will probe our understanding of the interior of the sun, its composition, its stability over time, and the processes that transport energy to the surface.
Can Neutrinos Be Used to Peek Inside a Supernova?
In February 1987, a number of neutrinos that travelled over 10 billion times the distance between the sun and Earth were observed by detectors in the United States and Japan, giving the first indication that a star in the Large Magellanic Cloud, Sanduleak −69 202, had exploded as a supernova. This observation was the birth of extrasolar neutrino astronomy and demonstrated unambiguously the theoretical expectation that supernovae observationally classified as type II are indeed triggered by core collapse and that neutrinos are produced in extraordinarily large numbers.
Several advanced neutrino detectors are now in operation and are ready to
observe neutrinos from the next nearby supernova with unprecedented detail. Supernova models show that time evolution and energy spectra of supernova neutrinos carry detailed information about the dynamics of the explosion, the explosion mechanism, and the composition of matter in the center of the supernova. A detailed measurement of the supernova neutrino flux as a function of time will be key to understanding the elusive explosion mechanism.
Another opportunity to observe supernova neutrinos, one that does not depend on the occurrence of an individual nearby supernova, is the detection of the diffuse background of neutrinos generated by all past supernovae, which fills the entire cosmos. Detection limits from current neutrino observatories are within an order of magnitude of the theoretically predicted flux, and with further improvements a detection of this background might be possible.
Are Neutrino Reactions Responsible for the Existence of Fluorine in Nature?
The extreme numbers of neutrinos streaming out of the hot core of a developing supernova interact with nuclei in the outer shells of the star about to explode. These interactions happen sufficiently frequently to alter the composition significantly, creating a set of rather rare isotopes of boron, fluorine, lanthanum, and tantalum. The origin of these isotopes is not well understood, but this neutrino-induced nucleosynthesis process provides a natural explanation for their existence. The production rates for these isotopes are quite sensitive to the neutrino energy spectrum and hence temperature. This offers the possibility of using the observed quantities of these isotopes as a unique supernova neutrino thermometer. Doing so would require, besides reliable models of stellar evolution, an accurate knowledge of the interaction of neutrinos with nuclei.
Can Neutrinos from Earth’s Interior Help in Our Understanding of Earth’s Heat Sources?
More than 4 billion years after its formation in the solar system, Earth’s interior is still hot and molten. This heat is largely maintained, it is believed, by the decay of the radioactive elements potassium-40, thorium-232, and uranium-238. Little is known about the distribution and abundance of these elements, but just as with the sun and supernovae, neutrinos emitted upon their decay can be detected with large detectors. Detectors at both KamLAND in Japan and Borexino in Italy have recently observed antineutrinos emanating from Earth’s thorium and uranium. An improved understanding of the balance between residual heat of formation and the continuing heat from radioactivity is emerging. More detectors, including one located on the oceanic crust, would help to define Earth’s heat sources for geophysical modeling.
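The scale of the radioactivity behind these geoneutrinos can be estimated from half-life and molar mass alone. A sketch for uranium-238, where the half-life (about 4.47 billion years) and standard physical constants are the only inputs:

```python
import math

AVOGADRO = 6.022e23          # atoms per mole
YEAR_S = 3.156e7             # seconds per year

def specific_activity(half_life_years, molar_mass_g):
    """Decays per second in one kilogram of a pure isotope: A = lambda * N."""
    lam = math.log(2.0) / (half_life_years * YEAR_S)   # decay constant, 1/s
    atoms_per_kg = 1000.0 / molar_mass_g * AVOGADRO
    return lam * atoms_per_kg

# Roughly 1.2e7 decays per second in every kilogram of uranium-238;
# the subsequent decay chain emits the antineutrinos KamLAND and
# Borexino have detected.
a_u238 = specific_activity(4.47e9, 238.0)
```

Multiplied over the uranium content of the mantle and crust, such per-kilogram rates are what make Earth's radiogenic heat, and its antineutrino flux, detectable.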
Can the Tiny Interaction Probabilities of Neutrinos Be Measured?
The rates of neutrino-nucleus interactions inside supernovae and within terrestrial neutrino detectors need to be understood to fully exploit the potential that lies in the detection of neutrinos from astrophysical objects. Theoretical work to determine such interactions needs to be benchmarked with experiments. The challenge is that neutrinos hardly interact with matter. Even with the most intense laboratory neutrino beams and the largest available detectors, direct measurements of neutrino interactions with nuclei are difficult and have been carried out in only a few cases. Charge exchange reactions, where an accelerated ion beam interacts with a target in such a way that a proton is exchanged with a neutron (or vice versa), and reactions where electrons scatter off nuclei probe some of the nuclear physics that determines the rate of neutrino nucleus interactions. These approaches have been successfully used in the past and are expected to be applied to cases of interest in the coming decade at stable and rare isotope beam accelerator facilities. The results will be complemented with predictions from nuclear theory, which can be constrained by the experimental data, to determine astrophysical neutrino-nucleus interaction rates.
QCD is the theory that describes how quarks and gluons interact. A basic feature of QCD is that although quarks interact strongly when they are separated by distances about the size of a proton, for smaller separations the interaction strength decreases. Physicists realized in the 1970s that this property of QCD implies that the protons and neutrons found in ordinary nuclei can “melt” under extreme conditions, at temperatures above some 2 trillion degrees Celsius. Above this temperature, all matter becomes quark-gluon plasma (QGP) as protons and neutrons merge and release their quark and gluon constituents. For the first few microseconds following the big bang, the entire universe was filled with quark-gluon plasma.
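The "2 trillion degrees" figure above corresponds to a QCD transition temperature usually quoted in natural units as roughly 170 MeV (an approximate literature value, assumed here); converting between the two is just a division by Boltzmann's constant:

```python
K_B_EV = 8.617e-5            # Boltzmann constant in eV per kelvin

def mev_to_kelvin(t_mev):
    """Convert a temperature quoted in MeV to kelvin via T[K] = E / k_B."""
    return t_mev * 1.0e6 / K_B_EV

# A crossover temperature near 170 MeV lands at about 2e12 K,
# matching the "some 2 trillion degrees" quoted in the text.
t_qgp = mev_to_kelvin(170.0)
```

At such temperatures the distinction between Celsius and kelvin is of course negligible.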
The Relativistic Heavy Ion Collider (RHIC) was built to study the nature of matter at extremely high energy density and to produce states of matter not seen since the universe was microseconds old and then measure their properties. It is now known that by colliding nuclei at very high energies, RHIC creates rapidly expanding droplets of quark-gluon plasma. Experiments at RHIC allow nuclear scientists (in the United States, at 59 universities and 6 national laboratories in 29 states) to answer questions about the microsecond-old universe that cannot be answered by any conceivable astronomical observations made with telescopes and satellites.
Since operations began in 2000, RHIC has provided spectacular evidence that
QGP exists—but it is different from what anyone expected. Before 2000, QCD calculations that are known to be reliable at temperatures even higher than those produced at RHIC were used as a qualitative guide to the expected features of the QGP at RHIC. Those calculations suggested that quarks and gluons would fly relatively long distances before bumping into another quark or gluon. If RHIC had created a gaslike plasma of this sort, analogous to familiar electromagnetic plasmas in tokamaks and stars, the produced QGP would have exploded spherically. Instead, the RHIC quark-gluon plasma behaves more like a liquid—in fact, a nearly perfect liquid that flows with very low viscosity. The nonspherical debris patterns from off-center nuclear collisions, and their description as the expansion of a perfect fluid, were the first striking discoveries at RHIC. This liquid QGP is also remarkably effective at slowing quarks that plow through it, even when the quarks are very heavy and energetic. All observations indicate that the RHIC QGP is not a conventional plasma: There is no evidence at all of particle-like quark or gluon excitations that travel appreciable distances between interactions. Instead, it is more like a puree-consistency soup than a dilute gas. If it is kicked, its only responses are hydrodynamic waves, like those in water reacting to a dropped pebble.
Liquids are characterized by tight coupling between their microscopic constituents. Consider, for example, increasing the coupling in an ordinary liquid such as water. The liquid becomes more “perfect” as the coupling gets stronger: As the distance that particle-like excitations can travel decreases, a hydrodynamic description becomes ever more accurate and dissipation plays an ever smaller role in damping the flow. QGP is a good example of a strongly coupled liquid.
QGP is not the only example of a fluid with no apparent particulate description. The challenge of understanding such liquids appears in several formerly disparate frontier areas of contemporary physics. For example, the interactions among trapped ultracold fermionic atoms, with temperatures around one-millionth of a degree above absolute zero, can be controlled by experimentalists (see Box 2.2). When the interaction is tuned to its maximum strength, so that the atoms travel no appreciable distance between collisions, the atoms behave collectively like a liquid with no particle-like excitations resulting from the underlying atomic degrees of freedom. Indeed, when measuring the shear viscosity of this fluid in appropriate units (by taking the ratio of the shear viscosity to the entropy density), physicists have discovered that this fluid and QGP, almost 20 orders of magnitude hotter, are the two most perfect liquids ever studied in the laboratory (see Figure 2.20). Some of the biggest challenges in condensed matter physics also revolve around understanding phases of matter with no apparent particulate description. Prominent examples include the “strange metal” phase of the cuprate high-temperature superconductors above their superconducting transition temperature, as well as heavy fermion metals containing rare earth elements that are tuned to the vicinity of a zero temperature phase transition, and a lattice of spins in what are known
as “spin liquid phases.” In all these cases, the textbook understanding (whether in terms of a dilute gas of quarks and gluons or atoms or Fermi’s theory of electrons in a metal) breaks down, failing even at a qualitative level to describe the experimentally observed phenomena. The puzzles raised by experiments in each of these systems are at the core of their respective disciplines. Developing new frameworks for describing such systems represents a fundamental challenge in modern physics that cuts across the boundaries between disciplines.
At very short distances or very high temperatures, the relevant physical laws describing the properties of the QGP are well understood: QCD provides a solid microscopic framework that predicts that QGP does become a dilute gas of particle-like quarks and gluons at very short distance scales and/or very high temperatures.
The challenges and the interest generated by experimental discoveries at RHIC all arise from the fact that in the temperature regime being explored at RHIC, the laws of QCD yield a strongly coupled fluid, rather than a dilute gas of quarks and gluons. Thus, the central challenges in heavy ion physics in the coming decade hinge on detailed investigation of the newly discovered quark-gluon plasma liquid: to quantify its properties, to understand how those properties emerge from the microscopic laws of QCD, and, perhaps most important, to find the right language with which to understand the properties of liquid QGP and with which to gain qualitative insights—insights whose ramifications can then ripple across the many other frontier domains in which strongly coupled liquids with no particulate description present such challenges.
One candidate for a new paradigm to understand strongly coupled fluids goes by the name “gauge/gravity duality.” Ideas based on this duality germinated in the late 1990s among string theorists, and in the last decade have bloomed in the hands of both nuclear theorists and string theorists, who together have applied them to the challenges posed by the experiments at RHIC. The basic discovery is that there are many gauge theories (“cousins” of QCD) that feature strongly coupled plasmas in which rigorous calculations can be performed even though conventional methods break down. The key is that in all these examples the quark-gluon plasma turns out to have an equivalent gravitational description in terms of a black hole that lives in four spatial dimensions. One trades the challenges of quantum field theory in three spatial dimensions for classical gravity in one higher dimension, with the extra dimension geometrically encoding the details of how the quarks and gluons interact on different length scales. Via this duality, an extraordinarily small viscosity of the fluid (corresponding to an imperfection index of 1 in Figure 2.20) emerges from a simple and straightforward calculation and it becomes immediately apparent that any fluid that can be described in this way must be nearly perfect. This common feature of so many strongly coupled fluids is related by the duality to a common feature of all black holes—namely, their ability to absorb any object thrown into them and to dissipate any trace of the disturbance. Calculations done via gauge/gravity duality have also yielded qualitative insights into jet quenching (described below) and even predictions for the results of experiments to come. At present, it is not clear whether the qualitative successes of gauge/gravity duality as applied to quark-gluon plasma are just that, or whether they are a sign that QGP itself has a dual gravity description. 
If the latter were to be the case, quantitative understanding of QGP properties could one day teach us not only about other strongly coupled fluids in condensed matter and atomic physics but also about the nature of the quantum gravitational theory dual to QCD.
Framed by the larger context above, here are some compelling questions raised by the recent discoveries at RHIC that heavy ion collision experiments at
RHIC and at the Large Hadron Collider (LHC) can address in the coming decade:
- The near-perfect liquid QGP discovered at RHIC and now produced also at the LHC must have a particulate description if looked at with a good enough microscope; how, and at what short length scales, can its individual quark and gluon constituents be resolved? And, how does a strongly coupled liquid emerge from constituents that at short length scales are coupled only weakly?
- Experiments at RHIC indicate that the quark-gluon plasma liquid forms and reaches local equilibrium remarkably quickly, in about the time it takes light to travel across one proton. How does this happen? How does the system go from the strong gluon fields hypothesized to occur inside large nuclei to the flowing QGP liquid?
- Does the quark-gluon plasma liquid produced at RHIC and the LHC dissolve even the very small particles formed from heavy quarks and their antiparticles? Does the quark-gluon plasma prevent a heavy quark and antiquark from binding to each other only when they are farther apart than some “screening length”? How close together do they have to be for them to feel the same attraction that they would feel if they were in vacuum?
- How do the energetic particles produced in the earliest stages of a heavy ion collision interact with and deform the fluid? Are very high energy quarks or very heavy “bottom quarks” weakly coupled to the fluid or do they rapidly become part of the soup?
- Experiments at RHIC and lattice QCD calculations both indicate that as QGP cools, the reassembly of quarks and gluons into hadrons takes place over a broad temperature range. But, some theoretical calculations indicate that quark-gluon plasma in which there is a greater excess of quarks over antiquarks, as produced in lower energy collisions, should cool through a true phase transition, much like the condensation of water droplets from cooling vapor. If so, there is a sharp phase transition line in the phase diagram of QCD that must end at a critical point. Is there such a critical point in the experimentally accessible domain?
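The thermalization timescale quoted above, "about the time it takes light to travel across one proton," can be made concrete with one line of arithmetic. The proton diameter used below is a round illustrative value, not a figure from this report:

```python
# Rough arithmetic behind "the time it takes light to travel across one proton".
proton_diameter_m = 1.7e-15   # ~1.7 femtometers (illustrative round number)
c_m_per_s = 2.998e8           # speed of light

crossing_time_s = proton_diameter_m / c_m_per_s
print(f"light-crossing time of a proton: {crossing_time_s:.1e} s")
# about 6e-24 seconds -- the equilibration timescale inferred at RHIC
```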
Here we go into slightly more depth on various achievements of the last decade, before returning below to the challenge of addressing the questions for the next decade.
The distributions of angles and momenta of the end products of a RHIC collision bear witness to the enormous collective motion developed as the tiny drop of fluid produced in the collision expands explosively. In those collisions that are not head-on, the initial droplet is almond-shaped, not spherical. The fluid motion that develops as such a droplet expands is anisotropic—the fluid explodes with greater force about the “equator” of the almond than from its poles, an effect referred to as “elliptic flow” (Figure 2.21). One of the early RHIC discoveries was that a description of these collisions using “ideal hydrodynamics” works surprisingly well, capturing the patterns of how the strikingly large azimuthal asymmetry depends on the impact parameter of the collision and on the identity and momenta of the particles in the final state debris. One of the inputs to ideal hydrodynamics is an equation of state, which is taken from numerical calculations of QCD thermodynamics on a discrete space-time lattice. The other input is the assumption of a perfect liquid
with no shear or bulk viscosity (meaning no internal friction that damps out flow). Shear viscosity η enters the hydrodynamic description of relativistic systems in the dimensionless ratio η/s, where s is the entropy density. A convenient figure of merit to quantify the internal friction is 4πη/s, called the imperfection index in Figure 2.20. A zero value of the imperfection index is an unachievable idealization like the frictionless inclined plane of high school problems, but in this case its unachievability is a consequence of the laws of quantum mechanics. For typical gases, which are well described in terms of particles, values of 4πη/s are in the thousands. Terrestrial liquids like water, liquid nitrogen, and liquid helium can have values of 4πη/s as low as 8 to 30 (Figure 2.20). The comparison of RHIC data to recent theoretical calculations done using viscous (nonideal) hydrodynamics demonstrates that the QGP produced at RHIC certainly has 4πη/s < 5, and likely has 4πη/s < 2. This makes QGP and the strongly coupled fluid made of ultracold fermionic atoms described in Box 2.2 the two most perfect liquids ever studied in the laboratory.
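For a sense of scale, the imperfection index 4πη/s (with η/s expressed in units of ħ/k_B) can be evaluated for ordinary water from standard handbook values. Note that the values of 8 to 30 quoted above are the minima reached under optimal conditions; at room temperature water is considerably less "perfect":

```python
# Illustrative evaluation of the imperfection index 4*pi*(eta/s)/(hbar/k_B)
# for room-temperature water, using standard handbook values.
import math

hbar = 1.0546e-34          # J*s
k_B  = 1.3807e-23          # J/K

eta = 8.9e-4               # Pa*s, shear viscosity of water at 25 C
molar_entropy = 69.95      # J/(mol*K), standard molar entropy of liquid water
molar_volume  = 18.07e-6   # m^3/mol
s = molar_entropy / molar_volume   # entropy density, J/(K*m^3)

imperfection = 4 * math.pi * (eta / s) / (hbar / k_B)
print(f"4*pi*eta/s for water at 25 C: {imperfection:.0f}")
# roughly 380 -- compare 4*pi*eta/s < 5 for the QGP produced at RHIC
```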
Determining a reliable lower bound on η/s and thus quantifying the approach to perfection of the QGP liquid remains a challenge, in part simply because η/s is so small. One of the largest current sources of uncertainty in the value of η/s arises from our lack of knowledge of the precise initial density profile of the almond-shaped droplet of fluid formed in the collision. Another uncertainty comes from lack of precise information about how soon after the impact of the two nuclei a hydrodynamic description becomes valid. The success of hydrodynamics indicates very rapid thermalization, but disentangling a precise determination of just how rapid from a precise determination of η/s is a challenge. What is needed are additional observables that get at these questions from new angles. Two examples, QCD jets and the photons radiated by the glowing QGP itself, are described below.
QCD jets, or directed sprays of particles that emerge from the “hard” large-angle scattering of quarks and gluons within colliding nuclei, are ubiquitous in high-energy collisions of all kinds. Just as differential absorption of X-rays in ordinary matter can be used to explore the density distribution and material composition inside the human body, the absorption of jets in the QGP can be used to obtain direct tomographic information about the properties of the strongly coupled fluid. Jet quenching refers to a suite of experimental observables that together reveal what happens when a very energetic quark or gluon plows through the strongly coupled plasma. It should be noted that these energetic particles are not external probes; they must be produced within the same collision that produces the strongly coupled plasma itself. RHIC was the first facility with energy sufficient to produce these
probes in abundance, and even more energetic particles have now been produced in heavy ion collisions at the LHC.
The most pictorial manifestation of jet quenching comes from an analysis in which one looks at the angular distribution of all the energetic particles in an event in which at least one particle with an energy above some threshold has been detected. In proton-proton or deuteron-gold collisions, two back-to-back jets are seen, where each jet is recoiling against the other owing to the conservation of momentum. In gold-gold collisions, however, a single spray of particles around the one used to select the event is seen, but the backward-going jet is missing, as shown in Figure 2.22. Instead, in the backward direction one finds an excess of the much lower energy particles characteristic of the debris from the droplet of QGP itself. The interpretation is that in the selected events one jet emerged relatively unscathed while the recoiling partner quark or gluon plowed into the plasma, dumped its energy into the plasma, and as a result heated the QGP rather than producing a high-energy jet.
Evidence for jet quenching can be seen clearly by measuring the reduction of the number of high-momentum particles observed in heavy-ion collisions. A very energetic quark or gluon loses energy in the QGP predominantly by radiating gluons. How much energy is radiated in gluons is determined by a single material property of the strongly coupled liquid, called the jet quenching parameter, which
is basically a measure of how good the QGP is at slowing the most energetic quarks or gluons shooting through it. Determining the value of this parameter from RHIC data is intrinsically uncertain because the jets studied at RHIC may not be energetic enough to validate the assumptions behind present calculations. It is nevertheless interesting that the jet quenching parameter seems to be larger than it would be in weakly coupled QGP and is comparable to that in the strongly coupled QGP found in QCD-like theories obtained via gauge/gravity duality.
QGP Shining Brightly
Experimenters at RHIC have recently achieved the long-standing goal of seeing the light (ordinary photons) emitted by the hot, glowing droplets of quark-gluon plasma produced in heavy-ion collisions, as shown in Figure 2.23. What made this a challenge is that there are many more photons produced by the decay of pions (which are formed well after the QGP explodes) than there are photons
from the primordial glowing plasma, and these decay photons must be carefully measured and subtracted. By comparing the spectrum of photons from the plasma to a thermal spectrum—in effect by measuring the color of the luminous glow—experimenters have shown that the time-averaged temperature of the expanding, cooling droplet of quark-gluon plasma is about 30 percent greater than the temperature at which lattice calculations of QCD thermodynamics predict that protons and neutrons melt into QGP. The initial temperature, at the time of thermalization, must be greater still.
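The "30 percent greater" measurement translates into everyday temperature units as follows. The crossover temperature of roughly 170 MeV used here is a representative lattice-QCD value assumed for illustration; the measurement itself fixes only the 30 percent excess:

```python
# Converting the measured QGP "glow" temperature into kelvin. The transition
# temperature of ~170 MeV is an assumed, representative lattice-QCD value.
k_B_J_per_K = 1.3807e-23   # Boltzmann constant
MeV_in_J    = 1.6022e-13   # one MeV in joules

T_transition_MeV = 170.0             # assumed crossover temperature
T_avg_MeV = 1.30 * T_transition_MeV  # "about 30 percent greater"

T_avg_K = T_avg_MeV * MeV_in_J / k_B_J_per_K
print(f"time-averaged QGP temperature: {T_avg_MeV:.0f} MeV ~ {T_avg_K:.1e} K")
# a few trillion kelvin; the initial temperature is greater still
```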
A single head-on collision of gold nuclei at RHIC generates about 5,000 charged particles and 8,000 particles in total. These numbers tell us how much QGP is made in each collision and, with further experimental inputs, also constrain the energy density of the droplets of QGP. Curiously, these strikingly large numbers of particles are lower than had been predicted before RHIC began operations. This observation, together with the suppression of high-transverse-momentum particles at forward angles in deuteron-gold collisions, may indicate that the initial gold nuclei contain fewer low momentum gluons than would be expected from just adding up the contents of independent protons and neutrons. This reduction in the number of gluons, known as “saturation,” in turn reduces the number of particles created by gluon-gluon collisions. Saturation results from a characteristic property of gluons in QCD that makes them quite unlike the photons that comprise ordinary light—namely, two gluons can merge into one. The gluon momentum scale below which saturation is thought to arise, denoted Qs, increases with nuclear size and with collision energy, so saturation is expected to be a larger effect at the LHC than at RHIC. Gluons in the incident nuclei with momenta below Qs are thought to be universal, in the sense that their properties should be the same in nucleons as in nuclei when the collision energy is adjusted so that the two systems have the same Qs. The component of the wave function of nucleons or nuclei that describes gluons in this regime is called “color glass condensate” (CGC). The CGC hypothesis is consistent with measurements of particle yields at forward angles at RHIC and makes predictions for the dependence of the total multiplicity of particles in collisions at higher energy that can be tested in heavy ion collisions at the LHC.
Novel Particle Production Mechanisms
Evidence that the droplets of matter formed in heavy ion collisions at RHIC are composed of collectively flowing quarks that are not bound up into protons and neutrons (“deconfined” quarks) comes from detailed measurements of a wide variety of particle species, which reveal surprising patterns in heavy ion collisions
that are not seen in more elementary collisions. Species that contain three “valence” quarks (like protons, neutrons, and other baryons) are significantly overabundant at intermediate transverse momenta (2-5 GeV) relative to their abundances in proton-proton collisions. This observation is well described by models in which both baryons and mesons (particles containing one valence quark and one valence antiquark) are produced as a large rapidly expanding droplet of a liquid containing deconfined quarks falls apart (cavitates) into a mist of fine droplets—the baryons and mesons. The baryons and mesons coalesce from quarks drawn from the expanding QGP liquid, a process that is quite different qualitatively and quantitatively from the standard mechanisms by which baryons and mesons form in elementary collisions. Furthermore, the elliptic flow patterns of all baryons of varying masses are the same, and those of all mesons of varying masses are the same, a sign that the elliptic flow was generated during the expansion of the primordial QGP fluid, since then the only thing that matters to the elliptic flow of the observed particles is how many valence quarks each of them took from the primordial fluid. This hypothesis has been beautifully confirmed by scaling the baryon and meson elliptic flow by factors of three and two respectively, as in Figure 2.24, obtaining a universal curve that establishes that the dominant features of the flow pattern are developed at the quark level. It remains an open question how a fluid with no apparent particulate description falls apart in a way that “knows” the number of valence quarks in a baryon or meson.
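The valence-quark scaling just described can be sketched in a few lines. In simple coalescence models a hadron carrying n valence quarks inherits v2_hadron(pT) ≈ n × v2_quark(pT/n); dividing both the elliptic flow and the transverse momentum by n then collapses baryons and mesons onto one curve. The quark-level flow function below is a made-up smooth curve, not data:

```python
# Toy illustration of valence-quark-number scaling of elliptic flow v2.
import math

def v2_quark(pt):
    """Hypothetical quark-level elliptic flow (smooth saturating rise)."""
    return 0.08 * (1.0 - math.exp(-pt))

def v2_hadron(pt, n_quarks):
    # Coalescence ansatz: n quarks, each carrying pT/n, add their flow.
    return n_quarks * v2_quark(pt / n_quarks)

# Scaling v2 and pT by n (2 for mesons, 3 for baryons) yields one curve:
for pt_per_quark in (0.5, 1.0, 1.5):
    meson  = v2_hadron(2 * pt_per_quark, 2) / 2
    baryon = v2_hadron(3 * pt_per_quark, 3) / 3
    assert abs(meson - baryon) < 1e-12
print("meson and baryon v2/n coincide at equal pT/n")
```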
Impact of the RHIC Program
The scientific impact of the RHIC program has been spectacular for a number of reasons. First, RHIC has the ability to produce very large data samples, allowing the possibility of homing in on specific, very discriminating observables. Second, the flexibility provided by colliding protons on protons, deuterons on gold, and gold ions on each other opens the way to a natural set of calibration measurements, all made with the same detectors at the same energy. Third, the wide range of available energies allows for a systematic study of observables that identify the properties of QGP and a systematic exploration of the phase diagram of QCD. The combination of flexibility in its design and the dedication of much of its operations to heavy-ion collisions makes RHIC unique among past and present accelerators. RHIC’s successes and opportunities can be traced directly to these attributes. Finally, it is also worth noting that the RHIC program and its exciting science discoveries have attracted outstanding young physicists into nuclear physics, further cementing its impact.
In the coming decade nuclear physicists will perform experiments at both RHIC and LHC to address the new scientific questions raised by the RHIC discoveries. The LHC will ultimately achieve heavy-ion collisions with 27 times the collision energy at RHIC, producing QGP that starts out somewhat hotter (perhaps by a factor of two) and that provides probes of this plasma with much higher energy than at RHIC. By 2012, RHIC is expected to reach 20 times its original design luminosity (number of collisions per second). Advances in accelerator physics (“stochastic cooling”) will enable RHIC to reach this “RHIC II benchmark” about 4 years earlier than had been envisioned at the time of the 2007 Long Range Plan. Many of the detector upgrades described in 2007 are already in place, and others are anticipated between 2012 and 2014.
Answering the big questions posed above will require measurements at both the upgraded RHIC and the new LHC. First results from lead-lead collisions at the LHC show an elliptic flow pattern nearly identical to that found at RHIC, suggesting little or no diminution of the perfect-liquid phenomena observed at RHIC. If this is confirmed by further measurements and theoretical calculations, the experiments at the LHC will yield the highest energy probes of this liquid. However, the greater flexibility and luminosity at RHIC will give it many advantages in varying the masses and energy of the colliding nuclei to systematically investigate the properties of QGP in various regimes.
The Search for the Critical Point
Lattice QCD calculations show that in a matter-antimatter symmetric environment the transition from a gas composed of mesons and baryons to the QGP occurs smoothly as a function of increasing temperature, with many thermodynamic properties changing dramatically but continuously within a narrow temperature range. In contrast, if nuclear matter is squeezed to higher and higher densities without heating it significantly—a feat accomplished in nature in the cores of neutron stars (see Box 2.4)—sharp phase transitions (as in the boiling or freezing of water) may result. A map of the expected QCD phase diagram (Figure 2.25) predicts that the continuous crossover currently being explored in heavy-ion collisions at the highest RHIC energies will become discontinuous if the excess
of matter over antimatter is larger than a certain critical value. This critical point where the transition changes its character is a fundamental landmark on the QCD phase diagram. While new lattice QCD methods have enabled significant progress in the past 5 years toward the goal of locating the QCD critical point, its location remains unknown.
The excess of matter over antimatter in the exploding droplet produced in a heavy-ion collision can be increased by decreasing the collision energy, which reduces the production of matter-antimatter symmetric quark-antiquark pairs and gluons relative to the matter brought in by the colliding nuclei. Decreasing the collision energy also decreases the initial temperature. A series of heavy-ion collision measurements scanning the collision energy can therefore explore the transition region of the QCD phase diagram all the way down to collision energies in which the initial temperature no longer reaches the transition. RHIC completed the first phase of such an energy scan in 2011, taking data at a series of energies to search for a critical point in the phase diagram (if one exists) at a “baryon chemical potential” (a measure of the excess of matter over antimatter) up to about half that in cold nuclear matter. Recent theoretical developments have identified specific event-by-event fluctuation observables most likely to be enhanced in collisions that cool in the vicinity of the critical point. They have also predicted ratios of different fluctuation observables, which, if seen, would fingerprint enhanced fluctuations in the data as being due to the proximity of the critical point. The large range of temperatures and chemical potentials permitted by the flexibility of RHIC, along with important technical advantages in the measurement of fluctuation observables at a collider and the recently upgraded detectors, give RHIC scientists an excellent opportunity to discover a critical point in the QCD phase diagram if, indeed, this landmark falls in the experimentally accessible regime. Later in the decade, the Facility for Antiproton and Ion Research (FAIR) at the GSI Helmholtz Centre for Heavy Ion Research will extend the search to even higher matter over antimatter excess.
Phases of Dense Matter and Neutron Stars

As the liquid quark-gluon plasma (QGP) that filled the microseconds-old universe expands and cools, it undergoes a phase transition in which it condenses into protons and neutrons, much as steam condenses into droplets of water. This phase transition is merely the beginning: Just as the phase diagram for steam + water + ice (Figure 2.4.1, top) features a rich tapestry of phases and phase transitions, so too does the phase diagram for QCD, illustrated in Figure 2.26 in the main text, in which many phases of matter are expected at high densities and low (relative to trillions of degrees) temperatures. Within just the nuclear matter regime of the phase diagram, where the important axes include the neutron-to-proton ratio as well as the temperature and the density or pressure, varied phases with diverse physical properties emerge (Figure 2.4.1, bottom). Examples include (1) nuclear matter (a liquid of neutrons and protons) with different patterns of superfluid pairing, (2) a dilute gas of deuterons and alpha particles, and (3) rigid crystals made of charged nuclei immersed in a superfluid of neutrons.

Neutron stars are magnificent laboratories within which various phases of dense matter are found. Just below the surface of a neutron star, increasingly neutron-rich nuclei form a rigid crust until a critical density of 4.3 × 10¹¹ g/cm³ is reached. Below this depth, the excess neutrons cannot be bound by nuclei and they begin to occupy the space between the nuclei. These unbound neutrons, interacting with one another, become superfluid. Further down, instead of a crystal of nuclei one finds “pasta phases” in which rodlike (noodle-like) and slablike (lasagna-like) structures are embedded in the superfluid. Eventually the pressures are so high that these structures simply merge to form a uniform fluid of neutron-rich matter. This matter (and the neutron fluid above it) is expected to be superfluid only below some density-dependent and therefore depth-dependent critical temperature. It is exciting that observations of the neutron star in the center of the debris from the 320-year-old Cassiopeia-A supernova indicate that it cooled detectably between 2000 and 2010, since cooling that is this rapid could be a sign of the onset of superfluidity, which is expected to happen over decades at different depths.

The properties of matter, as found in neutron stars, at densities significantly greater than that inside large nuclei, have so far eluded fundamental description. What kinds of particles (neutrons and protons? mesons? quarks?) are most important at different densities? And, how does matter make the transition from superfluid neutrons and protons to quark matter at higher densities? Is it sharp or gradual? At higher densities, the usual descriptions of dense matter as a collection of neutrons and protons interacting by static forces must break down: Not only do poorly characterized forces between many neutrons and protons come into play, but a picture only in terms of individual neutrons and protons becomes invalid as the neutrons and protons begin to overlap substantially at higher densities. Indeed, one expects an onset of quark degrees of freedom as these overlaps become important.

One of the central physical characteristics of nuclear matter is its stiffness—that is, how hard one has to squeeze it in order to increase its density and hence how well it can support matter above it against the overwhelming gravitational forces trying to collapse neutron stars. The stiffness of matter that is a few times denser than nuclei determines the structure of neutron stars, which is governed by a balance between the attraction due to gravity and the stiffness of the matter of which the star is composed. Uncertainties in the properties of matter at densities above those in large nuclei are reflected in uncertainties in the maximum possible mass a neutron star can have, an important factor in distinguishing a possible black hole from a neutron star by measurement of its mass. Astrophysical determinations of neutron star masses and radii strongly constrain the possible stiffness of nuclear matter at high densities. Essentially, the stiffer the matter, the higher the maximum mass that a neutron star can have, but the lower must be the maximum densities found in the centers of neutron stars. Observations of high-mass neutron stars therefore place a lower limit on how stiff matter must be. While it has been known for decades that many neutron stars have masses about 1.4 times that of our sun, the exciting recent discovery of a neutron star with a mass that is reliably determined to be 1.97 ± 0.04 times that of our sun significantly raises the lower bound on the stiffness of nuclear matter and significantly reduces our uncertainty in this fundamental aspect of the phase diagram of QCD. This discovery also tells us that maximum densities in neutron stars do not rise to an order of magnitude above those in laboratory nuclei. Lower maximum densities leave less room for exotic matter in the interior and thus constrain the ways in which neutron stars can cool by emitting neutrinos.

The observation of high-mass neutron stars, especially when combined with future neutron star cooling data, presents a deep challenge to our understanding of high density interacting nuclear matter, and at the same time points to the directions that must be taken to make significant advances in solving this outstanding problem.

Perhaps the biggest surprise from jet quenching measurements at RHIC is that heavy charm and bottom quarks plowing through the plasma seem to lose energy at a rate comparable to that of light quarks. If the QGP had a particulate description, this would be as surprising as seeing no difference between a bowling ball and a ping-pong ball plowing through a gas of ping-pong balls. Yet data from RHIC on heavy quarks identified indirectly via the isolated electrons produced in their decays show that these “bowling balls” feel the strongly coupled QGP just as much as light quarks do. They not only lose energy, they also seem to be dragged along by the expanding fluid, behaving just like another component of the fluid. At a qualitative level, this is exactly what is expected for a strongly coupled QGP liquid based on calculations of how heavy quarks interact with such a fluid that have been done in QCD-like theories using gauge/gravity duality. The experimental conclusions about heavy quarks are not yet definitive because at present the experiments cannot separate charm quarks from bottom quarks. Upgrades to the detectors in the Pioneering High Energy Nuclear Interaction Experiment (PHENIX) and the Solenoidal Tracker at RHIC (STAR), which are expected to come on line between 2011 and 2014, are designed precisely to address this issue, by separately identifying those particles formed from charm quarks and those formed from bottom quarks.
A second frontier in jet quenching studies is the move from observables based on the measurements of one or two particles to studies of how the angular shape of a jet is modified by the strongly coupled plasma. With the increased luminosity anticipated from 2012 onward, the RHIC experiments will be able to do such measurements for jets with energies more than a hundred times the temperature of the QGP produced in RHIC collisions. At the LHC, such studies have begun
using jets with energies that are several hundred times the temperature of the QGP produced in those collisions. It will be very interesting to compare the jet modifications seen in these two regimes and to learn in which regime the effects are larger and in which they can best be measured quantitatively.
At the LHC, the precision of theoretical analyses of jet quenching is expected to improve owing to the higher energy of the jets produced. In addition, new kinds of measurements directly related to LHC’s extremely high collision energy will become available. For example, RHIC only recently achieved sufficiently high luminosities to see events in which a single jet is produced back-to-back with one very high energy photon. Such events will be copiously produced at the LHC. The photon flies right through the quark-gluon plasma, allowing the experimenters to measure the initial energy of the jet. Comparison to jets of that same energy in elementary collisions will allow direct measurements of how the strongly coupled plasma modifies the jet.
Studying how the strongly coupled liquid responds to an energetic quark or gluon shooting through it represents a third frontier. There are some indications of a “sonic boom” in the fluid, excited by the supersonic projectile, but the relevant features in the data can also arise from event-by-event fluctuations in the initial shape of the collision zone. Calculations of similar processes in the strongly coupled plasmas of both QCD and the QCD-like theories analyzed via gauge/gravity duality indicate that a quark or gluon can excite a sonic boom, but they also show that more momentum is transferred to the more prosaic wake of moving fluid left behind the energetic probe. The best experimental avenue to resolving these questions is to study collisions between nuclei of different sizes—for example, copper-gold collisions, which can easily be produced at RHIC. The sonic booms, if present, should be similar in these collisions, but the confounding event-by-event fluctuations in the initial shape of the collision zone can be minimized in the head-on collisions of two different-size nuclei.
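The "sonic boom" picture implies a characteristic Mach-cone opening angle set by the ratio of the plasma's sound speed to the projectile's speed. The sound speed c/√3 used below is the ideal ultrarelativistic-gas value, assumed here purely for illustration:

```python
# Back-of-envelope Mach-cone angle for a near-light-speed parton in QGP,
# assuming the ideal ultrarelativistic sound speed c_s = c/sqrt(3).
import math

c_s = 1.0 / math.sqrt(3.0)   # sound speed, in units of c (assumed)
v   = 1.0                    # speed of the energetic quark or gluon

mach_angle_deg = math.degrees(math.asin(c_s / v))
print(f"Mach-cone half-angle: {mach_angle_deg:.1f} degrees")
# roughly 35 degrees from the direction of the projectile
```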
This many-faceted experimental program, combined with advances in the theoretical modeling needed to interpret the data, will move the field beyond the qualitative conclusions drawn from the present jet-quenching measurements to quantitative statements about essential properties of QGP.
The hallmark of the QGP is deconfinement, meaning that at high temperatures the quarks and gluons are not confined within baryons and mesons. The very nature of the QGP “screens” the attractive force that normally binds quarks into baryons and mesons. This poses a quantitative question: How close do the quarks have to be in order for their attraction not to be screened? Mesons made from a heavy quark and its heavy antiquark can be used to answer this question because these heavy mesons, referred to generically as “quarkonia,” are significantly smaller than protons and typical mesons or baryons. In free space, charm-anticharm mesons (called J/Ψ mesons) are roughly half the size of protons, and bottom-antibottom mesons (called Y mesons) are roughly one quarter the size of protons. It is expected that as the temperature of the QGP increases, one first reaches a regime in which protons and most baryons and mesons have dissolved but the J/Ψ and Y mesons remain intact. At higher temperatures, the J/Ψ mesons dissolve because the QGP has become hot enough to screen the attraction between a quark and an antiquark separated by the size of a J/Ψ, and only at a still higher temperature (by roughly a factor of two) do the smallest mesons known, the Y mesons, dissolve. Lattice QCD calculations of the screened quark-antiquark potential support this picture, but a definitive experimental confirmation is not yet in hand. Unless obscured by other effects, the sequential deconfinement of these heavy quarkonia should be signaled by a reduction in the number of such mesons detected in the debris of the collision.
This scenario suffers from several complications. For example, given the novel particle production mechanism described above, even if quarkonia dissolve in the QGP liquid, as the QGP cools and reassembles into mesons and baryons, charm and anticharm quarks may find each other and regenerate quarkonia. This confounding complication can be resolved via experimental measurements of the degree to which quarkonia participated in the collective flow of the exploding fluid. This extremely challenging measurement will become possible only with increased luminosity at RHIC. Another concern is that both the J/Ψ and the Y mesons can be excited into larger states, which should dissolve at lower temperatures. Data on both Y and J/Ψ mesons, in heavy ion collisions and in collisions between a proton (or deuteron) and a heavy ion, which serve as a control, can help disentangle this complication. RHIC will have access to the (rare) Y mesons only after its luminosity upgrade is complete in 2012. It is expected that the LHC (with its higher energy) and RHIC (with its higher luminosity and longer heavy ion runs) will see roughly the same number of J/Ψ and Y mesons per year. The combined analysis of data from RHIC and the LHC should allow comparisons of the production rates for various quarkonium states in QGP produced with initial temperatures differing by roughly a factor of two. These measurements are demanding, but they have the potential to confirm the pattern of sequential deconfinement and to yield a quantitative measure of the effectiveness with which QGP at varying temperatures can screen the quark-antiquark attraction.
In addition to varying the temperature (RHIC compared to LHC) and the meson size (J/Ψ compared to Y), experimentalists can study quarkonia moving through the strongly coupled quark-gluon plasma at varying speeds. Calculations performed via gauge/gravity duality predict that strongly coupled QGP with a given temperature screens the quark-antiquark attraction more effectively for a quark-antiquark pair that is moving than for one at rest. This indicates that high-velocity quarkonia will dissolve in a lower temperature QGP than quarkonia at rest, making them less numerous. Modeling based on weakly coupled quark-gluon plasma suggests the opposite. High-velocity quarkonia are rare, but the almost completed RHIC luminosity upgrade should bring them within reach, providing us with another measurement that can discriminate between fundamentally different theoretical descriptions of the QGP.
Since 2011, RHIC has had a new ion source, the Electron Beam Ion Source (EBIS), which makes it possible to accelerate uranium nuclei. These highly deformed nuclei are about 30 percent larger from pole to pole than across their equators. The goal of the initial brief uranium-uranium run anticipated at RHIC in 2012 is to demonstrate that experimentalists can select a subset of all the collisions in which the nuclei collide tip-to-tip, and a different subset in which they collide side-on-side. If this can be done, uranium-uranium collisions in the coming years can play a significant role in answering currently open questions about how perfect the QGP liquid is and about jet quenching. The side-on-side collisions will produce elliptical droplets of QGP with an initial density profile different than in the off-center collisions of spherical nuclei. This opens a path to separating the effects of variations in this density profile from the effects of a small but nonzero η/s. The tip-to-tip collisions will achieve higher energy densities than current RHIC collisions in a smaller transverse area. Currently, reducing the transverse area by selecting off-center collisions necessarily also reduces the energy density. The comparison between uranium-uranium and gold-gold collisions will allow separate control of the size of the QGP droplet and its energy density, allowing for clean studies of the path-length dependence of jet quenching observables, one of the key discriminants between different theoretical calculations that model the energy loss of energetic quarks moving through the QGP.
Direct Photon Flow
The observation at RHIC of the glow of the QGP was described above. With the much higher luminosities anticipated in several years, it will become possible to measure the angular distribution of this electromagnetic radiation. Once they are produced, photons do not interact further with the QGP. This means that the angular asymmetry of the light from the glowing QGP will enable experimenters to see earlier stages of the expanding QGP droplet, opening another pathway to a quantitative determination of the figure of merit η/s.
Lattice gauge theory is an essential tool for calculating the properties of strongly interacting matter in thermodynamic equilibrium directly from the fundamental theory of QCD. As a result of a series of theoretical breakthroughs and the rapid rise of computing power, lattice QCD has had a significant impact on the understanding of experimental data and static properties of strongly coupled QGP. Lattice calculations like those in Figure 2.26 have determined the equation of state for QCD matter, and hence the temperature of the crossover transition between ordinary matter and quark-gluon plasma, with an accuracy of better than 10 percent. Lattice QCD calculations of the static screening potential between two heavy quarks are an essential phenomenological input to analyses of heavy quarkonia within quark-gluon plasma. Progress has been made using lattice QCD to map out the phase diagram of QCD at nonzero temperature and moderate matter-over-antimatter asymmetry. In the coming decade it should be possible to use lattice techniques to determine the location of the QCD critical point, with important consequences for experimental efforts to detect this important feature of the QCD phase diagram. Lattice calculations of the dynamical properties of strongly coupled QGP are even more challenging. Pioneering calculations of transport coefficients, which describe how the plasma responds to external perturbations, have begun. These calculations address quantities including the shear and bulk viscosities and electrical conductivity of QGP, the diffusion constant for heavy quarks, and the fluctuations of conserved charges and of the density. They are still in their infancy: For example, almost all of them neglect the presence of quarks and antiquarks in the QGP. 
To have quantitative relevance for RHIC and LHC phenomenology, such calculations must be performed with dynamical light quarks, which will become possible in the coming decade as computing capability approaches the exascale regime.
Relativistic Dissipative Fluid Dynamics
In the last several years, nuclear theorists for the first time established a consistent framework for relativistic hydrodynamic calculations that include the effects of the shear and bulk viscosities. They then developed the codes needed to describe the anisotropic expansion of the exploding droplets of QGP produced in heavy ion collisions. An important microphysical input to these calculations is the equation of state, obtained from the lattice QCD calculations discussed above. These viscous hydrodynamic calculations have become a pillar of quantitative RHIC phenomenology, and they provide the benchmarks for early LHC data. It is these calculations that have made it possible to say that the RHIC results establish that the figure of merit for QGP, η/s, is less than 0.5 and probably less than 0.2. Nuclear theorists are currently developing the next generation of improvements to these calculations that will be needed to use data to bound the figure of merit η/s from below. For example, they are improving the treatment of how the QGP falls apart into mesons and baryons, and they are doing systematic surveys of the effects of the initial density profile of the almond-shaped droplet of QGP and its fluctuations. Heavy-ion collisions are highly dynamical, meaning that our view of the fundamental properties of the new states of matter created in them is distorted by the complex evolution of the fireball as it expands and cools. Viscous hydrodynamic calculations are one example of the sophisticated and calibrated modeling needed to interpret the measurements. As theoretical advances occur there is a progression from qualitative insights to semiquantitative ones (as with viscous hydrodynamics today) to quantitative analyses that take systematic effects into account.
Significant conceptual progress has been made over the past decade on one of the most intractable problems of QCD: the high-energy, as opposed to short-distance, limit of strong interaction phenomena. The fundamental new insight is that a semiclassical description of gluon fields is appropriate when a hadron or nucleus is probed at very high energy and the probe coherently interacts with many gluons at once. In the extreme high-energy limit, which is reached earlier for large nuclei than for individual hadrons, the system is described by a superposition of randomly oriented classical gluon fields (the “color-glass condensate”). This classical state, whose properties are the same for all hadrons, is expected to be weakly coupled as long as the momentum scale Qs below which it is found is large enough. The entropy released by the decoherence of such classical color fields is thought to be responsible for a large share of the particle multiplicity measured in the collisions of nuclei at RHIC. These new insights raise new challenges. One is to develop calculations that will allow measurements of particle production at the LHC, at forward angles at RHIC, and, it is hoped, at a future electron-ion collider to yield a quantitative determination of Qs, leading to an understanding of the saturated gluonic matter that is predicted to exist inside every nucleus. Another challenge is to understand how the shards released by shattering these classical gluon fields so quickly form the strongly coupled fluid seen at RHIC.3
3 Portions of this paragraph were adapted from the DOE/NSF, Nuclear Science Advisory Committee (NSAC), Subcommittee on Nuclear Theory, 2003, A Vision for Nuclear Theory. Available at http://www.nucleartheory.net/docs/NSAC_Report_Final8.pdf.
The experimental discovery that the QGP produced in RHIC collisions is a strongly coupled plasma with low viscosity, not a dilute gaseous plasma, poses a challenge to theorists. While lattice QCD is the proper tool for understanding the static equilibrium properties of such a strongly coupled plasma, it does not allow us to calculate its dynamical evolution or response to probes. Gauge/gravity duality is a new tool, originally developed in the context of string theory, that has been brought to bear on this problem in recent years. Whereas the former tool of choice, perturbation theory, makes this problem tractable by assuming weak coupling, gauge/gravity duality works best in the opposite regime where couplings are very strong, as is now known to be appropriate. The duality has now been established as a way to gain qualitative insight into the physics of strongly coupled plasma, with its liquidlike behavior and absence of particle-like constituents. These calculations have already yielded new qualitative insights. For example, they show that η/s = 1/(4π) in any very strongly coupled plasma containing many “colors” of gluons that has a standard gravitational description. In addition, they explain how QGP can be so strongly coupled that it is a liquid with such a low viscosity and no particle-like excitations, even though its equilibrium properties are within a few tens of a percent of what they would be for a dilute gaseous plasma. They also predict that even the heaviest quarks should be dragged along by the moving fluid, as the data seem to indicate.
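For orientation, the gauge/gravity value quoted above can be written with its units restored and set beside the experimental bounds stated earlier (the factors of ħ and k_B are conventionally set to one in the text):

```latex
% Kovtun-Son-Starinets value for any strongly coupled plasma with a
% standard gravitational dual, with natural-unit factors restored:
\frac{\eta}{s} = \frac{1}{4\pi}\,\frac{\hbar}{k_B} \approx 0.08\,\frac{\hbar}{k_B}
% versus the RHIC-based bounds quoted earlier in this chapter:
\frac{\eta}{s} < 0.5\,\frac{\hbar}{k_B} \qquad \left(\text{probably } < 0.2\,\frac{\hbar}{k_B}\right)
```

The experimental upper bounds thus sit within a small factor of the duality prediction, which is what motivates calling the QGP a nearly perfect liquid.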
To better assess the semiquantitative agreement of analytical calculations with experimental results, theorists need to better understand which properties of strongly coupled gauge theories are universal—that is, independent of “microscopic details”—and thus can be calculated via gauge/gravity duality in theories that are cousins of QCD rather than QCD itself. This question must be addressed by extending these calculations to more observables and to more QCD-like theories. Opportunities for the next decade also include finding gravitational descriptions of dynamical processes that are more akin to heavy-ion collisions themselves than to dynamical probes of a static plasma, and gaining insight into how hydrodynamic behavior can begin so quickly after a collision.
Cold Dense Nuclear and Quark Matter
QCD has been shown to provide rigorous analytical answers to the question, What are the properties of matter squeezed to arbitrarily high density? It has long been known that cold dense quark matter, as may occur at the center of neutron stars, must be a color superconductor, as indicated in the lower right section of the phase diagram displayed in Figure 2.25. Theoretical advances during the last decade have made this subject more quantitative and much richer. An analytic ab initio calculation of the pairing gap and critical temperature at very high densities has now been done, and the properties of quark matter at these densities have been determined: It is a superfluid and, though color superconducting, has a massless “photon” and behaves as a transparent insulator. At densities that are lower but still above those of ordinary nuclei, color superconducting quark matter may (in a particular sense) crystallize, developing a rigidity several orders of magnitude greater than that of a conventional neutron star crust. Or, it may change continuously into nuclear matter as a function of decreasing density, with quarks and pairs of quarks combining to make protons and neutrons. It remains an outstanding challenge to understand this regime of the phase diagram of QCD. The challenge has very recently become pressing by virtue of the discovery of the heaviest neutron star so far, which has a well determined mass that is 1.97 ± 0.04 times that of the sun (see Box 2.4). The equations of state for ordinary nuclear matter that allow for such a heavy neutron star indicate that the density of the nuclei at its center must be roughly five times that of ordinary nuclei. At such densities it is far from clear that a description in terms of compressed but otherwise ordinary nuclear matter is reliable. Insight might be gained by experiments done with trapped ultracold fermionic atoms (see Box 2.2), in which partial analogues (with simplified quark-quark interactions) of the cold quark matter to cold nuclear matter transition may be engineered in the laboratory, yet another illustration of the developing insights that connect the study of QCD matter to other fields of physics.
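To put “roughly five times that of ordinary nuclei” in more familiar units, a back-of-the-envelope conversion of the nuclear saturation density (using round numbers) gives:

```latex
% Nuclear saturation density:
n_0 \approx 0.16\ \text{nucleons/fm}^3
% With m_N \approx 1.67\times10^{-24}\,\mathrm{g} and 1\,\mathrm{fm}^3 = 10^{-39}\,\mathrm{cm}^3:
\rho_0 = n_0\, m_N \approx 2.7\times10^{14}\ \mathrm{g/cm^3}
% so the core of the 1.97-solar-mass neutron star would reach
\rho \approx 5\,\rho_0 \approx 1.3\times10^{15}\ \mathrm{g/cm^3}
```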
The strong force accounts for the vast majority of the visible matter in the universe. The study of that strong force and the constituents that make up the internal structure of nuclear building blocks—the proton and the neutron—are the subjects of this section.
QCD, introduced in the section “Quark-Gluon Plasma,” provides the underlying first-principles description of the strong force in the Standard Model. In QCD, protons and neutrons are described as three-quark bound states of the lightest two quarks (up and down). The strong force that binds them has some remarkable properties, like an energy field (the gluons) that is self-generating and a strength that increases with distance. For example, as discussed in Chapter 1, approximately 99 percent of the proton’s mass (or rest energy) comes from the motion of the quarks inside it and from the mediators of the strong force, the massless gluons, interacting with each other. The quark masses play almost no direct role. This is in stark contrast to an atom, whose mass is largely due to the rest mass of its nucleus and its orbiting electrons. In seeming contradiction to this very dynamical generation of proton mass is the success of the “valence quark” picture, which describes neutrons and protons with just three massive quarks and correctly predicts many basic properties of neutrons and protons, such as their overall charge and magnetic moment. Reconciling these two pictures in the context of QCD would be a major accomplishment. None of the other known forces of nature presents such a challenge—namely, that an explanation of the most basic features cannot rely on the assumption that matter is made of constituents that interact only very weakly with each other.
Two other phenomena whose explanation is believed to lie at the heart of QCD are (1) confinement, the fact that individual quarks and gluons have never been detected in isolation, and (2) dynamical chiral symmetry breaking, a phenomenon by which the nearly massless quarks of QCD acquire energy from their interactions with gluons to generate a massive proton. While these phenomena are not obvious in the basic equations that describe QCD, they play a principal role in determining the observable characteristics of atomic nuclei. Yoichiro Nambu was awarded a share of the Nobel prize in physics in 2008 in recognition of his contribution to understanding dynamical chiral symmetry breaking. Enormous advances in the use of relativistic quantum field theory—the unification of relativity and quantum mechanics invented by Feynman, Schwinger, and Tomonaga—have occurred in the last few decades. These conceptual advances, together with dramatically improved computational tools, have enabled great strides in our understanding. Now, partly through feedback between numerical and analytic methods, theory can explain the origin of chiral symmetry breaking and is beginning to demonstrate its far-reaching consequences for the properties of protons and neutrons. A connection between confinement and chiral symmetry breaking is suspected, and its elucidation is now the focus of intense theoretical activity.
Some of the open questions for this field are the following:
- What are the internal structural properties of protons and neutrons and how do those properties arise from the motions and properties of their constituents?
- How do those properties change when protons and neutrons are combined into complex nuclei?
- Can QCD describe the full spectrum of hadrons in both their ground and excited states?
The last decade has also seen tremendous growth in the development of precise experimental and theoretical tools to disentangle the various mechanisms by which the interaction between quarks and gluons results in the measured properties of nucleons. Through these combined experimental and theoretical efforts, nuclear scientists are making significant headway in understanding the nature of the visible matter in the universe at a fundamental level.
Broadly speaking, much of what is known about the properties of any particular force of nature or how the force manifests itself in the world around us is determined from a few kinds of measurements that are compared to the prevailing theory. This is as true with QCD as it is with electromagnetism and gravity. Understanding how nuclei and various phenomena emerge from the underlying theory of QCD and discovering the new phenomena that will guide us to a more complete picture is the challenge being addressed by the current and future experiments in QCD.
The fundamental composition of the lightest bound states formed by an interaction can be determined by establishing their internal “landscape.” With gravity, for example, an understanding of the solar system can be gained through mapping the locations of the planets as a function of time. In electromagnetism, it is achieved through precision measurements of the energies of the electrons orbiting in atoms. For QCD, the fundamental quantities that provide the spatial map of the interior of neutrons and protons are the electromagnetic “form factors,” which provide a picture of the average spatial distributions of their charge and magnetism. Historically, these properties have been determined largely from the scattering of energetic electron beams from light nuclei. The two most common targets are hydrogen, which has a single proton as its nucleus, and deuterium, which consists of one proton and one neutron, allowing access to information about neutrons. As shown in Figure 2.27, electrons in the beam give off electromagnetic energy as they encounter the target. Using the interaction of these electromagnetic waves with the target to “see” its structure is in essence the same as using light to see objects with a microscope: The shorter the wavelength of the light, the higher the resolving power. By tuning the energy of the beam and the angle at which the electron is scattered from the target, the interior of the neutrons and protons in the target can be probed with increasingly higher spatial resolution as the energy of the beam increases.
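The microscope analogy can be made quantitative with an uncertainty-principle estimate; the numbers below are illustrative round figures:

```latex
% Spatial resolution achievable with momentum transfer Q:
\Delta x \approx \frac{\hbar c}{Q}, \qquad \hbar c \approx 0.197\ \mathrm{GeV\,fm}
% For example, Q = 1\ \mathrm{GeV} resolves
\Delta x \approx 0.2\ \mathrm{fm}
% about one-fifth of the roughly 1 fm radius of the proton.
```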
The earliest of these spatial maps determined the overall size of protons and neutrons, which turns out to be roughly 1 femtometer (10⁻¹⁵ meters), and established the extraordinary fact that neutrons, while they are overall electrically neutral, have a distribution of charge and magnetism indicative of a complex internal structure. Since then, and especially in the last two decades, advances in the technology used to create both beams and targets have led to a new class of precise measurements of these spatial maps and an impressive number of surprising and unanticipated results.
The most important advances have been in the ability to align (“polarize”) the spins of both the electron beam and the protons and neutrons in the target nuclei. The electron beam at JLAB can currently be polarized to 85 percent, with excellent beam quality that allows for measuring subtle differences in scattering rates to a precision of a few parts per billion. Targets of polarized helium-3 nuclei are now routinely used for a broad range of studies of the internal structure of the neutron as well as for other applications, some of which are discussed in Box 2.5.
Early in this millennium, polarization-based experiments at JLAB revealed that the proton’s charge and magnetism distributions are quite different. This is in stark contrast to what had previously been assumed and written in textbooks based on decades of unpolarized electron scattering measurements. This is illustrated in Figure 2.28, where the ratio of the electric to magnetic form factors is shown as a function of the momentum transferred (Q) by the electron to the proton target. The form factors relate directly to the spatial distributions of charge and magnetism—higher Q corresponds to higher resolving power. The discrepancy between the two methods is now thought to be due to “two-photon” contributions that obscured the interpretation of the unpolarized scattering results. The differences between the proton’s charge and magnetism distributions reveal the effects of orbital motion of the quarks inside protons and neutrons, a key feature that must be incorporated into the valence quark picture. This orbital motion must play an important role in the understanding of the proton’s spin, since, as noted below, it appears that the gluons play almost no role.
A subtle but unique complication with mapping out the spatial interior of the proton comes from the fact that its internal components are highly relativistic and do not orbit around a stationary point in the proton’s interior. This situation is quite unlike the distribution of neutrons and protons in a nucleus (see Figure 1.3) or of electrons in an atom (or in a solid material). Recent theoretical efforts have
Polarized Nuclei for Medicine and Materials
Over the last two decades, nuclear spin-polarized helium-3 targets have been developed extensively for electron scattering experiments carried out at facilities such as JLAB (Figure 2.5.1), the Mainz Microtron facility, the Bates Laboratory at MIT, and now also the Duke High Intensity Gamma Source (HIGS). Because to a good approximation the nuclear spins of the protons in the helium-3 are paired off, polarized helium-3 provides a practical target for studies of properties of neutrons such as charge, magnetism, and spin distributions. In parallel with the production of polarized targets, the production of spin-polarized nuclei through optical pumping has been adopted and further developed for other applications such as medical imaging, precision experiments to test fundamental symmetries and few-nucleon systems, and the study of magnetism in materials using polarized neutron scattering.
Magnetic resonance imaging (MRI) using polarized helium-3 or xenon-129 enables direct observation of inhaled gas in airways, such as in lungs or sinuses, which is not possible with standard MRI, which is based on detecting water in tissue. Figure 2.5.2 shows, for example, an MRI lung image of a firefighter who was at ground zero following the 9/11 attack on the World Trade Center.
The use of polarized helium-3 nuclei to create and analyze polarized neutrons for the study of magnetic materials is expanding rapidly worldwide as part of the expanding field of neutron scattering. Neutrons with their spins aligned in the same direction as a sample of polarized helium-3 nuclei will essentially pass through the sample, whereas those with spins antialigned are strongly absorbed. As a result, the measurement of neutron transmission through a polarized helium-3 sample allows one to spin-polarize or analyze cold, thermal, and epithermal neutron beams, essentially becoming a “neutron spin filter.” Neutron spin filters can uniquely separate and identify scattering due to magnetism as opposed to scattering due to nuclear effects. Such filters could be used to address a broad range of issues in physics, chemistry, materials science, and biology. Examples include profile measurements in thin films and multilayered materials, measurements of magnetic moments, measurements of magnetization density distributions in paramagnets and ferromagnets, and measurements of magnetic domain sizes in spin glasses and amorphous magnets.
Neutron spin filters based on polarized helium-3 have some significant advantages over more traditional methods of polarizing neutrons, such as the ability to accommodate large divergence beams and to be effective over a wide range of energies. Beginning in the 1990s, development and application of neutron spin filters for both neutron scattering and fundamental symmetry experiments has been carried out at nuclear reactors and neutron spallation sources such as the Institut Laue-Langevin in Grenoble, France, and, in the United States, the National Institute of Standards and Technology and DOE’s ANL and Los Alamos National Laboratory. In several cases, the polarized gas is produced remotely and then transported to a variety of neutron instruments for the study of new magnetic materials that are important for both magnetic recording and medical physics applications (see Figure 2.5.3). For example, the structure of magnetic nanoparticles is being elucidated through small-angle polarized neutron scattering, and patterned magnetic arrays are being studied by neutron reflectometry.
As the pioneering programs have developed, their polarization production techniques have been exported to other neutron laboratories throughout Europe, Asia, and the United States, including the Spallation Neutron Source (SNS) at ORNL. The application of neutron spin filters is expected to increase substantially over the next decade now that the SNS is in operation and with a major expansion under way at NIST’s Center for Neutron Research. New developments in the science and technology of optical pumping and polarization techniques continue to improve not only the neutron-scattering experiments but also medical applications and the targets used for nuclear physics experiments.
shown that by looking only at the two “transverse” dimensions, that is, those perpendicular to the axis defined by the incident electron, a two-dimensional map can be cleanly and uniquely defined. Such two-dimensional maps of the proton’s and neutron’s charge are shown in Figure 2.29. They result from a global fit to electron scattering data and are for nucleons polarized in the x direction and observed from a reference frame riding along with a photon (Breit frame). One can see the apparent development of an electric dipole moment in the y direction. The dipole behavior is more evident for the neutron since it has no net charge and therefore no “monopole” contribution, which acts like a background. This phenomenon is entirely due to the interplay of special relativity and the internal structure of nucleons.
Precise information about the neutron’s spatial structure is much harder to achieve: this ambitious goal was highlighted in the last decadal study. Within the last 5 years, the combined results of polarization experiments at JLAB, at the Mainz Microtron facility in Germany, and at the Bates Laboratory at the Massachusetts Institute of Technology (MIT) have substantially changed this situation, so that the neutron now is nearly as well understood as the proton.
Numerical lattice QCD calculations of the structure of nucleons, starting from the fundamental degrees of freedom of quarks and gluons on a spatial and temporal grid, have been ongoing for more than two decades. While the calculations have not been able to completely control all systematic errors, they have provided important insight into hadron structure. Recent conceptual and algorithmic developments in lattice QCD, as well as major advances in supercomputer architecture, hold the promise that calculations will soon become fully complementary to experimental measurements as precise and reliable tools for characterizing the interior of protons and neutrons. Basic properties such as the distribution of charge and magnetism inside the neutron and the proton are benchmarks for testing the validity of the theory of the strong interaction. More complex quantities such as the distribution of mass and angular momentum inside the nucleon, where one needs to know both the coordinates and the momenta of the constituents, will require synergy between theory and experiment to be fully characterized.
As already noted, the masses of the quarks in a proton make up only a small fraction of the mass of the proton itself. The vast majority of the proton’s mass comes from the “sea” of gluons and the kinetic energy of the virtual quark/antiquark pairs that pop in and out of existence from the gluon field. These sea quarks and antiquarks are charged like normal quarks, and they must therefore contribute locally to the proton’s overall charge distribution. Although protons and neutrons are usually thought of as being made of just the lightest two quarks (those labeled up and down), the sea of quarks and antiquarks within them in fact includes all six types. The heaviest quarks are rare, but the third-lightest quark—called the strange quark—could be enough of a presence to modify the internal distribution of charge and magnetism. If they are sufficiently numerous, strange quarks could be valuable as a tracer with which to tease out the role of the quark/antiquark sea in building up the structure of protons and neutrons. As a result, the distribution of strange quarks and antiquarks within protons and neutrons has been a subject of intense theoretical and experimental investigation. By scattering polarized electrons from an unpolarized target and looking for tiny changes in the scattering rate when the beam’s spin is reversed, experimenters can pick out pieces of the scattering that come from the weak force through its unique sensitivity to the handedness of the scattering. Combining the weak interaction data with the precision electromagnetic data makes it possible to disentangle the pieces of the proton’s charge and magnetism coming from up, down, and strange quarks.
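The tiny spin-reversal change described above is conventionally expressed as a parity-violating asymmetry. As a sketch of the standard bookkeeping (a textbook relation, not a formula quoted from this report), its definition and characteristic size are

```latex
A_{PV} \;=\; \frac{\sigma_R - \sigma_L}{\sigma_R + \sigma_L}
\;\sim\; \frac{G_F\,Q^2}{4\sqrt{2}\,\pi\alpha}
\;\approx\; 10^{-4}\,Q^2\ [\mathrm{GeV}^2],
```

where $\sigma_R$ and $\sigma_L$ are the cross sections for right- and left-handed electrons, $G_F$ is the Fermi constant, $\alpha$ the fine-structure constant, and $Q^2$ the squared momentum transfer. The parts-per-million scale of this asymmetry is what drives the demanding precision of these experiments.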
The most sensitive of such experiments have been carried out at JLAB, but the combination of all the data taken within the last decade from JLAB, the Mainz Microtron facility in Germany, and MIT’s Bates Laboratory has now very tightly constrained the possible contributions from strange quarks to a less than 5 percent contribution to the proton’s magnetism and a significantly smaller contribution to the proton’s charge.
Model calculations inspired by QCD support the experimental findings. Models with substantial input from lattice calculations are in very good agreement with current experimental results. In the future, direct first principles calculation of the strange quark content of the proton and neutron are expected to become possible for comparison with experimental data.
Experiments looking at the role of the strange quark will have an impact well beyond the study of ordinary matter. These experiments, and the technological advances that have made them possible, are now enabling a new class of highly precise measurements of the weak force through electron scattering that can search for evidence of new interactions beyond our present understanding. These experiments at the “precision frontier,” described in the section “Fundamental
Symmetries,” provide powerful tests of the Standard Model in ways that complement the planned program of the Large Hadron Collider (LHC).
If mass is the first property one thinks of in characterizing a proton, then spin is a close second. Thousands of MRIs performed every day rely on the fact that the magnetic moment of the proton, which is associated with its spin, has an anomalously large value. Yet while the origin of the mass of the proton can at least be postulated, the way in which the proton’s spin arises from the dance of the quarks and gluons within it remains unknown. It should be possible to understand the proton’s spin through the lens of QCD. Before the predictions of the theory were well understood, it was assumed that since every quark in a proton has the same spin as the proton itself, the proton’s spin could be explained by assuming it was made of three quarks, two of whose spins pointed in opposite directions and so cancelled out. If this naïve picture were a good guide, the observed spin of the proton would be due to the spin of its third quark, with the gluons, the sea quarks, and orbital motion all playing unimportant roles. The failure of this picture was first identified at CERN in the late 1980s, where it was discovered that the spin of all the quarks within a proton, when added up, accounted for only a small fraction of the proton’s overall spin. Efforts at laboratories around the world, including Brookhaven National Laboratory (BNL), CERN, Deutsches Elektronen-Synchrotron (DESY), JLAB, and the Stanford Linear Accelerator Center (SLAC), using a number of complementary techniques, have established that quark spins account for only about 30 percent of the proton spin, and very little if any comes from the spin of the gluons. By elimination, this means that orbital motion must be a key player after all. Figure 2.30 gives an accounting of the proton’s spin; identifying the missing pieces has been the subject of substantial effort in the last decade.
Once again, major technical developments in polarized beams and targets, in this case polarized colliding proton beams at RHIC, are enabling the key experiments. The polarized proton beams make it possible to constrain the spin contribution of the gluons to the proton, virtually unknown until now. Figure 2.31 shows results indicating that the gluon spin contributes less than 10 percent of that of the proton over the region in which it has been measured. In the next decade, this limit should be improved dramatically at RHIC as the colliding beam energy ramps up to 500 GeV. The RHIC-spin program has also demonstrated that the charged weak current can provide stringent constraints on the light quark contributions to the total spin. As more data are added to the mix, they will make it possible to determine what fraction of the 30 percent of the proton spin that comes from quarks comes from the sea of quark/antiquark pairs and what fraction comes from the three valence quarks that are responsible for the charge of the proton. These results are
forcing scientists to reconsider the contribution of orbital motion to the spin of the proton and should help create a more complete picture of how the dynamics of quarks and gluons cooperate to produce the protons and neutrons that form the visible matter of the universe.
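The accounting in Figure 2.30 is usually summarized by the proton spin sum rule, written here in its standard schematic form (not quoted from the report):

```latex
\frac{1}{2} \;=\; \frac{1}{2}\,\Delta\Sigma \;+\; \Delta G \;+\; L_q \;+\; L_g ,
```

where $\Delta\Sigma \approx 0.3$ is the net contribution of the quark spins described above, $\Delta G$ is the gluon spin contribution, and $L_q$ and $L_g$ are the orbital angular momenta of the quarks and gluons. With $\Delta\Sigma$ small and $\Delta G$ constrained to be small, the orbital terms must carry much of the balance.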
At DESY, the HERMES experiment provided direct evidence that the quarks bound together in the proton have significant orbital angular momentum, which might account for some of what’s missing. New experiments have been devised to measure the orbital angular momentum carried by the quarks as well as the gluons. Since orbital motion comes about from a sideways motion relative to the direction of a force, processes that can identify the transverse components of the momentum of quarks and gluons are a key focus of planned experiments.
The HERMES experiment opened a new window into the structure of the nucleon. After the energy upgrade to 12 GeV at JLAB is completed, experiments at the lab will systematically probe the correlated space and momentum distributions of the quarks, exposing the pattern of orbital motion. These new experimental results, incorporated into the new conceptual framework being developed by QCD theorists, the so-called “generalized parton distributions,” provide the best known method for determining the total angular momentum of the quarks in the proton due to their orbital motion.
Another challenge is to determine the contribution of each flavor of quark. Unraveling the proton’s flavor structure came to the forefront when, in the 1990s, an unexpectedly large flavor asymmetry in the light quark sea was discovered. Prior to that it had been assumed, based on the simplest estimates from QCD, that the light quark sea had an identical distribution of up and down quarks. A new experiment at Fermilab makes use of the 120 GeV proton beam from its main injector, which turns out to be a nearly ideal energy to study this light quark asymmetry. These measurements will play a role in interpreting data from the LHC at CERN, where experiments will operate in a regime in which detailed knowledge of the distributions of the most energetic quarks in the nucleon is essential.
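In the Drell-Yan measurements used for this purpose, the light quark asymmetry is extracted from a cross-section ratio; in the standard approximation, valid when the beam quark carries most of the momentum (a textbook relation, not a formula from this report),

```latex
\frac{\sigma^{pd}_{DY}(x)}{2\,\sigma^{pp}_{DY}(x)} \;\approx\;
\frac{1}{2}\left[\,1 + \frac{\bar d(x)}{\bar u(x)}\right],
```

where $\sigma^{pd}_{DY}$ and $\sigma^{pp}_{DY}$ are the Drell-Yan cross sections on deuterium and hydrogen targets and $\bar d(x)/\bar u(x)$ is the ratio of antidown to antiup sea quarks at momentum fraction $x$. A measured ratio different from 1 is a direct signal of the flavor asymmetry of the sea.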
As discussed in the section “Exploring Quark-Gluon Plasma,” the main objective of experiments at the LHC is to test our understanding of the properties of matter at the highest energies. In these experiments, collisions of energetic particles result in showers of new particles whose properties are measured with sophisticated detectors. The recorded results of the collision contain both the signature of the interactions that occurred at the highest energies and the imprint of the strong interaction effects (QCD) that occur as the particles slow down moving away from the center of the collision. Uncovering the high-energy effects requires both ingenious experimental techniques and full control of the theoretical predictions of the Standard Model. Apart from QCD, the other interactions described by the Standard Model are weak, and their consequences are well enough understood to help constrain QCD. The strong interactions, on the other hand, require special
treatment to obtain the required precision. Theorists have shown that lattice QCD calculations can reliably determine the effects of strong interactions needed for obtaining the parameters of the Standard Model, as well as uncovering new phenomena at the frontier of high-energy physics. This is one of the main motivations behind the support of the lattice QCD effort in this country and worldwide. If the experimental findings at the LHC were to show that new, strongly coupled QCD-like theories are needed to extend the Standard Model beyond our current understanding, the lattice methodologies developed for these quantitative QCD calculations could prove to be invaluable at a much higher energy scale.
How do electron transport properties become modified in a semiconductor, and what does this teach us about the range and nature of the force relevant in complex assemblies? This is a central question in condensed matter physics, essential for an understanding of how the properties of materials exploited for useful electronic devices emerge from the underlying theory of quantum electrodynamics. An analogous question in nuclear physics—How do complex nuclei found in nature or created at accelerators emerge out of QCD?—will be addressed in the next decade (see Figure 1.4). Owing to the nature of the strong interaction, this is indeed a grand challenge, but the first steps toward understanding the construction of nuclei from QCD are being taken through the study of quark-based phenomena in nuclei. For example, one can look for situations where more than one proton or neutron must be involved in elements of the nucleus assembly, or try to understand how quarks behave as they travel through nuclear matter, or how the internal properties of neutrons and protons are affected when they are embedded in nuclei rather than in their free state. Below are some of the important advances in these and other areas within the last decade.
How Can the Properties of a Proton or Neutron Be Modified in a Nucleus?
The effects on the properties of a proton or neutron of the medium in which it finds itself are examples of the emergent phenomena associated with an extended system. As such, they fall into a class of inquiry that intrigues scientists in almost all fields of physics and that is responsible for lucrative practical applications such as the electronic devices that emerged from the field of semiconductor physics. In nuclear physics, the EMC effect (named after the European Muon Collaboration that discovered the effect in the 1980s at CERN) is one such phenomenon, believed to arise from modifications of neutrons and protons inside the nuclear medium. Measurements reveal a clear difference between the quark distributions in heavy nuclei compared to the lightest nucleus, deuterium. A central question has been to distinguish conventional nuclear structure effects from processes that are not described by models using only nucleons as the building blocks. One particularly fruitful approach is to focus on light nuclei with between 2 and 12 nucleons, where the nuclear structure is well understood experimentally and well described by existing models. The EMC effect should be most dramatic in the densest nuclei, like helium-4, which has two neutrons and two protons in a tightly bound package. The nucleus beryllium-9 is an example where the average density is only a little more than half that of helium-4 despite its additional five nucleons. It came as a surprise that the observed EMC effect was as large in beryllium-9 as in helium-4 or in carbon-12, which is also tightly bound. A close experimental examination of the nuclear structure of beryllium-9 revealed that it behaves more like two alpha particles (i.e., two dense helium-4 nuclei) and a neutron rather than like nine evenly distributed nucleons, so there is a significant likelihood that the scattering takes place from one of the alpha particles within the beryllium-9 nucleus. 
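The comparison between quark distributions in heavy nuclei and in deuterium described above is conventionally quantified by the per-nucleon structure-function ratio (the standard definition, included here for orientation):

```latex
R_{EMC}(x) \;=\; \frac{F_2^A(x)/A}{F_2^d(x)/2},
```

where $F_2$ is the deep inelastic structure function, $A$ is the mass number, and $x$ is the fraction of the nucleon’s momentum carried by the struck quark. The EMC effect is the observed dip of this ratio below 1 at intermediate $x$, roughly $0.3 \lesssim x \lesssim 0.7$.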
This measurement is an important step toward understanding a long-standing puzzle, and new experiments at JLAB and Fermilab, coupled with theoretical developments, are expected to provide the final resolution.
Can the Strong Interaction Be Weakened at the Femtoscale?
The transition from the long-distance, low-energy regime, where quarks are strongly interacting and QCD is difficult to apply (the nonperturbative regime), to the short-distance, high-energy (perturbative) regime, where approximation techniques similar to those used in quantum electrodynamics (QED) can be applied, is revealed in nuclear reactions. At high energy, the concept of asymptotic freedom applies, in which the coupling strengths decrease and the quark and gluon interactions become weak enough to be described in the approximation of perturbation theory. At lower energies, by contrast, the quarks effectively acquire mass, and their response to the strong force becomes ever stronger as they accumulate a cloud of virtual quarks and gluons that bind to them with extra heft. This feature is displayed in Figure 2.32. A lattice QCD calculation shows that the effective quark mass is momentum dependent, rising as the momentum becomes lower. Deep inelastic scattering studies on the proton have shown that the transition from the nonperturbative regime to the perturbative regime occurs when the momentum transferred to the quark is between 1 and 2 GeV. As seen from Figure 2.32, this is where the change in the quark mass shows a sharp rise. However, in this low-energy regime, the confining interactions have become so strong that the notion of the quark mass ceases to be well defined. In this regime, the proton is a complicated, many-body system whose mass arises primarily from the interaction energy between its constituents. In Figure 2.32 (right panel), a lattice QCD calculation shows that the proton mass changes very little even when the quarks are assumed to be massless. In the 12 GeV JLAB upgrade, enough energy will be available to allow the probing of this transition between low-energy and high-energy regimes, where quark and gluon dynamics can be described with analytic methods.
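Asymptotic freedom has a compact one-loop expression for the running strong coupling (a textbook formula, included here for orientation rather than taken from the report):

```latex
\alpha_s(Q^2) \;=\; \frac{12\pi}{\left(33 - 2 n_f\right)\,
\ln\!\left(Q^2/\Lambda_{QCD}^2\right)},
```

where $n_f$ is the number of active quark flavors and $\Lambda_{QCD} \approx 0.2$ GeV sets the scale of the strong interaction. The coupling shrinks only logarithmically at large $Q^2$ and grows strong as $Q^2$ approaches the 1-2 GeV transition region discussed above, which is where perturbative methods break down.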
Can the Transition from Free Quarks to Bound Quarks Be Understood?
Confinement—the complete absence of free quarks in nature—is a striking and unique property of the strong interaction. A principal effort in nuclear physics is to understand confinement in the context of QCD. In the deep inelastic scattering of an electron off a quark in a nucleus, the struck quark transforms into multiquark bound states, or hadrons, through a process that is not understood and has been only qualitatively described. The central method for studying this hadronization process is to use the nucleus as a laboratory for testing ideas about short-distance behavior. Current thinking posits that a prehadron forms first that is less likely to interact with the nucleus than a bare quark. Scattering data are combed for evidence of either quark or prehadron scattering in the nuclear medium. One curious phenomenon expected to result from this feature of prehadron formation is the so-called “color transparency” of nucleons and mesons as they pass through nuclear material: The nuclear material becomes increasingly more transparent as the momentum imparted to a nucleon or meson increases through the transition from the strongly interacting to the weakly interacting regime. Electron-scattering experiments in which a meson is detected are an excellent way of exploring this feature of QCD. As the momentum imparted to the meson increases, its cross section within the nuclear medium decreases and the medium appears to become transparent. JLAB experiments have yielded evidence for the color transparency effect for mesons produced in nuclei. Another example: If quark-nucleon scattering occurs rather than prehadron-nucleus scattering, the detected hadron distributions will be broadened in momentum from the increased interactions with the nuclear
medium. Evidence for such broadening effects is currently being sought using a variety of techniques, including deep inelastic electron scattering in which a meson is produced and Drell-Yan production in which two protons produce two muons. The deep inelastic scattering experiments will be performed at JLAB after its 12 GeV upgrade is complete, and the Drell-Yan experiment is under way at Fermilab, with possible future experiments at RHIC.
Another way to understand the hadronization process is to characterize the amount of energy lost by the quarks as they pass through nuclear material. Developing the picture of both color transparency and hadronization is essential for understanding the behavior of high-energy quarks in ordinary nuclear matter. The empirical description of quark propagation in ordinary nuclei is the baseline against which to compare results from heavy-ion collision experiments at RHIC and the LHC, as described in the section “Exploring Quark-Gluon Plasma.” These results show that quark-gluon plasma is remarkably effective at slowing and even stopping high-energy quarks propagating through it.
At the beginning of this century, a new theoretical QCD framework was developed. Known as the Soft-Collinear Effective Theory (SCET), it provides a systematic method for describing processes that produce energetic hadrons and jets, observables allowing an important window onto strong interactions. This theory yields a universal set of simple theoretical tools for handling a wide range of QCD problems in much the same way that nonrelativistic quantum mechanics provides powerful techniques for analyzing many interesting systems in atomic physics. Over the past decade, the study of SCET has allowed nuclear theorists to make discoveries about the nature of QCD, including new approximate symmetries of nature, new techniques for precision analyses, and new treatments of strong interaction nuclear effects that were previously thought to be intractable. Recent examples include (1) the highest precision determination of the strong coupling constant itself, which determines the strength of all QCD interactions, from jets where low-energy QCD plays a key role; (2) improved accuracy for calculations of deep inelastic electron scattering from protons at energies relevant for data from the JLAB upgrade; (3) the categorization of new parameters that describe key QCD effects in heavy B meson decays that are needed to interpret charge-parity symmetry violation studies; and (4) the resolution of discrepancies in the comparison of less accurate theoretical calculations with data on J/Ψ distributions. Many promising applications are on the horizon, including the use of SCET to systematically study jet quenching and radiation in heavy-ion collisions at RHIC and the LHC and to study the substructure of hadrons inside jets.
Can a Nucleus Become Glasslike?
The electron-proton collider at DESY-Hamburg (HERA) recently showed evidence of a rapid rise in the density of gluons as the momentum of a struck quark decreases inside the proton. In fact, it appeared that this gluon density was continuing to rise even at the lowest quark momenta achievable at DESY, meaning that as experiments gained the ability to see the effects of gluons carrying less and less momentum, they showed that protons can be stuffed with more and more gluons in total. At some point this trend would have to stop and the gluon density should reach saturation. An ingenious technique that might be capable of detecting this saturation is based on the observation that at very high gluon density all mesons, nucleons, and nuclei should appear to be identical. Nuclei in this extreme regime are a form of universal matter, known as “color glass condensate.” The density of gluons in a heavy nucleus can far exceed their density in a free proton, so heavy nuclei may be an amplifier for observing the saturation of the gluon density. Colliding electrons and heavy nuclei at high enough energies to get to the highest
possible gluon density would be a key element in an experimental program at a future high-energy electron-ion collider.
How Do the Nucleonic Models Emerge from QCD?
Understanding how the successful low-energy models of nuclear physics based on a nucleonic description emerge from QCD is an exciting and mostly unfinished story. The picture of nuclei as a collection of neutrons and protons has been enormously successful, despite the fact that there is no explicit reference to the quarks and gluons within the neutrons and protons or to the explicit QCD interactions. As discussed in the subsection “Towards a Comprehensive Theory of Nuclei,” the common approach has been to use measured nucleon-nucleon interactions to build theoretical frameworks that explain quantitatively the structure of as many nuclei as possible. Before QCD was discovered, it was imagined that this approach would lead to the fundamental theory of nuclear physics. It became evident, though, that if a free nucleon-nucleon interaction is used, then additional forces, like one directly involving three nucleons, are necessary to explain nuclear bound states. This bottom-up approach to nuclear physics has worked well for describing the structure of light nuclei. Other approaches make use of effective nucleon-nucleon interactions, and these have been successful in describing medium- to heavyweight nuclei. Both approaches have benefited from advances in computational power, as these kinds of calculations for realistic systems were impossible in the past (see Figure 2.10). These calculations have allowed nuclear scientists to take enormous strides in developing a quantitative picture of nuclei, with neutrons and protons as the basic building blocks. One approach to systematically tie effective models to QCD, the underlying theory, is to perform lattice QCD calculations for the nucleon-nucleon interaction. During the past decade, work in this direction began, but it is still a daunting challenge. 
Filling the gap between nucleon degrees of freedom and quark-gluon degrees of freedom, and in so doing understanding how the nucleon picture emerges from QCD, is a major direction for contemporary nuclear theory.
Chiral perturbation theory is an example of an effective theory that goes a long way toward using QCD to bridge the gap between the nucleon picture and the microscopic quarks and gluons. Because it incorporates all the appropriate symmetries, chiral perturbation theory is the precise description of how neutrons, protons, and pions interact at low energies in QCD. This description is built through successive approximations, with each better level of approximation allowing the description to be extended to higher energies at the expense of introducing new parameters that must be measured experimentally before the theory can be used to make predictions. Predictions of chiral perturbation theory can be precisely tested
in relatively low-energy experiments being carried out at facilities like the High Intensity Gamma Source (HIGS) at Duke University and the gamma-ray beams at Lund, Sweden, and Mainz, Germany. Complementary investigations at very low momentum transfer, as well as measurements of neutral pion decay, are also performed at JLAB. Chiral perturbation theory is a method for applying QCD to low-energy questions involving a few neutrons, protons, or pions even though a quark-by-quark and gluon-by-gluon description cannot be applied.
Lattice calculations provide another important tool with which to understand how QCD describes the evolution from a quark and gluon picture to a picture in which the actors are interacting neutrons and protons. This promising approach is now being pursued throughout the world. Calculations of properties of nucleon-nucleon interactions and the lightest bound nuclei are still very challenging in lattice QCD, because of the large range of energy scales involved. Lattice QCD calculations are performed in a volume with a well-defined lattice spacing. The size of the volume has to be large enough to accommodate the low energy scales involved in nuclear binding, but at the same time the lattice spacing has to be small in order not to distort the high-energy interactions of quarks and gluons. This is a typical example of a multiscale problem with no clear separation of scales, making for a substantial computational challenge. Systematic breakthroughs in computational methods combined with tremendous advances in computational power are making it possible to take on this challenge. Nucleon-nucleon scattering lengths have been computed, albeit with quarks that are heavier than the quarks in QCD. Lattice theorists exploit this strategy and systematically constrain the quark masses toward realistic values, given the available computational power. Preliminary calculations of the binding energies of helium-3 and helium-4 have been computed using this strategy. As new theoretical approaches and algorithms are developed, the artificial world of heavy quarks will evolve into an accurate representation of nuclear physics. These efforts remain a major computational challenge for the next decade.
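The multiscale tension described above can be made concrete with a toy count of lattice points. The box size and spacing below are illustrative assumptions chosen for this sketch, not values from any specific calculation in the report.

```python
# Back-of-the-envelope estimate of the size of a lattice QCD calculation
# for a light nucleus. The box must span several femtometers to hold the
# nucleus (the low-energy scale), while the spacing must be a small
# fraction of a femtometer to resolve quark and gluon dynamics (the
# high-energy scale). Both demands together drive the site count up.

def lattice_sites(box_fm, spacing_fm, n_dims=4):
    """Total number of points in a hypercubic space-time lattice."""
    points_per_side = round(box_fm / spacing_fm)
    return points_per_side ** n_dims

# Illustrative numbers: a 10 fm box with 0.05 fm spacing gives
# 200 points per side, i.e. 200**4 = 1.6 billion space-time sites,
# before any quark-field degrees of freedom are attached to each site.
sites = lattice_sites(10.0, 0.05)
print(sites)  # prints 1600000000
```

Halving the spacing while keeping the box fixed multiplies the site count by 16, which is one way to see why these calculations remain a major computational challenge even on the largest supercomputers.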
A new and better understanding of effective field theories (EFT) and the marriage of lattice QCD and chiral perturbation theory offer a reliable approach to comparing QCD to nature. The EFT approach includes a systematic procedure for determining the number of parameters needed to describe the interactions to a certain level of precision. The traditional approach requires measuring these parameters experimentally and then using the EFT to predict other quantities. However, advances in lattice QCD will allow some of these parameters to be calculated from first principles, increasing the predictive power of EFTs for QCD, such as chiral perturbation theory, and increasing the number of experimental observables that can be used to establish with precision the connection between the fundamental building blocks (quarks and gluons) and the physics of light nuclei in QCD.
Mendeleev’s organization of the elements into the periodic table in the mid-nineteenth century had a profound impact on the direction of physics. Identification of the systematic patterns of his table and their eventual explanation using quantum mechanics and the quantum theory of electrodynamics are at the foundation of chemistry, much of physical science, and much of modern engineering. The discovery of isotopes and periodic trends in nuclei has similarly been a key development in nuclear physics. The pattern of the isotopes is key, for example, to understanding the formation of chemical elements in the interiors of stars and in the evolution of the universe. The identification of the families of leptons and quarks, at present thought to be the most fundamental building blocks of nature, has led to our present understanding of the classification of hadrons and their excited states, but the picture is still incomplete.
One of the most basic applications of QCD is to explain the organization of the masses of hadronic systems: the mesons, the bound states of a quark and an antiquark, and the baryons, which are made up of three valence quarks. QCD should certainly be able to predict the full spectrum. Indeed, hadron spectroscopy experiments in the 1950s and 1960s provided the essential clues that led to QCD in the 1970s, but some fascinating puzzles remain unsolved. For example: What is the role of gluons in the production of bound states? Why have no states been found with a single well-identified gluon? What is the detailed mechanism that confines quarks within baryons and mesons? What are the most relevant degrees of freedom that explain the experimentally observed spectrum?
Understanding hadron spectroscopy poses many experimental and theoretical challenges. Many excited states are very short-lived and close in energy, making it hard to reliably categorize their quantum numbers or to specify their production mechanism. For almost 50 years the Roper resonance has baffled nuclear physicists. Discovered in 1963 by L. David Roper while working on his Ph.D. at MIT, it is just like the proton, only 50 percent heavier. Its mass was the problem: Until recently, it could not be explained from QCD by any available theoretical method. In a recent breakthrough, theorists at the Excited Baryon Analysis Center (EBAC) at JLAB demonstrated that the Roper resonance is the proton’s first radial excitation, with its lower-than-expected mass coming from a quark core shielded by a dense cloud of pions and other mesons. This breakthrough was enabled by both new analysis tools and new high quality data. EBAC has become the Physics Analysis Center, with an expanded scope that includes the analysis of the meson spectrum and their interactions. This is an essential component of the JLAB 12 GeV science program, particularly for the goals of the GlueX experiment in meson spectroscopy.
Identifying the full spectrum of hadrons from first principles with QCD
remains a challenge because of the unique and central feature of QCD—namely, that quarks and gluons are confined. Electrons can exist as free or bound particles, and the world is seen by virtue of the existence of freely propagating photons. However, quarks do not exist in nature as free particles and neither do beams of isolated gluons. They are bound within protons, neutrons, pions, and other hadrons. Further complicating the story is the fact that gluons interact with themselves as well as with quarks. This means that there can be QCD bound states made entirely of gluons, with no quarks, aptly dubbed “glueballs.” It is also possible for gluons themselves to contribute to the basic properties, such as total angular momentum and parity (i.e., their “quantum numbers”), of bound states with just one quark and one antiquark, resulting in so-called exotic mesons. Experimental searches for these around the world have yielded hints of their existence, but definitive evidence is not yet in hand. Challenges include, first, providing an environment where production of these states is favorable and, second, disentangling one potential state from another by uniquely identifying their quantum numbers.
Technical advances have continued at a rapid pace, and there is exceptional potential for progress on the experimental front. Baryon and meson spectroscopy will be a main thrust of the program for the JLAB upgrade, where the CEBAF accelerator will provide beams of polarized gamma-rays. The GlueX experiment at JLAB is being optimized to explore the existence of exotic mesons with a sensitivity that is hundreds of times higher than in previous experiments. Complementary to the exotic light-quark bound states to be studied by GlueX, the Facility for Antiproton and Ion Research (FAIR) at GSI plans to study the heavier exotic charm quark bound states.
Computing the masses of these bound states is a formidable theoretical task. Lattice QCD is a brute-force approach to performing these computations. The lattice QCD calculation of the masses of the eight lightest baryons, shown in Figure 2.33, agrees remarkably well with experiment. An easier problem is to compute the spectrum of hadrons that contain the heavier charm and bottom quarks on the lattice: the spectrum of bound states with heavy quarks also compares very well with experimental data from the high-energy physics experiments BaBar, Belle, and CLEO. Lattice QCD calculations indicate the existence of yet-undiscovered hadrons such as exotic mesons and glueballs, and it is important to confirm or refute these predictions experimentally. Progress continues in predicting the spectrum of mesons, both regular and exotic, using state-of-the-art lattice QCD computations, as shown in Figure 2.34. These calculations suggest the presence of many exotic mesons in the region accessible by the GlueX experiment at JLAB, ripe for discovery.
Another computational challenge for the next decade is to determine the energies of the heavier excited states of mesons and baryons. Theorists around the world are cooperating on this task, and the work complements the international
experimental efforts like GlueX and experiments at FAIR. Progress will require significant developments in theory and methodology, as well as additional investments in state-of-the-art supercomputing facilities.
Lattice QCD is also being used to understand other aspects of hadronic structure. Calculations of electromagnetic transitions between excited states are another useful connection to experiment. Radiative transitions in charmed mesons have already been calculated, and applications to other systems will come in the next decade. The calculations can be compared with experimental results from CLEO, providing an ideal test bed for the validity of the theoretical approaches used. In the years to come they will be refined and extended to the light quark sector, providing theoretical input to the upcoming experiments.
An important next step in nuclear physics will be to connect studies of the quark/gluon structure of nucleons with the study of complex nuclei, by determining how the deep internal structure of nucleons is affected when the nucleons are bound inside nuclei. The nuclear physics community has studied extensively the science that a future electron-ion collider (EIC) could enable with a combination of relatively high center-of-mass energy, high luminosity, and polarized electron beams colliding with beams of polarized protons, light ions, and heavy nuclei. While the technical parameters of an EIC have yet to be finalized with respect to specific science goals, the energy would likely be lower than that of HERA, but the
intensity would be as much as 1,000-fold higher and, unlike HERA, the EIC would use nuclear and polarized beams in the collision. Such a capability would provide groundbreaking reach to low momentum partons in the proton and nuclei in the same fashion that the JLAB 12 GeV upgrade can probe the valence quark region of the nucleons and nuclei. An EIC would also provide direct access to the dynamics of the complex system of strongly interacting quarks and gluons that result in the proton’s spin. This includes orbital motion, the importance of which research at JLAB and RHIC and within the HERMES experiment has made apparent. Quark orbital motion leads not only to angular momentum but also to significant quark transverse motion in the proton. An EIC would permit access to the transverse distributions, allowing the development of a multidimensional (in space and momentum) image of the sea quarks and gluons. Finally, as mentioned above, an EIC would permit a first serious look at the gluonic structure of the proton and the nucleus, including the remarkable glass-like characteristics expected for the lowest momentum gluons in protons and nuclei.
At the end of the nineteenth century, physicist Albert A. Michelson was confident that most of physics had already been discovered, but nevertheless urged still better experimentation:
The more important fundamental laws and facts of physical science have all been discovered, and these are so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote. Nevertheless, it has been found that there are apparent exceptions to most of these laws, and this is particularly true when the observations are pushed to a limit, i.e., whenever the circumstances of experiment are such that extreme cases can be examined. Such examination almost surely leads, not to the overthrow of the law, but to the discovery of other facts and laws whose action produces the apparent exceptions.4
Michelson vastly underestimated the kind of revolution that would emerge from “pushing observations to a limit.” Within a few years, nuclear physics, relativity, and quantum mechanics would change physics forever.
Physicists are now more keenly aware of what remains unknown than at any time in the past, and they often formulate the open issues as concise questions. In the nuclear physics of neutrinos and fundamental symmetries, the subject of this section, the questions, which address matter at a very basic level, are these:
- What is the nature of the neutrinos, what are their masses, and how have they shaped the evolution of the cosmos?
4 A.A. Michelson, 1903, Light Waves and Their Uses, p. 23, University of Chicago Press.
- Why is there now more visible matter than antimatter in the universe?
- What are the unseen forces that were present at the dawn of the universe but disappeared from view as it evolved? Once very hot and very homogeneous, the universe now displays a preferred “handedness” and so points to the existence of lost forces.
The predictions of the Standard Model of particles and fields have demonstrated, in some cases to 10-digit precision, our understanding of physics, but they have also helped to place in stark relief that which is yet to be known. The experimental observation of something that is not included in the Standard Model, or is in contradiction to it, is by definition “new physics.” It is something that demands the closest attention, as it holds the promise of discovery and deeper understanding. As has been known since its beginning some 40 years ago, the Standard Model simply does not speak to certain domains of physics, such as gravity, and in the last decade a definite contradiction to it on its home turf has been demonstrated: the discovery of neutrino mass, in which nuclear physicists played a leading role.
The next decade will find nuclear physicists continuing to look for fingerprints of physics beyond the Standard Model. In addition to exploring the nature of neutrinos, efforts will take place on the precision frontier, where subtle details in the decay patterns of nuclei and the free neutron, in weak interactions between nucleons, and in interactions of electrons in scattering experiments, among others, might signify the presence of new physics.
That the mass of neutrinos must be much smaller than that of other matter particles was apparent to Enrico Fermi as he developed the theory of beta decay and compared it to the data available in 1932. By the time the Standard Model was being developed, it was clear that the mass of the neutrinos was so small that the model’s mechanism for mass generation would be unnatural for neutrinos. (The most sensitive experiment to date, on the beta decay of tritium, limits the mass to 2.3 eV, already more than 100,000 times smaller than the electron mass.) The only apparent solution was to make the neutrino mass exactly zero and neutrinos purely left-handed. A particle that is always spinning in a left-handed (or right-handed) sense must move at the speed of light and must therefore be massless. Otherwise one could in principle board a fast train and see the particle falling behind, spinning in the opposite direction in its motion relative to the observer. If such particles, moreover, are always found to be left-handed, that is a violation of parity—the concept that physical laws describing a reaction remain the same for both the reaction and its mirrored image. The discovery of parity violation in
weak interactions and the measurement of the left-handedness of the neutrino in the 1950s fitted perfectly with the concept of a massless neutrino.
A seemingly unrelated issue was the solar neutrino problem, in which the number of neutrinos detected in the chlorine experiment of Ray Davis, Jr., fell short by a factor of about 3 from theoretical expectations for the solar-neutrino flux tied to energy production in the sun. While many believed this to be due to errors in the theory or the experiment, Bruno Pontecorvo in 1967 raised the possibility that neutrino oscillations might be responsible. Neutrino oscillation is a time-dependent change in the type, or flavor, of a neutrino as it travels, an effect that can only be observed if neutrinos have mass. Flavor transformation and oscillations of neutrinos are possible if the states of well-defined mass (mass eigenstates) are not the same as the flavor eigenstates (the state of neutrinos obtained in beta decay, for example). Davis’s detector was sensitive only to the expected flavor, electron, and not to the other flavors, μ and τ. Hence neutrinos that oscillated to those other flavors would seem to be missing. By the mid-1990s new solar neutrino measurements had been done and the evidence made an astrophysical solution unlikely. Painstaking laboratory measurements of the nuclear reaction rates that determine neutrino production in the sun similarly excluded uncertainties in the nuclear physics of the solar model. On the other hand, neutrinos with mass gave a good account of the observations.
Definitive evidence for neutrino oscillations emerged from another unexpected quarter, the atmospheric neutrino background in proton-decay detectors. Indications from early kiloton-scale detectors that muon neutrinos were about half as abundant as expected were convincingly verified in the 50-kiloton Super-Kamiokande (SK) detector in Japan in 1998. A clear zenith-angle- and energy-dependent signature of oscillations for neutrinos traversing paths up to the diameter of the earth indicated that muon neutrinos were transforming to an undetected neutrino species. No corresponding effect was seen for electron neutrinos.
Pieces fell rapidly into place in the next few years as the Sudbury Neutrino Observatory (SNO) in Canada, a 1-kiloton heavy water detector for solar neutrinos (see Figure 2.35), showed that electron neutrinos did participate in neutrino oscillations as well. In this case, however, the difference between the squares of the masses is not the same as observed in atmospheric neutrinos; rather it is a factor of about 30 smaller. With three neutrinos there are only two independent mass-squared splittings. SNO showed, furthermore, that the astrophysical theory of the sun was extraordinarily accurate, by correctly predicting the central temperature to a precision of about 1 percent, despite strong skepticism about the solar model expressed beforehand.
The results from SNO and other solar neutrino experiments admit two different solutions for the second splitting. It was not until the KamLAND experiment in Japan detected antineutrinos from distant nuclear power plants fortuitously
situated with respect to the Kamioka underground site that this ambiguity was resolved in favor of what is termed the large-mixing-angle (LMA) solution. The KamLAND data show clearly the wavelike pattern that is the hallmark of oscillations (see Figure 2.36).
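The wavelike pattern seen by KamLAND follows from the standard two-flavor oscillation formula, P(survival) = 1 − sin²(2θ)·sin²(1.27 Δm²[eV²] L[km]/E[GeV]). The sketch below illustrates the dependence; the parameter values are round illustrative numbers chosen for the example, not the experiments' fitted results.

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavor survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
    The constant 1.27 absorbs hbar, c, and the unit conversions."""
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative round numbers (not fitted results): an LMA-like splitting
# of ~7.6e-5 eV^2, sin^2(2*theta) ~ 0.85, a reactor baseline of ~180 km,
# and a typical reactor antineutrino energy of ~4 MeV.
p = survival_probability(0.85, 7.6e-5, 180.0, 0.004)
```

Scanning `L/E` with the other parameters fixed traces out exactly the kind of oscillatory survival curve shown in Figure 2.36.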
These three experiments have established the basic landscape of neutrino mass and mixing as it is known today. They show conclusively that, in contradiction to the expectation of the minimal Standard Model, neutrinos do have mass, albeit very small (see Figure 1.5). In another very recent, remarkable advance, the third (and last) mixing angle for neutrinos has been measured at the reactor complex at Daya Bay in China by a U.S.-Chinese-Czech-Russian collaboration in which nuclear physicists played a major role. The angle θ13 is a parameter describing how much electron flavor is to be found in the mass eigenstate that is well separated from the other two and is the key ingredient in deducing whether or not neutrinos respect time’s arrow. Important questions about neutrinos remain to be answered, as will be described below: What exactly are the masses? Are neutrinos their own
antiparticles? Would they behave the same if the arrow of time were reversed, and, if not, did they cause the matter-antimatter asymmetry of the universe?
Through nearly four decades of tests, the Standard Model has otherwise proven to be extremely resilient. But new challenges have emerged besides the neutrino mass. One of the most spectacular achievements of twentieth-century physics is quantum electrodynamics, now a subset of the Standard Model. The magnetic moments of the electron and muon and the Lamb shift in atoms can be theoretically calculated and experimentally measured to precisions of parts per billion or better. As the precision advances, however, corrections from possible contributions that are beyond the Standard Model may become significant. The recent measurement of the anomalous part a of the muon’s magnetic moment at BNL gives 116,592,020 × 10⁻¹¹, which is 3.4 standard deviations larger than the theoretical prediction (see Figure 2.37). The anomalous moment of the muon is expected to be sensitive to contributions from new physics, such as those arising in the plausible scenario of supersymmetry. The presence of new particles in such theories significantly modifies the effects on the magnetic moment caused by the fleeting appearance and disappearance of charged particles near the muon.
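The quoted 3.4 standard deviations is simply the experiment-theory difference divided by the combined uncertainty. The numbers in the sketch below are illustrative round values chosen to reproduce that significance, not the official evaluation.

```python
# Significance of a measurement is the experiment-theory difference
# divided by the combined uncertainty.  These are assumed round values
# consistent with "3.4 standard deviations", not the official numbers.
delta_a = 287e-11   # assumed a_mu(experiment) - a_mu(theory)
sigma = 85e-11      # assumed combined experimental + theoretical uncertainty
significance = delta_a / sigma   # roughly 3.4
```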
Measuring the Lamb shift (a shift in energy levels specifically predicted by quantum electrodynamics) in muonic hydrogen is a very challenging experiment
that has recently been carried out for the first time at the Paul Scherrer Institute in Switzerland. In a surprising outcome, the measured shift is more than three standard deviations from the value expected theoretically. It can be interpreted as a discrepant measurement of the radius of the proton, but the cause may lie elsewhere, in experiment or theory, and is not at present known. This surprising result may also, like the anomalous moment, be a hint of physics beyond the Standard Model.
Just as quantum electrodynamics was found to be a part of the Standard Model, it is broadly anticipated that the present Standard Model is but a part of a still more comprehensive model. The experimental observations above provide important clues for the discovery of the New Standard Model (NSM), which will incorporate the many successes of the existing model but will in addition provide an understanding of aspects of physics that now are mysterious. What are the dark matter and the dark energy that pervade the universe? Why does the universe contain matter, but little antimatter? What is the origin of the many seemingly arbitrary parameters that emerge in the Standard Model? Are they related and predictable? How can gravity and general relativity be conjoined with the rest of physics? These important questions increasingly call for the knowledge and techniques developed in nuclear physics.
The search for the NSM is proceeding along three complementary frontiers: the high-energy frontier, where experiments at the CERN LHC can discover new particles associated with the NSM; the astrophysical frontier, where measurements of gamma-rays and neutrinos produced in astrophysical environments may uncover the nature of cold dark matter; and a third frontier known in nuclear physics as the precision frontier and in particle physics as the intensity frontier. Among these challenges it is principally the precision frontier—where exquisitely sensitive measurements may reveal tiny deviations from Standard Model predictions and point to the fundamental symmetries of the NSM, or directly reveal the interactions of dark-matter particles and neutrinos—that has attracted the attention and participation of nuclear physicists. Particle physicists are approaching closely related fundamental physics questions with different tools, intense accelerator-produced beams of neutrinos, muons, and kaons.
As Michelson emphasized and experience has confirmed, it can be very illuminating to subject predictions to the most careful experimental scrutiny possible. In addition to the experiments described above in neutrino physics and muon physics, increasingly stringent precision tests of our present understanding of physics, as embodied in the Standard Model, have been devised. A convenient way to organize them for discussion is by “probe.” With few exceptions, they fall into groups: the beta decays of nuclei and the free neutron; weak interactions between nucleons; the weak interactions of electrons; the decay of the muon; and the decay of the pion. Searches for a permanent electric dipole moment (evidence for the violation of CP symmetry, the product of charge conjugation symmetry (C-symmetry) and parity symmetry (P-symmetry)) and searches for neutrinoless double-beta decay
(evidence that neutrinos and antineutrinos are the same particle) are a particular focus in nuclear physics.
Beta Decays of Nuclei and the Free Neutron
The beta-decays of nuclei in which both the parent and daughter nuclear states have zero angular momentum and positive parity (“superallowed” nuclear decays) provide a value for the largest and most precise element Vud in the Standard Model Cabibbo-Kobayashi-Maskawa (CKM) matrix that relates quark flavor states to quark mass states. The CKM matrix transforms the quark description with well-defined masses (for which there are no special names) into states with well-defined flavors (down, strange, and bottom). Because there must be a one-to-one relationship between the two descriptions (i.e., no additional quarks beyond the three known to exist), the matrix is unitary, which defines some relationships between the elements and reduces the number of independent parameters to only three, plus a phase that has no effect on the size of the parameters. When combined with the results of kaon and B meson decay studies, which yield the small terms Vus and Vub, the superallowed nuclear decays provide a stringent test of the unitarity property of the CKM matrix (see Figure 2.38). If this unitarity requirement were found to be violated, it might imply the existence of new interactions such as right-handed weak interactions; an additional generation of quarks and leptons; or the effects of virtual supersymmetric particles that modify the dynamics of the decay. The correlation between the spin axis of a radioactive nucleus and the emission direction of a beta particle or a neutrino can also yield information about possible non-Standard-Model structure of the weak interaction.
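The unitarity test described above amounts to checking whether the squared magnitudes of the first-row CKM elements sum to exactly 1. A minimal sketch, using illustrative placeholder values for the matrix elements rather than the evaluated results from superallowed decays and kaon and B meson studies:

```python
def first_row_unitarity(Vud, Vus, Vub):
    """Return |Vud|^2 + |Vus|^2 + |Vub|^2, which must equal 1 exactly
    if the three known quark generations are the only ones."""
    return Vud**2 + Vus**2 + Vub**2

# Illustrative placeholder values, not the evaluated results.
s = first_row_unitarity(0.9742, 0.2243, 0.0036)
deviation = s - 1.0   # a robust nonzero value would hint at new physics
```

Because Vud dominates the sum, the superallowed nuclear decays that determine it carry most of the weight in this test, which is why their precision matters so much.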
Neutrons are neutral particles heavy enough that they are unstable: A free neutron can decay with a half-life of about 10 minutes into a proton, electron, and antineutrino. This decay process makes the neutron a microlaboratory for the study of the weak interaction. When combined with the results of neutron decay correlations, the lifetime of the free neutron provides an independent test of CKM unitarity. The neutron lifetime itself is also one of the key inputs in big bang nucleosynthesis, which provides a framework for explaining the abundance of the light elements hydrogen, deuterium, helium-3, helium-4, and lithium-7 in the universe. Notwithstanding its importance, measuring the lifetime to the desired precision of better than one part in a thousand is very challenging. Improved measurements are needed.
Much more can be learned from a careful study of neutron beta decay. The correlations between the measurable quantities—namely, the neutron spin direction, the emission directions of the electron and the neutrino, the electron spin direction, and the electron energy spectrum—each illuminate a different facet of Standard Model predictions that may disclose the influence of NSM
physics. A vigorous worldwide program of precise weak decay studies aims to achieve significant improvements in sensitivity. It involves ongoing studies of the superallowed nuclear decays at ANL, Texas A&M University, TRIUMF, Jyvaskyla, ISOLDE, and Munich, and in the future FRIB, where rare, unstable isotopes will provide enhanced sensitivity for testing the theory of correction terms. Improved measurements of the neutron lifetime and neutron decay correlations are planned at the Institut Laue-Langevin, the Los Alamos Neutron Science Center (LANSCE), NIST, TRIUMF, Munich, and the Fundamental Neutron Physics Beamline (FNPB) at the SNS.
The unitarity of the CKM matrix supports the conclusion that the known quarks are the only ones that exist in nature. In the neutrino world, however, there are intriguing indications that the three known flavors may be accompanied by
other, so-called sterile neutrinos that mix slightly with the known neutrinos. Data from the Liquid Scintillator Neutrino Detector at Los Alamos, from a number of short-baseline reactor oscillation experiments, from the MiniBooNE neutrino oscillation search at Fermilab, and from radioactive-source tests of the Soviet-American Gallium Experiment (SAGE) and Gallex/Gallium Neutrino Observatory (GNO) solar neutrino detectors all appear to exhibit small deviations from the three-neutrino expectation. A consistent interpretation has been elusive. The results are limited by statistical and systematic uncertainties, pointing to a need for new tests and theoretical work. The low-energy solar neutrino spectrum remains imprecisely known, and the ongoing measurements by the Borexino experiment in Italy as well as new experiments being designed would permit a comparison between the sun’s energy production and its neutrino production. Carried out with sufficient precision, the comparison would be a test of both neutrino unitarity and of our understanding of solar energy generation. Despite the general success of solar models, there are small but significant discrepancies related particularly to the abundance of elements heavier than helium, and new experiments could also provide insight into this problem.
Weak Interactions Between Nucleons
The same weak interaction that gives rise to beta decay also contributes to the force between quarks and therefore between nucleons. Its magnitude is tiny (10⁻¹⁴) by comparison with the strong force, but it discloses its presence through parity violation because the strong force respects parity. Highly sensitive experiments reveal its presence unequivocally, but one particular part of the weak interaction between nucleons, in which a pion is exchanged, has defied experimental and theoretical quantification. New experiments are under way to try to observe the parity-violating rotation of neutron spin as neutrons pass through matter and a possible preference in spin direction as neutrons are captured by protons. At the same time, advanced lattice-gauge theory is being applied in the hope of achieving a theoretical understanding of the apparent suppression of this part of the force. Theory is currently limited by existing computational resources.
Weak Interactions of Electrons
A somewhat complementary avenue involves the measurement of parity-violating (PV) asymmetries in the scattering of longitudinally polarized electrons from nuclei or from other electrons. Historically, the measurement of such an asymmetry in deep inelastic scattering from deuterium at SLAC played a key role in confirming the fundamental prediction of the Standard Model that there were neutral weak interactions. Indeed, two more accurate versions of this classic experiment, the
Parity Violation in Deep Inelastic Scattering (PVDIS) and the PVDIS-SOLID, are planned at JLAB. After the initial work at SLAC, PV electron scattering was used with great success to probe the contributions of strange quarks to the nucleon’s electromagnetic properties through a program of measurements at MIT-Bates, the Mainz MAMI facility, and the CEBAF beam at JLAB. A measurement of the PV asymmetry in Moller scattering (in which polarized electrons are scattered from unpolarized ones) performed at SLAC yielded the most precise determination of the dependence of the weak mixing angle on energy scale, one of the more dramatic and novel predictions of the Standard Model. The weak mixing angle, or Weinberg angle, is a parameter of the Standard Model that defines (among other things) the extent to which interactions mediated by the Z boson violate parity. A still more precise version of this experiment, Moller, is planned at JLAB after the completion of the energy upgrade. It would complement another PV experiment, Q-weak, currently under way at JLAB involving elastic scattering from a proton target. Together, the comparison of results of purely leptonic (Moller scattering) and semileptonic (electron-proton scattering) experiments can provide a powerful test of the Standard Model.
The properties and decays of muons—structureless particles like electrons but with a mass about 200 times greater—are among the most sensitive probes of the Standard Model. The example of the muon anomalous moment was already described above. In addition, nuclear physicists have recently reported new results on the correlation parameters in muon decay, the muon lifetime, and muon capture in hydrogen, which have improved previous experimental values by factors of 10 or more. The muon lifetime, now determined to part-per-million accuracy, defines the strength of the weak interaction. These new results give the tightest limits now available on interactions beyond those in the Standard Model. Over the next decade, new measurements with muons will continue to push the precision frontier. An experiment to search for the decay of a muon into an electron and photon with a 100-fold better sensitivity than previous measurements is under way now at the Paul Scherrer Institute (PSI). This conversion is essentially forbidden in the Standard Model but is predicted to occur in certain proposed theoretical extensions. Two experiments—a new, even more precise measurement of the anomalous moment of the muon and a sensitive search for the conversion of a muon to an electron in the field of a nucleus—are being planned by collaborations of high-energy and nuclear physicists for Fermilab following its intensity upgrade.
Like the neutron, the pion is a composite particle that undergoes beta decay, but because of its large mass, it can decay to either an electron or a muon with an associated neutrino. The relative decay probabilities of charged pions into a muon or an electron have provided stringent (better than 0.1 percent) tests of the lepton universality property of the Standard Model weak interaction, which simply states that the weak interaction acts with the same strength in every family of elementary particles. With the operation of the LHC and its prospective sensitivity to the existence of new particles with multi-TeV masses, the forefront sensitivity for many of the weak decay studies will rise to 1 part in 10,000 within the next decade. Two new pion beta decay measurements are planned at TRIUMF and PSI.
Certain experimental efforts in nuclear physics are motivated by specific expectations for the physics that the NSM is likely to display. Two specific research thrusts having great discovery potential are searches for the permanent electric dipole moments (EDMs) of the nucleon, neutral atoms, and charged leptons and searches for the neutrinoless double beta decay of heavy nuclei. Hand in hand with these experimental initiatives is a focused program of theoretical nuclear physics studies that aim to interpret the results of these and other experiments in terms of the NSM.
Search for a Permanent Electric Dipole Moment
The goal of the EDM searches is to discover a mechanism for the violation of CP symmetry (or time-reversal symmetry) beyond the CP violation that can be accounted for by the Standard Model weak interactions. The reason is that an explanation of the excess of matter over antimatter in the present universe requires the existence of a not-yet-understood source of CP violation in the early universe. Perhaps it may be found in the neutrinos, as we consider below. Alternatively, if the matter-antimatter asymmetry was produced when the universe was roughly 10 picoseconds old—during the era of so-called electroweak symmetry breaking—then the next generation of EDM experiments would have a good chance of observing it. EDM searches look for a small shift in the precession frequency of a quantum system with spin (such as the neutron) in the presence of electric and magnetic fields. An EDM violates both parity (P) and time-reversal (T) symmetry, but not the matter-antimatter symmetry C. In addition to uncovering the CP violation needed to explain the matter-antimatter asymmetry, the EDM searches could also
reveal the presence of CP violation in the strong interaction. The present limits on the latter are so stringent as to imply the possible existence of another symmetry, known as Peccei-Quinn symmetry, a symmetry invoked specifically to explain why the CP symmetry is not violated in the strong interactions at more than about 10⁻¹⁰. The violation of this symmetry in such a way as to lead to a nonvanishing EDM would imply the existence of a new particle called the axion. If it exists, the axion itself could also make up the cosmic dark matter.
The next generation of EDM searches is expected to improve the level of sensitivity by up to two orders of magnitude over present limits. Intensive efforts to reach this level of sensitivity are under way in the United States, Canada, and Europe. They include searches for (1) the neutron EDM at the Fundamental Neutron Physics Beamline at the Oak Ridge SNS, the Institut Laue-Langevin in Grenoble, and the Paul Scherrer Institute in Switzerland; (2) the atomic EDMs of mercury, radium, radon, and xenon at various laboratories and universities; and (3) the EDM of the electron, using molecular or solid-state systems, in the United States and Europe. In addition, nuclear scientists at BNL are developing a possible measurement of the proton EDM using a storage-ring technique. The “physics reach” of these searches, expressed in terms of the mass of new, presently unknown particles, is in many cases at a scale beyond that accessible at the LHC. The LHC gets its sensitivity by directly trying to produce and detect new particles involved in CP-violating interactions, while the precision experiments described here look for the effects of the same interactions at much lower energies by seeking rare effects induced by quantum fluctuations. The present EDM limits generically imply that the mass scale of any new CP-violating interaction (in other words, the mass of some new particle that could mediate the interaction) is greater than several TeV, and improvements by two orders of magnitude would extend this scale by a factor of 10, well beyond the scale accessible at the energy frontier.
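The experimental challenge can be made concrete with an order-of-magnitude estimate: reversing the electric field relative to the spin shifts the precession frequency by Δν = 2dE/h. The EDM value and field strength below are assumed round numbers for illustration, not any experiment's actual parameters.

```python
# Order-of-magnitude sketch of the frequency shift an EDM search must
# resolve.  The EDM value and field strength are assumed round numbers,
# not any experiment's actual parameters.
h = 6.626e-34              # Planck constant, J*s
e = 1.602e-19              # elementary charge, C

d_e_cm = 1e-26             # hypothetical EDM of 1e-26 e*cm
d_SI = d_e_cm * e * 1e-2   # convert e*cm to C*m
E_field = 1e6              # electric field of 10 kV/cm, in V/m

delta_nu = 2 * d_SI * E_field / h   # frequency shift in Hz
```

The result is of order tens of nanohertz, which conveys why these searches demand exquisite control of magnetic fields and long spin-coherence times.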
Search for Neutrinoless Double-Beta Decay
Because neutrinos lack electric charge, they can in principle be their own antiparticles. Whether some symmetry preserves a distinction between matter and antimatter for neutrinos is presently unknown. The answer to this question may be at the heart of why the universe contains matter and essentially no antimatter, because the violation of total lepton number could be associated with the generation of the matter-antimatter asymmetry at times much earlier than 10 picoseconds after the big bang. It is also a question that needs an answer for the construction of the NSM, because it leads to a novel mechanism for the generation of particle mass, one that does not exist in the Standard Model.
The only practical experimental approach to this problem is the search for neutrinoless double beta decay. The pairing property of the nuclear force leads to a number of nuclei that are stable against all decay modes except the simultaneous emission of two electrons and two antineutrinos. The process, while allowed, rarely occurs: of the approximately 10 examples known, the shortest half-life is still a billion times longer than the age of the universe. If neutrinos and antineutrinos are the same particle, then the decay can proceed with the emission of just the two electrons and no neutrinos; that is neutrinoless double beta decay. The process has not yet been seen, with lifetime limits some 10⁴ times longer still than the two-neutrino mode. A major experimental attack on this problem, calling ultimately for detectors containing a ton or more of an enriched isotope, is a priority in nuclear physics. In addition to answering the question of whether the neutrino is its own antiparticle (a Majorana particle) or distinct from it (a Dirac particle), a positive observation would help to define the mass of neutrinos.
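The need for a ton or more of enriched isotope can be made concrete with a back-of-envelope counting estimate: for a given half-life, the expected number of decays per year scales with the number of atoms in the detector. The isotope choice (germanium-76) and the half-life below are illustrative assumptions, not figures from the text:

```python
import math

# Back-of-envelope estimate of why neutrinoless double beta decay searches
# need ton-scale quantities of enriched isotope: expected decays per year
# for a given half-life. Isotope (Ge-76) and half-life are illustrative.

N_A = 6.022e23  # Avogadro's number, atoms per mole

def decays_per_year(mass_kg: float, molar_mass_g: float, half_life_yr: float) -> float:
    """Expected decay count per year: N * ln(2) / T_half."""
    atoms = mass_kg * 1000.0 / molar_mass_g * N_A
    return atoms * math.log(2) / half_life_yr

# One ton of Ge-76 with a hypothetical half-life of 1e27 years:
rate = decays_per_year(1000.0, 76.0, 1e27)
print(rate)  # only a handful of decays per year in a full ton of material
```

Even with a ton of isotope, a half-life of 10²⁷ years yields only a few candidate events per year, which is why these experiments must also suppress backgrounds to extraordinary levels.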
As with the EDM experiments, there exists a worldwide program of searches for neutrinoless double beta decay. U.S. nuclear scientists are involved in several of these efforts, including the CUORE experiment at Gran Sasso, the EXO experiment at the Waste Isolation Pilot Plant (WIPP) in New Mexico, the Majorana Demonstrator Project at the Sanford Underground Laboratory, SNO+ at SNOLAB, and KamLAND-Zen at Kamioka. Majorana neutrinos with masses in the presently allowed range may produce a signal in these experiments. If necessary, larger and more ambitious experiments using enriched isotopes could improve the sensitivity substantially. A next-generation ton-scale neutrinoless double beta decay experiment could be carried out 7,400 feet down in the Sanford Underground Research Facility (SURF) in the Homestake mine in Lead, South Dakota.
Nuclear Theory at the Precision Frontier
For these experimental efforts at the precision frontier, input and guidance from nuclear theory are vital. For example, interpreting the results of EDM searches in terms of a new mechanism for CP violation and relating the latter to the cosmic matter-antimatter asymmetry requires a web of nuclear theory computations along with calculations from cosmology and astrophysics. Starting from the computation of low-energy matrix elements in strongly interacting systems such as the neutron or mercury nucleus, one must then derive values for the parameters of an underlying model at the elementary particle level, taking into account the constraints from studies at the high-energy and astrophysical frontiers. Computations of the matter-antimatter asymmetry require calculations analogous to those performed when interpreting the results of relativistic heavy ion collisions. A similar chain of theoretical analyses is needed to interpret the neutrinoless double-beta decay results, as well as those from weak decays and PV electron scattering, in terms of the structure of the NSM. The increasing scope of the experimental effort in this area of nuclear science calls for concomitant increases in the related theoretical effort as well as advances in computational tools.
Connections with Cosmology
Much remains to be understood about neutrino mass and mixing, and new experiments are under construction or in operation. Oscillations set a lower limit on the mass, but other techniques are required to determine the actual magnitude of neutrino mass. The mass of the lightest neutrino cannot be less than zero, and it follows from oscillation data that the sum of the three masses must be at least 0.06 eV. An upper limit that is independent of assumptions about the properties of neutrinos comes from laboratory measurements of the shape of beta spectra near the end point. There the electron energy approaches the maximum value for the decay, which is limited by the rest mass of the accompanying neutrino. Experimental measurements of the shape of the tritium beta spectrum yield an upper limit on the sum of the three masses of 6 eV, setting a range of 0.06 to 6 eV in which the mass sum must lie. A new, large-scale tritium experiment, the KArlsruhe TRItium Neutrino (KATRIN) experiment, is under construction and will have 0.6 eV sensitivity. New ideas for extending the sensitivity of beta experiments are being explored should the mass sum turn out to be smaller than 0.6 eV.
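The endpoint technique described above rests on phase space: the rate of decays with electron energy E near the endpoint E₀ goes roughly as (E₀ − E)·√((E₀ − E)² − m²), so a nonzero neutrino mass m both shifts the endpoint down to E₀ − m and depletes the counts just below it. A minimal sketch of that phase-space factor, with an illustrative tritium endpoint and hypothetical masses (simplified: Fermi function and other shape factors omitted):

```python
import math

# Sketch of why the beta-spectrum shape near the endpoint is sensitive to
# neutrino mass: the phase-space factor (E0 - E) * sqrt((E0 - E)^2 - m^2)
# vanishes at E = E0 - m rather than at E0. Endpoint energy and masses
# below are illustrative; all other spectral shape factors are omitted.

def spectrum_near_endpoint(E: float, E0: float = 18_574.0, m: float = 0.0) -> float:
    """Relative decay rate at electron energy E (eV), phase space only."""
    eps = E0 - E
    if eps < m:
        return 0.0  # kinematically forbidden: endpoint has moved to E0 - m
    return eps * math.sqrt(eps * eps - m * m)

E = 18_573.0  # 1 eV below the massless endpoint
print(spectrum_near_endpoint(E, m=0.0))  # nonzero rate
print(spectrum_near_endpoint(E, m=2.0))  # zero: this energy is now beyond the endpoint
```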
There is a strong prediction, but not yet direct experimental proof, of a cosmological relic neutrino background. The energies of these neutrinos are so low that detecting them appears all but impossible. However, cosmological arguments also relate the large-scale structure in the universe to neutrino mass. These arguments are model-dependent, being sensitive to the equation of state of dark energy and to the power spectral index that describes how quantum fluctuations in the big bang were distributed in scale. For reasonable assumptions, they limit the mass sum to about 0.6 eV or less. The ESA Planck satellite, launched in 2009, together with new galaxy surveys, may be able to extend the sensitivity to about 0.1 eV. A laboratory measurement at this level would be the most direct confirmation of the existence of the relic neutrino background that can presently be envisaged and would subject cosmological models to an important test.
Overwhelming evidence from observational astronomy for the existence of dark matter demands an understanding of its particle nature. Neutrinos are now known to be insufficiently massive, and no other known Standard Model particle can explain the data. Many candidates have been advanced, of which two are strongly motivated by theoretical considerations outside of astronomy. The lightest neutral particle in theories such as supersymmetry would be long-lived or stable and could have the mass (still many times the proton mass) and the interaction cross section to be the dark matter. Alternatively, a new symmetry would explain why CP is so well conserved in the strong interactions and would imply the existence of a very light, long-lived particle, the axion, which could also be the dark matter.
Detection of the former type of particle, the weakly interacting massive particle (WIMP), might be achieved by observing the recoil energy imparted to a nucleus struck by a WIMP present in the galactic dark-matter cloud. The energies are small, the interactions are rare, and the backgrounds present significant challenges, but there has been steady progress toward achieving the necessary sensitivity and redundant criteria for identification. Nuclear physics techniques are widely used in this field, and nuclear physicists have much to contribute; indeed, there is great enthusiasm in the nuclear physics community for addressing this challenge.
Key parts of the experimental program in fundamental symmetries and neutrino astrophysics demand an underground location shielded from the steady rain of cosmic rays that arrive at Earth’s surface. The signals from solar neutrinos, supernova neutrinos, and geoneutrinos, from neutrinoless double-beta decay, and from dark-matter particles are so rare that the cosmic ray background at Earth’s surface overwhelms them. Deep underground, the flux of energetic muons, the most penetrating cosmic ray particle other than neutrinos, decreases by a factor of about 10 for each 300 m.
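The factor-of-10-per-300-m rule quoted above implies an exponential suppression of the muon flux with depth, which is why the deepest sites are so valuable. A short sketch applying that rule to depths comparable to the laboratories discussed in this section (the depths and the normalization to the surface flux are illustrative):

```python
# Sketch of the attenuation rule quoted in the text: the energetic-muon
# flux drops by roughly a factor of 10 for every 300 m of rock overburden.
# Depths below are illustrative round numbers for the sites discussed.

def relative_muon_flux(depth_m: float) -> float:
    """Muon flux relative to the surface, per the 10x-per-300-m rule."""
    return 10.0 ** (-depth_m / 300.0)

for depth in (700.0, 1300.0, 2000.0):  # roughly WIPP/Soudan, Gran Sasso, SNOLAB
    print(depth, relative_muon_flux(depth))
# At 2,000 m the flux is suppressed by nearly seven orders of magnitude.
```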
The deepest underground research laboratory today is SNOLAB in Canada, where the SNO experiment was carried out at a depth of 2,000 m. A smaller but even deeper laboratory is being commissioned at Jinping in China. Many countries have deep underground research laboratories: the Gran Sasso National Laboratory in Italy, at an effective depth of 1,300 m, is the largest in the world. In the United States, Ray Davis’s experiment on solar neutrinos, for which he shared the 2002 Nobel Prize, was carried out 1,600 m down in the Homestake gold mine in South Dakota. Other research sites in the United States include WIPP and the Soudan mine in Minnesota, where neutrinos from Fermilab are detected. Both are about 700 m deep and are confirming the atmospheric neutrino signal and providing increasingly precise data on the “atmospheric” mass-squared splitting.
The priority of the research goals that need underground space, including neutrinoless double beta decay, dark matter searches, and solar neutrino physics, prompted the National Science Foundation to solicit proposals for a science program and a laboratory. Eight sites were proposed, and the Homestake mine was selected for the final design and facility proposal. The owners of the mine had decided in 2000 to terminate commercial operations there. Once closed, the mine flooded and required rehabilitation. The importance of the science and its location in South Dakota attracted private funding in excess of $70 million, unprecedented in the field of nuclear and particle physics. That funding, with additional support from the state of South Dakota, was used to prepare surface facilities for research and to rehabilitate the mine to a depth of 1,300 m. In the interim, the field of high-energy physics became increasingly interested in this area of research and developed plans for a neutrino beam that would originate at Fermilab, 1,300 km to the east. At Homestake, a very large detector would make possible studies of the neutrino mass hierarchy (that is, the ordering of the mass eigenstates by increasing mass), the possible violation of the CP symmetry in neutrinos, and searches for proton decay. If CP violation is observable among neutrinos, it is a tantalizing possibility for explaining why the universe contains mostly matter and not much antimatter. There are both logistical and intellectual advantages for nuclear and particle physicists to collocate in the Sanford Underground Research Facility at Homestake.5 However, at the time this report was being prepared, the future of that facility and the experiments planned for it were uncertain.
The field of fundamental symmetries is a microcosm for some of the difficulties encountered in managing science in the United States and elsewhere because it does not always fit comfortably within the mainstream of a field. The questions often call for exploration of physics that lies at the interface between two or more disciplines. The physics outcomes are often highly uncertain when a project is starting up.
Many nations have elected to organize a separate research field that, broadly speaking, encompasses particle and nuclear physics, high-energy astrophysics, and cosmology. In the United States, the core disciplines of nuclear physics, particle physics, astronomy, and space sciences have been preserved at the federal agency level and are the homes for investigations in the interface areas often explored in the area of fundamental symmetries. Agency decisions on which discipline area will consider funding an investigation may appear to be arbitrary, but there has been a commendable effort to be flexible and to prevent research from falling into the cracks. Nevertheless, in the competition for scarce resources, core studies in a particular discipline area are likely to enjoy the home-arena advantage when competing against studies that might arguably belong to another discipline. The European approach, forming a separate discipline area, is one solution, but there is also merit in the continuous competition between research at the core of a discipline and research at its boundaries. From such competition, the center of a discipline can begin to shift.
5 NRC, 2012, An Assessment of the Science Proposed for the Deep Underground Science and Engineering Laboratory (DUSEL), Washington, D.C.: The National Academies Press.
The field of fundamental symmetries and neutrinos serves as a magnet for attracting new talent into physics and its related disciplines. Scientists young and old find the questions at once grand and simple. Motivation does not need to be accompanied by specialized knowledge at the beginning. University physics faculties as well are enthusiastic about the potential of the field for discovery and about the fact that the basic concepts are easily communicated to students and to colleagues. As a result, departments have hired new faculty working in this field. The experimental tools and expertise lie at the field’s boundary with particle, atomic, and molecular physics. For students, the field provides exposure to a variety of experimental and theoretical techniques and the opportunity to work at the interface of several disciplines. That breadth of experience is attractive to future employers whether in academe, at the national laboratories, or in industry.