The proposed DUSEL science program comprises an initial suite of physics experiments, diverse multidisciplinary research in subsurface engineering, the geosciences, and the biosciences, and capacity for future experiments. This chapter presents the committee's assessment of the main physics questions to be addressed by the proposed physics experiments and of the impact of the proposed facility on research in fields other than physics. The proposed physics experiments are one or more dark matter experiments; a long-baseline experiment for the study of neutrino oscillations and proton decay that is also capable of measurements in neutrino astrophysics; a neutrinoless double-beta decay experiment; and an accelerator-based nuclear astrophysics experiment. Accordingly, the chapter assesses, in no particular order, the physics questions of dark matter; of long-baseline neutrino oscillations and neutrinoless double-beta decay in the larger context of neutrino physics and, together with proton decay, in the context of unified theories; of nuclear astrophysics; and of neutrino astrophysics. It also assesses the impact of the proposed laboratory infrastructure on research in fields other than physics, namely, subsurface engineering and the geosciences and biosciences.
To give an idea of the scale of the experiments needed to address the elements of the proposed DUSEL program, the construction cost ranges estimated by the DUSEL project during the preliminary design process were $80 million to $200 million for the dark matter experiment(s); $785 million to $1,065 million for the long-baseline neutrino and proton decay experiment; $250 million to $350 million for the neutrinoless double-beta decay experiment; $30 million to $50 million for the nuclear astrophysics facility; and $60 million to $180 million total for multiple experiments in subsurface engineering and geoscience and bioscience. The estimated incremental costs associated with efforts to detect supernovas and proton decay are not significant. Budgetary considerations and further development of the experiments will, of course, change the actual costs of these experiments.
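For a rough sense of overall scale, the quoted preliminary-design ranges can simply be summed. This is only arithmetic on the ranges stated above, not an official project total:

```python
# Preliminary-design construction cost ranges (millions of dollars),
# as quoted in the text above.
ranges_musd = {
    "dark matter experiment(s)": (80, 200),
    "long-baseline neutrino and proton decay": (785, 1065),
    "neutrinoless double-beta decay": (250, 350),
    "nuclear astrophysics facility": (30, 50),
    "subsurface engineering/geoscience/bioscience": (60, 180),
}

low = sum(lo for lo, hi in ranges_musd.values())    # 1205
high = sum(hi for lo, hi in ranges_musd.values())   # 1845
# Summed range: roughly $1.2 billion to $1.8 billion for the full suite.
```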
Because both the DUSEL program and the designs for the experiments to address the critical physics questions are still evolving, the committee chose to focus its assessment on the scientific merits of the questions to be addressed rather than on the technical merits of the experiments as they are now designed. Accordingly, it did not assess the technical merits of each experiment being sited at DUSEL or the suitability of alternative sites. Similarly, the committee chose to focus its assessment on the general scientific merits of research in the fields other than physics that would be enabled by the availability of an underground research facility rather than on the specific scientific or technical merits of a particular suite of nonphysics underground experiments. In choosing to focus in this way, the committee intends its assessments to be of value to the future direction of underground research, independent of whether the DUSEL program, as presently conceived, is realized. Finally, the committee assessed the intellectual merit of the underground science of the proposed DUSEL program in the general context of frontier scientific research worldwide. It was not a purpose of this study to rank the different fields or subfields of science, or to prioritize across programs. Neither the individual science questions nor the overall scientific program was compared with those of any other particular projects or investments.
Astronomers are sure that what can be detected by telescopes represents only a small portion of the Universe; furthermore, only a small fraction (~4 percent) is made of normal matter of the type that we live with here on Earth and observe directly elsewhere. The remainder of the Universe is composed of dark matter (about 22 percent), which has mass but does not emit or absorb light, and dark energy (about 74 percent). While dark energy is best studied using astrophysical techniques, direct detection of dark matter in the laboratory is possible, and direct experimental detection of dark matter interactions would profoundly change our understanding of both the microscopic world of elementary particles and the macroscopic astrophysical world, thus bridging the very smallest and the very largest objects in the known Universe.
The first evidence for the existence of dark matter came from observations of the rate at which astronomical objects such as stars, gas clouds, and galaxies rotate. It was discovered that bodies far from the center of rotation move faster than would be predicted using the laws of gravity and the visible mass of known objects, suggesting that unseen bodies existed on a grand scale. Additional evidence for dark matter comes from cosmological observations such as the fluctuation patterns of the cosmic microwave background, and further corroboration is provided by observations of colliding galaxies where the dark matter has been imaged using gravitational lensing. Depictions of this phenomenon have captured the imagination of the general public (see Figure 3.1).
Many explanations of the composition of dark matter have been proposed and compared with experimental data. Some of the dark matter could come from unobserved dark bodies of ordinary matter, such as massive compact halo objects or molecular gas clouds. However, understanding the cosmological data requires the existence of exotic dark matter, and there is now consensus that most of the dark matter consists of as-yet-undiscovered elementary particles whose nature remains to be determined. One possibility motivated by theory is that the dark matter arises from a particle called the axion. Experimental searches for axions and indirect astrophysical detection of dark matter use techniques that do not operate underground and so are not discussed here. A second theoretically attractive possibility is that dark matter consists of weakly interacting massive particles (WIMPs). Such WIMPs could be directly detected in underground experiments and would be the focus of an underground dark matter search program.
Theories of elementary particle physics provide natural candidates for WIMPs. For example, in many supersymmetric models, the lightest supersymmetric particle is stable, and many of these theories naturally provide particles with masses and interaction cross sections that are consistent with astronomical and cosmological bounds on WIMP properties. There are also nonsupersymmetric theories that postulate the existence of particles with the appropriate properties. Several of these particles are being searched for in accelerator-based programs such as the Large Hadron Collider (LHC) of the European Organization for Nuclear Research (CERN). However, only the direct detection of naturally occurring WIMPs would assure that these particles, whether discovered at an accelerator or not, are in fact the source of dark matter.
Because they are elementary particles not found in the Standard Model, it is likely that, when discovered, dark matter particles will be a central ingredient in finding solutions to known problems with present particle theory. Knowledge of the mass, the interaction rate, and the number density of dark matter particles
independent of any theoretical framework would allow predictions of production and annihilation rates that could be tested in future experiments. These data would also affect cosmological calculations relevant for describing the evolution of the Universe.
The direct detection of dark matter would involve the search for collisions between ordinary nuclei and WIMPs from the halo of our galaxy. Such observations would be difficult, since WIMPs interact rarely and the signals of the collision would be very faint. Therefore, detectors having a good likelihood of measuring such collisions would need to be large and operate deep underground to reduce backgrounds of cosmic ray origin that can mimic the signals being sought.
These searches are based on the hypothesis that dark matter consists of WIMPs with a mass of a few tens of proton masses or greater. When such a particle collides with a target it should produce a recoiling nucleus whose energy can be measured through scintillation light flashes, phonons, or ionization produced by the nucleus. Learning to address the challenges associated with these types of studies requires a series of experiments with ever-increasing target mass and improvements in methods for rejecting background signals. History teaches that each generation of detector corresponds to an increase of about an order of magnitude in target mass. In the 25 years since WIMPs were first proposed as a dark matter candidate, the sensitivity of nuclear recoil experiments has improved by a factor of more than 1 billion. Once irreducible backgrounds are encountered for a specific detector, further running in the same configuration improves sensitivity only very slowly. It is much more efficient to determine appropriate solutions to identify and account for backgrounds and then to incorporate these improvements while also increasing the target mass.
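As a back-of-the-envelope check on the pace described above, a sensitivity gain of more than a factor of 10⁹ over 25 years works out to roughly one order of magnitude every 2.8 years. The arithmetic is just:

```python
import math

# Figures quoted in the text: sensitivity improved by a factor of
# more than 1 billion in the 25 years since WIMPs were proposed.
total_improvement = 1e9
years = 25.0

# Years needed, on average, for each factor-of-10 gain in sensitivity.
years_per_decade = years / math.log10(total_improvement)
# roughly 2.8 years per order of magnitude, consistent with roughly
# one detector generation (a 10x step in target mass) every few years
```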
Past experiments are referred to as generation zero (G0) and ongoing experiments as generation one (G1). G1 experiments typically operate with tens of kilograms of target mass and are achieving much better background reduction and sensitivity than G0 experiments. Experience with the targets and with the handling of backgrounds has informed next-generation designs, and G2 experiments are currently under development and installation. These experiments will have hundreds of kilograms of target mass, and the following generation, G3, will have even greater target masses, from 0.5 ton to multiple tons. The experiments considered for DUSEL are in the G3 category.
The dark matter experiments summarized in Table 3.1 illustrate the current and future generations of detectors and techniques. U.S. scientists historically have been heavily involved in this research and are expected to continue their involvement. The compelling nature of the science and the high discovery potential make it important that they do so and that opportunities exist for discoveries to be made in the United States.
The strategy for experimental background rejection depends on which of the three signatures currently used to observe the nuclear recoil is chosen: scintillation, phonons, or ionization. Some experiments use a “single signature,” including the shape and localization of that signal. These include single-phase noble liquid (xenon, argon) scintillation experiments and experiments exploiting the bubble chamber concept, where ionization in a supersaturated liquid creates bubbles that can be detected visually or acoustically or both.
Other experiments, including most of the leading large experiments, use combinations of two signatures to reinforce background rejection: (1) light/ionization together with phonons or heat in crystals at millikelvin temperatures and (2) light/ionization in noble liquid detectors. Experiments of the first kind use germanium or scintillating crystals; those of the second are double-phase ionization/scintillation xenon or argon experiments, so called because they operate under conditions where the gas and liquid phases coexist, enabling amplification of the weak ionization signal in the gas. Research and development are under way on direction-sensitive detectors using low-pressure-gas “time projection chambers.” Debates regularly surface in the dark matter community about whether certain experiments have properly excluded or supported claims of positive signals. To address these uncertainties, it is important that a single experiment be able to collect multiple complementary signals and that multiple experiments using different nuclear targets be conducted.
The most recent results over the WIMP mass range of 10 to 1,000 GeV exclude cross sections approaching 10⁻⁴⁴ cm² per nucleon for the simplest models (see Figure 3.2). However, the DAMA/LIBRA experiment has a long-standing observation of an annual modulation of the event rate that is consistent with a WIMP having a mass of less than 10 GeV; these results have persisted over 7 years of data taking. The cross section indicated by the DAMA/LIBRA experiment is, however, not consistent with limits from the CDMS and XENON-100 experiments in most WIMP models. There are also measurements that may indicate an excess above backgrounds at very low WIMP masses, but this signal is not as well established as the DAMA/LIBRA observation. Finally, a number of cosmic ray experiments (PAMELA and ATIC) report excess electron or positron signals that could come from WIMP annihilation in our galaxy. It is difficult, however, to find dark matter models that reconcile these results with the charged cosmic ray data from the Fermi/LAT experiment, and several research groups have pointed out that conventional astrophysical sources of positrons could account for the putative PAMELA/ATIC signal. The theoretical community has been very active in trying to explain some or all of these results and has developed new models leading to new signals to search for at the LHC, in B-factory data, and in electron-scattering
TABLE 3.1 Plans of WIMP Search Collaborations Using Nondirectional Detectors Around the World
|Region|Current Generation (G1): Gross Mass, Current Status|Generation 2 (G2): Gross Mass, Current Status|Generation 3 (G3): Gross Mass, Current Status|
|—|350 kg Xe|1.5-3 tons Xe, [same] tank as LUX|20 tons Xe|
|Europe|ArDM 800 kg Ar (Canfranc); construction, 2011 install|||
|Europe/United States|WARP 140 kg Ar (Gran Sasso); running|||
|Japan|XMASS 800 kg Xe (Kamioka); installation, running 2010|XMASS II, 5 tons; R&D, 2014 install|XMASS III, 10 tons; planning, 2016|
|China|JinPing lab, Ge and/or Xe; planning|100 kg; R&D, 2015|>1 ton; 2020|
[Other partial entries: 10 kg, 2.4 tons, and 6 tons Xe; 50 kg and 500 kg Ar; 20 tons DAr; 50 tons Ar/Ne; 3 kg → 24 kg, 10 kg, and 100 kg Ge; 60 kg CF3; 5 kg CaWO4; a 500-kg detector (2011 design); a 16-ton-scale concept (S4); and a 1-ton Ge/scintillator experiment in the Modane (LSM) extension merging CRESST and Edelweiss.]
experiments. At some level, dark matter imposes itself on every branch of particle physics. To keep these communities from being confused by claims of discovery, definitive conclusions must be reached, and this will require more than one detector using more than one technique.
In the next 4 to 6 years, with the deployment of the G2 experiments such as MiniCLEAN, DEAP-3600, and LUX, and as new results from XENON-100 become available, sensitivities can be expected to increase by another order of magnitude.
Two of the approaches under consideration for G3 detectors are 1-ton phonon-mediated low-temperature detectors and 1-ton or multiton noble liquid detectors. U.S. scientists are playing leadership roles with both techniques, and contacts between this country and European groups are well developed. G3 experiments will push the cross-section sensitivity below 10⁻⁴⁷ cm² per nucleon. Sensitivity near 10⁻⁴⁸ cm² per nucleon approaches a new background regime in which coherent scattering of solar neutrinos becomes important. This solar neutrino background is irreducible, and progress past this regime will require statistical background subtraction or directional detection, both of which represent a major increase in difficulty. Thus, supporting G3 experiments is a natural goal for the next decade. On a longer timescale, large directional detectors may be required. Underground access for detector development is essential because background rates at the surface make it impossible to assess detector performance accurately aboveground. Post-G3 and large directional detectors would likely require large caverns at great depth.
Once a definitive dark matter signal is established, the next goal would be to observe the annual modulation of the signal as Earth's velocity relative to the dark matter halo changes owing to Earth's motion around the Sun. Such velocity effects, largest at the threshold energy of the detector, would take several years of operation to establish convincingly. An annual modulation signal would be compelling evidence and is within the scope of a G2 or G3 experiment.
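The size of the expected modulation can be illustrated with a minimal kinematic sketch. The numerical values below (the Sun's speed through the halo, Earth's orbital speed, the projection factor of the orbit onto the Sun's direction of motion, and the early-June peak date) are standard illustrative values, not figures from this report:

```python
import math

V_SUN = 232.0    # km/s, Sun's speed through the galactic halo (assumed value)
V_ORB = 29.8     # km/s, Earth's orbital speed around the Sun
COS_INCL = 0.49  # assumed projection of Earth's orbit onto the Sun's motion

def earth_halo_speed(day_of_year):
    """Approximate speed of Earth relative to the dark matter halo, km/s.
    Peaks near June 2 (day ~153), when Earth's orbital velocity adds
    most directly to the Sun's motion through the halo."""
    phase = 2.0 * math.pi * (day_of_year - 153) / 365.25
    return V_SUN + V_ORB * COS_INCL * math.cos(phase)

amplitude = V_ORB * COS_INCL       # ~15 km/s seasonal swing
fractional = amplitude / V_SUN     # only a ~6 percent velocity modulation
```

The few-percent velocity swing translates into an event-rate modulation of similar size, which is why several years of stable running are needed to establish it convincingly.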
Beyond annual modulation, a detector with directional sensitivity could potentially observe a daily modulation of the direction of dark matter at all energies due
to the finite rotational velocity at the surface of the Earth, thereby opening the door to dark matter astronomy. Directional detection would give information about the velocity distribution of WIMPs and would begin to discriminate between models of the dark matter halo. Directional detectors would rely on detecting the nuclear recoil in low-pressure gas and so represent a new technology. They would require large caverns and are not expected to be deployed before 2024.
The predominant mass in the Universe is dark matter. Demonstrating that dark matter consists of elementary particles would be a major discovery. Understanding the nature and composition of these particles is a major scientific challenge for our time.1
The direct detection of dark matter would provide a crucial experimental connection between particle physics and cosmology. To be definitive, candidate signals would need to be significantly above background and would need to be seen by different experiments. Concurrence between experiments will be essential: Several experiments have already claimed dark matter signals, but these have not been confirmed by other experiments. The program in dark matter detection will by necessity involve a number of G2 experiments that will coalesce into a smaller number of highly sophisticated and massive G3 detectors. Based on the history of leadership by U.S. physicists on experiments using all detection modes, it is expected that there will be U.S. involvement in more than one G3 experiment, and given the importance of this science and the discovery potential, it would be desirable for the United States to be a leader in at least one. Once dark matter has been observed, a major program to understand the properties of the new particles will be required.
Conclusion: The direct detection dark matter underground experiment is of paramount scientific importance and will address a crucial question upon whose answer the tenets of our understanding of the Universe depend. This experiment would not only provide an exceptional opportunity to address a scientific question of paramount importance, it would also have a significant positive impact upon the stewardship of the particle physics and nuclear physics research communities and would have the United States assume a visible leadership role in the expanding field of underground science. In light of the leading roles played by U.S. scientists in the study of dark matter, together with the need to build two or more large experiments for this area,
1 NRC. 2006. Revealing the Hidden Nature of Space and Time. Washington, D.C.: The National Academies Press, p. 13.
U.S. particle and nuclear physicists are well positioned to assume leadership roles in the development of one direct detection dark matter experiment on the ton- to multiton scale. While installation of such a U.S.-developed experiment in an appropriate foreign facility would significantly benefit scientific progress and the research communities, there would be substantial advantages to the communities if this experiment could be installed within the United States, possibly at the same site as the long-baseline neutrino experiment.
The three other major physics experiments proposed for DUSEL—neutrino oscillations, neutrinoless double-beta decay, and proton decay—are among the most promising tests of theories that seek to provide a unified description of the forces.2 This section first provides a general overview of grand unified theories and then describes these three experiments and the roles they might play in resolving outstanding questions.
We are able to observe the Universe because it contains important ingredients that are the stuff of ordinary matter: protons and neutrons, which are composites of quarks, and electrons. Whatever the history of the Universe, these particles were left behind and are stable enough to account for what is visible to us. Most of the properties and interactions of the visible matter made up of these particles can be accounted for by current particle theories. However, significant inconsistencies within existing theories and gaps in our knowledge remain. The remaining major physics experiments proposed for DUSEL should help fill those gaps and address those inconsistencies.
What is now called the Standard Model evolved throughout the twentieth century and aims to describe the physics of these elementary particles and how they interact. In the Standard Model there are two types of fundamental fermions, as shown on the left side of Figure 3.3. They divide into six flavors of quarks and six leptons: three charged leptons (the electron, the muon, and the tau) and their associated neutrinos. The quarks are strongly interacting fundamental particles that combine to make up the baryons (protons and neutrons); the leptons do not interact strongly. Each particle type is associated with a conserved quantum number. Quarks carry baryon number, and baryon number conservation guarantees the stability of the proton and many nuclei. However, quarks also come in different flavors, and weak nuclear interactions can change one flavor of quark into another. The leptons carry lepton number L, and until the 1990s and
2 The dark matter experiment discussed in the preceding section also has implications for tests of grand unified theories by way of the information it might provide on supersymmetry.
the discovery of neutrino oscillations, experimental observations were consistent with there being three separate conservation laws associated with the three lepton flavors—electron, muon, and tau lepton numbers.
In addition to the lepton and quark particles, there are four known fundamental forces—strong, electromagnetic, weak, and gravity. In any theory consistent with relativity and quantum mechanics, each force is associated with a boson-type particle, shown in the fourth column of Figure 3.3, which is the carrier of the force. In the Standard Model, the weak nuclear and electromagnetic forces are unified into the so-called electroweak interactions. The electromagnetic and weak interactions appear to be very different at low energy only because of the mass differences between their respective force carriers: The carrier of the electromagnetic force (the photon) is massless, while the carriers of the weak force (weak bosons) are approximately 100 times heavier than the proton.
A central question in particle physics is whether there are further unifications of forces. In the 1970s, the first grand unified theories (GUTs) unified the strong and electroweak interactions. From the broad class of GUTs emerges a unified picture of the quarks (strongly interacting fundamental particles) and the leptons that shares many similarities with the Standard Model. However, the GUTs differ from the Standard Model in several significant ways—for example, they predict that the proton is unstable and that there are bosons whose interactions can change a quark into a lepton or into an antiquark. Furthermore, unlike the Standard Model, in which neutrinos are massless, most GUTs predict that neutrinos would have tiny masses and would be their own antiparticles (Majorana-type particles) and that neither lepton number nor lepton flavor would be conserved. In GUTs, baryon and lepton number violations are associated with the exchange of new, extremely heavy force carriers, with masses ∼10¹⁵ times that of the proton, so that a process like proton decay exists but would be extremely rare. While direct production of such heavy force carriers will not be possible in any conceivable high-energy collider, proton decay may well be observable.
Cosmology also presents strong arguments in favor of many of the conclusions drawn from GUTs—that the proton is unstable, that neutrinos have mass, and that lepton flavor violation should be different for neutrinos and antineutrinos. The present-day excess of matter over antimatter translates to an excess of quarks over antiquarks in the early Universe of about one part in 10⁸. In principle this excess could be an initial condition of the Universe; however, in standard cosmological explanations, inflationary expansion in the extremely early Universe would probably have removed any such initial excess. However, the Soviet nuclear physicist Andrei Sakharov pointed out that fundamental physics could produce a tiny excess of matter over antimatter during the early Universe provided three conditions are met: baryon number is not conserved, charge conjugation and
charge parity (CP) are not symmetries of nature (nature distinguishes between matter and antimatter), and the early Universe went through a period when it was out of thermal equilibrium. If the first two conditions are met, the proton should decay.
GUTs offer a beautiful explanation for the origin of the asymmetry between matter and antimatter, which is deeply connected with CP violation in neutrino oscillations, tiny neutrino masses, and neutrino flavor change. In GUTs, the neutrino masses are inversely proportional to the masses of very heavy particles, so the tiny size of the neutrino masses suggests the existence of particles that are too heavy to be produced today but that could have been produced in the early Universe. Such particles decay out of equilibrium into leptons, violating CP and lepton number conservation, thus producing an excess of leptons over antileptons (a process known as “leptogenesis”). Anomalous electroweak processes then convert some of the excess leptons into quarks (“baryogenesis”). Thus CP violation in neutrino physics and the Majorana nature of neutrinos could well be linked to the origin of the matter excess in the Universe.
Some of the inconsistencies between the Standard Model and GUTs have been resolved. For example, it is now known (and described in the following section) that, contrary to the tenets of the Standard Model but consistent with GUTs, neutrinos have small masses and lepton flavor numbers are not conserved, as neutrinos oscillate between types. However, many other inconsistencies have not been resolved, and gaps remain in our knowledge of the characteristics of these most fundamental of particles. The studies proposed for DUSEL are highly promising experimental approaches to many of these outstanding questions: How do neutrinos oscillate? Are neutrinos their own antiparticles? Do protons decay? These proposed studies are discussed in the following sections.
Although neutrinos were first observed experimentally more than 50 years ago, their properties are less well understood than those of other elementary particles, in part because they have no electric charge and interact very weakly, making them difficult to detect. Like other matter particles, neutrinos have spin, a form of angular momentum, but their masses are very small, many millions of times less than those of the other matter particles. Neutrinos come in three different generations, called flavors, and each neutrino shares a flavor quantum number with a charged lepton partner: the electron, the muon, and the tau (shown below the neutrinos in the bottom row of Figure 3.3). While the weakness of neutrino interactions makes them difficult to observe, it also makes them ideal probes of certain astrophysical processes (see the section “Neutrino Astrophysics”), and the study of their tiny masses
may point to physics at far higher energies than could ever be reached with a terrestrial particle accelerator.
An interesting property of neutrinos is that a neutrino born with one flavor will spontaneously transform into another flavor as it moves through space. This phenomenon, known as “neutrino oscillation,” has only been known for about 15 years. A similar phenomenon among quarks has been studied extensively, most recently at the two B-factories, one in the United States and the other in Japan. Scientific interest in such oscillations arises because they provide a mechanism whereby particles and their antiparticles can interact differently. Such difference between particle and antiparticle behavior is known as “CP violation” and is a key component of theories attempting to explain why the Universe is made primarily of matter, with very little antimatter. The amount of CP violation among quarks is insufficient to allow these theories to explain our matter-dominated Universe. However, CP violation involving neutrinos is an attractive theoretical way to explain this matter/antimatter imbalance through a process called leptogenesis, which was discussed in the preceding section.
Another interesting property of neutrinos is their relationships with antineutrinos. For each particle species, there is a corresponding antiparticle species. When particles have a charge, their antiparticle partners have the opposite charge, making these particles and antiparticles distinct. For particles like neutrinos, which have no net charge, it will have to be determined experimentally whether the particle and the antiparticle are different. In the Standard Model of particle physics, neutrinos have a lepton number and antineutrinos have a lepton number of the opposite sign. Since lepton number is conserved in the Standard Model, neutrinos and antineutrinos are distinguishable and their differences can be studied through their interactions with matter. In many theories that extend the Standard Model, however, lepton number is not conserved. In such theories, it is possible for neutrinos to be their own antiparticles, a property that would make them Majorana particles. The most promising experimental sign of neutrinos being Majorana particles would be the observation of a rare nuclear decay called neutrinoless double-beta decay. This fundamentally important process has been unsuccessfully sought for many decades but is expected to become observable in the next decade.
We are entering an era when the accurate determination of the properties of neutrinos needed for a deeper understanding of particle physics will be possible. There are still anomalies in the data and huge gaps in our knowledge, making this a very exciting time to gather largely unexplored information about this perplexing group of particles. The experiments considered here address the most critical open questions in neutrino physics. Because the answers that emerge will have major impacts on cosmology as well as on particle physics, this work represents an important scientific opportunity for the U.S. physics community.
Electron neutrinos are the main product of the thermonuclear reactions that power the Sun. It has been about 40 years since Ray Davis and his collaborators first discovered evidence that their flux at Earth is substantially less than predicted by the best solar models of the time.3 S.M. Bilenky and Bruno Pontecorvo suggested that electron neutrinos change flavor in transit and become muon or tau neutrinos.4 Such neutrino oscillations require that neutrinos have mass. The ideas of both massive and flavor-changing neutrinos were revolutionary, and because the experimental evidence was not strong, the Standard Model of particle physics was constructed with massless and flavor-conserving neutrinos. Only within the last 15 years has the experimental evidence for neutrino oscillations become convincing enough for the scientific community to accept that they are a fact of nature.
To illustrate the most important phenomenological features of mixing, the case when only two neutrinos are involved will be considered. In a two-flavor world, the probability, P, that one flavor (say, ve) will appear in a pure beam that was initially of another flavor (say, vμ) is given as
P = sin²(2θ) sin²(1.27 Δm²L/E)
where θ is the mixing angle, Δm2 is the difference between the squares of the masses of the two neutrinos (in eV2), L is the distance (in kilometers) from the source to the detector, and E is the energy of the neutrinos (in GeV). Of course the original flavor disappears at this same rate, so the total number of neutrinos remains constant.
This formula suggests two types of experiments: disappearance experiments, where a changing flux of the original flavor is observed as a function of distance or energy; and appearance experiments, where neutrinos of a different flavor appear in the beam. Note that the experimentally controllable parameters, the distance from the source and energy of the neutrinos, appear only in the ratio L/E. It is also noteworthy that oscillation experiments cannot determine the absolute masses of neutrinos since the probability P depends only on the difference between the squared masses. This formula for P also indicates how to measure the oscillation parameters—the amplitudes of the measured oscillations determine the mixing
3 See, for example, R. Davis, D.S. Harmer, and K.C. Hoffman. 1968. Search for neutrinos from sun. Physical Review Letters 20: 1205.
4 S.M. Bilenky and B. Pontecorvo. 1978. Lepton mixing and neutrino oscillations. Physics Reports 41: 225.
angles, while the variations with either distance or energy determine the mass squared differences.
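As a numerical sketch (illustrative, not part of the report's analysis), the two-flavor formula can be evaluated directly in the units given above, with L in kilometers, E in GeV, and Δm² in eV²; the function name and the sample numbers are assumptions chosen for illustration.

```python
import math

def osc_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor appearance probability from the text:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Illustrative numbers: the atmospheric splitting quoted later in this
# chapter, a 1,250 km baseline, and a 3.5 GeV neutrino energy.
p = osc_prob(1.0, 2.35e-3, 1250.0, 3.5)  # roughly 0.77
```

Note that the probability depends on L and E only through the ratio L/E, so doubling both leaves P unchanged, as the text observes.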
An important feature was added when Stanislav Mikheyev and Alexei Smirnov, building on the earlier work of Lincoln Wolfenstein, realized that interactions of the neutrinos with electrons in the Sun (or even in Earth) could lead to a substantial modification of the oscillation probabilities, resonantly making amplitudes of oscillation either larger or smaller than otherwise. These matter, or Mikheyev-Smirnov-Wolfenstein (“MSW”), effects can be important in understanding data and are very useful in that they allow the neutrino masses to be ordered.
With the three known flavors of neutrinos, the oscillation phenomena are more complicated but also much richer in possibilities. The formulas governing three-flavor mixing are well understood and contain a number of independent parameters that govern neutrino flavor change and propagation:
• Δm²₂₁, the mass-squared difference primarily associated with solar neutrino disappearance.5 It has been measured thus far to be |Δm²₂₁| = (7.59 ± 0.20) × 10⁻⁵ eV².
• Δm²₃₂, the larger mass-squared difference primarily associated with atmospheric muon neutrino disappearance. It has been measured thus far to be |Δm²₃₂| = (2.35 +0.11/−0.08) × 10⁻³ eV².
• θ₁₂, the parameter best known for governing the disappearance of solar electron neutrinos, sometimes known as the solar mixing angle. It has been measured thus far to be large: sin²2θ₁₂ = 0.87 ± 0.03.
• θ₂₃, the parameter primarily known for its role in the disappearance of muon neutrinos. Because it was initially discovered in atmospheric neutrino experiments, it is sometimes known as the atmospheric mixing angle. It has been measured thus far to be consistent with the maximum possible value: sin²2θ₂₃ > 0.91.
• θ₁₃, a still unknown parameter. It governs the probability that propagation involving the larger Δm²₃₁ associated with atmospheric neutrinos will involve electron flavor change. Its upper limit is sin²2θ₁₃ < 0.15.
• δ, the parameter representing a phase that governs the CP-violating difference between neutrino and antineutrino flavor change; its value is completely unknown.
• The hierarchy, or ordering, of the neutrino masses is contained in the signs of the mass-squared differences. The sign of Δm²₂₁ is known, so the sign of Δm²₃₂ completely determines this ordering. The latter sign, and therefore the overall hierarchy, is completely unknown.
5 For comparison, the mass of the electron, the lightest of the leptons, is 0.511 × 10⁶ eV.
• The effects of matter that produce resonant MSW oscillations are contained in additional calculable parameters.
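The central values and limits quoted in the list above can be collected in one place. The short sketch below (illustrative, not from the report) also takes square roots of the two splittings to exhibit the characteristic mass scales, of order 10 meV for the solar splitting and 50 meV for the atmospheric splitting, that reappear in the double-beta decay discussion later in this chapter.

```python
import math

# Central values and limits quoted in the list above (splittings in eV^2)
dm2_21 = 7.59e-5          # solar mass-squared difference
dm2_32 = 2.35e-3          # atmospheric mass-squared difference
sin2_2theta12 = 0.87      # solar mixing angle
sin2_2theta23_min = 0.91  # lower limit, consistent with maximal mixing
sin2_2theta13_max = 0.15  # upper limit; the value itself is unknown

# Characteristic mass scales implied by the splittings, in meV
solar_scale_meV = math.sqrt(dm2_21) * 1e3   # roughly 9 meV
atm_scale_meV = math.sqrt(dm2_32) * 1e3     # roughly 48 meV
```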
This picture of three elementary particles of very tiny mass evolving into and out of each other has the ring of science fiction. Indeed, it took some time and extraordinary evidence to be accepted by the scientific community. How is it known that neutrino oscillations do occur? The data summarized above were gathered by underground experiments of the type assessed in this report.
The first experimental indications that neutrinos oscillate came from the Homestake mine experiment by Davis, followed by Kamiokande-II's direct detection of solar neutrinos and by the gallium solar neutrino experiments, SAGE and GALLEX, which observed solar neutrinos from the fundamental proton-proton fusion reaction for the first time. The final incontrovertible evidence came from experiments at SNO that definitively confirmed changes in neutrino flavor. Meanwhile, measurements at Super-Kamiokande in 1998 of neutrinos created by cosmic ray interactions in Earth's atmosphere (so-called atmospheric neutrinos, originally considered a background to proton decay experiments) showed that these muon neutrinos disappear as a function of distance.
The study of antineutrinos from reactors has also been important, starting with the first observations of neutrinos in the 1950s by Reines and Cowan and leading through a long series of experiments to KamLAND. By observing electron antineutrino disappearance from all the reactors in Japan, this experiment eliminated any credible alternative to neutrino oscillations as the explanation of the solar and atmospheric neutrino effects.
Just as in more conventional particle physics, where initial observations of new particles in cosmic rays gave way to the controlled creation of new particles in accelerators, precision observations of neutrino oscillations must move from the now exploited natural sources to controlled high-flux accelerator-produced neutrinos. In this way, the K2K and MINOS experiments have already provided observations of muon neutrino disappearance, the NOvA experiment will operate in the next few years, and the new T2K experiment is just beginning to produce results in a search for electron neutrino appearance in a muon neutrino beam.
Our current knowledge of some of these neutrino oscillation parameters, arising from experiments performed to date, is briefly summarized in the above list. The mixing angles show a curious mix: Two are rather large and the third (θ₁₃) is currently consistent with zero. While the sign of the very small Δm²₂₁ has been determined by matter effects in the Sun, the sign of the other much larger mass-squared difference (Δm²₃₂) is unknown. The overall mass hierarchy is therefore unknown. The CP phase parameter, δ, has not so far been determined by any experiment. Measurements of these three quantities (θ₁₃, δ, and the sign of Δm²₃₂) are major goals of future experiments.
Four Critical Questions on the Nature of Neutrino Oscillations
The above discussion leads to four critical questions on neutrino oscillations that could be addressed by the long-baseline neutrino oscillation experiment (LBNE) discussed in this report. These questions (or close variants of them) have also been discussed by other review bodies such as the DOE/NSF Neutrino Scientific Assessment Group (NUSAG) and the Particle Physics Project Prioritization Panel (P5), and by the NRC report Connecting Quarks to the Cosmos. They have motivated a number of related international projects such as T2K in Japan and LAGUNA-LBNO (proposed) in Europe. The committee agrees with those other bodies that these questions are among the highest priority questions in particle physics today, and long-baseline neutrino oscillation experiments are essential to answer them. No credible experimental alternatives exist that would not require large underground detectors. The four critical questions that need answers are these:
1. What is the value of the mixing angle θ₁₃? Presently we only have limits on θ₁₃. A null value, θ₁₃ = 0, would point to a deeper symmetry. On the other hand, θ₁₃ ≠ 0 would imply observable phenomena (questions 2 and 3) that answer other key neutrino oscillation questions.
2. What is the hierarchy of the neutrino masses? Is it similar to that in the quark sector, so that the neutrino mostly made up of the same flavor as the heaviest charged lepton is the heaviest neutrino ("normal" hierarchy)? Or is it the lightest ("inverted" hierarchy)? This hierarchy is determined by the sign of Δm²₃₁ and has important implications for both neutrino oscillations and neutrinoless double-beta decay, discussed in the next section.
3. Is CP violated in the neutrino sector and if so, what is the value of the phase δ? This is a key question, since observing CP violation in neutrino oscillations would open a new window into the physics of matter and antimatter, providing essential inputs into models of leptogenesis, discussed more fully in the section on proton decay.
4. Are there new neutrino properties (or new neutrinos) that are not described by the three flavor neutrino model? Anomalies in existing data do not fit into this model, although no anomaly so far has been confirmed.6 However,
6 Neutrino oscillation experiments are very difficult, often limited by systematic effects or backgrounds, and initially with only modest statistical precision. It is therefore essential that multiple observations be made with complementary techniques and with different energies, initial flavors, and baselines.
the history of neutrino physics is full of surprises, and the existence of a simple phenomenological model that works does not guarantee its correctness. Nature is often richer than imagined. New neutrino properties and new neutrino states7 could emerge if the neutrino model described here cannot fit all the observations.
The amount of observable oscillation depends on the ratio of the distance between where the neutrinos are produced and where they are detected to the neutrino energy (L/E). For ranges of neutrino energies that are easily produced and detected with present technology, a detector must be located appropriately. On the one hand, if it is placed too far from the source, it sees little flux, and beam construction presents technical difficulties (because of Earth's curvature). On the other hand, a minimum distance ("baseline") of about 1,000 km is needed in order to provide sufficient time and distance for the neutrinos to oscillate. Such an experiment requires an intense neutrino source, as well as a massive and sensitive detector. In order to eliminate backgrounds from cosmic ray events, it must be located underground. An experiment with such capabilities, the LBNE, will allow, in addition to the search for CP violation, a broad program of neutrino physics, as well as sensitivity to proton decay.
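The baseline requirement can be made concrete by setting the oscillation phase 1.27Δm²L/E equal to π/2, which locates the first oscillation maximum. The sketch below is illustrative only, using the atmospheric splitting quoted earlier in this chapter; it shows why GeV-scale beam energies imply baselines of order 1,000 km.

```python
import math

def first_max_baseline_km(E_GeV, dm2_ev2=2.35e-3):
    """Baseline (km) of the first oscillation maximum, where the
    phase 1.27 * dm2 * L / E reaches pi/2 (units as in the text)."""
    return (math.pi / 2.0) * E_GeV / (1.27 * dm2_ev2)

# GeV-scale beam energies already demand baselines of order 1,000 km
L_2GeV = first_max_baseline_km(2.0)    # about 1,050 km
L_3p5GeV = first_max_baseline_km(3.5)  # about 1,840 km
```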
The Homestake site, the intended location for the DUSEL program, is approximately 1,250 km from Fermilab, the presumptive neutrino source. The principal existing general science underground laboratory, the Soudan Underground Laboratory, is only 730 km from Fermilab. Other sites considered in the DUSEL site selection process that culminated in choosing Homestake also meet the requisite minimum 1,000 km distance from Fermilab. Similar large-scale long-baseline experiments are under consideration in Japan and Europe. The baseline between J-PARC and Kamioka in Japan is 295 km, which is too short for determination of the mass hierarchy. Furthermore, the CP violation parameter cannot be determined uniquely with this configuration alone because the mass hierarchy remains unresolved. In Europe, studies are in progress to select a possible underground site for future large detectors. The physics questions that can be addressed by such detectors will depend on the selected site and detector technology.
If LBNE proceeds, the design and construction of it and the neutrino beamline will take at least 7 years. To consider what new knowledge LBNE would provide requires a comparison with the expectations of experiments currently operating or under construction. Experiments that will be sensitive to electron neutrino
7 Such new states are called sterile neutrinos, because other measurements show that only the three currently known neutrino species can have normal weak interactions.
appearance on that timescale include the long-baseline experiments MINOS, T2K, and NOvA and the reactor experiments RENO, Double Chooz, and Daya Bay. The last five are focused primarily on determining θ₁₃. Their sensitivity to θ₁₃ as a function of time is difficult to predict precisely, but by 2020 they are expected to have measured a finite value if sin²2θ₁₃ is greater than 0.03-0.04. By comparing the results from these experiments, it will be possible to place constraints on the other oscillation parameters. In particular, combined results from these experiments could provide some evidence for CP violation over about 20 percent of the allowed parameter space if sin²2θ₁₃ is in this range. Even in the most optimistic scenarios, the statistical significance of such a result would be marginal. In addition, experimental data obtained before the LBNE becomes operational would have only a small window for determining whether the mass hierarchy is inverted or normal.
LBNE therefore offers the real prospect of a transformative discovery of CP violation in the lepton sector, with sensitivity greater than three sigma over half the possible values of δ for sin²2θ₁₃ greater than 0.03 after 10 years of operation at the initial beam intensity.8 With potential future accelerator upgrades, sensitivity to values of θ₁₃ extends to almost an order of magnitude below expected pre-LBNE limits. In addition, for sin²2θ₁₃ greater than 0.04, LBNE can unambiguously distinguish the normal from the inverted hierarchy over the full range of possible CP parameters. Determination of the hierarchy would shed some light on whether neutrinos have the same flavor ordering of masses as the quarks and perhaps demonstrate that the source of neutrino mass is different from the source of mass for other leptons and quarks. It would also significantly impact the interpretation of the sensitivity of any double-beta decay experiments.
The main goal of LBNE is to significantly extend our sensitivity to the neutrino oscillation parameters over existing experiments using a broad-band neutrino beam (a beam with a wide range of neutrino energies) with a peak energy of about 3.5 GeV from Fermilab. The experiment requires a small “near detector” located at the Fermilab site and a much more massive “far detector” located underground. Both detectors would observe the flux of neutrinos of a given flavor by reconstructing neutrino interactions through charged current processes that identify the final-state charged-lepton flavor. The near detector would monitor the flux and composition of the neutrino beam near the point of production. The far detector would primarily search for the appearance of electron neutrinos (or antineutrinos) in the muon neutrino (or antineutrino) beam. The details of the oscillation predictions are complex because it is necessary to include the interference effects of three-flavor mixing as well as the effects of neutrino interactions with the portions of Earth traversed by the beam. It is precisely these interference effects that produce a potentially observable CP violation effect; however, they
make the oscillation probability at any particular baseline and neutrino energy depend on all the oscillation parameters. Thus the oscillations must be observed over an extended energy range for both neutrinos and antineutrinos in order to disentangle all the parameters.
Two main technologies for the far detector have been proposed. The first entails building a huge water Cherenkov detector similar in design to the extremely successful Super-Kamiokande experiment in Japan, but larger by a factor of eight. Energetic charged particles traveling through a transparent medium such as water emit a cone of Cherenkov light until they slow down below the speed of light in water. A water Cherenkov detector consists of a large tank of water with the vessel surface partially covered by inward-looking photomultiplier tubes (PMTs), large sensors capable of detecting single photons of Cherenkov light. Each charged particle appears as a ring of light detected by the PMTs, with electrons distinguished from muons by the sharpness of the ring (electrons undergo much more multiple scattering in water than muons, and so have fuzzier rings). Water Cherenkov detectors are a well-proven and well-understood technology. However, because they cannot detect slow particles, they have a relatively low efficiency for low-velocity particles; and because rings can merge or be confused in events with many charged particles, they misreconstruct a significant fraction of the relatively complicated events in the multi-GeV range.
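The Cherenkov threshold mentioned above can be quantified: a particle radiates only while its speed exceeds c/n. The sketch below is illustrative; it assumes a refractive index of n = 1.33 for water, and the function name is a convenience. It shows why muons below a kinetic energy of roughly 55 MeV are invisible to such a detector, while electrons radiate at well under 1 MeV.

```python
import math

def cherenkov_threshold_ke_mev(mass_mev, n=1.33):
    """Kinetic energy (MeV) below which a charged particle moves
    slower than c/n and therefore emits no Cherenkov light."""
    beta = 1.0 / n                            # threshold speed, v = c/n
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)  # Lorentz factor at threshold
    return (gamma - 1.0) * mass_mev

ke_electron = cherenkov_threshold_ke_mev(0.511)  # roughly 0.26 MeV
ke_muon = cherenkov_threshold_ke_mev(105.7)      # roughly 55 MeV
```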
The alternative technology is a liquid argon (LAr) tracking calorimeter similar in concept to the ICARUS detector currently in the Laboratori Nazionali del Gran Sasso in Italy, but larger by a factor of 40. In a LAr detector, ionization deposited along charged particle tracks is drifted to a grid of sense wires, allowing the tracks to be reconstructed in three dimensions. Such a detector is sensitive to low-velocity particles, and the spatial resolution is excellent (potentially a few millimeters or even better). Thus a LAr detector is capable in principle of reconstructing quite complex events and is expected to have a lower misreconstruction fraction and higher efficiency. Because of this added efficiency, a LAr detector can be smaller in mass by a factor of about six and (owing to the greater density of LAr) almost an order of magnitude smaller in volume. As a result, the far detector hall could be much smaller than for the other option. However, there is much less experience with large LAr neutrino detectors than with water Cherenkov detectors. The challenges include the technical complexity and safety considerations involved in producing and retaining multikiloton volumes of a cryogenic noble liquid in an underground laboratory, the high purity requirements for the argon, and the technical complexity of the readout system. Techniques for reconstructing LAr events are still in development. The development and operation of the MicroBooNE 800-ton LAr experiment at Fermilab will be a first step in resolving many of these technical issues.
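The argon purity requirement can be illustrated with a rough drift-time estimate. Both numbers below are assumptions for illustration (a representative drift field of roughly 500 V/cm gives a drift velocity of roughly 1.6 mm/μs in LAr, and the drift length is hypothetical), not LBNE design values.

```python
# Representative numbers only: drift velocity in LAr at ~500 V/cm,
# and a hypothetical 2.5 m maximum drift distance.
drift_velocity_mm_per_us = 1.6
drift_length_mm = 2500.0

drift_time_ms = drift_length_mm / drift_velocity_mm_per_us / 1000.0
# Ionization electrons must survive, uncaptured by electronegative
# impurities, for this full millisecond-scale drift, which is why the
# purity requirements are so stringent.
```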
From the point of view of neutrino oscillations alone, the physics sensitivities of the water Cherenkov and LAr options are similar. The choice
between them relies heavily on technical and financial considerations. The choice of technology may also affect the required depth underground. This requirement is under study and has not yet been firmly established.
The first observation of CP violation in the neutrino sector will be followed by a long sequence of experiments of different types intended to more accurately measure the oscillation parameters and to understand whether the three-neutrino parameterization is correct. There are at present uncorroborated anomalies that, if correct, would need to be explained through modifications or additions to this picture.
The two technologies discussed above have different and complementary strengths both for the initial discovery of nonstandard phenomena and for second-generation measurements. Proposals for new second-generation experiments with water Cherenkov detectors include very imaginative possibilities, such as the DAEdALUS proposal to create neutrinos using a series of small nearby cyclotrons. These may well become important complementary techniques and illustrate the possibilities for additional use of a large water Cherenkov detector for constraining neutrino parameters, depending on what is found. Liquid argon, on the other hand, should be able to analyze complicated events with its particle identification and tracking capabilities, which may also open new possibilities.
LBNE would also allow a broad program of physics measurements beyond accelerator-produced neutrinos. Examples include studies of atmospheric neutrinos, solar neutrinos, and neutrinos from astrophysical sources.
There is a long history of measurements of atmospheric neutrinos that will continue over the next decade with new results from the Super-Kamiokande detector. With the same kind of water Cherenkov detector, the statistical improvement would be modest at best, a factor of, say, 2 or 3. Although an alternative LAr detector of lower tonnage would provide less statistical improvement, it might be able to make a more definitive observation of tau neutrinos produced from oscillated cosmic ray muon neutrinos. This would depend on many factors not yet demonstrated.
A massive water Cherenkov detector would allow high-statistics measurements of the small day-night asymmetry in the electron solar neutrino flux that arises from the small (∼2 percent) additional oscillation that takes place within the matter of Earth. Thus far, Super-Kamiokande has measured only a small effect of marginal statistical significance (about 2 standard deviations). A LAr detector would have better particle detection and identification for much less tonnage. In any case, use of LAr for solar neutrinos would require addressing significant technical issues, including the production of 39Ar by cosmic rays, a source of serious background, as well as control of radon. These technical issues mean that solar measurements would probably require substantial depth.
Realization of an LBNE with experimental reach for CP violation and the understanding of the mass hierarchy will require a large underground detector and a high-intensity neutrino source separated from each other by more than 1,000 km or so. Such an experiment has the potential to determine whether the current phenomenological description of neutrino oscillations is correct and to measure the associated parameters. Observations of both the mass hierarchy and CP violation in the neutrino sector would have profound effects on extensions to the Standard Model as well as on our ability to model the early Universe. An experiment capable of these discoveries will enable a broad program of discovery and measurement in neutrino physics. Such a program would be a cornerstone of basic science research in the United States.
Conclusion: The long-baseline neutrino oscillation experiment is of paramount scientific importance and will address crucial questions upon whose answers the tenets of our understanding of the Universe depend. This experiment would not only provide an exceptional opportunity to address these questions, it would also have a significant positive impact upon the stewardship of the particle physics and nuclear physics research communities and help the United States assume a visible leadership role in the expanding field of underground science. The U.S. particle physics program is especially well positioned to build a world-leading long-baseline neutrino experiment owing to the combined availability of an intense neutrino beam from Fermilab and a suitably long baseline from the neutrino source to an appropriate underground site such as the proposed DUSEL.
In 1937, the Italian physicist Ettore Majorana conjectured that the neutrino could be its own antiparticle, thereby lending his name to particles that have this characteristic of being their own antiparticle.9 Whether or not neutrinos are Majorana particles remains a fundamental and unresolved question in particle and nuclear physics. Double-beta decay experiments could resolve this question.
Double-beta decay is a process in which a nucleus decays into another nucleus with the same mass number but two more protons by emitting two electrons. Because it
9 E. Majorana. 1937. Nuovo Cimento 14: 171.
typically is accompanied by the emission of two electron antineutrinos, it is known as two-neutrino double-beta (2vββ) decay (see Figure 3.4a). In the absence of emitted neutrinos, the process is called neutrinoless double-beta (0vββ) decay10 (see Figure 3.4b). The 2vββ decay can occur whether or not the neutrino is a Majorana particle and has been observed in a number of nuclei. However, the 0vββ decay can occur only if the neutrino is a Majorana particle, so its existence would be an unambiguous demonstration of the neutrino's peculiar nature. Thus far, no confirmed observation of such a decay has been made.
Establishing that neutrinos are Majorana particles would have a number of important consequences. Because 0vββ decay rates depend on neutrino masses, measuring those rates would provide the most sensitive laboratory determination of the neutrino mass scale. The 0vββ decay rate is calculated to be proportional to the square of an effective neutrino mass times a quantity determined by the nuclear structure of the decaying nucleus. Various nuclear structure models have been used to calculate this quantity and, with recent progress, they agree with each other to within a factor of about two. The observation of a Majorana neutrino would also provide support for the subset of grand unified theories (GUTs) that have massive Majorana neutrinos.
10 For a recent review, see F.T. Avignone III, S.R. Elliott, and J. Engel. 2008. Review of Modern Physics 80: 481 and references therein.
Although the values of the neutrino masses are not known precisely, neutrino oscillation studies and other evidence indicate they are much smaller (by factors of many millions) than those of other elementary particles, whose masses are thought to be generated by the well-known Higgs mechanism. This fact alone strongly indicates that neutrino masses arise from a very different mechanism, very likely one operating at much higher energy scales. In many GUTs, such a mechanism is most natural if the neutrino is its own antiparticle, that is, if it is a Majorana particle.
Finally, the 0vββ decay implies a change of lepton number by two units, and its observation would lead to the important conclusion that lepton number conservation is violated. This provides support to leptogenesis, a process that violates CP and lepton number conservation and leads to lepton-antilepton asymmetry. In turn, this lepton-antilepton asymmetry could lead to baryon-antibaryon asymmetry and might help to explain the preponderance of matter over antimatter in the Universe (see discussion in the section “Tests of Grand Unification Theories”).
However, even if 0vββ decay is not observed, such studies can provide important information about the nature and mass of neutrinos. The neutrino oscillation data are consistent with the three neutrinos having different masses, with two of the masses separated by the smaller splitting set by the solar mass scale. However, there are still two possibilities for the neutrino spectrum: the normal hierarchy and the inverted hierarchy (see discussion in the section "The Nature of Neutrinos—Oscillations [Long-Baseline Neutrino Experiment]"). If oscillation experiments demonstrate that the mass hierarchy is inverted, then a 0vββ decay result establishing an upper limit of 20 meV on the effective Majorana neutrino mass would show that neutrinos are not Majorana particles (see Figure 3.5). Third-generation experiments at the ton scale would reach this limit, so that a failure to observe 0vββ decay would rule out neutrinos being Majorana particles in the inverted hierarchy scenario. In the normal hierarchy scenario, an experimental sensitivity of 1 meV would likely be required to rule out most possibilities for a Majorana neutrino. Furthermore, an experimental lifetime limit for the 0vββ decay directly constrains the mass of the lightest neutrino (assuming the neutrino is a Majorana particle).
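The origin of the 20 meV figure can be sketched numerically. The standard definition of the effective Majorana mass (not spelled out in this text) is the magnitude of the mixing-weighted sum of the mass eigenvalues; in the inverted hierarchy with a massless lightest neutrino, and neglecting the small θ13 terms, its minimum over the unknown phases is approximately sqrt(Δm²atm) × |cos 2θ12|. A minimal sketch using the central values quoted earlier in this chapter:

```python
import math

# Inverted hierarchy, massless lightest neutrino, small theta13 terms
# neglected; oscillation-parameter central values from this chapter.
dm2_atm = 2.35e-3      # eV^2
sin2_2theta12 = 0.87

cos_2theta12 = math.sqrt(1.0 - sin2_2theta12)
m_bb_min_meV = math.sqrt(dm2_atm) * cos_2theta12 * 1e3  # roughly 17 meV
```

The result, about 17 meV, lands near the 20 meV floor referred to above.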
Two Critical Questions on the Nature of Neutrinos—Antiparticles and Mass Scale
The above discussion leads to two critical questions that could be addressed by the neutrinoless double-beta decay experiment considered in this report.
• Are neutrinos Majorana or Dirac particles? It would be an amazing discovery in itself to demonstrate the existence of an entirely new type of elementary particle: one that is its own antiparticle. However, the existence
of Majorana neutrinos also allows leptogenesis to be an explanation for the matter-antimatter asymmetry of the Universe.
• What is the absolute mass scale of neutrinos? Knowledge of the absolute mass scale is needed in order to understand neutrino masses within the framework of particle physics, as well as to gauge the impact that massive neutrinos have on cosmology. Neutrino oscillation experiments cannot determine the absolute mass scale (only the squared mass differences), but neutrinoless double-beta decay experiments address this question.
There is overwhelming interest in the international particle and nuclear physics communities in pursuing the science of 0vββ decay. Many underground laboratories have programs to search for the process, including Gran Sasso in Italy, Canfranc in Spain, Modane in France, Kamioka in Japan, SNOLAB in Canada, and the Waste Isolation Pilot Plant (WIPP) and Sanford Underground Laboratory in the United States. A number of the offshore experiments have significant U.S. involvement.
The typical 0vββ decay experiment consists of a quantity of an isotope susceptible to double-beta decay, instrumented with detectors, in which one searches for very rare monoenergetic electron signals superimposed on continuum backgrounds. Because cosmic ray muons can create neutrons whose interactions form such a continuum, experiments must be conducted deep underground, where such muons only rarely penetrate. A reliable 0vββ decay program requires multiple experiments worldwide using different isotopes. There are several reasons for this: (1) very different experimental techniques are used for different isotopes, some of which may prove to be more effective in, for example, background suppression; (2) a signal observed in one particular isotope might be a misidentification caused by an unknown background; (3) if a signal is detected, measurements in multiple isotopes can provide a more reliable effective neutrino mass, given the uncertainties in the calculated nuclear matrix elements; and (4) measuring the signal in different isotopes can help distinguish between different possible mechanisms of 0vββ decay. Worldwide, there are ongoing or proposed experiments searching for 0vββ decay in 48Ca, 76Ge, 82Se, 100Mo, 116Cd, 130Te, 136Xe, 150Nd, and 160Gd. These experiments use several key detection techniques, such as calorimetric bolometers (e.g., CUORICINO and CUORE at Gran Sasso), cryogenic semiconductor detectors (e.g., GERDA at Gran Sasso and MAJORANA at the Sanford Underground Laboratory), and liquid/gas detectors (e.g., SNO+ at SNOLAB and EXO at WIPP). See Table 3.2 for a more complete list.
Since the 0vββ decay is a very rare process, large masses of the corresponding isotope are required to reach a given sensitivity. First-generation (G1) experiments use detector masses in the range of 10-25 kg and have sensitivity to a neutrino mass of about 1 eV. Typically, these are prototype experiments to demonstrate the feasibility of various techniques. Demonstrating the scalability of a particular method is accomplished by using 30-200 kg detectors, which provide sensitivities down to 100 meV. There are about 10 of these second-generation (G2) experiments, and all experiments currently running or in construction are either G1 or G2 experiments.
Reaching the atmospheric scale (the mass scale associated with atmospheric muon neutrino disappearance) of 50 meV requires third-generation (G3) detectors using detectors with masses of 1 ton or more. For the reasons above, a meaningful
TABLE 3.2 0vββ Decay Experiments Worldwide Classified by Generation and Experimental Technique
| Generation | Calorimetric Bolometer | Calorimetric Semiconductor | Calorimetric Liquid/Gas | Tracking Calorimetry |
|---|---|---|---|---|
| G1 | CUORICINO, Gran Sasso | Heidelberg-Moscow, Gran Sasso | — | NEMO3, Modane |
| G2 | CUORE, Gran Sasso; CANDLE, Kamioka; LUCIFER, Gran Sasso | GERDA-I-II, Gran Sasso; Majorana Demonstrator, Sanford Lab; COBRA, Gran Sasso | XMASS, Kamioka; SNO+, SNOLab; EXO-200, WIPP | SuperNEMO, Modane |
| G3 | — | GERDA-III, Gran Sasso; Majorana, Sanford Lab | XMASS-10 ton, Kamioka | — |
0vββ decay program requires multiple experiments and, although costly, there should be at least two such G3 experiments worldwide. It is appropriate that such a 1-ton detector be mounted at a U.S. site: (1) a 0vββ decay experiment in this country will be part of the required complement of experiments worldwide using different isotopes and different techniques, (2) a detector installed at a U.S. facility will ensure U.S. leadership in this field and enable U.S. scientists to participate more readily, and (3) a U.S. facility hosting a ton-scale 0vββ decay detector will attract top foreign scientists in the field and will foster international collaborations. At the same time, U.S. scientists are likely to continue their involvement in 0vββ decay experiments abroad.
The primary technical challenge facing the 0vββ experiments is to increase their scale at reasonable costs. A list of 0vββ decay experiments around the world is provided in Table 3.2. The experiments are tabulated according to their generation and the experimental technique they use. Proponents of these experiments have made convincing cases that at least the xenon and germanium experiments can scale to 1 ton. Going to the even larger detector masses needed to test limits for the normal mass hierarchy will present a more difficult challenge.
To reach sensitivity at the solar scale (the mass scale associated with the solar electron neutrino disappearance) of 5 meV and even down to 1 meV requires a 50-ton detector. Such a massive 0vββ detector is difficult to contemplate: A 50-ton germanium detector would be the size of a small house and very expensive. However, technology does advance, and it makes sense to ensure that the underground cavern for a 0vββ experiment could eventually accommodate a 50-ton experiment and its attendant shielding.
Detectors in the mass range of 10-25 kg have established the current limits on the 0vββ decay lifetime at around 10^25 years, implying that the effective Majorana neutrino mass must be less than about 1 eV. There is one claimed observation of 0vββ decay in 76Ge with a lifetime of 2 × 10^25 years,11 but the interpretation of the data is disputed. A potential scientific challenge to extracting an effective neutrino mass from the 0vββ decay measurements is the numerical uncertainty in the nuclear structure calculations. There has been much theoretical progress in improving these calculations in the last few years. Further progress is likely and should, on the timescale of much larger experiments, minimize the nuclear structure uncertainties on the derived effective neutrino mass.
The 0vββ experiment addresses crucial unanswered questions in particle and nuclear physics.12 It is the only practical experiment that could determine whether the neutrino is a Majorana or Dirac particle. If the neutrino is a Majorana particle, it would also be the most sensitive laboratory experiment that could measure or at least constrain the absolute mass scale of neutrinos. Were 0vββ to be observed, it would tell us that neutrinos are Majorana particles and lepton number conservation is violated, a model-independent conclusion and a Nobel Prize-level achievement. The 0vββ decay is very rare and its detection necessitates massive detectors. Given the paramount scientific importance of this experiment and the leadership roles taken by U.S. scientists, it is appropriate that the United States take a leadership role in one 0vββ decay ton-scale experiment for installation at a U.S. site, or at an appropriate foreign facility if necessary, and that U.S. scientists be supported to participate in other such experiments worldwide.
Conclusion: The neutrinoless double-beta decay experiment, like the direct detection dark matter experiment and the long-baseline neutrino oscillation experiment, is of paramount scientific importance and will address crucial
11 H.V. Klapdor-Kleingrothaus, A. Dietz, L. Baudis, et al. 2001. European Physical Journal A 12: 147.
12 NRC. 2006. Revealing the Hidden Nature of Space and Time. Washington, D.C.: The National Academies Press, p. 13.
questions upon whose answers the tenets of our understanding of the Universe depend. These three experiments are of comparable scientific importance. This experiment would not only provide an exceptional opportunity to address scientific questions of paramount importance, but would also have a significant positive impact on the stewardship of the particle physics and nuclear physics research communities and would see the United States assume a visible leadership role in the expanding field of underground science. In light of the leading roles played by U.S. scientists in the study of neutrinoless double-beta decay, together with the need to build two or more large experiments in this area, U.S. particle and nuclear physicists are also well positioned to assume leadership roles in the development of one neutrinoless double-beta decay experiment on the scale of a ton. While installation of such a U.S.-developed experiment in an appropriate foreign facility would significantly benefit scientific progress and the research communities, there would be substantial advantages to the communities if this experiment could be installed within the United States, possibly at the same site as the long-baseline neutrino experiment.
Atoms are made of electrons and nuclei, and nuclei are made of protons and neutrons. Protons and neutrons are the lightest particles that carry baryon number, B, a quantum number that is conserved in Standard Model processes. While free neutrons, slightly heavier than protons, decay to protons with a lifetime of about 900 s, they can live much longer when bound in nuclei. The free neutron decays through nuclear beta decay, a weak interaction (n → p + e− + ν̄e, where n is the neutron, p a proton, e− an electron, and ν̄e an electron antineutrino). Experimentally, there is no evidence that the proton, as the lightest baryon, decays at all. However, the stability of the proton is a very fundamental question, and observation of proton decay would constitute a major scientific discovery. Grand unified theories (GUTs) predict proton decay, although the lifetime of the proton would be very long, and decay rate predictions are sensitive to the choice of theoretical model and its parameters, leaving little theoretical guidance on the expected lifetime or the dominant decay mode. In practice, observation of proton decay would provide essential input to GUT models. In this situation, new experiments must be evaluated based on their “reach”—that is, on how far they can extend present sensitivity.
Because the lifetime of the proton is so long, the probability of seeing any individual proton decay is very small. Discovering proton decay becomes possible
only if a large number of protons are observed; thus, proton decay detectors must be massive. Since size and many of the other requirements for a successful proton decay experiment coincide with those of the LBNE, it is expected that the next-generation long-baseline neutrino detector will also serve as a next-generation proton decay detector.
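The scale argument above can be made concrete with rough numbers. The 200 kT water mass echoes the detector option discussed later; the 10^34-year lifetime is an illustrative assumption, not a measured value:

```python
# Back-of-the-envelope: why proton decay detectors must be massive.
# Rough illustrative numbers; not taken from this report.
AVOGADRO = 6.022e23
mass_kt = 200             # detector mass in kilotons of water (assumed)
grams = mass_kt * 1e9     # 1 kiloton = 1e9 g
molar_mass_h2o = 18.0     # g/mol
protons_per_h2o = 10      # 8 in the oxygen nucleus + 2 hydrogen nuclei

n_protons = grams / molar_mass_h2o * AVOGADRO * protons_per_h2o

lifetime = 1e34           # assumed proton lifetime in years
expected_decays_per_year = n_protons / lifetime

print(f"protons in detector: {n_protons:.2e}")
print(f"expected decays/year at tau = 1e34 y: {expected_decays_per_year:.1f}")
```

Even with nearly 10^35 protons under observation, only a handful of decays per year would be expected at a lifetime of 10^34 years, before accounting for detection efficiency and backgrounds.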
In the original GUT of Georgi and Glashow, the prediction for the lifetime of the proton was 4.5 × 10^(29±1.7) years, and the main decay mode was predicted to be p → e+ + π0. The prediction for the rate was highly uncertain as it was proportional to the fourth power of the unification scale (the energy scale above which the electromagnetic, weak, and strong forces are equal in strength). Current bounds on the proton lifetime set by experiment decisively rule out this original model. That model had several other shortcomings. In particular, it did not address the “hierarchy problem,” the issue of why the weak and grand unified energy scales differ by 12 orders of magnitude. It also made other predictions that were ruled out by experiments.
Subsequent GUT theories introduce the concept of supersymmetry (a symmetry between bosons and fermions often referred to as SUSY), which plays an important role in addressing the hierarchy problem. Other SUSY predictions were consistent with experiment, especially for the so-called weak interaction mixing angle. Supersymmetry raised the unification scale by a factor of 100, and so this class of theories increases the predicted lifetime for the decay p → e+ + π0 by as much as eight orders of magnitude, well beyond experimental reach. With SUSY, however, new modes of proton decay involving kaons can become important. The minimal supersymmetric GUT predicts a dominant proton decay mode to be p → K+ + ν̄, where K+ is a kaon and ν̄ an antineutrino, with a lifetime prediction as small as 10^32 years. (This prediction reduces the dependence on the masses of GUT-scale particles from the fourth power to the square.) The prediction, however, is sensitive to unknown masses of weak-scale SUSY particles, as well as to uncertain details of the GUT. There are a large number of GUT models, as well as models that directly unify the forces of the Standard Model with quantum gravity in string theory models. However, all unified models predict that the proton does decay, and proton decay remains the main unverified prediction of unification, which has other successes. These successes include agreement with the measured weak interaction mixing angle discussed above and, importantly, the presence of small but nonzero neutrino masses and of neutrino oscillations. Detection of proton decay would unambiguously signal physics beyond the Standard Model. It would provide a guide to which of the many theoretical extensions of the Standard Model
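The eight-orders-of-magnitude shift quoted above follows from a simple dimensional estimate for the gauge-boson-mediated decay (a standard textbook scaling, not a calculation from this report):

```latex
% Dimension-6 (gauge-mediated) decay: lifetime scales as the fourth power of M_X
\tau_{p \to e^{+}\pi^{0}} \;\sim\; \frac{M_X^{4}}{\alpha_G^{2}\, m_p^{5}}
\quad\Longrightarrow\quad
M_X \to 100\, M_X \;\;\text{gives}\;\; \tau \to 10^{8}\,\tau .
```

The SUSY kaon modes instead arise from operators whose lifetime scales only as the square of the heavy mass, which is why their predicted lifetimes can be short enough (of order 10^32 years) to remain experimentally interesting.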
are worth pursuing, and it would provide crucial information about the origin of matter in the Universe.
The best current bounds on many nucleon decay rates come from the Super-Kamiokande experiment. Water Cherenkov detectors, such as Super-Kamiokande, are highly efficient for the π0e+ mode. Decay modes containing kaons are more difficult, since the kaon is below the Cherenkov threshold and so can only be detected with reduced efficiency or much more stringent cuts. For this reason, limits on these modes are weaker. Present limits on decays with kaons are obtained by combining several such techniques. One example is the decay of a proton in an 16O nucleus, detected with an efficiency of 11 to 13 percent by a coincidence between the subsequent kaon decay (K+ → μ+ν or K+ → π+π0) and a γ from de-excitation of the excited 15N. Backgrounds for some modes of proton decay have been demonstrated to be very low at Super-Kamiokande; consequently, its bounds are expected to continue to improve, as they depend primarily on integrated exposure.
If LAr is the technology of choice for the LBNE detector, significant improvements to limits for proton decays to kaons could be possible. Since LAr allows for position resolutions of a few millimeters, complex events can be reconstructed in detail, with particle identification from energy loss and with photons distinguishable from electrons by the gap from the vertex and by ionization before a shower develops. However, since a LAr LBNE detector is expected to be less massive than a water Cherenkov detector, it would be less sensitive to those modes for which water Cherenkov detectors have high efficiency.
The impact of depth underground on background rates has been studied,13 leading to the conclusion that with an active muon veto shield, a proton decay search may be viable at fairly shallow depths for both water Cherenkov and LAr detectors. However, for the p → K+ + ν̄ mode, the expected background due to cosmic ray interactions in the rock near the detector can be eliminated only by limiting the fiducial volume to the central region of the detector volume. To retain a large fiducial volume, both the water Cherenkov and LAr detectors are recommended to be located deeper than 3,000 meters of water equivalent (m.w.e.).
Table 3.3 lists a few specific modes, theoretical expectations, lifetime bounds,14 the most sensitive experiments providing those bounds, and projected sensitivities of future LBNEs. The expectations for Super-Kamiokande in 2030, as well as the detector capabilities for configurations in scenarios wherein 10 years of data taking could have occurred on about the same timescale, are taken from the LBNE proton
13 A. Bernstein, E. Blucher, D. Cline, et al. 2008. Report on the depth requirements for a massive detector at Homestake,” available at http://www.bnl.gov/isd/documents/43873.pdf. Last accessed on September 22, 2011.
14 From K. Nakamura, K. Hagiwara, K. Hikasa, et al. 2010. Review of particle physics, Journal of Physics G-Nuclear and Particle Physics 37: 075021. Available at http://iopscience.iop.org/0954-3899/37/7A/075021/media/rpp2010_0001-0007.pdf. Last accessed on September 22, 2011.
TABLE 3.3 Current Limits on Lifetimes (Column 3) for a Few Proton Decay Modes and the Major Experiments Establishing Those Limits
| Decay Mode^a | Predicted Rate in Various Unified Models^b | Current Bound on Lifetime^a | Main Experiment Setting Current Bound^c | Expected Super-K Bound on Lifetime by 2030^d | LBNE Sensitivity for 200 kT Water Cherenkov Detector after 10 Years^d | LBNE Sensitivity for 28 kT Liquid Argon Detector after 10 Years^d |
|---|---|---|---|---|---|---|
| p → π0e+ | 10^(34-39) y | 8.2 × 10^33 y | Super-K | 3 × 10^34 y | 6.2 × 10^34 y | 1.0 × 10^34 y |
| p → K+ν̄ | 10^(33-39) y | 2.3 × 10^33 y | Super-K | 6 × 10^33 y | 12 × 10^33 y | 35 × 10^33 y |
| p → π0μ+ | 10^(34-39) y | 6.6 × 10^33 y | Super-K | | | |
| p → e+γ | 10^(36-41) y | 6.7 × 10^32 y | IMB | | | |
| p → μ+γ | 10^(36-41) y | 4.8 × 10^32 y | IMB | | | |
| p → π+ν̄ | 10^(32-39) y | 2.5 × 10^31 y | Soudan | | | |
^a The modes shown represent a small sample of the more than 75 decay modes with limits on baryon lifetimes. The modes and the current bounds on their lifetimes are discussed in K. Nakamura, K. Hagiwara, K. Hikasa, et al. 2010. Review of particle physics. Journal of Physics G-Nuclear and Particle Physics 37: 075021.
^b Predicted rates are estimates (y = years). Taken from P. Nath and P.F. Perez. 2007. Proton stability in grand unified theories, in strings and in branes, Physics Reports-Review Section of Physics Letters 441: 191.
^c Super-K refers to the Super-Kamiokande experiment; previous experiments using a similar technique but smaller detectors were the IMB (Irvine-Michigan-Brookhaven) and Kamiokande experiments. A different technique, but with considerably less mass, was used by experiments at Soudan. (All experiments shown were performed underground.)
^d Expected bound on lifetime by 2030, sensitivities for the first two decay modes expected by Super-Kamiokande, and two possible detector configurations are shown in the rightmost three columns, as estimated in M. Bass, M. Bishai, E. Blaufuss, et al. 2011. “A study of the physics potential of the long-baseline neutrino experiment project with an extensive set of beam, near detector and far detector configurations,” LBNE-PWG-002, INT-PUB-11-002, January.
decay working group.15 The predicted rates are taken from a variety of popular unified models, such as SO(10) grand unification, split supersymmetry, models with extra dimensions at the GUT scale, etc.16 The wide range of predictions is indicative of the difficulty of obtaining definitive theoretical guidance, though the current bounds on some modes (particularly K+ν̄) are already constraining model building.
Note that an order-of-magnitude improvement over the projected future Super-Kamiokande bounds does not seem possible on a 10-year timescale with the detectors being discussed for next-generation long-baseline neutrino experiments. Nonetheless, less ambitious improvement is possible, notably in the theoretically well-motivated p → K+ + ν̄ mode. For such modes, a water Cherenkov detector would offer better statistics than the LAr option, but it would still have significant backgrounds and/or poor detection efficiency. LAr technology might provide much better control of backgrounds, offering improvement in this mode and possibly several others, particularly those with final-state kaons. On the other hand, the LAr option does not offer improvement over Super-Kamiokande for the π0e+ mode, where the detection efficiency is already high.
The search for proton decay is compelling science. There are theoretically persuasive reasons to expect that the proton will be found to be unstable, and proton decay provides a unique direct window into the physics of grand unification and the origin of matter. Decay rates and specific modes cannot be predicted given our current knowledge. Nonetheless, the range of lifetimes predicted by various theories has a large overlap with the sensitivity of current experiments, and current bounds have ruled out the simplest models and place severe constraints on others. The extension of experimental sensitivity provided by the large underground detector of a LBNE could credibly produce the important discovery of proton decay, although there are no guarantees. For this reason, the increased experimental reach is insufficient to be the primary factor in the choice of neutrino detector technology or detector siting. Each of the two detector technology options offers some, but different, promise for increased sensitivity to proton decay.
Conclusion: The stability of the proton is a crucial, fundamental scientific question and can be studied by the large underground detector of a long-
15 M. Bass, M. Bishai, E. Blaufuss, et al. 2011. A study of the physics potential of the long-baseline neutrino experiment project with an extensive set of beam, near detector and far detector configurations, LBNE-PWG-002, INT-PUB-11-002, January. Available at http://www.int.washington.edu/PROGRAMS/10-2b/LBNEPhysicsReport.pdf. Last accessed on November 15, 2011.
16 See, for example, P. Nath and P.F. Perez. 2007. Proton stability in grand unified theories, in strings and in branes, Physics Reports—Review Section of Physics Letters 441: 191.
baseline neutrino experiment. This capability would be of great scientific interest and would add significant value to the neutrino oscillation experiment. However, the sensitivity is not so important as to make the search for proton decay the primary consideration in choosing neutrino detector technology or a site for the experiment.
Understanding nuclear processes is critical to interpreting a number of astrophysical observations that range from stellar energy generation and the formation of solar neutrinos to elemental and isotopic abundances of the elements in the Universe. In particular, measurements of low-energy cross sections are key inputs to the nuclear reaction network calculations used to model these astrophysical phenomena. Thermonuclear reactions in stars occur in a narrow energy window—the so-called Gamow peak—with typical energies in the range of keV (for the Sun) to MeV (for explosive stellar processes). For a reaction to occur at these low energies, it is necessary for the participating nuclei to tunnel through the Coulomb barrier, which dramatically reduces the reaction cross section. While theoretical extrapolations can be used to extend higher-energy measurements into the very-low-energy astrophysical regime, such extrapolations have large uncertainties, in particular when low-energy resonances are present. Figure 3.6 shows a typical measurement, where the astrophysical S-factor is plotted versus the reaction center-of-mass energy. The S-factor is extracted from the cross section by dividing out the exponential suppression due to the Coulomb barrier.
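The S-factor definition described above can be written explicitly (standard notation; Z1 and Z2 are the charges of the interacting nuclei and v their relative velocity):

```latex
% Cross section factored into nuclear physics (S) and Coulomb tunneling (Gamow factor)
\sigma(E) \;=\; \frac{S(E)}{E}\, e^{-2\pi\eta(E)},
\qquad
\eta(E) \;=\; \frac{Z_1 Z_2 e^{2}}{\hbar v} \;\propto\; \frac{Z_1 Z_2}{\sqrt{E}} .
```

Dividing out the rapidly varying exponential leaves S(E), which varies slowly with energy unless resonances are present, making it the natural quantity to plot and to extrapolate toward stellar energies.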
Measuring nuclear cross sections at stellar energies requires high luminosities and low backgrounds. The counting rates of some of these stellar reactions can be as low as a few counts per day and the counts from these reactions cannot be reliably measured in the presence of cosmic ray background because of the low signal-to-background ratio. This challenge led to the first underground accelerator facility at the Gran Sasso facility, the Laboratory for Underground Nuclear Astrophysics (LUNA). In this facility several key reactions were studied for the first time at the relevant solar energy. Following the success of the LUNA facility, a number of new initiatives to construct larger underground accelerator facilities were proposed because LUNA is somewhat limited in space. These proposals include designs for facilities in Germany, Spain, Romania, India, the United Kingdom, and the United States. The project proposed for Homestake is at the 4,400 m.w.e. level. Known as the Dakota Ion Accelerator for Nuclear Astrophysics (DIANA), it is one of the most ambitious. After addressing the general science case for a new underground accelerator facility, the report discusses the merits of a U.S. facility.
DIANA involves the study of three important processes in nuclear astrophysics: (1) solar neutrino production, (2) nucleosynthesis in late-stage stellar burning as a precursor to white dwarf and supernova formation, and (3) the production of elements heavier than iron in neutron-rich nucleosynthesis. Each of these processes requires improved measurements of low-energy nuclear cross sections.
• The solar neutrino flux was first studied as a means of exploring the thermal and compositional structure of the solar interior. This decades-long study helped transform our understanding of fundamental interactions by suggesting that neutrinos are massive and undergo flavor-changing oscillations. As more is learned about those parameters, it is important to return to solar neutrinos as a source of information about the Sun’s processes. In particular, while the neutrinos from the proton-proton (p-p) chain (where protons are transformed into helium by sequential fusion) have been carefully studied,
the flux of neutrinos from the carbon-nitrogen-oxygen (CNO) cycle is not well understood, primarily because of uncertainties in the nuclear processes that produce neutrinos. Improved measurements of relevant reactions, such as 14N(p, γ)15O and 15N(p, γ)16O, would allow us to use the CNO neutrinos as a probe of the so-called metallicity of the solar interior. Metallicity is the abundance of elements heavier than helium in the initial solar core. The abundance of these elements indicates the extent of the nuclear processing that occurred in the material that formed our solar system.
• The LUNA program has focused on reactions relevant to hydrogen burning, the main energy generation process in the Sun. However, red giant stars and asymptotic giant branch (AGB) stars are fueled mainly by helium burning. Helium burning begins at elevated temperatures with the triple-alpha process that allows three helium nuclei to fuse into 12C. Radiative capture of alpha particles on 12C to produce 16O and the subsequent alpha capture on 16O set the stage for carbon burning, a series of fusion reactions between carbon nuclei and carbon and oxygen nuclei. These burning processes then greatly influence the light-element composition of the star. This composition is a key component in the calculation of nova and supernova ignition. These radiative capture reactions and fusion reactions are poorly measured, especially near the relevant stellar energies at and below 1 MeV.
• The slow neutron capture process or s-process is thought to be the source of a large number of elements heavier than iron. During the later stages of helium burning, alpha particle capture on certain isotopes of carbon, oxygen, and neon—13C(α, n)16O, 17O(α, n)20Ne, and 22Ne(α, n)25Mg—can produce copious amounts of neutrons whose sequential capture on seed nuclei produces the heavier elements.
These three science topics are among the most compelling in the field of nuclear astrophysics, which itself was noted as being one of three key intellectual directions for nuclear physics in the 2007 long-range plan for nuclear physics.17 The main thrusts within nuclear astrophysics are exploring the structure of nuclei far from stability, understanding the nuclear equation of state, and measurements of low-energy nuclear cross sections. The first two thrusts are key elements of the new facility for rare isotope beams (FRIB) accelerator to be constructed at Michigan State University. The third thrust, the understanding of low-energy nuclear reactions, is the key focus of DIANA, which could effectively complement the nuclear astrophysics science to be addressed by the FRIB accelerator.
17 DOE/NSF. 2007. The Frontiers of Nuclear Science: A Long Range Plan. Report of the Nuclear Science Advisory Committee.
Measurements of all of the low-energy cross sections discussed above are hampered by the ultra-low event rates at the relevant stellar energies and by background contamination. The pioneering LUNA facility at Gran Sasso has already demonstrated that with sufficient suppression of cosmic ray background, cross section measurements can be performed at much lower energies than is possible aboveground.
The proposed DIANA facility consists of two high-current accelerators (about 100 times the luminosity of LUNA), whose beams can be directed to a number of target stations. The lower energy accelerator covers the energy range 50-400 keV, while the higher energy accelerator extends from 400 keV (to match the lower energy machine) to 3 MeV for singly charged ions. Beams from both accelerators can be directed to the same high-density gas target stations in order to map out key reactions over a larger energy range, thus allowing the study of a variety of burning processes in stars. A concept model for the accelerator complex is shown in Figure 3.7. This complex requires a cavern approximately 20 m high by 20 m wide by 45 m long.
A key aspect of the DIANA facility is the design of the target stations. These must be able to handle the very high beam currents while keeping beam-induced backgrounds to a minimum. High-density gas target systems will be used in combination with state-of-the-art gamma-ray and neutron detectors. Figure 3.8 demonstrates the improvement in background levels (on a logarithmic scale) for gamma-ray detection that can be achieved under various shielding configurations. The principal experimental advantages of DIANA over existing experiments such as LUNA are advances in design that allow it to significantly reduce background counts, as shown in Figure 3.8.
Clearly, a significant portion of the international community is interested in the science of low-energy nuclear astrophysics. As discussed above, in addition to the LUNA facility at Gran Sasso (3,500 m.w.e.) and the proposed DIANA facility, other underground low-energy accelerator facilities are being considered: Dresden (Germany, 110 m.w.e.), Canfranc (Spain, 2,500 m.w.e.), Praid (Romania, 900 m.w.e.), Boulby (U.K., 2,800 m.w.e.), and INO (India, 3,500 m.w.e.). The DIANA project itself has a number of international partners.
The design for the DIANA facility is fairly well advanced, and construction could begin in the next several years. Should a new underground laboratory in the United States not be pursued, the collaboration has also begun investigating opportunities at the WIPP facility (1,600 m.w.e.) in New Mexico. The technical feasibility of the DIANA underground facility does not appear to pose a major risk for the project. The accelerators are largely based on existing systems that have been successfully implemented. Some research and development will be required to develop target stations that can handle the high currents from the accelerators. Most of this work can be performed aboveground and is already under way at the University of Notre Dame and the University of North Carolina. New detection techniques will take advantage of the low background environment.
A potential complication associated with an underground accelerator facility operating in conjunction with other very-low-background experiments is reduced sensitivity in those other experiments due to accelerator-related background. However, straightforward means can be implemented to eliminate such interference. First, a well-isolated cavern is required, with adequate shielding to reduce background to manageable levels. In addition, for experiments that require higher energies, which are more likely to produce elevated background levels, the accelerator can be operated in a low-duty-cycle pulsed mode. In this mode, the timing of the accelerator can be included in the data acquisition systems of the other experiments to allow for studies of possibly elevated backgrounds or vetoing of signals during the beam pulses. Moreover, Monte Carlo simulations performed by the collaboration indicate that no additional neutron flux will be present outside the DIANA cavity. Careful communication between experiments will be essential to ensure that periods of potentially high background runs have minimal impact on other experiments. These measures for preventing interference with other experiments are not expected to be costly.
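The timing-veto idea described above amounts to tagging any event that falls inside a beam-on window; a minimal sketch, in which the window width, timestamps, and function names are invented for illustration:

```python
# Sketch of a beam-pulse timing veto: drop (or flag) detector events
# that fall inside an accelerator beam-on window. All numbers below
# are hypothetical, chosen only to illustrate the logic.
def vetoed(event_time, pulse_starts, window=1e-3):
    """True if event_time lies within `window` seconds after any beam pulse start."""
    return any(t0 <= event_time < t0 + window for t0 in pulse_starts)

pulse_starts = [0.0, 1.0, 2.0]            # beam-pulse start times (s), hypothetical
events = [0.0005, 0.5, 1.2, 2.0002]       # detector event times (s), hypothetical
kept = [t for t in events if not vetoed(t, pulse_starts)]
print(kept)
```

In a real data acquisition system the accelerator would broadcast its pulse timestamps, and the low-background experiments would apply this cut offline or flag the vetoed periods for dedicated background studies.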
The importance of the science questions that DIANA could address makes this an exciting research opportunity for nuclear physics. The ultralow backgrounds at such an underground facility will enable precise measurements of very-low-yield stellar reaction rates that are key to elucidating important astrophysics processes. The LUNA facility at Gran Sasso has already demonstrated the usefulness of an underground accelerator for understanding the hydrogen burning process. A more advanced facility such as DIANA will shed light on other key burning processes in stars and on the production of elements heavier than iron.
Conclusion: A small underground accelerator to enable measurements of low-energy nuclear cross sections would be scientifically important. These
measurements are needed to elucidate fundamental astrophysical processes such as thermonuclear reactions and the production of heavy elements in the Sun and the stars.
The possibility of using neutrinos to make unique and valuable contributions to astronomy and astrophysics has been recognized since neutrinos were discovered. Because neutrinos interact only weakly with matter, they can be used as probes of processes that occur in dense regions from which photons cannot escape. In addition, because neutrinos play a central role in the dynamics of a number of important astronomical systems such as supernovas and solar cores, our understanding of these systems cannot be complete until the emitted fluxes of neutrinos can be accurately measured. Similarly, it has long been realized that the properties of neutrinos can be uniquely tested using astrophysical systems as neutrino sources and the Universe itself as our laboratory.
To date, the Sun and supernova SN1987A are the only astrophysical sources that have provided a detected neutrino signal. However, the present and upcoming generation of large underground detectors should increase our neutrino source catalog to include a galactic supernova, the integrated flux of all supernovas throughout the history of the Universe, ultra-high-energy sources such as active galactic nuclei and gamma-ray bursts, and even Earth itself.
Scientific Landscape—Neutrino Astrophysics
Supernovas are spectacular stellar explosions in which the energy released in a few weeks is comparable to that expended by the Sun during its entire lifetime. Further, supernovas are believed to play a crucial role in the history of the Universe. For example, the heavy elements in cosmic rays are synthesized in massive stars and ejected in supernova explosions. So, it is not an exaggeration to say that life itself would not have been possible without supernovas. It also appears that supernovas play essential roles in galaxy formation and in reenergizing the process of star formation at later times in the life of a galaxy. From examining the decay products of radioactive isotopes, it appears that a nearby supernova seeded the elemental composition of our solar system and may have contributed to its creation.
In spite of their importance, supernovas are not yet well understood. Although baseline models exist, there are still many uncertainties, and significant problems stem from trying to infer the fundamental processes occurring at the center of the explosion from the relatively late-time optical light curve. Core-collapse
(Type II) supernovas are those in which a massive star exhausts its nuclear fuel; heavier and heavier elements are burned until a nickel-iron core forms that can no longer support the weight of the star. In the resulting explosion, more than 99 percent of the energy released comes out in the form of neutrinos, which are largely emitted in the first 10-20 s. The bulk of the neutrinos are emitted at energies below 40 MeV, and it is expected that the emitted neutrinos are roughly evenly distributed among the three flavors and between particles and antiparticles. However, to estimate the expected neutrino signal at Earth from a galactic supernova, flavor oscillations must be taken into account; in fact, the detected neutrino signal can provide crucial information not only about supernovas but also about the neutrino itself.
The detection of 19 neutrinos from SN1987A in the Large Magellanic Cloud by the first generation of underground water Cherenkov detectors, Kamiokande in Japan and the former Irvine-Michigan-Brookhaven (IMB) detector, was a historic event that demonstrated the possibility of supernova neutrino astronomy. The detection of a core-collapse supernova in our galaxy by a large LBNE detector would, in turn, provide a wealth of scientific information bearing on our understanding of particle physics as well as our fundamental picture of supernovas.
Supernovas in our galaxy are relatively rare occurrences; in fact, the last recorded event occurred more than 300 years ago. However, there is good evidence that numerous supernovas have occurred in our galaxy since then but were obscured from view. From a variety of inferences, the core-collapse supernova rate in the Milky Way is estimated to be approximately two per century, which means that a detector would need to operate for more than 20 years to have a significant chance of catching an event. Because these events are so rare, it is essential that multiple detectors be available worldwide, not only to ensure that at least one detector is operational when the neutrinos reach Earth but also to maximize the scientific output should more than one detector see the same event.
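The rate of roughly two events per century can be turned into a rough probability under a simple Poisson assumption. The sketch below is illustrative only; the rate is the report's estimate, while the Poisson model and 20-year window are assumptions for the calculation:

```python
import math

# Illustrative Poisson estimate of catching a galactic core-collapse supernova.
# The ~2-per-century rate is the report's estimate; the 20-year operating
# period and the Poisson assumption are illustrative choices.
rate_per_year = 2 / 100.0            # core-collapse supernovas per year
years = 20                           # assumed detector operating period
expected = rate_per_year * years     # mean number of events in the period

# Probability of observing at least one event in the period
p_at_least_one = 1 - math.exp(-expected)
print(f"Expected events in {years} yr: {expected:.1f}")
print(f"P(at least one supernova): {p_at_least_one:.2f}")  # ~0.33
```

A roughly one-in-three chance over 20 years is why long-lived detectors, and several of them operating simultaneously, matter for this science.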
Although moderately sized detectors built to study solar neutrinos (or to search for dark matter or neutrinoless double-beta decay) could, in some cases, detect neutrinos from a nearby supernova, it is the large detectors proposed for LBNE that would greatly increase our capabilities. A water Cherenkov detector at the 300-kiloton scale would detect a very large number of events, estimated to be on the order of 20,000, from a supernova at the galactic center (i.e., at a distance of 8.5 kiloparsecs). Most of the recorded events would be electron antineutrinos, detected via the inverse-beta-decay (IBD) reaction, with different models varying by factors of three to four in their predictions of the expected neutrino event rate. A statistically significant number of neutrinos would also be detected through elastic-scattering (ES) and charged-current (CC) interactions, with the latter events
providing directional information. Thus, the water Cherenkov detector of LBNE would clearly distinguish among the various models that describe core collapse, and the relative numbers of neutrinos detected via the IBD, ES, and CC interactions would help determine the flavor composition of the flux; this determination could be further improved by the addition of Gd to the water to permit neutron tagging.
A LAr detector of the size envisioned for LBNE would detect on the order of 1,000 neutrino events from a galactic core-collapse supernova, where most of the signal would be in the form of electron neutrinos detected via the CC interaction, νe + 40Ar → e− + 40K*. The reduction by more than an order of magnitude in the neutrino signal in the LAr detector relative to the water Cherenkov detector is due partly to its smaller size and partly to the relevant interaction cross sections. However, the improved energy resolution of liquid argon could partly compensate for the lower statistics. For example, the expected sensitivity in the ability to differentiate between the normal and inverted neutrino mass hierarchies is comparable for the two detectors. It is important to note that the backgrounds in a large LAr detector at these low energies (threshold energy ∼ 2 MeV) are not yet well known and could be significant at the shallower depths of 300 to 600 ft being considered for that detector.
Detecting neutrinos from supernovas outside our galaxy is largely a question of probability and distance, since the flux will vary as 1/r2, where r is the distance between the supernova and Earth. Thus, events occurring within the satellite system of the Milky Way would certainly be detectable, but events occurring further out in the local group (e.g., Andromeda) would not. However, an important potential exists—namely, the detection of the diffuse neutrino signal arising from all supernovas that have occurred during the lifetime of the Universe. Detection of these so-called supernova relic neutrinos would be an experimental tour de force. The flux and spectrum of the relic neutrinos could tell us about the uniformity of the supernova neutrino signal, whether SN1987A was a representative explosion, and whether there exists a component of supernovas that does not shine brightly in the optical band. Importantly, the diffuse neutrino flux predicted by different models is uncertain by a factor of approximately 12.
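The 1/r² falloff makes it easy to see why only the Milky Way's satellite galaxies are within reach of an individual-supernova detection. The sketch below scales the report's ~20,000-event figure at 8.5 kpc to other distances; the 20,000-event reference is from the text, while the specific satellite and Andromeda distances are assumed round values for illustration:

```python
# Scale expected supernova event counts with distance (flux falls as 1/r^2).
# Reference point (~20,000 events at 8.5 kpc) is from the report; the other
# distances are assumed round numbers for illustration.
REF_EVENTS = 20_000
REF_DIST_KPC = 8.5

def expected_events(distance_kpc: float) -> float:
    """Expected detector events at the given distance, assuming pure 1/r^2 scaling."""
    return REF_EVENTS * (REF_DIST_KPC / distance_kpc) ** 2

for name, d in [("Galactic center", 8.5),
                ("Large Magellanic Cloud (~50 kpc)", 50.0),
                ("Andromeda (~780 kpc)", 780.0)]:
    print(f"{name:34s} -> ~{expected_events(d):9.1f} events")
```

Under this naive scaling a supernova in the Large Magellanic Cloud would still yield hundreds of events, while one in Andromeda would yield only a few, too few for a robust detection, consistent with the discussion above.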
The spectrum of the relic neutrinos will have the same shape as the neutrinos from individual supernovas, but the signal will lack a distinct temporal signature since the relic neutrino flux is steady-state. Consequently, there exists only a small window of neutrino energy, between 20 and 30 MeV, where the relic neutrinos may be detectable above background. Below this window, solar neutrinos swamp the signal, and above it, atmospheric neutrinos dominate. No evidence for a signal of relic neutrinos was seen in a long exposure at Super-Kamiokande (approximately 1,500 days of SK-I and 800 days of SK-II), and the upper limit on the flux of supernova relic neutrinos from these data is just reaching the largest theoretically predicted flux. A large water Cherenkov detector similar to that proposed for LBNE would
substantially improve on this limit and could well detect a statistically significant signal. The ability to confidently see a signal with a water Cherenkov detector in the baseline configuration (15 percent photocathode coverage, no Gd doping) would be marginal, given the large uncertainty in the flux level. However, a detector in the enhanced configuration (30 percent photocathode coverage, Gd doping) would cover most of the parameter space and could confidently be expected to see a signal. The LAr detector option for LBNE is too small to significantly improve on the capabilities of Super-Kamiokande.
Neutrinos are expected to be produced in most astrophysical sources, and their detection on Earth could lead to profound insights about the relevant astrophysics of the sources themselves, as well as an important understanding of the properties of the neutrino. Indeed, the detections of neutrinos from the Sun and from SN1987A were crucial advances in the development of neutrino astrophysics. A large underground detector for long-baseline neutrinos would serve as an excellent detector of neutrinos from a nearby supernova, and it might also make possible the first detection of the relic neutrinos from supernovas that have occurred over the history of the Universe. Although the rate of supernovas in our galaxy is relatively low, there is a reasonable possibility of one occurring within a 20-year period. A large underground detector like the one envisioned for DUSEL would detect a large number of neutrinos from a galactic supernova. This would greatly advance our understanding of these important sources and could shed new light on the makeup of the neutrino itself.
Conclusion: Neutrinos from supernovas can be studied by a large underground detector of a long-baseline neutrino experiment, making a unique and valuable contribution to our understanding of one of the most important astrophysical phenomena. This capability of the neutrino oscillation experiment would be of great scientific interest and add a significant value to that experiment. However, the sensitivity for detecting neutrinos from supernovas is not so important as to make it the primary consideration in choosing neutrino detector technology or a site for the experiment.
While the principal focus of the DUSEL program is the pursuit of physics research, the development of such a facility would provide rich research opportunities
for other fields. The environments that exist in underground facilities at depths of a few hundred to several thousand meters or more are complex and offer systems with strongly coupled thermal, hydrological, mechanical, chemical, and biological characteristics. The nature of such an environment, including how its components engage and influence one another, is the focus of both applied and basic research in fields that range from engineering applications to geological studies of geomechanics and geophysics, and to research into biological systems in extreme environments. Much of this research can be carried out only in situ, since many of the important events occur on time and spatial scales that cannot be replicated through sampling and intermittent tests.
Subsurface engineering research includes work related to fairly traditional extractive activities that arise in petroleum drilling and mining and civil engineering issues associated with rock slopes, dam foundations, tunnels, rapid transit, and subsurface city infrastructure. However, those more traditional research fields are now joined by research in areas such as “enhanced” geothermal systems, unconventional sources of natural gas, and an ever-widening variety of applications of the subsurface for isolating materials such as nuclear waste and CO2.
For the geo- and biosciences, the nature of the environment itself and how it responds to disturbances offer a wide range of research opportunities. Many rock types have exceedingly low porosity and permeability, and at great depth, fractures18 are commonly the main conduits for fluid flow, the main determinant of rock strength, and the locus of seismicity. Although these fractures are critical to many aspects of rock behavior and are present over a wide range of scales, they are exceedingly poorly understood, for the simple reason that they are easily missed by conventional subsurface probes such as well bores. A subsurface environment at great depth may also harbor microbiological communities. How those communities arise and survive is the result of a complex engagement among the chemical, hydrological, and thermal characteristics of the underground environment.
In this section of the report, the committee discusses the experiments that have been proposed for incorporation into the initial suite of research to take place at
18 A “fracture” is a break in a rock caused by brittle failure. A “fault” is a fracture whose opposite sides have been offset parallel to the fracture surface (i.e., shearing offset). A fault is a fracture, but not every fracture is a fault. In the literature and in casual conversation “fracture” frequently refers to fractures that display no fracture-parallel offset—that is, those that are not faults. It would be better to call these features “opening-mode fractures” (or extension fractures or cracks or joints). The only displacement accommodated by these fractures is parting of opposing walls (widening of the aperture).
DUSEL,19 but possible future experiments are many and are briefly touched upon at the end of this section.
Rock at depth has mechanical characteristics that set it apart from other materials. Rock is preloaded by vertical (gravitational) and lateral (tectonic) forces. It is common for rock to be extensively fractured and folded owing to deformation processes occurring over many millions of years. The rock is a combination of a solid “skeletal” (matrix) component and a system of interconnected fluid-filled pores within the skeleton. The forces are transmitted in part by the solid rock, which sustains different forces (or stresses) in different directions, and in part by the fluids, which develop a pressure in the pores. It is a dynamic system, with the tectonic forces increasing continuously, albeit at a very slow rate, until some part of the system—commonly a fracture or a fault—is overloaded and slip occurs until the system reaches a new equilibrium. Depending on the force-deformation characteristics of the fault and those of the surrounding rock mass, this slip may occur violently, producing an earthquake and seismic waves, or slowly, as a gradual process.
Figure 3.9 illustrates some of these factors. When subjected to a change in load, the rock deforms (as indicated in the lower diagram), eventually reaching a limit when the internal structure starts to disintegrate. In practice, these load changes are usually a result of disturbing the preexisting equilibrium by, for example, the introduction of an excavation (borehole, tunnel, or mine) or the injection or removal of fluid. An excavation changes loads on rock in its vicinity, as does localized fluid flow, for which the excavation serves as a “sink.” The intensity of the load decreases with distance from the “disturbance.” However, elastic energy in the outer region is available to “feed” the disintegration. Depending on the type of rock and its unloading characteristics, the disintegration is sometimes violent, as in the case of earthquakes. The deformation characteristics of the rock will change as the number of fractures increases (Figure 3.9, upper left diagram), and as the duration (Figure 3.9, upper right) of the load increases.20
19 A science review of six of the proposed geosciences/engineering experiments was recently conducted by a subcommittee of the National Science Foundation’s (NSF’s) Deep Underground Science and Engineering Laboratory (NSF. 2011. AC-GEO Subcommittee, Science Review Panel Report, April 1).
20 The diagram indicates that the rock reaches a peak load beyond which it disintegrates progressively. This is the case when the lateral confining pressure is reduced (e.g., by the excavation process). In the interior of a rock mass, the rock remains “elastic” but may still undergo long-term strength changes owing to the thermochemical effects of the fluids circulating through the rock. As the depth and temperature increase, the rock will change progressively from a brittle to a ductile material.
Engineering projects can be kilometers in linear extent. In some cases, such as nuclear waste isolation and carbon sequestration, the performance of the engineered system must be assessed over very long periods (tens to thousands of years or longer). Projects in petroleum engineering now extend to depths on the order of 10 km, where rock temperatures may exceed 350°C and in situ rock stresses are on the order of 250 MPa. Proposed geothermal energy projects involve comparable depths. Part of the rock pressure is supported by fluids circulating through connected pores in the rock. Chemical reactions between the fluids and the rock are complex and not well understood but very important in designing effective long-term heat-exchange systems for geothermal energy production.
The advent of high-capacity computers now allows large-scale features to be modeled and deformation behavior predicted. However, testing the validity of computer predictions requires experiments to be conducted in situ. Attempts have been made to obtain some insights into in situ behavior through the study of rock exposed during events such as mining operations and the construction of dam foundations. However, because the primary purpose of these excavations is not research, these sites are far from optimal and typically do not allow for the careful design and instrumentation of experiments or for the conduct of experiments over extended periods of time. Having extensive underground space available for regulated long-term experiments, such as those proposed for DUSEL, would help to address these shortfalls.
It may seem surprising that basic questions of subsurface geomechanics, geohydrology, geochemistry, and geophysics remain open, given that subsurface studies have been central to geology since its inception. Using coring techniques, geoscientists acquire samples21 from great depths, and they regularly deploy sophisticated geophysical well-logging tools to document a wide range of rock and fluid properties at depth. These measurements are augmented by powerful geophysical seismic methods for imaging undrilled areas. Despite the inherent inaccessibility and complexity of subsurface environments, such tools are adequate for characterizing many rock properties. Thanks to the advent of geophysical logging tools that image or detect fractures at or near the well bore and coring procedures that succeed in fractured rock, log- and core-based methods usually provide some information on fracture attributes. For example, well bores that target large faults can provide rock samples to support physical and chemical investigations of earthquake zones.22
Nevertheless, for assessing other important attributes of rocks in these settings—fractures and faults and their relations to fluids, in situ stress, chemical reactions and microbiology—wellbore-based studies have important limitations. Data are commonly incomplete because meaningful samples of subsurface fracture networks are inherently difficult if not impossible to obtain. The very process of core drilling into the rock induces stress changes that may alter the core and render it unrepresentative of the rock from which it was sampled.
Fractures are commonly too small and widely spaced to be effectively sampled by well bores,23 and, owing to their small size, opening-mode fractures and many
21 Typical cores are cylinders ∼10 cm or less in diameter and of arbitrary length but usually >100 m.
23 W. Narr, D. Schechter, and L.B. Thompson. 2006. Naturally Fractured Reservoir Characterization. Richardson, Tex.: Society of Petroleum Engineers.
faults are invisible to indirect geophysical investigation.24 The widespread distribution of fracture arrays and the small size of individual fractures mean that vital characteristics25 of most subsurface fractures are little known. Well bores that do intersect fractures or faults may not be optimally located within the structure to provide insight into important processes. Because the geoscience data needed for breakthrough insights is inherently three-dimensional over a wide range of scales, small samples at a single point are bound to be inadequate, and they may provide no meaningful data or even misleading data.
Operating mines provide access to the underground but do not usually allow for long-term studies, impeding our understanding of fluid flow and its associated physical and chemical processes. An alternative to subsurface studies is the investigation of rocks that have been buried and then uplifted to the surface. These rocks may preserve evidence of faults and fracture arrays and the by-products of chemical reactions that existed at depth, but key features may be obscured or overprinted during uplift. Moreover, these fossilized records lack the essential dynamic context of tectonic, burial, and thermal loading, fluid flow, and chemical reactions.
Faults are important features that cut across a wide spectrum of the geosciences and have important societal impacts beyond earthquakes. Understanding the nucleation and rupture of earthquakes on faults is a central theme of seismology and rock mechanics, and unraveling the history of slip is a central research area for structural geologists. The dynamic aspects of rock at depth have profound implications for engineering operations that perturb the subsurface, including drilling, hydraulic fracturing, and fluid storage. Further, mass transport and mineral deposition along faults are an important source of metal ores, and understanding these processes is a significant challenge to geochemists and economic geologists. Faults provide preferential pathways for fluids over a wide range of length scales and timescales. An important fraction of Earth’s heat flow is carried by hydrothermal circulation through faults, and the circulation of cooler water through faults is an important hydrogeologic process. Faults can also be a locus for microbial life.
Despite their significance, the study of stresses and strain deep in Earth’s subsurface and their interaction with preexisting or growing fractures; moving or static fluids; and chemical or biochemical reactions is necessarily restricted to sparse point measurements in deep boreholes and deep mines26 that rarely include measurements over time27 and are seldom located in the most informative places
24 The Leading Edge, v. 26, no. 9, September 2007.
25 Such as length, height, and aperture distributions, connectivity, orientations, and patterns of mineral deposits, and variation of these attributes with position and rock type.
26 T. Engelder. 1993. Stress Regimes in the Lithosphere. Princeton, N.J.: Princeton University Press, at 451.
27 NRC. 1996. Rock Fractures and Fluid Flow: Contemporary Understanding and Applications. Washington, D.C.: National Academy Press.
or collected at the most interesting times. An example of a potentially interesting data set that is lacking and that illustrates the last two points would be measurements of all key parameters near the nucleation zone of an earthquake prior to, during, and after the event. The most desirable subsurface experimental setting would therefore enable observations over large volumes (hundreds of cubic meters to cubic kilometers) and for long periods of time (years to decades), providing researchers with the opportunity to target and perhaps even deliberately perturb28 specific key, instrumented areas within a given volume. Such a setting would allow systematic investigations of important interactions and the feedback on them that are suspected to exist among loading, fracture growth, closure or sealing, altered permeability and porosity and structure of the rock and fractures, altered composition of fluids, altered stress, and pressures, directions, and rates of fluid movement. For example, fluid pressure changes can alter a rock’s elastic response to deforming forces, which could influence earthquake frequency and magnitude. As with permeability, variation in rock strain and stress as a function of measurement scale and sample position and size is not well understood because sufficiently large volumes of rock at depth have not been adequately measured or characterized.
Many fracture and fault attributes and their behavior with respect to processes covered by the disciplines of geomechanics, geohydrology, geochemistry, and geophysics could be addressed effectively in an underground laboratory. Such facilities would permit measurement of rock structure, fracture attributes, and their variability with size, depth, and distance across the excavation. The scale of observation has to be large enough to allow for the collection of meaningful evidence for coupled mechanical, geochemical, and microbiological processes occurring within the subsurface environment. These processes can play a vital role in how effectively fluids are stored in or transmitted through rock and how faults and opening-mode fractures behave over time spans of hours to decades to millennia and, thus, how they may respond to human intervention. The ability to investigate the rock volume after tracer tests or imaging may lead to improved techniques that can be applied elsewhere.
Access to the large rock volume would permit testing the hypothesis that Earth’s crust is critically stressed, that is, that some part of it is always close to failure by fracturing. Significant rock permeability at depth may occur along critically stressed fractures. Mapping fractures, stress, and fluid flow within the subsurface will help geoscientists confirm or extend theories about the mechanics of Earth deformation.
Any disturbance of the subsurface, be it “natural”—for instance by volcanic or seismic activity—or as a result of engineering, will change the preexisting
28 Active experiments, such as placing heaters in the rock mass, might improve our understanding of how coupled mechanical, chemical, and fluid-flow behavior responds to environmental changes.
equilibrium, sometimes dramatically, as in the case of surface tremors induced by fluid injection at depth.
The process of coring to obtain rock specimens from these subsurface environments can change their properties to an unknown extent. In some cases, the behavior of cores, the primary basis for much of university laboratory rock mechanics research to date, may not be representative of rock’s behavior in situ.
Microorganisms have inhabited Earth for 3.5 billion years and hence have had a much longer time to adapt to life in a mineral world than more recent microorganisms have had to adapt to life with higher organisms. Over that long time, some evolved mechanisms to capture energy from virtually every energy-yielding chemical redox couple. The more common inorganic reductants supporting microbial growth are Fe(II), S2−, H2, and NH4+, while Fe(III), NO3−, and SO42−, as well as O2, are common oxidants. Other elements that are involved include, but are not limited to, Se, As, P, Mn, Cr, Co, U, and Zn; these can also serve as electron donors and/or acceptors supporting microbial growth. Also, because of their long history, microbes are widely dispersed and serve as inocula in fissures within rocky materials that become available as life-sustaining niches. Besides their diversity in capturing energy, these microbes have also evolved adaptations to extreme conditions, such as long-term starvation, high and low temperatures, acidity and alkalinity, high pressures, and desiccation, to name the most relevant. In summary, most mineral environments with moisture and temperatures below 120°C can be expected to contain some microbial life.
A number of recent high-profile studies from deep ocean drilling programs have expanded our knowledge of the physiological types, extent, activities, and diversity of the bacteria and Archaea that grow at depth.29 This has enhanced our knowledge of their biogeochemical role and of the extent of the biosphere. Some information on terrestrial microbes at depth has come from microbial studies in deep mines and oil drilling wells. The former have confirmed microbes living at depth; in one case, the genome of a novel bacterium from a rock fracture 2.8 km deep was sequenced.30 The studies of microbes in oil wells have focused on the microbial role in well corrosion and oil field “souring.” All such studies establish substantial and diverse microbial life at depth, but detailed information on the
29 B.B. Jorgensen and S. D’Hondt. 2006. A starving majority deep beneath the seafloor. Science 314: 932-934; J.S. Lipp, Y. Morono, F. Inagaki, and K.U. Hinrichs. 2008. Significant contributions of Archaea to extant biomass in marine subsurface sediments. Nature 454: 991-994.
30 D. Chivian, E.L. Brodie, E. J. Alm, et al. 2008. Environmental genomics reveals a single-species ecosystem deep within Earth. Science 322: 275-278.
indigenous microbes and their biogeochemical roles in rock environments is comparatively sparse.
One consistently high-profile area for biological advance is the discovery of new microbes that expand our knowledge of the strategies and limits of life,31 such as microbes that harvest new sources of energy, live at even higher temperatures or pressures, or exhibit new biochemical reactions, some of which may have biotechnological or pharmaceutical value. The discovery of these organisms often occurs in samples from unusual habitats where unique biology may have evolved. A facility for the described physics experiments would necessarily access subsurface material that could reasonably harbor unique biology, and the samples made available to the biological research community should be free from external chemical and microbial contamination. Important questions about the energy sources and energy efficiency of these organisms, about the evolution of small populations and horizontal gene exchange, and about mechanisms of mineral weathering could be addressed using the access enabled by the DUSEL physics facility. Other subsurface research facilities being put to use for studies of microbes are the Ice Core Lab (http://nicl.usgs.gov) and the Integrated Ocean Drilling Program (http://www.iodp.org).
Sites dedicated to cross-disciplinary research in the biosciences and geosciences would be valuable. For example, phenomena in which faults play a role are closely interconnected, but the disciplines that address them are in many cases not closely connected, nor do they enjoy much professional interaction. Fluid transport and chemical reactions contribute to microbial life, and the microbes probably facilitate chemical reactions. Chemical reactions alter permeability and affect fluid pressures, which in turn may influence fluid flow and mechanical stability. Damage and flow conduits formed during an earthquake rupture can be healed, and resulting changes in permeability can be sealed, by chemical reactions, thereby influencing subsequent fault slip. The rupture process itself may release hydrogen, carbon, or other compounds that go on to take part in chemical and biochemical reactions.
All existing and proposed underground facilities have important limitations, especially for subsurface engineering and geoscience research. Many of the most interesting processes occur at greater depths and higher temperatures than any of the proposed underground facilities can reach. Moreover, all of the processes and interactions described earlier are sensitive to characteristics such as rock type, tectonic and structural setting, and rock history. DUSEL is in a specific geological setting,
31 See H.N. Schulz, T. Brinkhoff, T.G. Ferdelman, H. Hernandez-Marine, A. Teske, and B.B. Jorgensen. 1999. Dense populations of a giant sulfur bacterium in Namibian shelf sediments. Science 284: 493-495.
principally metamorphic rock in a tectonically quiet environment. It is, however, sedimentary rock (carbonates, sandstones, shales, etc.) that is the focus of a great deal of research because of its importance to oil and gas discovery and extraction, as well as the potential benefits associated with CO2 sequestration. Moreover, although many generic experiments can be conducted at Homestake, engineering applications may need to be demonstrated in specific rock formations. Yet developing the tools to overcome scale and sampling challenges at an underground facility would have widespread impact.
This limitation applies to any single underground research site. Thus, research in subsurface engineering, geosciences, and biosciences (EGB) would benefit from international cooperation and a strategy of several subsurface sites. Owing to important variations in rock types, the investigation of loading conditions, temperature, and fluid regime at many sites is likely to yield the most valuable insights. Some of these sites need not be extensive long-term underground laboratories, since much information can be gained from targeted drilling.
Several broad classes of EGB experiments have been described to date. All of these are intended to be accomplished over the first decade of DUSEL operations (i.e., 2014-2024):
1. Scale effects and coupled thermohydromechanical processes;
2. Subsurface imaging (“transparent Earth”);
3. Modeling the mechanics of induced fracturing and fault slip; and
4. Biological studies.
Scale Effects and Coupled Processes
Much of the research intended in this category was stimulated by the proposal to construct the large water Cherenkov cavity (∼60 m span) at a depth of 1.5 km (4,850 ft). Such a cavity at this depth is unprecedented and would provide a unique opportunity for engineering research on the effects of (1) scale (both size and time) on rock deformation and (2) the preconditioning of rock mass (by blasting) to facilitate excavation and minimize damage to the final rock periphery. This experiment will require “halo” tunnels around the large cavity for instrumentation and monitoring. The dynamic response of various support systems installed in the halo tunnels could also be monitored during blasts as part of excavating the large cavern. Observation and characterization of fracture systems in the rock mass will be carried out in drifts (including in the halo tunnels) developed in preparation for the large cavern excavation.
The complex coupled nature of thermal-hydraulic-mechanical-chemical (THMC) effects in subsurface systems is illustrated in Figure 3.10. The DUSEL experiment proposes to study the role of biological effects in such coupled processes, which may be significant in certain underground environments. Among the tests under consideration is a heated block test for studying THMC plus biological processes. The study proposes to heat a 50 m × 40 m × 40 m block of rock by an array of electrical heaters to a maximum temperature between 150°C and 300°C. The block will be delineated by two parallel drifts approximately 45 m apart (center to center) and a cross drift. Instrumentation will be deployed along the three drifts. Researchers will then study links between microbial activities, nutrient supply, biochemical reactions, and temperature. It is anticipated that this project will require approximately a decade to complete the heating and cooling phases.
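To see why the heating and cooling phases span roughly a decade, a back-of-envelope conduction estimate is instructive. The sketch below is illustrative only and is not part of the experiment design; the thermal diffusivity is an assumed textbook value for crystalline rock, and the length scale is half the ~40 m block dimension.

```python
# Back-of-envelope check that thermally cycling a ~40 m rock block takes on
# the order of a decade. Values are assumed, not site-specific measurements.

ALPHA = 1.0e-6            # thermal diffusivity of rock, m^2/s (assumed typical value)
HALF_WIDTH = 20.0         # m, half of the ~40 m block dimension
SECONDS_PER_YEAR = 3.156e7

def conduction_timescale_years(length_m: float, alpha: float = ALPHA) -> float:
    """Characteristic conduction time t ~ L^2 / alpha, converted to years."""
    return length_m**2 / alpha / SECONDS_PER_YEAR

print(f"{conduction_timescale_years(HALF_WIDTH):.0f} years")  # on the order of a decade
```

The estimate lands near the decade-long duration anticipated for the experiment, which is consistent with conduction-dominated heat transport in intact rock.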
Subsurface Imaging (Transparent Earth)
The opacity of rock is a major impediment in subsurface engineering. The problems range from the inability to “see” even a few tens of meters ahead of a tunnel boring machine to the difficulty of precisely locating “producing horizons” at depths of several kilometers, and they arise both in petroleum extraction and in mineral exploration for ore deposits. Experiments to explore the potential of a variety of geophysical techniques to make the rock more “transparent” are planned at DUSEL. Faculty from several universities are involved as a collaborative team, led by Steve Glaser at the University of California at Berkeley. Experiments include broadband and long-wavelength seismic arrays, passive electrical arrays, and electromechanical passive imaging. A rock block between two drifts 50 to 75 m apart is envisaged.
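A toy calculation illustrates the kind of signal such crosswell experiments seek. The geometry and velocities below are assumed for illustration only: a low-velocity fractured zone lying between two drifts produces a small but measurable delay in the first-arrival travel time of a seismic pulse.

```python
# Illustrative straight-ray travel-time calculation between two drifts.
# Velocities and the fractured-zone thickness are assumed values.

def travel_time_ms(segments):
    """segments: list of (length_m, velocity_m_s) along a straight ray.
    Returns total travel time in milliseconds."""
    return sum(length / velocity for length, velocity in segments) * 1e3

intact = travel_time_ms([(60.0, 5500.0)])  # homogeneous hard rock, ~60 m crossing
fractured = travel_time_ms([(25.0, 5500.0),   # intact rock
                            (10.0, 4000.0),   # slow, fractured zone
                            (25.0, 5500.0)])  # intact rock

print(f"delay from fractured zone: {fractured - intact:.3f} ms")
```

Delays of a fraction of a millisecond are resolvable with modern instrumentation, which is what makes tomographic imaging of fracture zones between closely spaced drifts feasible.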
One problem limiting the wider applicability of imaging tests in hard-rock underground sites such as Homestake is that the geology is either highly complex (folded and faulted metamorphic rocks at Homestake) or markedly different from that in areas generally of interest to geoscientists (homogeneous granitic rocks versus sedimentary rocks). This might lessen the usefulness of imaging experiment results in these facilities for clarifying questions of widespread interest in the geosciences. Verification test results could be ambiguous or techniques developed at Homestake might not work elsewhere.
Mechanics of Induced Fracturing and Fault-Slip Modeling
Induced fracturing is a major element of much of subsurface engineering. Perhaps the most common example is massive hydraulic fracturing, which is used extensively in the oil and gas industry. Recent applications in the United States to stimulate the extraction of geothermal energy and natural gas by fracturing have led, in some instances, to seismic tremors and to proposed legislation that would prohibit the use of fracturing. Other important methods of inducing fracturing include the use of explosives and of rock-cutting tools in tunnel boring machines, as well as techniques to increase drilling rates in deep borehole drilling.32 A study has been proposed to conduct hydraulic fracturing tests in a rock block similar in dimension to the heated block test discussed in the preceding section. Instrumentation would be installed to detect microseismic activity and velocity changes during fracture propagation.
When a fault can no longer sustain the forces applied to it, dynamic slip may take place, resulting in earthquakes. A slip can occur from an increase in tectonic
32 The rate of drilling (including rock removal) and the time spent in reaching the producing horizon directly affect the economics of offshore drilling. Energy costs for drilling are a small component of the overall costs of maintaining an offshore rig.
loading, from a decrease in fault slip resistance owing to hydrological, chemical, thermal, or other changes along the fault surface and in nearby rock, or from some combination of these factors. The spatial and temporal distribution of rock deformation leading to fault slip (and earthquakes) is inadequately known, as are the processes that lead to and accompany progressive rupture. Many preexisting faults are believed to be active in today’s stress field (critically stressed faults).33 If forces on faults are in a state of critical equilibrium, the implications for engineering operations that disturb this equilibrium are profound.34 A deep underground laboratory could allow measurements of rock strain as a function of time and position near faults and in the rock mass. These data would help explain the influence of geology and human activity on strain and stress distribution in rock, allow observation of how deformation accumulates near faults and fractures, and provide insights into how laboratory and underground laboratory measurements of fault slip processes can be scaled to larger events. The understanding gained from this research could be a step toward reliable understanding of earthquake rupture processes and precursory phenomena.
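The notion of critically stressed faults can be made concrete with the standard Coulomb failure criterion, a textbook relation rather than a formula from the DUSEL proposal: slip becomes possible when shear stress exceeds the frictional resistance, which falls as pore pressure rises. All of the numbers below are purely illustrative.

```python
# Minimal sketch of the Coulomb failure criterion used in discussions of
# critically stressed faults. Slip occurs when shear stress tau exceeds
# mu * (normal stress - pore pressure). All values are illustrative.

def coulomb_stress_margin(tau_mpa, sigma_n_mpa, pore_p_mpa, mu=0.6):
    """Return the stress margin in MPa; a value <= 0 means the fault can slip."""
    return mu * (sigma_n_mpa - pore_p_mpa) - tau_mpa

# A fault that is stable at ambient pore pressure can be pushed to failure
# by raising pore pressure (e.g., fluid injection), with no change in loading.
print(coulomb_stress_margin(tau_mpa=28.0, sigma_n_mpa=60.0, pore_p_mpa=10.0))  # positive: stable
print(coulomb_stress_margin(tau_mpa=28.0, sigma_n_mpa=60.0, pore_p_mpa=15.0))  # negative: can slip
```

This is the mechanism behind footnote 34: large-scale fluid injection raises pore pressure and can move a critically stressed fault across the failure threshold without any change in tectonic loading.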
At least two potential issues with these fault slip experiments should be noted. First, if experiments succeed in activating an instrumented fault, the ramifications for nearby physics experiments (and physicists) would need to be considered.35 This possibility might necessitate conducting the geoscience experiments during site construction, although the experiments would then operate on shorter than optimal timescales. The second potential problem is that, because all of the proposed sites are in relatively tectonically quiescent areas,36 the experimental perturbations may be insufficient to cause an interesting response (i.e., there will be no earthquake). Selection of a site in a more seismically prone area or the application of an unfeasibly large perturbation might be needed before sufficient slippage takes place.
Biological Studies
The proposed biology experiments fall into two categories: (1) those that seek to define and quantify the microbiological role in the rock weathering processes,
33 C.A. Barton, M.D. Zoback, and D. Moos. 1995. Fluid flow along potentially active faults in crystalline rock. Geology 23(8): 683.
34 For example, large-scale fluid injection for CO2 sequestration or hydraulic fracture water disposal could lead to widespread seismicity in otherwise tectonically quiescent areas.
35 Efforts to prepare the Homestake mine for the physics experiments have included testing the structural capability of the surrounding rock and stabilizing and rehabilitating the space where needed. K.T. Lesko, Lawrence Berkeley National Laboratory, “Deep Underground Science and Engineering Laboratory (DUSEL) Project Overview,” Presentation to the committee on December 14, 2010, p.11.
including their contribution to the coupled THMC, and (2) those of a discovery nature that explore unknown aspects of biology provided by access to a unique habitat.
While the general capacities of microbes in rock weathering are known, their activities under field conditions—such as their natural rates, the environmental controls on those rates, their biochemical mechanisms, and often the types of microbes themselves—are unknown. This information is important for quantifying these processes, modeling them accurately, scaling them, and integrating them into coupled-process studies. This gap in information is due largely to the lack of field laboratories at depth that would allow in situ studies under natural or nearly natural conditions. The reproduction of these natural conditions in a distant laboratory is currently impossible. While it may be possible to obtain the rock material, it is not possible to reproduce the natural water chemistry, including its natural redox state and flow conditions, or the indigenous microbial populations. Furthermore, the contamination of samples with external microorganisms during drilling and sample processing becomes a much greater problem in off-site studies. An additional advantage of field laboratory studies is that the site hydrology and geochemistry information can be directly integrated with the biological information. Defining and quantifying the microbial role in the coupled processes is the science area where important new biological knowledge should be reliably obtained in a DUSEL-like facility. The experiments planned for this area are well integrated with the nonbiological components and would greatly benefit from the resulting data synergy.
Only one other underground microbiology laboratory exists in the world, the Äspö laboratory in Sweden. While it has proven the feasibility and value of such a lab at depth, it is small, fully used, only 400 m deep, and embedded in homogeneous granite, which offers only limited conditions for microbial study.
The second category of experiments would expand our knowledge of biology by (1) defining the depth of the biosphere and (2) determining whether some unique biology exists in terms of energy sources, physiology, and evolutionary outcomes, including life as we do not know it. In the first effort, the proposal is to drill deeper into the crust to determine where life ceases to exist—perhaps at the 120°C isotherm? The drilling cost would be reduced substantially since drilling could start from the existing excavations at 7,400 ft, the deepest directly accessible level in North America. This experiment would better define Earth’s biosphere and biogeochemical inventory. However, while it would help fill gaps in our knowledge of the terrestrial biosphere, it would be costly relative to its potential science value. It is, after all, limited to a single location and the microbial densities are likely to be low, both of which are limitations compared to the proven value of ocean sediment studies.
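A rough estimate shows why starting the borehole from the existing 7,400 ft level would substantially reduce drilling costs. The surface temperature and geothermal gradient below are generic assumed values, not Homestake measurements, so the depths are order-of-magnitude only.

```python
# Rough illustration (assumed values, not from the report) of the drilling
# saved by starting a biosphere-depth borehole from the 7,400 ft level:
# estimate the depth of the 120 C isotherm from a generic geothermal gradient.

SURFACE_T_C = 10.0        # assumed mean surface temperature, C
GRADIENT_C_PER_KM = 25.0  # assumed continental geothermal gradient, C/km
LEVEL_7400FT_KM = 7400 * 0.3048 / 1000.0  # existing excavation depth, ~2.26 km

def isotherm_depth_km(target_t_c):
    """Depth at which the target temperature is reached, in km."""
    return (target_t_c - SURFACE_T_C) / GRADIENT_C_PER_KM

total = isotherm_depth_km(120.0)      # depth of the 120 C isotherm from the surface
remaining = total - LEVEL_7400FT_KM   # new drilling needed from the deep level
print(f"isotherm at ~{total:.1f} km; ~{remaining:.1f} km below the 7,400 ft level")
```

Under these assumptions, starting from the deep level roughly halves the hole that must be drilled, which is the sense in which the existing excavations reduce the cost of probing the lower limit of the biosphere.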
The experiments for detecting novel biology are both intriguing and risky. The environment should select for novel energy specialists—“dark life,” different
evolutionary outcomes, and isolation from horizontal gene exchange with surface organisms—to name a few potential high-profile outcomes. But, the study is risky because the microbial density would probably be very low in this geologic material, conditions for microbial isolation might be difficult to determine, and the microbes might not have been isolated from the surface life for long enough to exhibit population or genetic differences. In evaluating the merits of this experiment, one must examine the extra cost for the biological objective in relation to the probable value of its results. The committee judges that undertaken alone, the experiment seems too costly. However, if the field lab and the THMC experiments are undertaken as well, then the extra cost of obtaining some biological samples is significantly decreased and at least some sample collecting would be warranted for these risky but potentially high-payoff experiments.
The experiments proposed in the DUSEL program are only a fraction of the possible nonphysics studies that might take advantage of the existence of an underground research facility. Here, the committee presents a sampling of other promising lines of experimental inquiry. However, as noted in the preceding section, the value of the underground space for these experiments might depend on the types of rock present.
Fracture Network Engineering
The development and control of fracture networks at depth by remote stimulation of a rock mass is central to many aspects of subsurface engineering. Currently, although hydraulic fracturing is a major component of oil and natural gas development, it is still in some respects more art than science. It is a technology that is being applied increasingly to the development of other resources. In enhanced geothermal systems, for example, a fracture network is created at a depth on the order of 6 km or more, where the rock temperature is approximately 300°C or higher. Water is circulated through the fracture system to extract heat. Cooling causes the rock to contract; fracture apertures change, and hence so does the pattern of circulation. Downhole microseismic networks can monitor fracture development. It has been proposed to develop a model of the preexisting fracture network and a stimulation plan, including predicted seismicity. The predicted activity can be compared with that observed and the stimulation procedure modified to improve the overall “heat exchange system.” Such ambitious schemes will need to be tested, modified, and made robust before they can be applied successfully. Fracture network engineering (FNE) research experiments could be an excellent extension of, or supplement to, the THMC-plus-biology (THMCB) experiment proposed above.
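The “heat exchange system” idea can be quantified with the basic heat-balance relation for the circulating water. The flow rate and temperatures below are illustrative assumptions, not figures from any particular enhanced geothermal project.

```python
# Simple sketch of the thermal power extracted by water circulating through
# an engineered fracture network. All numbers are illustrative assumptions.

C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def thermal_power_mw(flow_kg_s, t_out_c, t_in_c):
    """Thermal power = mass flow * specific heat * temperature rise, in MW."""
    return flow_kg_s * C_WATER * (t_out_c - t_in_c) / 1e6

# 50 kg/s of water heated from 40 C to 180 C by ~300 C rock at depth:
print(f"{thermal_power_mw(50.0, 180.0, 40.0):.1f} MW thermal")
```

Because the extracted power scales directly with flow rate and temperature rise, the cooling-driven changes in fracture apertures and circulation paths noted above feed straight back into plant output, which is why controlling the fracture network matters.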
Other Potential Future Experiments
• Large-scale rock mechanics experiments, including induced brittle failure on new or preexisting natural faults through controlled stress relaxation (e.g., with slow release of hydraulic support structures) or other means.
• Seismic experiments to detect and monitor hydraulic fracture propagation and fault rupture with closely spaced monitoring devices and subsequent intensive sampling or mining. An advantage of a dedicated site for such tests is freedom from the noise of active mining operations or of nearby tunnels in use (traffic, water flow).
• Hydrogeologic experiments, including effects of microbes on flow properties. Such tests could include controlled flooding of deeper mine sections.
• Experiments relevant to nuclear and chemical waste disposal—for example, radioactive tracer studies.
• The underground access provided by DUSEL is an opportunity for determining some “ground truths” and improving three-dimensional seismic and other surface-based geophysical exploration techniques by comparing geophysical predictions with actual observations at depth. Finally, the increasing variety of engineering applications of the underground—for example, nuclear and hazardous waste isolation, CO2 sequestration, the development of domestic natural gas resources, and geothermal energy37—will stimulate a variety of engineering studies for which DUSEL will be well suited.
Conclusion: The ability to perform long-term experiments in the regulated environment of an underground research facility could enable a paradigm shift in research in subsurface engineering and would allow other valuable experiments in the geosciences and biosciences.
37 The events in Japan resulting from the devastating earthquake in 2011 have reopened discussion of underground location of nuclear reactors to avoid the possibility of releases of dangerous concentrations of radionuclides into the atmosphere.