Interactions with and Connections to Other Branches of Physics and Technology
The questions addressed by elementary-particle physics have strong links to those of other disciplines. These connections are both intellectual and technological.
Today, the various fields of physics appear ever more specialized, and this is largely true. Nevertheless, perhaps somewhat paradoxically, the connections between fields continue to strengthen. This chapter indicates some of the strong areas of overlapping interest between elementary-particle physics and the disciplines of cosmology, astrophysics, nuclear physics, atomic physics, condensed-matter physics, fluid dynamics, and mathematical and computational physics.
Unlike accelerator-based experiments, which are limited by available beam lines and interaction regions, the universe is an ever open laboratory. The possibilities for study, limited only by the imagination, are recognized by many particle physicists who turn to cosmology (and astrophysics, see below) for investigations of high-energy phenomena. Particle physicists with their expertise in large-scale computing and detector technology (low-noise and micropower analog and digital electronics, and large-scale and cost-effective detectors, to cite two examples) are well suited to contribute to such experiments probing the frontier of physics.
Cosmological observations have established that the known universe started as a small patch of space filled with radiation that subsequently expanded and cooled. In the earliest stages of its evolution, the universe contained extremely energetic particles, and it is likely that some important relics that could have been produced only at these early times remain today. Indeed, astronomical and astrophysical studies may very well be a way to study some aspects of particle physics beyond the Standard Model.
Particle physics interacts with cosmology on three important topics: dark matter, structure formation, and baryogenesis and nucleosynthesis.
There is very strong evidence for a preponderance of dark matter in the universe, and there are strong arguments that it cannot be ordinary matter. This may be seen as one of the most important scientific discoveries of this century.
The amounts of various types of matter in the universe are customarily expressed as fractions of what is called the critical density. The critical density is that value above which the universe will eventually contract and below which it will continue to expand forever. The actual total mass density of our universe is therefore a quantity of great importance.
Observations of the dynamics of matter in the universe provide important information. The radial dependence of the rotational velocities of stars in spiral galaxies indicates that galaxies consist of visible stars concentrated toward the center, surrounded by massive halos of invisible matter. Radio observations of the rotational velocities of neutral hydrogen gas clouds show that these halos extend well beyond the edges of the galaxies as defined by their visible stars. From these observations, and from studies of gravitationally bound clusters of galaxies, it is known that the mean mass density of the universe is at least 20% of the critical density and could be consistent with being equal to the critical density. In addition, there are indirect but compelling theoretical arguments that our universe has almost exactly the critical density; the question, however, remains open.
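Both quantities discussed here, the critical density and the halo mass implied by a flat rotation curve, follow from short formulas. A back-of-the-envelope sketch in Python (the Hubble constant and rotation-curve numbers below are illustrative assumptions, not values from this chapter):

```python
import math

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2

# Critical density: rho_c = 3 * H0^2 / (8 * pi * G).
# H0 = 70 km/s/Mpc is an assumed, round value for the Hubble constant.
H0 = 70 * 1000 / 3.086e22   # km/s/Mpc converted to 1/s
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.1e} kg/m^3")   # of order 1e-26 kg/m^3

# Mass enclosed by a circular orbit of radius r at speed v:
#   M(r) = v^2 * r / G.
# A flat rotation curve (v roughly constant far beyond the visible disk)
# means M(r) keeps growing with r -- the evidence for a massive dark halo.
v = 220e3                   # m/s, typical galactic rotation speed
r = 50 * 3.086e19           # 50 kiloparsecs, in meters
M = v**2 * r / G
print(f"enclosed mass ~ {M / 1.989e30:.1e} solar masses")
```

With these inputs the enclosed mass comes out near 6 x 10^11 solar masses, far more than the visible stars account for.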
Now, what is the composition of the matter in the universe? One can readily add up the luminous matter (stars); it is found to be less than 1% of the critical density. Thus there is at least 20 times more mass in the universe that is nonluminous (i.e., "dark").
The question arises as to whether this matter is ordinary (baryonic) or possibly exotic (nonbaryonic). The amount of ordinary matter in the universe is, however, constrained by the big bang model to be between about 1% and 10% of the critical density. Thus, there is a major component of baryonic matter that cannot be accounted for in observations, and if the universe indeed has the critical density, then it is dominated by nonbaryonic dark matter. Even if not, there
appears to be a significant component of the dark matter that is exotic. The prediction of baryonic density needs to be taken quite seriously, since the "big bang" model successfully accounts for the relative abundances of light nuclei and for the number of light neutrinos.
Two candidates for baryonic dark matter are hot, diffuse gas in galactic clusters (and possibly between galaxies and in their halos) and massive compact halo objects (MACHOs), such as brown dwarfs, white dwarfs, and neutron stars. There is strong evidence for both, and together they may very well make up the deficit in baryonic dark matter.
The nonbaryonic components are of primary interest to particle physicists. Elementary-particle physics provides unique insight into the nature of this component of dark matter. Candidates include neutrinos with mass, supersymmetric particles, axions, and magnetic monopoles. These candidates are primeval, nonbaryonic, and interact weakly with ordinary matter. They are treated here in turn.
Neutrinos are one candidate for nonbaryonic dark matter. Unlike other candidates, they are known to exist; but to be a significant component of dark matter, neutrinos would have to have mass.
An upper limit to the mass density of the universe limits the sum of the masses of the three neutrino types: If the limit were violated, our universe would be contracting rather than expanding. Several experimental approaches are sensitive to nonzero neutrino masses. These include the study of neutrino oscillations (or mixing) in experiments at accelerators, measurements of the flux of electron neutrinos from our Sun relative to theoretical predictions, and studies of the ratios of neutrino flavors observed from the products of cosmic-ray interactions in Earth's atmosphere. The extensive experimentation in this area is discussed in Chapters 4 and 5.
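The cosmological limit works through a standard relation between the summed neutrino masses and their contribution to the mass density; the coefficient of roughly 94 eV and the Hubble parameter used below are conventional textbook values, assumed here for illustration:

```python
# Relic big-bang neutrinos contribute a mass density
#   Omega_nu * h^2 = (sum of neutrino masses in eV) / ~94,
# where h is the Hubble constant in units of 100 km/s/Mpc.
# Requiring that neutrinos not exceed the critical density
# (Omega_nu <= 1) bounds the summed mass of the three types.
h = 0.7                              # assumed Hubble parameter
sum_m_max = 1.0 * h**2 * 94.0        # eV
print(f"sum of neutrino masses < ~{sum_m_max:.0f} eV")   # roughly 46 eV
```

A bound of a few tens of eV is far tighter than direct laboratory limits on the muon and tau neutrino masses, which is why cosmology constrains neutrino dark matter so effectively.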
Weakly Interacting Massive Particles
Another class of candidates for dark matter comprises supersymmetric and other weakly interacting massive particles (WIMPs). The lightest supersymmetric particle is likely to be stable, making it an interesting candidate for dark matter, particularly if its mass lies in the range from about 10 GeV (1 GeV = 10^9 electron volts) to 1 TeV (1 TeV = 10^12 eV). In this range, WIMPs would be very slowly moving: they are cold dark matter candidates. They would be found clustered in our own galaxy and could be detected either directly (from interactions with laboratory equipment) or indirectly (from their annihilation products in the galactic halo).
WIMP signals are expected to be small, but detectors could take advantage of the unique signatures that WIMPs exhibit. For example, since WIMPs are
orbiting in the galaxy, their velocity relative to the detector would exhibit predictable temporal variation. The rate of collision and the strength of the WIMP recoil would reflect this variation. The strength of the signal would also depend in a predictable way on the mass of the detector nucleus, the mass of the WIMP, and the relative velocity between them.
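The scale of the expected signal follows from elastic two-body kinematics. A sketch with assumed numbers (the WIMP mass, target nucleus, and halo speed are illustrative, not values from this chapter):

```python
# Maximum nuclear recoil energy in an elastic WIMP-nucleus collision:
#   E_max = 2 * mu^2 * v^2 / M_N,   mu = m_chi * M_N / (m_chi + M_N)
# (masses in eV/c^2, velocity as a fraction of c, energy in eV).
c = 3.0e8                       # speed of light, m/s
m_chi = 100e9                   # assumed WIMP mass: 100 GeV
M_N = 68e9                      # germanium nucleus, ~68 GeV
v = 230e3 / c                   # typical halo speed, ~230 km/s
mu = m_chi * M_N / (m_chi + M_N)
E_max = 2 * mu**2 * v**2 / M_N  # eV
print(f"maximum recoil energy ~ {E_max / 1e3:.0f} keV")
```

Recoils of only tens of keV result, which is why the detectors described next must be so sensitive.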
Three different types of detectors are under development for the detection of WIMPs. Solid-state detectors, based on large germanium and silicon ionization sensors, are used to detect WIMPs that collide with nuclei in the detector; the current limits on the WIMP flux are set by these detectors. Sodium iodide and other high-atomic-mass scintillating materials show promise for increasing a detector's sensitivity to higher-mass WIMPs. WIMP collisions are also being sought with phonon detectors. The technology of detecting phonons (quantized vibrational excitations in solids) is well developed in condensed-matter physics. When a WIMP collides with an atom in a crystal, such excitations are produced, and the energy imparted to the crystal can be observed as a small rise in temperature, the size of which depends on the heat capacity of the material. At very low temperature the heat capacity can be made very small, giving the best sensitivity of any type of direct-search detector. Steady and significant progress is being made in the development of high-quality crystals and new temperature sensors.
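The reason operating cold helps is the Debye law: a crystal's heat capacity falls as T^3, so a tiny deposited energy produces a measurable temperature rise. A rough sketch (the absorber mass, operating temperature, and recoil energy are assumed, illustrative numbers):

```python
import math

k_B = 1.381e-23          # Boltzmann constant, J/K
N_A = 6.022e23           # Avogadro's number

# Debye heat capacity at low temperature, per mole of atoms:
#   C = (12 * pi^4 / 5) * R * (T / T_D)^3
T_D = 645.0              # Debye temperature of silicon, K
moles = 0.100 / 0.028    # a 100 g silicon absorber (~28 g/mol)
T = 0.020                # assumed operating temperature: 20 mK
C = (12 * math.pi**4 / 5) * moles * N_A * k_B * (T / T_D)**3

# Temperature rise from an assumed 10 keV recoil: dT = E / C.
E = 10e3 * 1.602e-19     # 10 keV in joules
dT = E / C
print(f"C ~ {C:.1e} J/K, dT ~ {dT * 1e6:.1f} microkelvin")
```

A rise of several microkelvin is small but within reach of the temperature sensors under development; at liquid-helium temperatures the same deposit would be diluted by a heat capacity many orders of magnitude larger.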
Finally, indirect detection techniques are also being used. These involve searching for distinctive annihilation products of WIMPs. For example, WIMPs will be gravitationally captured by the Sun and annihilate within it, producing among other things high-energy neutrinos that can be detected in large underground detectors on Earth. WIMPs in the halo of our galaxy that annihilate can produce high-energy positrons and gamma rays or low-energy antiprotons that can be detected by instruments placed above the atmosphere. These indirect methods complement direct detection efforts in searching for particle dark matter within our galaxy.
Axion searches, discussed in Chapter 5, exemplify the advantages of cross-disciplinary research. Axions are predicted to have mass, to interact very weakly with matter, and to have been produced at the time hadrons were produced in the big bang. Like WIMPs, axions as cold dark matter would play an important role in galaxy formation, causing large-scale structures to evolve to their present appearance, and would be expected to cluster around our galaxy. The mass of the axion (if it exists) is now known to lie between one-millionth and one-thousandth of an electron volt: if it were any lighter, axions would contribute enough mass density to be inconsistent with the age of the universe, whereas if it were heavier, some stars would shine for too short a time. Axions would be produced in the centers of hot stars and act as cooling agents, speeding up stellar evolution.
Current searches are based on the fact that axions can be detected by their decay into two photons. The actual detection method uses a high-quality radiofrequency cavity to detect the presence of decay photons. Important advances in magnet and low-noise amplifier technology are expected spin-offs from such searches.
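The mass window quoted above is what makes such cavities the right tool: a photon carrying the axion's rest energy lands in the radiofrequency-to-microwave band. A quick check using only Planck's constant:

```python
# Photon frequency corresponding to the axion rest energy:
#   nu = m_a * c^2 / h
h_eV = 4.136e-15                # Planck's constant, eV * s
for m_a in (1e-6, 1e-3):        # the quoted axion mass window, eV
    nu = m_a / h_eV             # hertz
    print(f"m_a = {m_a:g} eV  ->  nu ~ {nu / 1e9:.1f} GHz")
```

The window spans roughly 0.2 to 240 GHz, which is why high-quality tunable cavities and low-noise microwave amplifiers are central to these searches.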
A fourth candidate for nonbaryonic dark matter is the magnetic monopole. In string theories and in grand unified theories, magnetic monopoles are predicted to have been produced at the origin of the universe, but their density in the present universe must be low. They would be gravitationally bound to our galaxy and, if there are enough of them, would be observable. A large detector, the Monopole, Astrophysics, and Cosmic-ray Observatory (MACRO), has been built in the Gran Sasso tunnel in Italy to observe magnetic monopoles orbiting in our galaxy. Within a few years of operation, MACRO should reach an interesting level of sensitivity.
Astronomers have found a remarkable pattern of structure in the distribution of galaxies. Galaxies reside in large concentrations connected by thin, filamentary structures, surrounding large, quasi-spherical voids. These voids have typical scales of 200 million light-years. This structural complexity is contrasted with the smoothness of the very early universe observed in the cosmic background radiation, where only tiny fluctuations of a part in 10^5 are seen across the entire horizon. So the question is, How did the universe evolve from such a smooth, featureless condition to the current one?
Cosmologists turn to particle physics for help in understanding the two basic issues underlying the formation and evolution of structure: the nature of dark matter and the origin and nature of the small density inhomogeneities that seeded all the structure. The unification of fundamental particles and forces is key here. As mentioned earlier, particle physics has provided several interesting possibilities for dark matter, and two attractive and very different possibilities for the origin of the density inhomogeneities have been suggested. The first is that these inhomogeneities arose from quantum mechanical fluctuations during a very early, rapid period of expansion known as inflation; the second is that the seeds are topological defects, such as cosmic strings or textures, formed in a very early cosmological phase transition associated with the breakdown of the symmetry between the fundamental forces.
At present, the most promising idea is that dark matter is slowly moving elementary particles (cold dark matter) and that the density inhomogeneities arose during inflation. However, both this idea and the possibility that seed
perturbations were topological defects are being tested by fine angular scale measurements of the anisotropy of cosmic background radiation (the relic microwave radiation left after the big bang), the large-scale distribution of galaxies, and a host of other cosmological measurements. In studying the formation of structure, one is also exploring the unification of particles and forces of nature in a regime not accessible to terrestrial laboratories.
Baryogenesis and Nucleosynthesis
As discussed in Chapter 5, the universe appears to be made of matter, not antimatter (the small admixture of antimatter observed in cosmic rays can be understood as having been recently created through high-energy collisions with matter in the universe). To evolve from an initial condition of equal parts matter and antimatter requires that baryon number be violated. The search for this violation is one motivation for experiments looking for decay of the proton.
Even more sensitive direct searches for antimatter are under way. Detectors on the Compton Gamma-Ray Observatory (CGRO) are measuring spectra to search for the characteristic gamma rays that would result from annihilation radiation where an antimatter galaxy interacts with a nearby matter galaxy. These searches, which can be done only in space with instruments having fine energy resolution, will be extended with new orbiting gamma-ray experiments, such as the Gamma-ray Large Area Space Telescope (GLAST), under consideration by the National Aeronautics and Space Administration (NASA) and the Department of Energy (DOE). If there are antimatter galaxies, then cosmic rays of antimatter should also exist at a small level. The best limits on the antimatter component of extragalactic cosmic rays are now provided by the Balloon-borne Experiment with a Superconducting Solenoidal Spectrometer (BESS); the Alpha Magnetic Spectrometer (AMS) is a new instrument being built to greatly extend this sensitivity. Should these detectors positively identify, for example, an antihelium component in cosmic rays, the implications for cosmology and particle physics would be highly significant.
There is an important interplay between particle physics and big bang nucleosynthesis. The formation of light nuclei such as deuterium, helium-3, helium-4, and lithium-7 depends critically on the properties of neutrinos, such as the number of light neutrino flavors and their mass and mixing parameters. This interface between the fields works in both directions: the known neutrino properties constrain the cosmological calculations, and the astrophysical measurements (such as the abundances of the light nuclei) constrain the unknown particle physics parameters.
Particle physics has contributed broadly to the issues of astrophysics. Three specific areas that have been directly impacted are the physics of the Sun, the physics of supernova explosions, and the study of very energetic cosmic rays.
Physics of the Sun
For more than 25 years, observations of solar electron neutrinos have shown a significant deficit relative to the expectations of standard solar models. This observation has challenged the modeling of the Sun, motivating significant effort to see whether modifications could explain the deficit. The theoretical understanding of the Sun has thus been significantly advanced, leading to the strong suspicion that the physics of the neutrino itself is responsible for the observations.
The theory of supernova explosions depends partly on the Standard Model. Neutral current interactions are needed to produce the explosion. In addition, an experimental contribution to the understanding of supernova explosions resulted from the serendipitous observation of neutrinos from Supernova 1987A by two proton-decay experiments, one in the United States and one in Japan. These experiments, both using massive underground detectors, were built and operated to search for the decay of protons and were successful in establishing that the proton lifetime is greater than 10^32 years (see Chapter 4). Being well shielded from most sources of cosmic rays, they are also quite sensitive to neutrinos from astrophysical sources. They observed a pulse of neutrinos passing through Earth just before the arrival of visible light from the supernova, and it has been unambiguously established that these neutrinos came from Supernova 1987A. Measurements of the characteristics of these neutrinos contributed significantly to the theory and understanding of the mechanisms acting under the extraordinary conditions inside a supernova. In fact, the neutrinos released by a supernova are a window right into the center of the "fireball": once generated, they pass relatively uninhibited through the outer region of the exploding star directly to the detector on Earth. The visible light traditionally observed is that emitted from the surface of the "fireball."
Cosmic rays are high-energy particles impinging on our atmosphere. How they are accelerated is a major unanswered question; they may also embody
information about the early universe. Many of the astrophysicists who study the highest-energy cosmic rays were trained as elementary-particle physicists.
Cosmic rays represent the highest-energy particles ever detected by mankind. The most energetic (seen by the Fly's Eye detector in the United States and the AGASA detector in Japan) have 300 million times the energy of protons accelerated at the Fermilab Tevatron. Cosmic rays in this energy region almost certainly originate within the local supercluster of galaxies (those galaxies within a radius of about 60 million light-years from our Milky Way); otherwise they would be attenuated in their travels through the 3 K cosmic microwave background radiation left over from the big bang. The nature of the mechanism that accelerates them to these energies is completely unknown. Their energies represent the most significant departure from thermal equilibrium found in the universe. Some theorists speculate that they might be produced by exotic processes, for example, the collapse of massive cosmic strings, possible relics of the early universe.
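The quoted factor can be checked with simple arithmetic, taking roughly 1 TeV as the energy of a Tevatron proton:

```python
E_tevatron = 1e12            # eV, approximate Tevatron proton energy
E_cr = 3e8 * E_tevatron      # 300 million times higher
joules = E_cr * 1.602e-19    # convert eV to joules
print(f"E ~ {E_cr:.0e} eV ~ {joules:.0f} J")
```

About 3 x 10^20 eV, or some 50 joules: roughly the kinetic energy of a well-thrown baseball, carried by a single subatomic particle.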
It is clear that more data are needed to untangle this mystery. The Fly's Eye detector is now being upgraded to extend its sensitivity. The Telescope Array Project would view cosmic-ray-initiated particle cascades in the atmosphere via their fluorescent glow. NASA is funding a 2-year feasibility study for a project to place an optical detector in Earth orbit that looks down and detects particle cascades in the atmosphere initiated by cosmic rays with energies greater than 10^20 eV. One very ambitious proposal is the Pierre Auger Project. It has chosen a detector technology and two sites (one in Argentina and the other in Utah), both of which would be equipped with very large ground-based detectors. These and other new detectors will also study the characteristics of high-energy collisions (in the atmosphere) at an effective energy some 200 times higher than that of the Tevatron. Although the interaction rate is much lower than at an accelerator, new phenomena could still be uncovered.
In addition to the protons and nuclei that enter Earth's atmosphere from the cosmos, there are gamma rays and neutrinos. Having no electric charge, these types of cosmic rays are undeflected by galactic and intergalactic magnetic fields. They point back to the very energetic astrophysical sources that produced them and tell astrophysicists about those sources. Very-high-energy gamma rays can be produced by synchrotron radiation and neutral pion decay. Neutrinos are produced largely in the decays of charged pions.
Very-high-energy gamma rays are emitted by sources in our galaxy and by very energetic sources beyond. Observations have been made using a variety of techniques from Earth-based instruments around the world and from space. New observatories with much higher sensitivity are under construction. In some cases—for example, the Solar Tower Atmospheric Cerenkov Effect Experiment (STACEE)—these observatories use mirrors designed for solar energy research to focus light emitted in gamma-ray interactions with Earth's atmosphere into light sensors. A space mission to build a new observatory that will replace the
aging high-energy gamma-ray instrument on CGRO is benefiting from elementary-particle physicists' experience in designing detectors for measuring similar gamma rays at accelerator laboratories.
Observation of very-high-energy astrophysical neutrinos requires extremely large detectors because neutrinos interact only very weakly with matter. Physicists and astrophysicists are instrumenting large volumes of water in lakes and oceans, as well as ice, to act as detectors for neutrinos. Adequate sensitivity to very-high-energy neutrinos requires instrumenting a volume close to a cubic kilometer. Initial studies by a Russian-German collaboration in Lake Baikal and by a U.S.-German-Swedish group using the ice at the South Pole have given promising results. There is a plan to extend the South Pole effort to the full size required for neutrino astronomy, and there are two efforts by European groups to instrument volumes in the Mediterranean Sea. An ocean-based observatory is also under discussion in the United States.
Fruitful interactions with nuclear physics have a long history and continue to the present. The discipline of elementary-particle physics grew out of the studies in the 1930s and 1940s of the atomic nucleus. Initially, techniques for particle acceleration were developed for the study of nuclei. The first working accelerators—electrostatic devices, cyclotrons, betatrons, and linear accelerators—were used for this purpose. Many techniques for particle detection, including proportional and ionization detectors, silicon detectors, sodium iodide (NaI) and germanium (Ge) detectors, and magnetic spectrometers, were initially developed for studies of nuclei.
Ongoing efforts in nuclear physics study fundamental processes and symmetries with nuclei and include solar neutrino studies and tritium beta-decay experiments sensitive to neutrino masses, studies of double beta decay, and tests of parity and time-reversal violation. These areas have greatly benefited from fruitful interactions between particle physics and nuclear physics.
Particle physicists study phenomena at the smallest distance scales, which requires very-high-energy accelerators as tools. Today, the major focus of nuclear physics is the exploration of the structure of nuclei and of single hadrons. To realize this goal, nuclear physicists are for the first time concerned with quarks and gluons, the objects of the particle physics world, as they exist inside nuclear matter. The community has placed its highest priority on two facilities: the Continuous Electron Beam Accelerator Facility (CEBAF), in Newport News, Virginia, and the Relativistic Heavy Ion Collider (RHIC), at Brookhaven National Laboratory.
CEBAF promises to greatly expand understanding of the structure of nuclei and hadrons and to better constrain understanding of the consequences of the strong nuclear force; to improve understanding of the interaction between nucleons and strange baryons and the dynamics of strange baryons in nuclear matter;
and to study the origin of proton and neutron spin, a subject that has been the focus of a significant effort recently in both the nuclear and the elementary particle physics communities.
RHIC is a good example of the scientific progress that can be generated when two fields (in this case nuclear physics and high-energy physics) pool their expertise in a synergistic way. The scientific question is one central to nuclear physics: Under what conditions do the low-energy constituents of nuclear matter (neutrons and protons) dissolve into quarks and gluons, thus decisively changing the nuclear many-body system? This question directly connects to two fundamental questions in physics: (1) What is the nature of quark-gluon confinement? (2) How did the early universe evolve from a quark-gluon plasma to the nuclear matter that makes up most of the visible mass of the universe today? To address such questions, nuclear physics has adopted some large-scale particle physics detection schemes, such as time projection chambers, further developing them for extremely high particle multiplicities.
Many discoveries in elementary-particle physics come from experiments done at the highest possible energies. However, a great deal is also learned by very precise experiments at lower energies. From the high-energy physicist's point of view, atomic physics is the low-energy limit of the field. In atoms, one can study processes with extraordinary sensitivity. Because of the exquisite precision with which frequencies can be measured, these very low energy experiments effectively complement experiments done at higher energies.
There are several examples of particle physics done at atomic energies. One of the most rigorous tests of quantum electrodynamics comes from precision measurements of the Lamb shift, a slight shift in atomic energy levels due to fluctuations in the vacuum. Development of the experiments and the theory has now advanced to the point at which these agree at a precision of 1 part per billion. The neutral current weak interaction, which has been studied extensively at high-energy machines such as the Large Electron-Positron (LEP) collider and the Stanford Linear Collider (SLC), has small but observable effects in atoms. A combination of precision atomic measurements and new calculations of atomic structure has made possible precision tests of the Standard Model using the cesium atom. In fact cesium experiments provide nearly as stringent constraints on some non-Standard Model physics as do the precision experiments at LEP and SLC.
Another focus of particle physics done at atomic energies has been a search for time-reversal-violating forces. The observation of CP violation in K mesons leads naturally to predictions of T (time-reversal) violation, one consequence of which might be observable electric dipole moments for fundamental particles such as the electron or neutron. Recent improvements in laser and radio-
frequency resonance techniques have led to experiments resulting in sensitive limits on these electric dipole moments. These limits can then be used to constrain models of CP violation and T violation.
One of the most fundamental assumptions of modern physics is that reality is invariant under charge conjugation, parity, and time-reversal symmetry (CPT) transformations. If so, then the absolute magnitudes of the masses, charges, magnetic moments, and mean lives of a particle and its antiparticle will be precisely the same. Measured properties of particles and antiparticles could differ despite CPT invariance if particles and antiparticles interacted differently with the apparatus made of particles alone (e.g., a long-range coupling depending on baryon number), although no such interaction has yet been observed.
CPT invariance must be subjected to rigorous experimental tests and has been, using the low-energy techniques of atomic physics. The magnetic moments of an electron and a positron have been compared to 2 parts per trillion. Low-energy techniques were used to make the measurement, and efforts are under way to increase the accuracy. Comparisons of the charge-to-mass ratios of a single antiproton and a single proton at an accuracy of several parts in 10 billion, by the TRAP collaboration, gave a CPT test at this accuracy. This collaboration also developed the techniques to accumulate cold antiprotons and positrons at 4 K for the production of cold antihydrogen and is now in pursuit of greatly improved CPT tests with leptons and baryons, which use laser spectroscopy to compare the properties of cold hydrogen and antihydrogen atoms.
Condensed-matter physics (CMP) and elementary-particle physics (EPP) share a deep conceptual unity. This is remarkable, given that the two fields operate on widely different distance scales: CMP deals with scales much larger than atoms, whereas EPP addresses scales one-thousandth the size of the proton or smaller. In many ways, however, there is a deep correspondence, both mathematical and physical, between phenomena in the two disciplines.
Understanding of the dynamical issues that occur in CMP (e.g., in superconductors) can be brought to bear on deep questions pertaining to dynamical arenas of EPP, such as quantum chromodynamics (QCD), understanding the electroweak physics scale of the Standard Model, and the Planck scale of gravity.
The cross-pollination of these fields has historically been a remarkable two-way street. For example, Richard Feynman first applied his path-integral techniques to quantum electrodynamics, the foremost particle physics problem of its day, then later to solve basic CMP problems, such as the behavior of superfluid liquid helium. Feynman diagram techniques are now universally applied in both CMP and EPP.
The intellectual parallels can be illustrated in the case of a superconductor where CMP asks the question: What is the structure of the state of lowest
energy, whose properties determine much of the behavior of the system? The so-called Landau-Ginzburg model is a loose description, or a "toy model," of superconductivity. This model was superseded by a full and complete dynamical theory by Bardeen, Cooper, and Schrieffer (BCS). The BCS theory is one of the most remarkable dynamical models in physics. With it, conventional superconductors are understood.
Analogously, EPP, by attempting to understand the origin of quark, lepton, and gauge boson masses, is asking a very similar question: What is the structure of the vacuum, also the state of lowest energy, whose properties determine why particles have masses and why the weak forces are weak? (The vacuum in quantum mechanics is not nothing!) The vacuum pervading the entire universe can be thought of as a kind of superconductor, involving mechanisms that we are just now on the threshold of understanding. The Standard Model assumes that something like the Landau-Ginzburg toy model (slightly modified and redubbed the Higgs mechanism) is applicable. This gives a description of the mass generation of all quarks, leptons, and gauge bosons, and the rest of the machinery of the Standard Model performs beautifully in all experimental tests to date. Yet the Higgs mechanism is really just a "black box" concealing a deeper mechanism that we do not yet understand, just as the Landau-Ginzburg model was a black box containing the BCS theory. Thus EPP, with the Standard Model, finds itself today in a kind of "pre-BCS" era. The exciting aspect of all this is that we are on the threshold of understanding what is really happening through deeper examination of the physics currently accessible at Fermilab's Tevatron and at LEP at the European Laboratory for Particle Physics (CERN), and eventually at the LHC in the next decade.
Another remarkable connection between EPP and CMP is the study of topological structures that can occur in the vacuum, in complete analogy to "defects" that occur in solids when they form from cooling liquids. Indeed, entirely new branches of particle cosmology deal with the formation of these objects in the early universe and the problems and opportunities they create. As discussed earlier, cosmic strings or vortices are being actively considered by cosmologists as possible seeds for the formation of structure in the early universe.
Vortices and magnetic monopoles involve a profound connection between topology and quantum theory within modern gauge theories. Such topological objects are known to occur in some kinds of superconductors in CMP. In EPP they are understood to play a key role in the phenomenon of quark confinement in QCD. Other topological objects, called instantons, play a role in mass generation in the strong interactions and may eventually figure in our understanding of mass generation in the weak interactions.
The techniques emerging from the abstract arena of superstrings in EPP are having an important impact in CMP. It is possible that the study of high-critical-temperature (high-Tc) superconductivity will lead to advances in EPP, or that the understanding of high-Tc superconductivity will receive significant impetus from EPP results.
Other general mathematical methods emerging in EPP have had impact on CMP, and vice versa.
It is clear that in many ways, the true sister science of elementary-particle physics is condensed-matter physics.
High-energy physics experiments use special-purpose equipment and techniques, which can prove useful in other fields. For example, the large-scale production and storage of liquid helium at accelerator facilities requires designing and constructing specialized equipment that also finds application in the field of fluid dynamics. Fundamental experiments in turbulent flow at high Reynolds numbers require large-scale helium refrigeration equipment. The RHIC project at Brookhaven National Laboratory, which has the largest helium liquefier in the world, is a natural site for a large turbulent convection facility. Preparations are under way to build a Bénard cell 10 m high and 5 m in diameter operating with supercritical helium gas near 5.2 K and cooled by RHIC refrigeration.
Another project being considered in this field is building a wind tunnel large enough to test models of submarines that use liquid helium instead of air or water. The advantage is that a helium tunnel can reach operating Reynolds numbers to model nuclear submarines, something that cannot be done today. The large-scale cryogenic apparatus needed for such a tunnel already exists in large accelerator laboratories, and U.S. industry could fabricate such a device at the present state of the art.
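The advantage of helium as a working fluid can be illustrated with a back-of-the-envelope calculation. The Reynolds number is Re = UL/ν, and the kinematic viscosity ν of liquid helium is orders of magnitude smaller than that of water or air. The viscosity values and the model speed and size below are rough, illustrative figures, not design parameters from any actual facility.

```python
# Rough comparison of Reynolds numbers attainable with different working
# fluids. Kinematic viscosities are approximate textbook values (m^2/s).

def reynolds(speed_m_s, length_m, kinematic_viscosity_m2_s):
    """Reynolds number Re = U * L / nu for flow past a body."""
    return speed_m_s * length_m / kinematic_viscosity_m2_s

NU_AIR = 1.5e-5        # air at room temperature (approximate)
NU_WATER = 1.0e-6      # water at room temperature (approximate)
NU_HELIUM = 2.5e-8     # liquid helium I near 4 K (approximate)

# A hypothetical 1-m submarine model towed at 5 m/s in each fluid:
for name, nu in [("air", NU_AIR), ("water", NU_WATER), ("helium", NU_HELIUM)]:
    print(f"{name:>6}: Re ~ {reynolds(5.0, 1.0, nu):.1e}")
```

With these illustrative numbers the helium test reaches a Reynolds number roughly 40 times that of the same model in water, which is what brings full-scale submarine values within reach of a laboratory-sized tunnel.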
Other spin-offs from high-energy physics to fluid dynamics are being considered. For example, some of the imaging techniques used in particle physics detectors could be adapted relatively easily to perform extremely fast tracking of particles seeded in a turbulent flow. Such an application would be a major boon to high Reynolds number research because the Lagrangian path of a seeded particle could be observed directly, something impossible to do today.
MATHEMATICAL AND COMPUTATIONAL PHYSICS
Research in physics has traditionally proceeded by two methodologies: experimental and theoretical. Traditionally, novel mathematical structures have been used heavily in constructing theories of particle physics, and now such structures are often invented by particle theorists. Both aspects of this exchange between particle theory and pure mathematics are especially evident today in string theory.
"Computational" physics is occasionally considered a third, comparably vital methodology. In reality, experimenters and theorists rely on computers to solve problems that would otherwise be intractable. The computing needs of "big" science have inspired many innovations. Often high-energy physicists
have adapted ideas or technologies from other disciplines, developed them for their own needs, and returned a more powerful, practical product. One example is the effort in lattice gauge theory, illustrating the give and take with the computer industry and other branches of physics.
The most intensive computer jobs are in the domain of lattice gauge theory, part of the theory of elementary particles. As this work blossomed in the 1980s, it quickly became clear that the most powerful commercial supercomputers would be neither adequate nor cost-effective. A popular concept for reducing costs was to take commercial processors and connect them to each other. Such a computer is called "massively parallel" because very large numbers of processors compute simultaneously. With a massively parallel machine, one could, in principle, split up big problems and let each processor do a fraction of the job. The drawbacks are the difficulties of coordinating the split-up and of communicating the data among processors. Theoretical particle physicists decided to design and build parallel computers specifically for lattice gauge theory. They came up with elegant solutions to the coordination and communication problems, and the resulting machines are among the first practical examples of massively parallel computers. One of these consists of 8,000 50-MHz processors! Now, many computer vendors offer a parallel computing product.
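The split-up and communication problem described above can be sketched in miniature. In the toy code below, a hypothetical 1-D periodic lattice is divided among "processors," each of which carries halo copies of its neighbors' edge sites so that a nearest-neighbor update needs no further communication during the sweep. This is a minimal illustration of domain decomposition, not lattice QCD itself; the function names and the averaging update are invented for the example.

```python
# Minimal sketch of the domain decomposition used by massively parallel
# lattice codes: split a periodic 1-D lattice into equal chunks, one per
# "processor", each padded with one-site halos from its neighbors.

def split_with_halos(lattice, n_proc):
    """Split a periodic 1-D lattice into chunks with one-site halos."""
    n = len(lattice)
    size = n // n_proc
    chunks = []
    for p in range(n_proc):
        lo, hi = p * size, (p + 1) * size
        left = lattice[(lo - 1) % n]    # halo copied from left neighbor
        right = lattice[hi % n]         # halo copied from right neighbor
        chunks.append([left] + lattice[lo:hi] + [right])
    return chunks

def local_update(chunk):
    """Nearest-neighbor averaging on interior sites (a stand-in for a real
    lattice update); halo sites are read but never written."""
    return [(chunk[i - 1] + chunk[i + 1]) / 2.0 for i in range(1, len(chunk) - 1)]

# One parallel "sweep" over a lattice of 8 sites shared by 4 processors:
lattice = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
updated = sum((local_update(c) for c in split_with_halos(lattice, 4)), [])
print(updated)
```

Because each chunk already holds the boundary data it needs, the per-processor updates are independent, which is exactly the property that lets very large numbers of processors compute simultaneously; the price is the bookkeeping of refreshing the halos between sweeps.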
The mathematical structure of lattice gauge theory has much in common with that of systems in condensed-matter theory, because both grapple with problems of large systems. After understanding the physical meaning of "renormalization" in elementary-particle theory, Ken Wilson sought a simpler problem on which to test his insights. He solved some outstanding problems of condensed-matter physics (later winning a Nobel Prize for the accomplishment) and came back from the experience with a way to define the theory of quarks and gluons on a lattice. Techniques of the resulting "lattice QCD" have developed side by side with condensed-matter theory ever since. In particular, computer programs running on massively parallel machines offer the most reliable way to work out details of the attraction between quarks inside the proton. As a result, many nuclear physicists have started to study QCD to understand nuclei as (complicated) composites of quarks and gluons.
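The shared lattice structure shows up already in the simplest condensed-matter model. The toy code below updates a 2-D Ising model with the Metropolis Monte Carlo algorithm; with SU(3) gauge links in place of +/-1 spins, the same strategy underlies lattice-QCD simulations. The lattice size, temperature, and sweep count are illustrative choices, not values from the text.

```python
import math
import random

# Toy 2-D Ising model updated with the Metropolis algorithm -- the same
# Monte Carlo strategy that, with gauge fields in place of spins,
# drives lattice-QCD programs on massively parallel machines.

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L periodic lattice of +/-1 spins."""
    L = len(spins)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # Energy change from flipping spin (i, j), with periodic neighbors.
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

rng = random.Random(1)
L = 16
spins = [[1] * L for _ in range(L)]   # start from the ordered configuration
for _ in range(200):                  # sweeps at low temperature (beta = 1.0)
    metropolis_sweep(spins, 1.0, rng)
mag = abs(sum(sum(row) for row in spins)) / (L * L)
print(f"|magnetization| per site ~ {mag:.2f}")  # stays near 1 at low temperature
```

The magnetization here plays the role that quark and gluon correlation functions play in lattice QCD: a local quantity averaged over Monte Carlo configurations, whose statistical estimate improves as processors share the sweeps.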