2
Foundations: Matter, Space, and Time

BACKGROUND

In the first half of the 20th century the twin revolutions of quantum theory and relativity dramatically changed scientists’ perspective on the physical world. Building on this base, over the last half of the 20th century physicists developed and tested a new quantum theory of matter (now called the Standard Model) and extended and tested the theory of classical space-time (general relativity and big bang cosmology). These successes present extraordinary new opportunities for physics in the new century. Questions of unprecedented depth and scope about the ultimate laws governing physical reality, and about the origin and content of the physical universe, can now be formulated and addressed—and possibly even answered! Is there a unified theory encompassing all the laws of physics? Is matter fundamentally unstable? Are there additional dimensions of space? Is most of the mass in the universe hidden in some exotic form? Does “empty” space have energy (a cosmological constant term in the equations of general relativity)? What physical principle determines that energy?

Today physicists and astronomers have some specific, compelling ideas about the answers to these grand questions. These ideas are by no means vague and idle speculations. On the contrary, they are grounded, scientific hypotheses, testable by performing appropriate experiments and observations. To test such concepts is a challenging task—all the easy work, and much of the very difficult (but possible) work, has already been done, and what was learned has been incorporated into current knowledge. To probe further into situations where established theories are not adequate requires producing and observing matter under extraordinary new conditions, or exploiting novel techniques to see in new ways or to new places. Fortunately, there are some highly creative ideas—and timely opportunities—for accomplishing such exploration. This chapter outlines the intellectual context within which the rest of this report can be understood. Later chapters focus more directly on the opportunities now available to begin to answer the 11 questions on the nature, origin, and makeup of our universe.

PHYSICS OF MATTER: THE STANDARD MODEL AND BEYOND

The Standard Model

The Standard Model is a modest name for a grand intellectual achievement. For it is no less than, and in many ways more than, the theory of the fundamental structure of known matter.

At the beginning of the 20th century, physics was very different from today. The classical laws of that time allow one to predict, given the configuration of matter and force fields at one time, the configurations at all later times. For example, Newton’s laws of motion and gravitational attraction can predict the positions of planets and comets in the future once their current positions and velocities are known. However, nothing in Newton’s laws can predict the existence of, or determine the overall size or shape of, the solar system.

The modern (20th-century) laws of physics go well beyond simple extrapolation of known conditions to the future. They describe not only how things move but also what sorts of things there can and cannot be. The first theory of the new type was the mathematical atomic model proposed by Niels Bohr in 1913. At first glance this model appears to differ little in spirit from Newton’s solar system or Rutherford’s nuclear atom: electrons orbit an atomic nucleus just as planets orbit the sun; the relevant force is electric rather than gravitational but obeys a similar law relating force and distance between objects. But Bohr postulated that only certain orbits of definite size and shape could actually occur—the orbits are quantized. With this idea it became possible to explain why all systems with one electron orbiting one proton have exactly the same properties, and to calculate those properties. Thus, the universal properties of the substance called hydrogen could be explained.
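The quantitative content of Bohr's postulate is compact enough to sketch in a few lines. The snippet below is not from the report; it uses the standard 13.6 eV hydrogen binding energy to compute the quantized levels and the energy of a photon emitted in a jump between them:

```python
# Bohr's quantization: the electron in hydrogen may occupy only discrete
# orbits, labeled n = 1, 2, 3, ..., with energies E_n = -13.6 eV / n^2.
RYDBERG_EV = 13.6057  # hydrogen binding energy in electron-volts (standard value)

def bohr_energy(n):
    """Energy of the n-th allowed Bohr orbit of hydrogen, in eV."""
    if n < 1:
        raise ValueError("principal quantum number n must be at least 1")
    return -RYDBERG_EV / n ** 2

# A photon is emitted when the electron drops from a higher to a lower
# orbit; its energy is the difference of the two levels (here n=2 to n=1).
photon_ev = bohr_energy(2) - bohr_energy(1)
print(f"ground state: {bohr_energy(1):.2f} eV")
print(f"n=2 -> n=1 photon: {photon_ev:.2f} eV")
```

Because every hydrogen atom has exactly the same allowed levels, every hydrogen atom emits and absorbs exactly the same set of photon energies; this is the sense in which the model explains hydrogen's universal properties.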
The existence of such a substance, with all its properties, is a consequence of the allowed quantum solutions for the interactions between a proton and an electron. Bohr’s original rules, though successful in describing many features of atomic spectra, were not entirely correct, nor even internally consistent. Later physicists, including Werner Heisenberg, Erwin Schrödinger, and Paul Dirac, produced a framework that corrected these problems for the dynamics of quantized systems. The new quantum mechanics of simple electrical forces between elementary electrons and nuclei could explain the main

features of atoms and thus—in principle—all of chemistry. The mature form of the theory, unifying electrodynamics and quantum mechanics, is called quantum electrodynamics, or QED for short. According to this theory, electrical and magnetic forces and energy are carried by photons, which are quantum excitations of the electromagnetic fields (see Box 2.1).

BOX 2.1 PARTICLES AND FIELDS

Quantum electrodynamics (QED) was the first example of a field theory of how matter interacts with light. All subsequent particle theories are built to include QED and are likewise field theories. In field theories each particle type is understood as the quantum excitations of some underlying field type. Conversely, the excitations of every type of field include an associated particle type. Thus the fact that all particles also have associated wavelike properties comes from the fact that both particlelike and wavelike excitations of the underlying fields can occur. In such theories, the key distinction between matter fields and force fields is the spin (i.e., the amount of angular momentum) associated with the particle excitations of the field. For matter fields the associated particles are fermions, which means that they carry one-half unit of spin (measured in units of Planck’s constant ħ), while the photon carries one whole unit of spin. The particles associated with the strong and weak force fields, the gluon and the W/Z bosons respectively, also carry one unit of spin, while the predicted particle associated with excitation of the Higgs field has zero spin.

Despite such revolutionary breakthroughs, major challenges remained. There were still subtle internal difficulties within QED. All the many successful applications of QED were based on solving the equations in an approximate way. When physicists tried to solve the equations more precisely, they ran into great difficulties: some corrections seemed to be infinite! Thus, although QED was spectacularly successful at a practical level, it was completely unsatisfactory from a logical point of view, because it required setting infinite quantities to zero. This mathematically dubious procedure amounted to ignoring a physical effect called quantum fluctuations, the quantum mechanical corrections to the theory. Eventually it was recognized that the problem lay in the interpretation of the quantum corrections, not just in how they affected particle processes but also in how they altered the concept of empty space, or the vacuum. Since these effects have a role to play later in this story, it is worth taking a little time here to discuss them.

One of the revolutionary aspects of quantum mechanics was Heisenberg’s uncertainty principle, which specifies a limit to how precisely one can measure both the position of a particle and its momentum (or

velocity) at the same moment. Put another way, an attempt to examine very closely where a particle is located is accompanied by a large uncertainty in the knowledge of its momentum, in particular whether it may be moving very rapidly. These unpredictable motions represent “quantum fluctuations” in the particle’s motion.

The special theory of relativity requires a similar uncertainty principle involving time instead of position, and energy instead of momentum. Thus if a particle—or even “empty” space—is observed for a very short time, it is not possible to measure precisely the amount of energy contained in the region observed. The amount of energy may appear to be very high, even when what is being observed is empty space, often called the vacuum (see Box 2.2). Thus, over a short enough time, there could appear to be enough energy present to produce particle-antiparticle pairs of various kinds. These evanescent particles, which apparently pop in and out of existence for a short time, are called virtual particles. Quantum mechanics and relativity together force scientists to see empty space in a new way: as a dynamic medium full of virtual particles.

Immediately following World War II, Willis Lamb and other experimenters exploited advances in microwave technology, driven by wartime work on radar, to measure the properties of atomic hydrogen with unprecedented accuracy. They discovered small deviations from the QED predictions that, at the time, ignored quantum corrections. In the 1950s, inspired by these developments, physicists including Sin-Itiro Tomonaga, Julian Schwinger, and Richard Feynman developed new mathematical methods that gave more accurate predictions. Their methods incorporated the quantum corrections in a profound way from the start. They include the possibility for an isolated particle traveling in empty space to “interact with the vacuum” by temporarily disappearing to produce a virtual particle-antiparticle pair, seemingly coming from the vacuum itself. The original particle then reappears when the particle and antiparticle meet and annihilate each other. The intermediate stages in these calculations seem to involve impossible physical processes, but because they last for such a short time, they are allowed by the strange logic of quantum uncertainty in energy. These physicists found a technique by which they could incorporate such quantum effects into the way the constants of the theory were defined and thereby obtain meaningful and finite results for the physically measurable quantities they wished to calculate. Furthermore, their results matched the measurements. Indeed, the quantitative agreement between the theoretical predictions of QED calculations and experiment is now the most precise in all of science, reaching levels of parts per billion.
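The energy-time uncertainty argument above can be made concrete with a rough order-of-magnitude estimate. This is a sketch, not a calculation from the report; it uses the standard value of the reduced Planck constant:

```python
# Energy-time uncertainty: a fluctuation that "borrows" an energy dE from
# the vacuum can persist only for a time of order dt ~ hbar / dE.
HBAR_MEV_S = 6.582e-22  # reduced Planck constant, in MeV * seconds

def fluctuation_lifetime(borrowed_energy_mev):
    """Order-of-magnitude lifetime (seconds) of a virtual fluctuation."""
    return HBAR_MEV_S / borrowed_energy_mev

# Creating a virtual electron-positron pair costs at least twice the
# electron's rest energy, 2 * 0.511 MeV:
dt = fluctuation_lifetime(2 * 0.511)
print(f"virtual e+e- pair lives roughly {dt:.1e} s")
```

The heavier the virtual pair, the shorter its allowed lifetime, which is why the effects of very massive virtual particles are confined to extremely short times and distances.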

BOX 2.2 THE VACUUM: IS EMPTY SPACE REALLY EMPTY?

While the notion of a vacuum brings to mind the ultimate state of nothingness (indeed, this is what was pictured by 19th-century physics), quantum theory changes all of that. Nature’s quantum vacuum is anything but empty; instead, it is seething with virtual particles and condensates. To 20th-century physicists, the vacuum is simply the lowest energy state of the system. It need not be empty or uninteresting, and its energy is not necessarily zero.

Quantum mechanics and the uncertainty principle tell scientists that the vacuum can never be truly empty: the constant production and annihilation of virtual particle-antiparticle pairs make it a seething sea of particles and antiparticles living on borrowed time and energy (as shown in Figure 2.2.1). Although the Heisenberg uncertainty principle allows the pairs to last for only very short times, they have measurable effects, causing shifts in the spectrum of atomic hydrogen and in the measured masses of elementary particles (e.g., the W and Z bosons). The unanswered question is whether empty space contains any energy. The weight of the vacuum is certainly not great enough to influence ordinary physical processes. However, its cumulative effect can have profound implications for the evolution of the universe and may in fact be responsible for the fact that the expansion of the universe seems to be speeding up rather than slowing down (see the discussion of dark energy in Chapter 5).

The second way in which the vacuum may not be empty involves vacuum condensates of fields. For example, the Higgs field in the Standard Model has a nonzero, constant value in the lowest energy state. The effect of this is to give masses to quarks, leptons, and other particles. The lowest state, the one we perceive as “nothing,” need not have zero field. Rather, the field everywhere has the value that gives the minimum energy. The nonzero field in the vacuum is often called a condensate, a term borrowed from condensed-matter physics.

FIGURE 2.2.1 According to the rules of quantum field theory, the vacuum is not empty but is actively populated by particle-antiparticle pairs that appear, annihilate, and disappear, existing for only brief instants.

Successful as it is at describing atomic-level processes, QED is not a complete theory of matter. The basic properties of nuclei are not described by QED. Additional interactions, which can be neither electromagnetic nor gravitational, must also exist. These interactions must be strong enough to hold together the positively charged atomic nucleus. These most powerful of all forces, the strong interactions, are also important in understanding the dynamics of astrophysical objects and of the earliest moments of the universe.

Nuclear decays also exhibit processes wherein one kind of particle turns into another. The prototype is the decay of a neutron into a proton, an electron, and an antineutrino, but there are many closely related processes (including the radioactive decay of the famous isotope carbon-14). Collectively, these weak interactions (so called because they occur very slowly compared with strong reactions) are central to astrophysics and cosmology. They provide some of the mechanisms for the fusion processes by which stars produce energy and build chemical elements heavier than hydrogen. Thus the weak and the strong interactions are essential to understanding the structure and decay of nuclei and their formation in stellar and early-universe environments. However, they are difficult to study in everyday settings because the distances over which they are detectable are incredibly small.

In constructing QED, physicists were able to use the rules of electricity and magnetism derived from studying visible objects (pith balls, magnets, coils, and so on) in the late 18th and early 19th centuries. These had been consolidated into the unified equations of electromagnetism by James Clerk Maxwell in 1864. Amazingly, these same equations, interpreted in the framework of quantum mechanics, describe atomic physics. In contrast, to study the weak and strong interactions and thereby understand subnuclear processes, physicists had to invent new tools. They ultimately developed tools for studying processes occurring on incredibly tiny distance scales (a thousand times smaller than an atomic nucleus). The story of how such experiments developed, and of the remarkably complete understanding achieved, is rich and complex, but this is not the place to relate it fully. In the early days, naturally occurring radioactive elements and cosmic rays from outer space played a central role. Over the past 50 years, particle accelerators, with a steady increase in the energy of the available particle beams, have been essential. The great scientific achievements of these machines, and the development of the Standard Model theory to incorporate their discoveries, would not have been possible without generous support from government agencies worldwide.

Some important aspects of this modern theory of the strong, weak, and electromagnetic interactions are discussed in Box 2.1; Figure 2.1 provides an inventory of the small number of fundamental particles and their simple properties.

FIGURE 2.1 Standard Model particles and the forces by which they interact. The fundamental particles include both fermions, the matter particles, and bosons, the force carriers. Masses of all particles are given in GeV/c², a unit in which the mass of the proton is approximately 0.94; electric charge is listed in units of the electron’s charge. The Higgs particle has not yet been observed; if it is, it will join the bosons. As is discussed in Chapter 3, it now appears likely that the model needs to be extended to allow small neutrino masses. Image courtesy of the Particle Data Group, Lawrence Berkeley National Laboratory.

To the best of current knowledge, it appears that these particles have no substructure, at least not in the traditional sense of being built from yet smaller particles. Attempts to simplify the picture by this approach have failed, and no experimental evidence to date points in that direction. Two essential conceptual features of the Standard Model theory have fundamentally transformed the understanding of nature. Already in QED the idea arose that empty space may not be as simple a concept as it had seemed. The Standard Model weak interaction theory takes this idea a step further. In formulating that theory, it became evident that the equations did

not allow the introduction of mass for the particles. The theory made sense—that is, it gave finite predictions for some measurable effects—but only if it was written so that each and every fundamental particle had zero mass. This was not the case experimentally. However, the zero-mass prediction depended on the assumption that the vacuum state was empty, with all fields having zero value everywhere. Physicists realized the theory could be constructed more like the real world by introducing a pervasive condensate into this simplest of pictures.

A condensate in elementary particle physics corresponds to the circumstance where the lowest energy state has a nontrivial property; for instance, instead of having zero field value everywhere, the lowest energy state is filled with a particular nonzero value of the field. (The term is coined from the notion that the field “condenses” in the low-energy limit to a nonzero value.) In the Standard Model the field that forms such a condensate is called the Higgs field. Particles get their mass through interactions with this field. In such a theory, mass is just another form of interaction energy.

But what does it mean to have a nonzero field in the vacuum? In a crude but useful analogy, it is as if we lived inside a giant invisible magnet. Imagine for a moment how the laws of physics would look to people inside such a magnet. Particles would move in peculiar helical paths because of the influence of the magnetic field, and the equations describing these paths would be complicated. The laws of motion for a particle subject to no perceived force would therefore be considerably messier than a straight line. Eventually the inhabitants might realize that they could get a simpler, yet more profound, understanding of nature by starting with the fundamental equations for an empty, nonmagnetic world and then specializing the equations to take account of the complicated medium.

The theory of the weak interaction uses a similar idea. Instead of a pervasive magnetic field, the theory leads to a need for a less familiar background: the Higgs condensate. But unlike a magnetic field, the Higgs field has no preferred direction: it changes the way particles move through space in the same way for all directions of motion. The presence of pervasive condensates is an additional way, beyond the bubbling in and out of existence of virtual particles, in which seemingly empty space acts as a dynamical medium in modern quantum theories.

Aside from its effect on particle masses, the Higgs condensate is not noticeable in any way because it is everywhere the same. The things observed as particles are differences in the fields from their vacuum values. The theory predicts Higgs particles—fluctuations of the Higgs field away from its constant vacuum value—in just the same way as fluctuations of other fields away from their zero vacuum value

are seen as particles. The Higgs particle is the only particle type predicted by the Standard Model that has not yet been observed.

The modern theory of the weak interactions achieved its mature form around 1970 with a unified description of the weak and electromagnetic interactions (sometimes called electroweak theory). Since then, it has achieved many triumphs. Five fundamental particles predicted by the theory, namely the charm and top quarks, the tau neutrino, and the W and Z bosons, have been discovered. The theory predicted many properties of each of these particles; they were found as predicted. For the W and Z bosons, the masses (around 100 times that of the proton) were a key part of the structure of the theory. The existence and properties of the W and Z bosons were inferred from a theory devised by Sheldon Glashow, Abdus Salam, and Steven Weinberg. These particles were subsequently discovered experimentally by Carlo Rubbia, Simon van der Meer, and their collaborators at the European Organization for Nuclear Research (CERN).

The theory of the strong interaction began to take its modern shape once it was realized that all the observed strongly interacting particles (baryons and mesons) could be explained as built from more elementary building blocks: the quarks. Compelling evidence for quarks came from experiments that directly measured the fractional electric charge and other properties of these pointlike constituents of protons, neutrons, and mesons (these and particles like them are collectively called hadrons). However, the interactions among the quarks had to have very peculiar properties: the strength (or intensity) of these interactions must be tiny when the quarks are close together but must grow enormously as the quarks are pulled apart. This property, which implies that infinite energy would be required to separate two quarks completely, explains why individual quarks are never observed: they are always found bound in triads (as in the proton, the neutron, and other baryons) or paired with antiquarks (as in the mesons).

Although required by the observations, this force between quarks was a new pattern, and physicists had great difficulty finding a consistent theory to describe it. All previous experience, and all simple calculations in quantum field theory, suggested that forces between particles always grow weaker at larger separations. A solution to the problem was found in the quantum correction effects mentioned above, which must be included in a correct calculation. For most theories examined up until that time, this effect also leads to forces that grow weaker at larger distances. However, physicists found a class of theories in which the quantum corrections have just the opposite effect: the forces grow weaker at short distances and stronger at large ones. This property is called asymptotic freedom.
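Asymptotic freedom can be illustrated with the standard one-loop formula for the running of the strong coupling. The formula and the reference value α_s(M_Z) ≈ 0.118 are textbook inputs, not taken from this report:

```python
import math

# One-loop running of the QCD coupling alpha_s. Because the coefficient
# b0 is positive, the coupling shrinks as the energy scale Q grows
# (equivalently, as the distance probed shrinks): asymptotic freedom.
def alpha_s(q_gev, alpha_ref=0.118, q_ref_gev=91.2, n_flavors=5):
    """Strong coupling at scale q_gev, referenced to the Z-boson mass."""
    b0 = (33 - 2 * n_flavors) / (12 * math.pi)
    return alpha_ref / (1 + alpha_ref * b0 * math.log(q_gev**2 / q_ref_gev**2))

for q in (2.0, 10.0, 91.2, 1000.0):
    print(f"alpha_s({q:6.1f} GeV) = {alpha_s(q):.3f}")
```

The printed values grow toward low energies (large distances) and shrink toward high energies, the behavior that underlies both quark confinement and the nearly free motion of quarks probed at very short distances.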

With the need for asymptotic freedom in explaining the strong interaction, a unique theory emerged, one that could explain many observations. It introduces particles called gluons as the carriers of the strong force (just as photons carry electromagnetic forces). The “charge” of the strong interactions, called the color charge because of superficial similarities to the familiar properties of visual color, is carried by quarks and antiquarks and also by gluons. But all observed hadrons are combinations of these particles in which the total “color” is neutral (much as suitable combinations of primary colors yield white). This theory, which describes the strong interactions, is an essential part of the Standard Model and was dubbed quantum chromodynamics, or QCD.

FIGURE 2.2 An example of one of the many successes of the quantum chromodynamics (QCD) sector of the Standard Model. Shown are theoretical predictions (black solid curve), which agree well with experimental data (red points) over 11 orders of magnitude. The data come from high-energy proton-antiproton collisions at Fermilab’s Tevatron. The plot shows the relative rate of quark and gluon jet production carrying energy of the amount shown on the horizontal axis, in a direction transverse to the incoming proton and antiproton directions. Adapted from an image courtesy of the D0 Collaboration.

Since achieving its mature form in the 1970s, QCD has explained many observations and correctly predicted many others (see Figure 2.2 for an

example of QCD’s success). Highlights include the discovery of direct effects of gluons, verification of the asymptotic freedom property and its consequences in many and varied experiments, and continued success in modeling the outcomes of high-energy collision processes. Together with the weak interaction theory, QCD is now a firmly established part of the Standard Model.

The story of how experimental evidence for the top quark (also called the t quark) was found provides an impressive illustration of the power of the Standard Model. The patterns of the electroweak interaction required such a particle to exist and specified how it would decay. Further, as mentioned above, calculation of its indirect effect on well-measured quantities, via quantum corrections, predicted an approximate value for its mass. The strong interaction part of the Standard Model predicted the easiest methods by which it could be produced, and how often. Equally important, since QCD describes other particle production processes as well, physicists could calculate the rates for various other processes that can mimic top-quark production and decay. This knowledge enabled them to devise a search in which these competing processes were minimized. This capability is vital, because the relevant events are extremely rare—less than one in a trillion collisions! By putting all this information together, physicists were able to develop appropriate procedures for the search. In 1995, the top quark was discovered in experiments done at Fermilab, as illustrated in Figure 2.3. While its mass was unexpectedly large (about that of an atom of gold), its other properties were as predicted.

The Standard Model has now been tested in so many ways, and so precisely, that its basic validity is hardly in question. It provides a complete description of what kinds of ordinary matter can exist and how they behave under ordinary conditions, with a very broad definition of “ordinary.” It certainly extends to any conditions attained naturally on Earth, and even to most astrophysical environments, including the interiors of stars. In this sense, it is very likely the definitive theory of known matter, and this marks an epoch in physics. Solving the equations in useful detail in complicated situations is another question: particle physicists make no claim that achieving this theory of matter answers the important practical questions posed by materials scientists, chemists, or astrophysicists.

Significant challenges remain to complete the Standard Model and understand all that it implies. The Higgs particle is yet to be found; intense, focused research programs are planned to search for it, both at Fermilab and at the Large Hadron Collider at CERN. The equations of QCD must be solved with greater accuracy in more complicated (and real) situations.
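The comparison of the top quark's mass with that of a gold atom, made a few paragraphs back, is easy to check numerically. This back-of-the-envelope sketch uses standard reference values, not figures from the report:

```python
# Compare the top quark's mass with the mass of a gold-197 atom,
# both expressed in GeV/c^2.
TOP_MASS_GEV = 173.0    # approximate measured top-quark mass
AMU_IN_GEV = 0.9315     # one atomic mass unit, in GeV/c^2
GOLD_MASS_AMU = 196.97  # atomic mass of gold-197

gold_mass_gev = GOLD_MASS_AMU * AMU_IN_GEV
print(f"gold atom: {gold_mass_gev:.0f} GeV/c^2, top quark: {TOP_MASS_GEV:.0f} GeV/c^2")
print(f"top/gold mass ratio: {TOP_MASS_GEV / gold_mass_gev:.2f}")
```

A single pointlike quark as heavy as an entire atom of 197 nucleons: that is the sense in which the mass was "unexpectedly large."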

BOX 2.4 SUPERSYMMETRY

Supersymmetry is a bold and profoundly original proposal to extend the space-time symmetry of special relativity. It postulates, in addition to the traditional dimensions of space and time, additional “quantum” dimensions (not to be confused with the extra dimensions of string theory) that together with the traditional dimensions constitute an extended framework for physics called superspace. Each quantum “direction,” unlike the continuous space and time directions, has only two discrete values; changing this quantum label is equivalent to changing the particle type! Supersymmetry thus predicts that every particle has a supersymmetric partner particle: normal particles of integer spin have spin one-half partners, while spin one-half particles have integer-spin partners, as shown in Figure 2.4.1. Since the matter particles (quarks and leptons) have spin one-half and the force carriers (photons, gluons, and W and Z bosons) have spin one, supersymmetry relates the constituents of matter to the particles that mediate the forces holding matter together. Not only may supersymmetry unify the matter constituents with the force carriers, but it may also unify gravity with the other forces of nature. Although supersymmetry was invented for other purposes and has a rich history, it is a key element of string theory, the most promising idea physicists have for incorporating quantum mechanics into gravity and putting gravity on an equal basis with the other forces. Supersymmetry may also help to explain the enormous range of energy scales found in particle physics (often referred to as the hierarchy-of-energy-scale problem).

Supersymmetry is mathematically elegant. Nature, however, always has the last word. Is supersymmetry a property of the physical world, or just interesting mathematics? As yet there is no direct evidence for supersymmetry. It is attractive to theorists both for its elegance and because it makes certain features of the Standard Model occur more naturally. At best it is imperfectly realized: perfect symmetry requires equal-mass pairings of particles and their superpartners, and no such pairings are found among the known particles, so a whole family of superpartners must be postulated. However, valid symmetries of the fundamental laws can be obscured by the existence of pervasive fields (called condensates) in the vacuum. Supersymmetry, if realized in nature, must be such a hidden symmetry. All the superpartners of the known particles can only be as-yet-undiscovered massive particles, and many versions of supersymmetry, in particular those that best account for the merging of the three couplings, predict that these particles should be found at masses accessible with existing or planned accelerators. Searches for these particles may soon reveal or exclude these versions of supersymmetry theory.

FIGURE 2.4.1 Depiction of the one-to-one correspondence of supersymmetric partner particles for every type of ordinary particle. Because the superpartners are expected to be very massive, they have not yet been directly observed.

particle is a candidate for being the dark matter that pervades the universe. The neutralino is discussed further in later chapters.

PHYSICS OF SPACE AND TIME: RELATIVITY AND BEYOND

The Triumph of General Relativity

When general relativity, Einstein’s theory of gravity, was first proposed in 1915, it was a gigantic leap of the imagination. It incorporated several concepts quite new to physics, including the curvature of space-time and the bending of light, and led to the prediction of other completely new phenomena, including gravitational radiation, the expanding universe, and black holes. General relativity was widely accepted and admired by physicists almost from the start. It reduces to Newton’s successful theory of gravity for practical purposes, for not-too-massive bodies moving with small velocities. It is consistent with the special theory of relativity, unlike Newton’s theory. Moreover, by relating gravity to space-time curvature, general relativity provided a profound explanation for a striking fact that appears coincidental in Newton’s theory: that in a given gravitational field, all bodies accelerate in the same way; this is known as the principle of equivalence or the universality of free fall.

For many years, however, general relativity was not very relevant to the rest of physics; it made few testable new predictions. Only a few observations could not be adequately explained with Newton’s simpler theory and thus required the more complete theory. Well into the 1970s, textbooks spoke of only three tests of relativity (namely, the advance of the perihelion of Mercury, the gravitational redshift of light when photons are emitted from the Sun and other massive bodies, and the bending of light by the Sun). In recent years the picture has changed completely, mainly because of revolutionary improvements in high-precision instrumentation and in the observational techniques of astronomy.
The accuracy of each of these three tests now exceeds a part in a thousand, and numerous new precise tests, unimagined by Einstein, have been successfully performed. Over a thousand neutron stars have been found in the form of pulsars; their gravitational fields can be adequately described only by using general relativity. A binary system containing a very remarkable pulsar was discovered in 1974 by Russell Hulse and Joseph Taylor and then studied extensively by Taylor and other collaborators. The orbital motion of the pulsar was measured with great accuracy, thereby enabling precision tests of general relativity. Most dramatic was the observation that, as a consequence of the emission of energy into gravitational radiation, the total energy of the orbital motion decreases with time at a rate predicted by general relativity. The agreement is better than one-third of 1 percent.

Today, general relativistic corrections to the flow of time on orbiting satellites as compared with the rate on Earth are an essential part of the Global Positioning System (GPS), which allows commercial and military users to calculate a precise location on the surface of the Earth, and to transfer accurate time readings, by trilateration of satellite signals.

Numerous convincing black hole candidates have been identified through astronomical observations. They fall into two classes. One class, arising from the collapse of stars, has masses ranging from a few times that of the Sun to around 10 times that of the Sun and radii of a few kilometers. The second class, typically found at the centers of galaxies, can have masses millions to billions of times that of the Sun, and radii comparable to that of the solar system. There is compelling evidence that our own galaxy contains such a black hole. It is probable that the most violently energetic objects in the universe, the quasars, are powered by accretion of matter onto such gigantic spinning black holes.

Developments in general relativistic cosmology have been still more remarkable. The theory of the expanding universe has been resoundingly verified by observation of the velocities of distant objects. The role of the gravitational redshift of spectral lines is evolving: once an exotic, difficult test of general relativity, it is becoming a standard tool of astronomy. The bending of light, first observed as a tiny apparent angular displacement for a star whose light grazed the Sun during a solar eclipse, is now the basis of a fruitful technique to map dark matter using gravitational lensing.
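The size of the relativistic clock corrections built into GPS can be estimated from first principles. The sketch below (Python, using rounded textbook values for Earth's gravitational parameter and the satellite orbit, not mission data) combines the general relativistic gain from sitting higher in Earth's gravitational potential with the special relativistic loss from orbital motion; the net effect is a few tens of microseconds per day.

```python
# Rough estimate of the daily relativistic clock offset for a GPS satellite,
# combining the gravitational blueshift (general relativity) with velocity
# time dilation (special relativity). All parameters are approximate round
# numbers for illustration.
import math

G_M_EARTH = 3.986e14      # gravitational parameter GM of Earth, m^3/s^2
C = 2.998e8               # speed of light, m/s
R_EARTH = 6.371e6         # Earth radius, m
R_ORBIT = 2.656e7         # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86400.0

# General relativity: clocks higher in the gravitational potential run faster.
gr_rate = (G_M_EARTH / C**2) * (1.0 / R_EARTH - 1.0 / R_ORBIT)

# Special relativity: the moving satellite clock runs slower.
v_orbit = math.sqrt(G_M_EARTH / R_ORBIT)   # circular orbital speed, m/s
sr_rate = -0.5 * v_orbit**2 / C**2

net_microseconds_per_day = (gr_rate + sr_rate) * SECONDS_PER_DAY * 1e6
print(f"GR gain : {gr_rate * SECONDS_PER_DAY * 1e6:+.1f} us/day")
print(f"SR loss : {sr_rate * SECONDS_PER_DAY * 1e6:+.1f} us/day")
print(f"net     : {net_microseconds_per_day:+.1f} us/day")
```

Left uncorrected, a drift of this size (tens of microseconds per day, at light speed) would accumulate into kilometers of ranging error per day, which is why the correction is essential to the system.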
The mass of intervening galaxies is observed, in many cases, to distort the light from more distant sources drastically, even producing multiple images. This provides a way to search for massive objects that produce no light or other detectable radiation. In all these applications, the predictions of general relativity have not been contradicted.

Looking Beyond General Relativity

Despite these great successes, there are compelling reasons to think that general relativity is not the last word on gravity. A primary stumbling block for understanding the physics of the very earliest moments and the birth of the universe itself is the lack of progress in developing a consistent theory that includes both general relativity and quantum mechanics. The difficulties are similar to, but more severe than, the difficulties discussed above in connection with the history of quantum electrodynamics. All the successful applications of general relativity have used equations that ignore quantum corrections. When the corrections are actually calculated, they are found to be infinite. Again, one can follow the procedure used in QED and improve the calculation by taking into account the effects of virtual particle clouds on, say, the interaction of a particle with gravity. However, although the infinities are then avoided, the ability to calculate the behavior of the particle at high energies is lost, because the clouds interact strongly and in a very complex manner.

Another difficulty arises out of Stephen Hawking’s recognition that, when the effects of quantum mechanics are included, black holes are not, strictly speaking, black. Rather, they radiate. The radiation rate is far too small to be detectable for any of the known black holes, but it has serious consequences. In Hawking’s approximate calculation, the radiation appears to be random (thermal). A fundamental requirement of quantum mechanics is a specific connection between the future and the past. But if black holes, which have swallowed objects carrying quantum information from the past, can evaporate by radiating in a random fashion, this connection is apparently broken. Many believe this leads to a paradox whose ultimate resolution will bring deep insights into the quantum nature of space and time.

While general relativity provides an essential framework for big bang cosmology, it leaves open most of the details, just as Newton’s theory described the motion of planets but did not determine the size and shape of the solar system. Indeed, the particular size and shape of our solar system almost certainly arose from the specific details of its history; other planetary systems elsewhere are quite different.
Yet the universe as a whole has some strikingly simple features. Such features beg for a theory to explain them. Among the most striking features of the universe as a whole are its approximate homogeneity and its spatial flatness. Homogeneity means that any large region of the universe of a given age looks very much like any other large region at the same age. Spatial flatness means that space (as opposed to space-time) is not curved on large scales. Both of these properties of the universe have now been observed and measured with considerable precision, through study of the microwave background radiation. Neither homogeneity nor spatial flatness is required by classical general relativity, but they are allowed. The question then arises, Why is our universe so homogeneous and flat? It is possible that these properties would emerge from a correct, quantum-mechanical treatment of the earliest moments of the big bang. But since no one knows how to calculate the behavior of quantum gravity at high energies, such speculation is difficult to test, or even codify.

Some physicists believe that these problems can be solved by delving more deeply into general relativity itself. But others believe that the solution will necessarily involve an integration of gravity with the other forces of nature. As the discussion below indicates, some intriguing progress has recently been made toward a synthesis of general relativity, the theory of space-time, with our current understanding of the other forces of nature.

THE CONVERGENCE OF MATTER AND SPACE-TIME PHYSICS

In most laboratory situations, gravity, as a force between elementary particles, is very much weaker than the strong, the electromagnetic, and even the weak interactions. For this reason, it has been possible to develop an extremely complete and accurate theory of subatomic and subnuclear processes (the Standard Model) while ignoring gravity altogether. But since all objects attract one another gravitationally, the power of gravity is cumulative; on cosmic scales it comes to dominate the other forces.

Traditionally, there has been a division of labor between the study of matter, on the one hand, and the study of gravitation and cosmology, on the other. A major theme of this report, however, is that this division is becoming increasingly artificial. Physicists, eager to build on their successful theories of matter and of space-time, seek to create an overarching theory of space-time-matter. To understand the earliest times in the universe and the extreme conditions near black holes will require such a theory. New approaches to tackle these problems are, as yet, speculative and immature. However, the consequences for the view of the universe and for its history at the earliest times are profound.
Inflation

The homogeneity and spatial flatness of the universe can both be explained by assuming that the universe, early in its history, underwent a period of exceptionally rapid expansion. Expansion tends to diminish the spatial curvature, just as blowing up a balloon makes its surface appear flatter. The enormous expansion associated with inflation means that the universe we see today began from a very tiny region of space that could have been smooth before inflation. While inflation cannot completely eliminate the dependence of the state of the universe today upon its initial state, it does greatly lessen that dependence.
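The flattening effect of inflation can be made quantitative with a standard toy calculation: for a nearly constant expansion rate H, the deviation of the density parameter Omega from 1 scales as 1/(aH)^2, and with the scale factor a growing as exp(Ht) it is suppressed by a factor exp(-2N) after N e-folds of expansion. The short Python sketch below is illustrative and not taken from the report.

```python
# Toy illustration of how inflation flattens the universe: with a nearly
# constant Hubble rate H, the scale factor grows as a(t) = exp(H t), and the
# curvature term |Omega - 1| is proportional to 1/(a H)^2, so it shrinks by
# exp(-2N) after N e-folds of expansion.
import math

def curvature_after(n_efolds, omega_minus_one_initial=1.0):
    """Curvature deviation |Omega - 1| after n_efolds of inflation."""
    return omega_minus_one_initial * math.exp(-2.0 * n_efolds)

for n in (10, 30, 60):
    print(f"N = {n:2d} e-folds -> |Omega - 1| ~ {curvature_after(n):.2e}")
```

Even an order-one initial curvature is driven to an utterly negligible value by the roughly 60 e-folds usually invoked, which is why a flat universe today is a natural outcome of inflation.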

Inflation theory is more plausible, and exciting, because it can be grounded in particle physics ideas about unification and symmetry breaking. The unified theories require the existence of condensates, whose space-filling presence makes the symmetry of the fundamental equations less manifest but more consistent with observation. As described in the section “Unification and the Identity of the Dark Matter” in Chapter 3, the phenomenon is known to occur in the weak interaction. It also occurs (in a somewhat different way) in the strong interaction, in the theory of superconductivity, and in many other examples in the physics of materials. It is not an extravagant novelty.

In all physical examples of condensates, when the temperature is raised sufficiently the condensate evaporates or melts away. (Such a phase transition occurs when ice melts to become water.) The laws of physics at the higher temperature look quite different—they have more symmetry.

Another example may be useful. In an ordinary magnet, the spins of the atoms are aligned at low temperatures because the total energy of the system is lower in such a configuration. This alignment obscures the isotropy of space by making it appear that there is a preferred direction (the direction in which the spins are aligned). At high temperatures, the energy advantage associated with aligned spins is no longer important, the spins of the individual atoms are no longer aligned, and the isotropy of space is no longer obscured (the broken symmetry at low temperatures is restored at high temperatures).

In a cosmological context, the consequences of a phase transition can be dramatic. At high temperature, before the condensate settles down to its equilibrium, or vacuum, value (its “frozen” state, to continue the aqueous analogy), a great deal of energy is present in the system. That energy is in a very unusual form—not as particle mass or motion but as field energy, or false vacuum energy.
False vacuum energy has quite different gravitational properties from other forms of energy. It turns out that if a large amount of vacuum energy is dissipated only slowly, it causes a period of inflation, of exponentially rapid expansion of the universe. As is discussed in later chapters, observational cosmology has recently yielded powerful hints that inflation occurred. The ideas of particle physics suggest why it might have occurred. But as yet there is no single convincing, specific model for inflation. Existing models contain many arbitrary assumptions and are ad hoc. While they show that inflation has a plausible theoretical basis, they are certainly unsatisfactory in detail. Thus to understand properly this central facet of cosmology may require the development of a more complete unified theory of gravity and matter.
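The statement that slowly dissipating vacuum energy drives exponential expansion can be illustrated with a toy integration of the Friedmann equation. In the sketch below (Python, with units chosen so that the factor 8*pi*G/3 equals 1; a simplification for illustration, not the report's calculation), a constant energy density yields exponential growth of the scale factor, while matter, whose density dilutes as 1/a^3, yields only power-law growth.

```python
# Toy Friedmann-equation comparison in illustrative units: a vacuum energy
# density that stays constant as space expands drives exponential growth of
# the scale factor a(t), while matter, whose density dilutes as 1/a^3, gives
# only power-law growth (a ~ t^(2/3)).
import math

def evolve(density_of, a0=1.0, dt=1e-4, t_end=5.0):
    """Integrate da/dt = a * sqrt(rho(a)) with a simple Euler step."""
    a, t = a0, 0.0
    while t < t_end:
        a += a * math.sqrt(density_of(a)) * dt
        t += dt
    return a

a_vacuum = evolve(lambda a: 1.0)          # rho constant: a grows ~ exp(t)
a_matter = evolve(lambda a: 1.0 / a**3)   # rho ~ 1/a^3:  a grows ~ t^(2/3)
print(f"vacuum-dominated: a(5) ~ {a_vacuum:.1f}")
print(f"matter-dominated: a(5) ~ {a_matter:.1f}")
```

After five Hubble times the vacuum-dominated universe has expanded by roughly a factor e^5, dozens of times more than the matter-dominated one; sustained over many more e-folds, this is the inflationary expansion described above.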

Another simple yet profound property of the known universe is that it is made of matter rather than antimatter. More specifically, distant stars and galaxies are all made out of protons and neutrons, while antiprotons and antineutrons are very rare in the universe. In the Standard Model, at low temperature, the number of protons minus antiprotons (or, to be more precise, the number of quarks minus antiquarks) cannot change. If that were the whole story, the asymmetry between matter and antimatter would simply be an initial condition from the big bang, locked in for all time. There would be no deeper explanation of it, nor any deduction of its magnitude from the laws of physics.

But the unified theories, as discussed above, include interactions that change quarks into antiquarks (or other particles). Thus the number of quarks minus antiquarks is not frozen in; rather, it can evolve with time. Indeed, if any such processes occur, then at sufficiently high temperature symmetry will be restored, and there will be equal numbers of quarks and antiquarks. The present-day universe, where matter dominates antimatter, must have evolved from past equality. So the stage is set for a dynamical calculation of the universal difference between quark and antiquark densities. Many models have been considered. With some assumptions, it is possible to achieve agreement with observation, although not with the Standard Model alone. As was the case for inflation, in order to develop a proper, convincing theory of matter-antimatter asymmetry, physicists need a deeper theory.

Particle Candidates for Dark Matter and the Mystery of Dark Energy

Perhaps the most tangible hint for new physics from cosmology is the problem of dark matter. A wide variety of astronomical measurements indicate that the universe contains considerably more matter than can be accounted for by ordinary matter in all forms (e.g., stars, gas, and black holes).
This additional mass (dark matter) is not seen directly but rather through the effect of its gravity on the motion or the light of visible objects. Here arises a truly extraordinary opportunity for discovery—what is this stuff that makes up most of the universe by weight? To heighten the tension, developments in particle physics suggest two quite specific, very plausible candidate particles. Indeed, each of these candidates was proposed for theoretical reasons unrelated to the dark mass problem, and only later was their potential to solve this problem realized.

One candidate arises from the idea of supersymmetry. This hypothetical extension of the Standard Model is introduced in Chapter 3, in the section “Unification and the Identity of Dark Matter.” It postulates a doubling in the number of fundamental particles, pairing each known particle with an as yet unseen “superpartner” particle. The lightest of these, a stable, electrically neutral particle called the neutralino, is a leading candidate for a major component of the dark matter.

The other leading candidate is a hypothetical particle called the axion. It appears as a consequence of theoretical extensions introduced to solve a quite different problem in the foundations of the Standard Model. The axion is a very light particle but could have been produced very copiously during the big bang. The special detectors needed to search for axions are very different in detail from those that can search for neutralinos. But, as in the neutralino case, first-generation experiments exist, and improvements to reach the needed sensitivity are feasible.

Finally, the most intriguing and most recent hint for new physics from cosmology is the observation that the expansion of the universe is speeding up, rather than slowing down. If correct, this indicates the presence of a mysterious energy form—dark energy—that pervades the universe with a gravitational effect that is repulsive rather than attractive. While particle physics has had much to say about dark matter, thus far it has shed little or no light on dark energy. Nonetheless, it seems clear that a fundamental understanding of this new form of energy will require new physics. Dark energy and dark matter are discussed at greater length in Chapter 5.

Theoretical Questions and Insights

Theoretical physicists have long sought to extend the range of applicability of their theories, synthesize the explanations of diverse physical phenomena, and unify the underlying principles. After the towering achievements of the last century, briefly reviewed in the previous sections, there is better material to work with than ever before—a remarkably detailed, specific, and powerful theory of matter, and a beautiful, fruitful theory of space-time. Can they be joined together?
There are good reasons to be optimistic. This discussion has reviewed how the unification of interaction strengths could arise, despite their different observed values, as a result of the effects of quantum corrections. The underlying equality of the strong, weak, and electromagnetic couplings emerges only when they are extrapolated to very high energy. Extending this calculation to include the gravitational coupling as well yields a delightful surprise: The extrapolated gravitational coupling meets the others, at nearly the same high energy (see Figure 2.4)! Is nature hinting at unification of all forces?
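The extrapolation of coupling strengths described above can be sketched with the standard one-loop renormalization-group formula, 1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2 pi) ln(mu / M_Z). The Python example below uses textbook beta coefficients for the supersymmetric extension of the Standard Model and rounded input values at the Z mass; it ignores particle thresholds, so it is an order-of-magnitude illustration rather than a precision analysis.

```python
# One-loop running of the three gauge couplings, using the beta coefficients
# of the minimal supersymmetric extension of the Standard Model, to
# illustrate the near-meeting of the inverse couplings at very high energy.
# Inputs are rounded textbook values; thresholds are ignored (a sketch, not
# a precision fit).
import math

M_Z = 91.19                       # Z boson mass, GeV
ALPHA_INV_MZ = (59.0, 29.6, 8.5)  # 1/alpha_1 (GUT-normalized), 1/alpha_2, 1/alpha_3 at M_Z
B_MSSM = (33.0 / 5.0, 1.0, -3.0)  # one-loop beta coefficients b_1, b_2, b_3

def alpha_inv(mu_gev):
    """Inverse couplings at scale mu: 1/a_i(mu) = 1/a_i(MZ) - b_i/(2 pi) ln(mu/MZ)."""
    t = math.log(mu_gev / M_Z)
    return [a - b / (2.0 * math.pi) * t for a, b in zip(ALPHA_INV_MZ, B_MSSM)]

for mu in (1e3, 1e10, 2e16):
    vals = alpha_inv(mu)
    print(f"mu = {mu:8.1e} GeV -> 1/alpha = " + ", ".join(f"{v:5.1f}" for v in vals))
```

With these supersymmetric coefficients, the three inverse couplings converge to nearly the same value (about 24) near 2 x 10^16 GeV; with the nonsupersymmetric Standard Model coefficients they come close but visibly fail to meet at a single point.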

The most ambitious and impressive attempts to construct a unified space-time-matter theory involve an evolving set of ideas known variously as string theory, superstring theory, or M theory. Here the term “string theory” is used to denote the entire complex of ideas. String theory is not yet fully developed; so far, no specific predictions about the physical world have emerged. But even the current partial understanding suggests to many physicists that string theory may be on the right track. This report is not able to do justice to what has become a large and mathematically sophisticated body of work; it confines itself to a few brief indications.

String theory takes as its starting point the problem of constructing a consistent quantum theory of strings (as the progenitors of “elementary” particles). Remarkably, this theory predicts the existence of gravity. Moreover, the resulting theory of gravity, unlike conventional general relativity, does not suffer from the problem of infinite quantum corrections. Further, it appears that string theory avoids the apparent paradox associated with Hawking radiation, by showing that the radiation emitted from black holes is not at all random. Thus string theory offers the only known solution to two major theoretical problems that emerge when quantum mechanics is applied to gravity. Clearly, this is a direction to be pursued.

String theory is most naturally formulated in 10 or 11 space-time dimensions; it cannot be made consistent using only the observed 4. In constructing models of the physical world one must assume that most of these dimensions somehow curl up, leaving the familiar 4 (3 space, 1 time) extended dimensions. At first this may sound artificial, but many solutions of the theory having this behavior are known. Some even contain particles and interactions that broadly resemble the Standard Model, and they can incorporate low-energy supersymmetry, unification of couplings, and axions.
Unfortunately, there are also apparently equally valid solutions that do not have these features. No completely satisfactory theoretical reason for preferring one model to another has yet emerged. Nor is any single known solution empirically satisfactory in all respects.

A key feature of string theory is supersymmetry, the symmetry that relates matter particles and the force carriers (see Box 2.4). While many aspects of string theory do not easily lend themselves to testing, supersymmetry’s prediction of the doubling of the number of elementary particles is eminently testable and the quest of the next generation of particle accelerators.

Finally, any theory of space-time-matter must address what seems at present to be the most profoundly mysterious question in physical science. Researchers know that the vacuum state is anything but trivial: it is populated with virtual particles and symmetry-breaking condensates. One might think all this structure would contain energy. The definition of zero energy can be arbitrarily adjusted in many theories, but once the adjustment is made in one epoch of the universe it cannot be altered. One would therefore expect the effects of quantum corrections to give a vacuum energy in all epochs. Indeed, as argued above, this can account for the early inflationary epoch. Straightforward estimates of the expected scale of this energy in the present epoch give values far in excess of what is allowed experimentally. This is called the problem of the cosmological constant, because the mathematical description of the energy of the vacuum is equivalent to the cosmological constant originally introduced by Einstein to keep the universe from expanding or contracting. The discrepancy, depending on how the estimates are made, is at least a factor of 10^55, and it indicates a major gap in understanding of the vacuum and gravity.

Until very recently, it seemed reasonable to hope that some yet-undiscovered symmetry would require that all the sources of energy cancel, so that empty space would have exactly zero energy today. But recent measurements indicate that the energy of the vacuum, while absurdly small compared with the theoretical estimates mentioned above, is not zero (see Chapter 4). Indeed, it seems to contribute about twice as much to the average energy density of the universe as all matter (ordinary plus “dark”). For the other problems mentioned here, physicists have identified some very promising lines of attack. But for the cosmological constant problem, some fundamentally new idea seems required.

Many of the challenging questions today could not even be formulated only a few years ago. The experimental and observational data and techniques at hand today are beginning to provide access to information directly relevant to our questions.
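The size of the cosmological constant discrepancy discussed above can be reproduced with back-of-the-envelope numbers. The sketch below (Python; all values rough and illustrative) compares the observed vacuum energy density, whose fourth root is of order a thousandth of an electronvolt, with the naive quantum estimate cutoff^4 for several choices of cutoff scale; even a TeV-scale cutoff already gives a mismatch of more than a factor of 10^55, and a Planck-scale cutoff far more.

```python
# Order-of-magnitude sketch of the cosmological constant problem: compare
# the observed vacuum energy density (fourth root ~ 2e-3 eV, in natural
# units where energy density has dimensions of energy^4) with the naive
# quantum estimate cutoff^4 for several cutoff scales. All numbers are
# rough, illustrative values.
import math

RHO_OBSERVED = (2.3e-12) ** 4     # GeV^4, from rho^(1/4) ~ 2.3e-12 GeV

CUTOFFS_GEV = {
    "electroweak (~1 TeV)": 1.0e3,
    "grand unification":    2.0e16,
    "Planck scale":         1.2e19,
}

for name, cutoff in CUTOFFS_GEV.items():
    ratio = cutoff**4 / RHO_OBSERVED
    print(f"{name:22s}: estimate/observed ~ 10^{math.log10(ratio):.0f}")
```

The mismatch grows from roughly 10^59 for a TeV cutoff to more than 10^120 for a Planck-scale cutoff, bracketing the "at least 10^55" figure quoted in the text and underscoring why this is regarded as the deepest open problem in the field.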
Rapid progress toward better data can be anticipated. It is an exciting time for this area of science, which has blossomed through the overlapping interests of physics and astronomy.