If the current data are taken at face value (low Ωm, globular cluster ages of ≈15 Gyr), then even for “low” values of H0, ≈50 km s−1 Mpc−1, there appears to be a conflict with the standard Einstein-de Sitter model. One way out of these difficulties is to introduce a nonzero value of the cosmological constant, Λ. In fact, it is precisely the resolution of these problems that has led to a recent resurgence of interest in a nonzero value of Λ [e.g., Ostriker and Steinhardt (2) and Krauss and Turner (3)]. On the basis of purely theoretical considerations, a very low value of H0 (≤30 km s−1 Mpc−1) could also resolve these issues [e.g., Bartlett et al. (4)]. However, the central critical issues now are (and in fact have always been) testing for and eliminating sources of significant systematic error in the measurements of cosmological parameters. These issues, including a discussion of age determinations, are treated in more detail in a very recent review by Freedman (5), and the current text summarizes that discussion.
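The age conflict above follows from simple arithmetic. In an Einstein-de Sitter universe (Ωm=1, Λ=0) the expansion age is t0 = (2/3)/H0. A minimal sketch, using the “low” H0 = 50 km s−1 Mpc−1 quoted above (the unit conversions are standard values, supplied here for illustration):

```python
import math

# Expansion age of an Einstein-de Sitter universe: t0 = (2/3) / H0.
H0 = 50.0                      # km/s/Mpc, the "low" value quoted in the text
KM_PER_MPC = 3.086e19          # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16         # seconds in one gigayear

hubble_time_gyr = (KM_PER_MPC / H0) / SEC_PER_GYR  # 1/H0 expressed in Gyr
eds_age_gyr = (2.0 / 3.0) * hubble_time_gyr

print(f"Einstein-de Sitter age for H0 = {H0:.0f}: {eds_age_gyr:.1f} Gyr")
# ~13 Gyr, younger than the ~15 Gyr globular-cluster ages: the conflict.
```

Even at H0 = 50, the model universe comes out younger than its oldest stars; higher H0 only sharpens the discrepancy.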
A wide range of different techniques has been developed for measuring the matter density of the Universe. These techniques apply over a wide range of scales, from galaxy (≈100–200 kpc), through cluster (Mpc), on up to more global scales (redshifts of a few). Excellent recent articles summarizing measurements of Ωm have been published by Dekel et al. (6) and Bahcall et al. (7). On scales from galaxies up to clusters of galaxies, matter density estimates have been made by a number of techniques, including mass-to-light ratios, dynamics of satellite galaxies, x-ray measurements, weak lensing, estimates of the baryon fraction, and pairwise and peculiar velocities (see refs. 6 and 7 and references therein). Most of these determinations are consistent with a value of Ωm ≈ 0.2–0.3. On larger scales (up to 300 Mpc), the situation is less clear. In measurements of the peculiar velocities of galaxies, differences both in the analysis of the Tully-Fisher data and in the models lead independent groups to very different conclusions, with estimates of Ωm ranging from about 0.2 to 1.3. Understanding the sources of these disagreements is clearly an important goal.
Because lower values of the matter density tend to be measured on smaller spatial scales, the suspicion has arisen that the true global value of Ω0 must be measured on scales beyond even those of large clusters (i.e., scales greater than ≈100 Mpc) [e.g., Dekel (8)]. In that way, one might reconcile the low values of Ωm inferred locally with a spatially flat Universe. However, recent studies (7) suggest that the mass-to-light ratios of galaxies do not continue to grow beyond a scale of about 200 kpc (corresponding to the sizes of the large halos of individual galaxies). In their analysis of the dynamics of 16 rich clusters, Carlberg et al. (9) also see no further trend with scale. Hence, the observational evidence does not currently indicate that measurements of Ωm on cluster scales are biased to lower values than the true global value. Clearly, determining whether there is a smooth component to the matter density on the largest scales is a critical issue that must be definitively resolved.
It is important to keep in mind that all of these methods are based on a number of assumptions. Although in many cases 95% confidence limits are quoted, these estimates must ultimately be evaluated in the context of the validity of their underlying assumptions. It is nontrivial to assign a quantitative uncertainty in many cases, but in fact systematic effects may be the dominant source of uncertainty. The assumptions include, for example, that mass traces light, that mass-to-light ratios are constant, that clusters are representative of the Universe, that the x-ray gas is not clumped, and that type Ia supernovae and elliptical galaxies do not evolve. For methods that operate over very large scales (gravitational lensing and type Ia supernovae), assumptions about ΩΛ or Ωtotal are currently required to place limits on Ωm.
The subject of the cosmological constant Λ has had a long and checkered history in cosmology. There are many reasons to be skeptical about a nonzero value of the cosmological constant. First, there is a discrepancy of ≥120 orders of magnitude between current observational limits and estimates of the vacuum energy density based on current standard particle theory (e.g., ref. 1). Second, it would require that we are now living at a special epoch when the cosmological constant has begun to affect the dynamics of the Universe (other than during a time of inflation). Third, it is difficult to ignore the fact that historically a nonzero Λ has been called upon to explain a number of other apparent crises, and adding free parameters to a problem always makes it easier to fit data. Finally, the oft-repeated quote from Einstein to Gamow about his “biggest blunder” continues to undermine the credibility of a nonzero value for Λ.
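The ≥120-orders-of-magnitude figure can be illustrated by comparing the Planck density, a natural particle-physics scale for the vacuum energy, against a vacuum density of order the observed critical density. The fiducial H0 = 70 km s−1 Mpc−1 below is an assumption chosen for illustration, not a value from the text:

```python
import math

# Planck density: the natural vacuum-energy scale from particle theory.
hbar = 1.055e-34   # J s
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
rho_planck = c**5 / (hbar * G**2)            # kg/m^3

# Critical density for an assumed (illustrative) H0 of 70 km/s/Mpc.
H0 = 70.0 * 1000.0 / 3.086e22                # converted to 1/s
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)  # kg/m^3

ratio = rho_planck / rho_crit
print(f"rho_Planck / rho_crit ~ 10^{math.log10(ratio):.0f}")
# The mismatch exceeds 120 orders of magnitude, as quoted above.
```

The exact exponent depends on which particle-physics cutoff is assumed; any reasonable choice leaves a discrepancy of at least ~120 decades.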
However, despite the very persuasive arguments that can be made for Λ=0, there are solid reasons to keep an open mind on the issue. First, at present there is no known physical principle that demands Λ=0. [In fact, unbroken supersymmetry would provide such a mechanism, but supersymmetry is known to be broken (10).] Second, unlike the case of Einstein’s original arbitrary constant term, standard particle theory and inflation now provide a physical interpretation of Λ: it is the energy density of the vacuum (e.g., ref. 10). Third, if theory demands Ωtotal=1, then a number of observational results can be explained with a low Ωm and Ωm+ΩΛ=1:
For instance, the observed large-scale distribution of galaxies, clusters, large voids, and walls is in conflict with that predicted by the (standard) cold dark matter model for the origin of structure [e.g., Davis et al. (11) and Peacock and Dodds (12)].
In addition, the low values of the matter density measured by the methods described above can be accommodated, and the discrepancy between the ages of the oldest stars and the expansion age can be resolved. Perhaps the most important reason to keep an open mind is that this is an issue that ultimately must be resolved by experiment.
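The age resolution can be made quantitative: for a flat universe with Ωm+ΩΛ=1, the expansion age is t0 = (2/(3H0√ΩΛ)) sinh⁻¹(√(ΩΛ/Ωm)). A sketch with an assumed fiducial Ωm = 0.3 (illustrative, not a value from the text) and the H0 = 50 km s−1 Mpc−1 quoted earlier:

```python
import math

# Age of a flat Lambda universe:
#   t0 = (2 / (3 H0 sqrt(Omega_L))) * asinh(sqrt(Omega_L / Omega_m))
H0 = 50.0                      # km/s/Mpc, as in the text
KM_PER_MPC = 3.086e19
SEC_PER_GYR = 3.156e16
hubble_time_gyr = (KM_PER_MPC / H0) / SEC_PER_GYR

omega_m = 0.3                  # assumed fiducial matter density
omega_l = 1.0 - omega_m        # flatness: Omega_m + Omega_Lambda = 1

t0 = (2.0 / (3.0 * math.sqrt(omega_l))
      * math.asinh(math.sqrt(omega_l / omega_m))
      * hubble_time_gyr)
print(f"Flat-Lambda age (Omega_m = {omega_m}, H0 = {H0:.0f}): {t0:.1f} Gyr")
# ~19 Gyr, comfortably older than ~15 Gyr globular clusters.
```

The same Hubble constant that yields a too-young Einstein-de Sitter universe gives an expansion age well above the globular-cluster ages once ΩΛ ≈ 0.7 is admitted.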
The importance of empirically establishing whether there is a nonzero value of Λ cannot be overemphasized. However, it underscores the need for high-accuracy experiments: aspects of the standard model of particle theory have been tested in the laboratory to precisions unheard of in most measurements in observational cosmology. Nevertheless, cosmology offers an opportunity to test the standard model over larger scales and higher energies than can ever be achieved by other means. It scarcely needs to be said that overthrowing the Standard Model (i.e., claiming a measurement of a nonzero value for Λ) will require considerably higher accuracy than is currently available.
In the next section, limits on Λ based on the observed numbers of quasars multiply imaged by galaxy “lenses” are briefly discussed.
Fukugita et al. (13, 14) and Turner (15) suggested that a statistical study of the number density of gravitational lenses could provide a powerful test of a nonzero Λ. The basic idea behind this method is simple: for larger values of ΩΛ, there is a greater probability that a quasar will be lensed because the volume over a given redshift interval is increased. In a flat universe with a value of ΩΛ=1, approximately an order of magnitude more gravitational lenses are predicted than in a universe with ΩΛ=0 (15). In practice, however, there are a