Abstracts of Additional Sessions of the Frontiers of Science Symposia
Many applications of lasers and fiber optics depend on molecules and crystals that show nonlinear optical activity. Such activity can involve doubling or tripling the frequency of laser light, so that a laser of standard type can produce light at a wavelength better suited to its intended use. Another type of activity rotates the plane of polarization of light in response to an applied electric field (the Pockels effect), allowing an optical signal to be modulated or switched.
In preparing materials that show nonlinear optical activity, a standard approach has featured the use of polarizable molecules. These have an electron donor at one end and an electron acceptor at the other, separated by a bridge. A bridge is an organic structure containing both single and double carbon-to-carbon bonds.
Seth Marder, of the Jet Propulsion Laboratory, has been seeking better molecules by optimizing properties of the bridge. He defines a quantity, bond length alternation (BLA), as the difference in length between adjacent bonds within the bridge.
In conventional materials BLA is approximately 0.10 angstrom, and Marder finds that this value is too large for optimal performance of the donor-acceptor dye. Using computational chemistry, he finds that an index of nonlinear activity reaches its maximum when BLA = 0.04 angstrom. Standard techniques of organic chemistry then permit synthesis of molecules whose bridges show this value.
Commonly used polymers include stilbenes such as dimethylamino nitro stilbene (DANS), with an index value of 466. Marder has prepared counterparts, using thiobarbituric acid acceptors, with an index as high as 19,000. In implementing the Pockels effect, this increase would reduce the necessary drive voltage from 200 volts to 4 or 5.
For engineering improved bulk crystalline materials, following a suggestion in the literature from Gerald Meredith (now at Dupont), Marder has studied organic salts. He finds particular promise in dimethylamino methylstilbazolium tosylate (DAST); its frequency-doubling efficiency is 20 times that of lithium niobate, a commonly used crystal. Moreover, its electrooptical properties are superb and may lead to entirely new types of devices. This enhancement results from the almost ideal orientation of the molecules in the crystal lattice.
A separate topic in materials science features the search for principles of chemistry that can permit construction of an organic ferromagnet. The rationale lies in observing that physicists today identify some 14 distinct magnetic states of matter, representing modes of ordering of electron spin. Five of these are known classically, with the prefixes ferro-, antiferro-, ferri-, para-, and dia-. Most of the rest have been discovered only since about 1975 and merit further research. Because organic chemistry offers great freedom in modifying molecular structures, it suggests the prospect of creating families of related materials that display novel types of magnetism in varying degrees. An organic ferromagnet then represents a first step, an initial problem in this area.
Difficulties exist in the chemical syntheses, and characterization often demands liquid helium temperatures. Nevertheless, standard rules permit selection of candidate molecular radicals that display the fundamental ferromagnetic property of having two unpaired electrons with spin parallel. Dennis Dougherty of the California Institute of Technology, working with colleagues, has tested these candidates' suitability by combining them with trimethylenemethane, which also has spin-parallel electrons. This work shows that m-phenylene and cyclobutane offer particular promise.
As a next step, these chemists have inserted units of m-phenylene, a 1,3-substituted benzene, within long-chain molecules of polyacetylene. The resulting short runs of polyacetylene can be chemically treated—doped—to remove one electron. The treated segments are called polarons, and they also have spins. In addition, they are stable at room temperature. A variant of this material shows properties indicating that within each such polymer molecule, and in the presence of an external magnetic field, an average of nine polaronic spins align ferromagnetically. This stands as a step toward a true ferromagnet, for which all spins within a bulk sample would align, even without the external field.
Dougherty's polymers are paramagnets rather than ferromagnets. Researchers in Japan have created a true organic ferromagnet, but it is a molecular, rather than a polymeric, system that is ferromagnetic only below 0.65 Kelvin. By contrast, iron shows ferromagnetism up to 1044 Kelvin. Thus, while progress is being made, many challenges remain for the field of organic magnetism.
A protective ozone layer exists within the stratosphere. If brought to sea level, the earth's ozone would form a band only 3 millimeters thick. Nevertheless, this diffuse gas suffices to screen out solar ultraviolet, which is hazardous to life. Hence there has been a great deal of concern over the ozone hole, a seasonal reduction in atmospheric content of this gas, over the Antarctic, by as much as 95 percent at some altitudes. This reduction was first seen in ground-based measurements, around 1985, and has since been confirmed repeatedly through satellite observations.
Several theories have sought to account for this hole, which reappears annually, during the south polar spring. It might result from atmospheric dynamics, with ozone-poor air rising from below to dilute the air of the stratosphere. Alternatively, the ozone hole might result from increased solar activity during the 11-year sunspot cycle, for energetic solar particles might stimulate formation of nitrogen oxides, which destroy ozone. A third theory holds that the ozone hole results from human activities: chlorofluorocarbons (CFCs) rise slowly into the stratosphere, where solar radiation breaks the molecules apart, releasing reactive chlorine, which then attacks ozone in catalytic cycles.
Mario Molina of the Massachusetts Institute of Technology, a coauthor of the original 1974 paper on the CFC theory, notes that these
three explanations have been subjected to observational test. The atmospheric dynamics theory fails because upwelling air would carry significant amounts of trace constituents such as methane and nitrous oxide, which are not found. The solar cycle theory also fails: it implies that the antarctic stratosphere should contain excessive amounts of nitrogen oxides, whereas the measured amount is less than expected.
The third theory, that of CFCs, holds particularly that chlorine oxide, ClO, should serve as a reactive intermediate in catalyzing the breakdown of ozone. Each ClO molecule should destroy a substantial number of ozone molecules, being regenerated after each such reaction. Aircraft observations indeed show that in late August, during the antarctic winter, elevated concentrations of ClO exist at high southern latitudes. Three weeks later, the zone of high ClO also shows drastically lowered concentrations of ozone. This gives strong support to the CFC theory.
In making these observations, ground- and satellite-based work finds a complement in flights of the ER-2, an instrumented research airplane that can cruise at 20 kilometers altitude. David Fahey, of the National Oceanic and Atmospheric Administration's Aeronomy Laboratory, notes that such flights have also been of value in studying ozone in both the northern and southern hemispheres. Various changes have been noted. In the northern hemisphere, for example, the reduction has approached 5 percent in the temperate latitudes during winter. At Oslo, Norway, it has at times topped 30 percent.
Fahey emphasizes that rapid ozone destruction in polar regions involves the presence of polar stratospheric clouds, which form below temperatures of 195 Kelvin and provide surfaces on which chlorine-liberating reactions take place. Where such temperatures exist, as he puts it, "you see ClO turn on like a lightbulb, reaching one and a half parts per billion"—a very high concentration.
These findings have contributed to major efforts aimed at doing away with CFCs. They are valuable commodities, which have seen wide use as air-conditioning and refrigeration coolants, in producing styrofoam and similar materials, and in cleaning electronic circuit boards. However, international agreements now mandate that they must be phased out. Replacements could include chemically related substances that break down more readily when released to the atmosphere, so that far fewer of their molecules would reach the stratosphere. Some of these replacements are to be chlorine-free, further reducing the danger. There also is interest in avoiding chemicals of this type altogether.
Thus, in pressurized spray cans, hydrocarbons have replaced the CFCs that formerly served as propellants.
This does not mean the ozone problem is at an end or will be soon. CFC molecules are extremely long lived and stable, while the processes that transport them to the stratosphere, and that destroy them once they are there, take decades to operate. Indeed, Fahey notes that atmospheric concentrations of CFCs are projected to stay above "pre ozone" values until mid to late next century.
A topologist has been described as a mathematician who can't tell the difference between a coffee cup and a doughnut. Both are three-dimensional shapes with a single hole, and hence each can be deformed, or mapped point for point, into the other. This relationship is called homotopy equivalence. A basic problem in topology is to classify all spaces up to homotopy equivalence.
A point of departure lies in work by J. H. C. Whitehead, who introduced the concept of a CW complex: a space built up by attaching cells along maps from n-dimensional spheres. A torus is such a space; it then can be mapped to either the doughnut or the cup. Whitehead proved a fundamental theorem: every "interesting" topological space is homotopy equivalent to a CW complex. It follows that a solution to the basic problem lies in writing down all maps from spheres to other spheres, in n dimensions.
We require all spaces to have a basepoint, which remains fixed under mapping. This permits defining an addition of two maps; this additive structure then leads to the definition of groups π_k(S^n). Here S^n is the n-sphere, and π_k(S^n) is the set of maps from S^k to S^n, with two maps regarded as identical if they are homotopic. For fixed k and n > k + 1, the groups π_{n+k}(S^n) are all the same and are known collectively as the kth stable homotopy group of spheres.
A focus of attention then is the chromatic tower, which breaks the homotopy groups of spheres into "monochromatic" layers, one for each integer n > 0. The first such layer draws on work by Hurewicz, who introduced a method for constructing maps between spheres. The part of the homotopy groups of spheres constructed with this method is called the image of J, and this appears as the n = 1 monochromatic layer.
The second monochromatic layer was calculated by the topologist Katsumi Shimomura in 1986. Some of its properties are encoded within a diagram whose bottom half is symmetric with the top half, with the numerical information in the two halves related in a simple fashion, through a shifting factor. Analogous diagrams exist for the nth monochromatic layer; they too are symmetric and have known shifting factors.
Nevertheless, this offers no more than a partial solution to the basic problem, which again is to classify spaces up to homotopy equivalence. Michael Hopkins of the Massachusetts Institute of Technology notes that even within the Shimomura diagram "there are many suggestive patterns, but there is no really good theory. One of the most important problems in homotopy theory today is to find a theory that predicts the qualitative features of this diagram, and its analogs for the higher chromatic layers."
Another topic in topology involves knots and unknots. An unknot is a simple closed curve in three-space that can be deformed into a round circle. Alan Hatcher has shown that the deforming or untangling is continuous and involves no choices. A knot lacks this property; it cannot untangle into the simple loop of a circle.
Michael Freedman and Zheng-Xu He define the energy E of a closed curve by introducing a 1/r² potential. Two similar curves have equal energy, and if two strands come close, as if to cross, then E blows up. A curve then can untangle by following the gradient of E. Freedman and He prove a theorem: if E < C, where C is a constant, the curve is an unknot. C is not well known beyond the bound C > 22; Freedman says it could be about 70. For a round circle, E = 4; hence, only unknots exist for 4 ≤ E < C.
A related issue is the average number of crossings that a curve makes when one projects its shape onto a randomly oriented plane. If the curve forms a knot, the number is at least 3; you can see this with a loop of string. Both for knots and unknots, Freedman and He find that this number has an upper bound in terms of energy: 11E/(12π) + 1/π. This theorem draws on the work of Gauss.
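The crossing count is easy to estimate numerically. The sketch below is our own illustration, not material from the session: it builds a polygonal trefoil knot from a standard parametrization, projects it onto randomly oriented planes, and counts the crossings of the resulting planar diagram. Every projection of a knot must show at least 3 crossings.

```python
import numpy as np

def trefoil(n=200):
    """Polygonal approximation of a trefoil knot (a standard parametrization)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([np.sin(t) + 2 * np.sin(2 * t),
                     np.cos(t) - 2 * np.cos(2 * t),
                     -np.sin(3 * t)], axis=1)

def project(points, direction):
    """Project 3-D points onto the plane orthogonal to `direction`."""
    d = direction / np.linalg.norm(direction)
    # Build an orthonormal basis for the projection plane.
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    return np.stack([points @ e1, points @ e2], axis=1)

def crossings(p2d):
    """Count crossings among the edges of a closed polygon in the plane."""
    def orient(a, b, c):
        return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    n = len(p2d)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or (i == 0 and j == n - 1):
                continue  # adjacent edges share a vertex, not a crossing
            a, b = p2d[i], p2d[(i + 1) % n]
            c, d = p2d[j], p2d[(j + 1) % n]
            if orient(a, b, c) != orient(a, b, d) and orient(c, d, a) != orient(c, d, b):
                count += 1
    return count

rng = np.random.default_rng(1)
counts = [crossings(project(trefoil(), rng.normal(size=3))) for _ in range(5)]
print(counts)  # each count is at least 3, as required for any knot
```

Averaging such counts over many random directions approximates the average crossing number that the Freedman-He bound constrains.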
The energy criterion for an unknot recalls a similar result due to John Milnor. Milnor defines a total curvature of a loop, T. For a round circle, T = 2π. Milnor has shown that for T < 4π, the loop or curve is an unknot. This then raises a question: In addition to the energy and total curvature, do other integral quantities exist whose values distinguish a knot from an unknot?
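Milnor's criterion is just as easy to test for polygonal curves, where total curvature is the sum of the turning angles at the vertices. The sketch below is our own illustration (the sample curves are not from the session): a polygonal circle gives T = 2π, while a polygonal trefoil knot must give T > 4π.

```python
import numpy as np

def total_curvature(points):
    """Sum of turning angles of a closed polygon (discrete total curvature)."""
    edges = np.roll(points, -1, axis=0) - points  # edge i runs from vertex i to i+1
    total = 0.0
    for i in range(len(points)):
        u, v = edges[i - 1], edges[i]             # incoming and outgoing edges
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        total += np.arccos(np.clip(cos, -1.0, 1.0))
    return total

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
knot = np.stack([np.sin(t) + 2 * np.sin(2 * t),
                 np.cos(t) - 2 * np.cos(2 * t),
                 -np.sin(3 * t)], axis=1)          # polygonal trefoil

print(total_curvature(circle) / np.pi)  # 2: a convex planar loop has T = 2*pi
print(total_curvature(knot) / np.pi)    # exceeds 4, as Milnor's theorem requires for a knot
```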
QUANTUM CONFINED SEMICONDUCTORS
In semiconductor physics a significant topic involves preparation of materials in unusually small volumes and thin layers and using them to fabricate new devices. At AT&T Bell Labs, Louis Brus has directed
studies of crystallites, nanometer-size particles containing 10³ to 10⁴ atoms. These have high percentages of surface atoms and offer properties intermediate between those of single molecules and bulk crystals.
Michael Steigerwald has produced crystallites of CdSe, a II–VI compound, by conducting the synthesis within a solution of surfactant or soap. The crystallites form within soap bubbles; their sizes are controllable and range from 15 to 60 angstroms, ± 15 percent or better for any batch. When these crystallites are coated with small organic molecules, they can be removed and exhibited.
Other investigators have prepared similar crystallites of other II–VI compounds, as well as of III–V compounds such as GaAs and I–VII compounds such as AgCl. A general rule holds: smaller particles give larger "bandgaps." In Steigerwald's CdSe, for instance, this leads to changes in the color of the crystallite powder, from yellow to orange to red with increasing particle size. Current experiments feature particles that are all of the same size, and Steigerwald has synthesized macromolecules, for example, containing lattice arrays of 20 nickel atoms and 18 tellurium atoms, that have this property.
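The rule that smaller particles give larger bandgaps can be illustrated with the effective-mass model that Brus introduced for such crystallites. The sketch below is our illustration, not material from the session; the CdSe parameters (bulk gap, effective masses, dielectric constant) are typical literature values, and the simple model greatly overestimates the shift for the smallest particles, capturing only the qualitative trend.

```python
import math

HBAR = 1.0546e-34   # J*s
M0 = 9.109e-31      # free-electron mass, kg
E_CH = 1.602e-19    # elementary charge, C
EPS0 = 8.854e-12    # vacuum permittivity, F/m

# Assumed typical CdSe parameters (literature values, not from the session):
EG_BULK = 1.74                   # bulk bandgap, eV
ME, MH = 0.13 * M0, 0.45 * M0    # electron and hole effective masses
EPS_R = 10.6                     # dielectric constant

def brus_gap_ev(radius_m):
    """Effective-mass (Brus) estimate of a crystallite's bandgap in eV."""
    confinement = (HBAR**2 * math.pi**2 / (2 * radius_m**2)) * (1 / ME + 1 / MH)
    coulomb = 1.8 * E_CH**2 / (4 * math.pi * EPS0 * EPS_R * radius_m)
    return EG_BULK + (confinement - coulomb) / E_CH

for d_angstrom in (15, 30, 60):   # the size range quoted for Steigerwald's batches
    r = d_angstrom * 1e-10 / 2
    print(d_angstrom, "angstroms:", round(brus_gap_ev(r), 2), "eV")
```

The confinement term grows as 1/R², so the computed gap widens steadily as the particle shrinks, consistent with the yellow-to-red color progression with increasing size.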
At Caltech a team of researchers has discovered several different routes to the synthesis of GaAs and silicon nanocrystals based on gas-phase nucleation. In certain cases, wholly new effects occur as size is reduced. For example, some silicon nanocrystals luminesce. This is remarkable because bulk silicon does not exhibit photoluminescence. According to researchers involved with this project, the ability to make individual optically active monocrystals may help to resolve another mystery: the origin of optical luminescence from porous silicon generated by chemical etching.
In practical device fabrication it often suffices to grow polycrystalline layers on a substrate. Metal conducting paths are examples. But one achieves specialized electronic properties using epitaxial thin films, which take their crystal structure from that of the underlying substrate. Epitaxial growth demands a close crystallographic match between layer and substrate and tight control of substrate temperature, vacuum, and deposition rate. The substrate also must be atomically clean. Meeting these conditions, however, permits growth of multilayer structures featuring different materials, which have the crystallography of single crystals.
Two basic classes of deposition techniques are in use: physical vapor deposition (PVD) and chemical vapor deposition (CVD). Liquid-phase epitaxy is a third method for depositing thin films or layers of semiconducting crystals on substrates or other crystals. PVD is well suited for
processing individual wafers. It demands high vacuum and delivers atoms or molecules directly to the substrate. A laser may evaporate the source material (laser ablation); an energetic beam of inert atoms may sputter this material from a block; or it may simply boil away to form a molecular beam, which the user turns on and off by opening and closing a shutter.
CVD techniques take place at near-atmospheric pressures and are well suited for batch processing of numerous wafers. CVD delivers chemical precursors in gaseous form; these are organic compounds that break down at the surface to yield the desired species. Both CVD and PVD can grow epitaxial layers at rates of about one atomic layer per second; layers one atom thick are achievable. Resulting devices, which rely on epitaxial films, include very fast transistors, tiny solid-state lasers, and advanced solar cells.
Such fabrication techniques also permit creation of novel semiconductor structures. Quantum wells, known since the 1970s, attract electrons and confine them in two-dimensional sheets. Quantum wires are the electronic analog of a single-mode optical fiber. There also are quantum dots, 10 to 20 nanometers in diameter, which confine electrons in zero dimensions. These clusters of atoms have quantized electron energy states.
A topic of current research is the pursuit of bulk semiconductor materials possessing large densities of quantum wires or dots, which are to be approximately uniform in size.
At Caltech another approach to fabrication of quantum structures takes the precise thickness control for one-dimensional layers that has been developed for formation of quantum wells and extends it into two and three dimensions to fabricate quantum wires and quantum dots. Kerry Vahala says that nanocrystals can be coaxed into locally nucleating on a specially prepared substrate. These nuclei become quantum dots or wires that can be easily embedded into a higher-bandgap host material during the growth process. Vahala's group is currently working toward incorporation of arrays of dots and wires in an optical wave guide that would be suitable for use as a laser or optical amplifier. Such devices should exhibit greatly improved efficiency and have a broad range of applications.
Mass extinctions have occurred repeatedly over the past 600 million years. Major events occurred at the Permian-Triassic boundary, some 250 million years before the present (myr), and at the Cretaceous-Tertiary
(KT), 65 myr. Lesser extinctions include the late Cambrian, 510 myr to 520 myr; the Frasnian-Famennian at 365 myr; the Triassic-Jurassic, 210 myr; the late Cenomanian at 91 myr; and the late Eocene, 34 myr.
It is not clear how many have been due to asteroid or comet impacts. The Permian-Triassic event, for example, was the most severe. It wiped out 90 percent or more of all species, including half the families of marine invertebrates. However, very little is known about this mass extinction.
At the Cretaceous-Tertiary boundary (KTB), by contrast, extinctions were somewhat less severe. Yet there is abundant evidence for a bolide impact: iridium, shocked quartz, spherules, and tektites. In addition, the extinctions evidently were quite sudden. Planktonic organisms, notably coccoliths and Foraminifera, flourished right up to the boundary before being cut off. The paleontologist Peter Sheehan also finds no falloff in the abundance and diversity of dinosaurs during the last 3 million years of the Cretaceous. At the KTB, though, the dinosaurs also go extinct.
Jan Smit of the Free University of Amsterdam, along with Walter Alvarez and his colleagues Alessandro Montanari and Nicola Swinburne of the University of California at Berkeley, has explored the candidate impact site: a 300-kilometer-diameter crater centered near the town of Chicxulub in northern Yucatan. Alvarez and co-workers propose that the impact took place on land but produced a kilometer-high tsunami in the adjacent Gulf of Mexico, as ejecta fell into the water. They find evidence for a backwash from this immense wave in a thick deposit at the KTB that contains abundant plant remains. Smit describes these as driftwood from coastal swamps, swept into the sea. They also find ripple marks, indicative of surface waves, at depths below 400 meters. This suggests that the great wave sloshed back and forth within the Gulf. Alvarez describes these findings as consistent with the impact of an object with a diameter of 10 kilometers, traveling at tens of kilometers per second. The bolide would have struck with an energy of 10⁸ megatons, some 10,000 times greater than that of the world's nuclear arsenal.
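The 10⁸-megaton figure follows from straightforward kinetic-energy arithmetic. The sketch below is our back-of-envelope check; the assumed density and impact speed are typical values for a rocky body, not figures given at the session.

```python
import math

# Assumed typical values (not from the session):
DIAMETER_KM = 10.0      # "an object with a diameter of 10 kilometers"
DENSITY = 3000.0        # kg/m^3, rocky body
SPEED = 20e3            # m/s, "tens of kilometers per second"
MEGATON_J = 4.184e15    # joules per megaton of TNT

radius_m = DIAMETER_KM * 1e3 / 2
mass_kg = DENSITY * (4.0 / 3.0) * math.pi * radius_m**3
energy_mt = 0.5 * mass_kg * SPEED**2 / MEGATON_J

print(f"{energy_mt:.2e} megatons")  # order 10^8, matching the quoted figure
```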
The astronomer Piet Hut, of the Institute for Advanced Study, proposes that the impactor was probably a comet, not an asteroid. He notes that there indeed are two craters that appear to have the right age, 65 myr: Chicxulub in Yucatan and Manson in Iowa. He describes such an impact sequence as resulting from the breakup of the comet as it rounded the sun, producing closely spaced fragments resembling a swarm of buckshot.
Hut also notes that there is danger even today from similar impacts. Once a year, on average, a rock strikes with an energy of 10 kilotons. Once a century the earth experiences an event such as Tunguska in Siberia (1908), in the megaton range. About every million years a kilometer-size body strikes, with energy exceeding that of all nuclear weapons together. That would produce worldwide crop failures and kill some 10⁹ people, yet it would not qualify as a mass-extinction event. Such events would be considerably rarer.
Problems remain, nevertheless, in showing just how the KTB mass extinction occurred. Certainly, a 10-kilometer bolide would have produced vast environmental stress. Blast and tsunami would have affected only part of the earth, but the impact could have produced great quantities of nitrogen oxides, yielding a rainfall resembling nitric acid. That could have killed forests and dissolved shells of plankton. Dead forests then would have burned in continent-wide wildfires, pouring massive amounts of soot into the air. This soot, along with atmospheric dust, could have blanketed the planet, cutting off sunlight, shutting down photosynthesis, and setting the stage for further death of forests. Next would come additional wildfires and still more soot. There even could have been a long-term greenhouse effect, for thick limestone formations underlie the Yucatan. Their vaporization, due to impact, would have dumped a great deal of carbon dioxide into the atmosphere.
Evidence exists in isotopic anomalies for large-scale shutdown of photosynthesis, while a sooty layer at the KTB indeed points to massive fires. But such a sweeping catastrophe raises the question not of why so many species went extinct but of why vertebrate life persisted at all. Plants have seeds that resist damage. But why did turtles, crocodiles, amphibians, and fishes—but not marsupial mammals—come through the KTB little the worse for wear? And if the KTB extinctions were less severe than those of the Permian-Triassic, why was this so? These are among the critical questions that paleontologists and paleoecologists will have to answer through further high-resolution studies of the geological record. Such studies should lead to a better understanding of the behavior of the world's ecosystems under high-stress conditions.
THEORETICAL COMPUTER SCIENCE
In computer science a significant set of themes involves complexity and its applications. One topic includes the checking of very long proofs in mathematics, wherein a single mistake can yield an erroneous result. Transparent proofs offer a solution. A transparent proof is a lengthened
counterpart of the original one, structured in such a manner that any error propagates throughout it. Then, by checking only part of this new proof, one has a very high probability of finding an error. The lack of such an error then validates the original proof. Furthermore, the longer one makes the transparent proof, the more broadly an error will propagate and the shorter the computation needed to detect it. Indeed, checking a transparent proof can take much less time than merely reading the original one. At modest cost in additional checking time, one can reduce the probability of a mistake—of failing to catch an error and hence accepting an incorrect proof as valid—to vanishingly low values such as 10⁻³⁰.
An initial formulation, according to Laszlo Babai of the University of Chicago, has the user define two quantities: ε, a measure of the increased length of the transparent proof, and δ, the probability of accepting a false theorem. If the original proof has a length of N characters, the transparent proof has a length of N^(1+ε). The length of its verification is (ln N)²/ε × ln(1/δ), where ln is the natural logarithm. More recent work shows that the verification length can be a constant, independent of N.
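These formulas give a feel for the economics of proof checking. The sketch below is our own illustration; the sample values of N, ε, and δ are arbitrary choices, not numbers from the session.

```python
import math

def transparent_lengths(n_chars, eps, delta):
    """Lengths in Babai's formulation: transparent proof N^(1+eps),
    verification (ln N)^2 / eps * ln(1/delta)."""
    transparent = n_chars ** (1 + eps)
    verification = (math.log(n_chars) ** 2 / eps) * math.log(1 / delta)
    return transparent, verification

# Arbitrary sample values: a million-character proof, modest lengthening,
# and an error probability of 10^-30.
N, EPS, DELTA = 10**6, 0.1, 1e-30
t_len, v_len = transparent_lengths(N, EPS, DELTA)
print(f"original {N:.1e}, transparent {t_len:.1e}, verification {v_len:.1e}")
# Verification is far shorter than even reading the original proof.
```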
The complexity of graphic detail is at the center of issues in computer graphics. This field features such applications as real-time interactive graphics used in flight simulation for pilot training; presentation of data in multiple variables or dimensions, including solutions for partial differential equations; and generation of computational grids.
In all these areas a central problem is the rapid division of many-sided polygons into constituent triangles. Such polygons often model shapes or scenes to be portrayed graphically. The computer then renders such objects using texture and a description of the available lighting, displaying them with assistance from a z-buffer in hardware, which is used to remove hidden lines and surfaces. Triangulated polygons also serve in data display and in representing computational boundaries, such as the wing of an aircraft.
Given a polygon with N vertices, then, a key problem is the development of rapid algorithms for triangulation. Since 1978 a number of techniques have come into common use that carry out the triangulation in time proportional to N ln N. In 1988 Robert Tarjan of Princeton University and Christopher Van Wyk of AT&T Bell Laboratories introduced an algorithm offering N ln ln N time. Then in 1990 Bernard Chazelle, also of Princeton, achieved an algorithm linear in N. Even so, Maria Klawe of the University of British Columbia notes that N ln N algorithms remain the ones in standard use. That is because they are simple,
whereas the newer ones are complex and offer insufficient advantage to users.
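The appeal of simplicity can be seen in ear clipping, a textbook O(N²) triangulation method; it is our illustration here, not one of the N ln N algorithms the session describes. The idea: repeatedly find a convex "ear" vertex whose triangle contains no other vertex, cut it off, and recurse on the smaller polygon.

```python
def cross(o, a, b):
    """2-D cross product of vectors o->a and o->b; positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_tri(p, a, b, c):
    """True if p lies inside (or on the boundary of) CCW triangle abc."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def triangulate(poly):
    """Ear-clipping triangulation of a simple CCW polygon; O(N^2) time."""
    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        n = len(idx)
        for k in range(n):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % n]
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:
                continue  # reflex (or degenerate) corner: not an ear
            if any(point_in_tri(poly[m], a, b, c)
                   for m in idx if m not in (i, j, l)):
                continue  # another vertex intrudes into the candidate ear
            tris.append((i, j, l))
            idx.pop(k)    # clip the ear and continue on the smaller polygon
            break
        else:
            raise ValueError("input must be a simple polygon in CCW order")
    tris.append(tuple(idx))
    return tris

def tri_area(a, b, c):
    return abs(cross(a, b, c)) / 2.0

poly = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]  # L-shaped hexagon, CCW
tris = triangulate(poly)
print(len(tris), sum(tri_area(*(poly[i] for i in t)) for t in tris))
# 4 triangles (N - 2), covering the polygon's total area of 3.0
```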
Complexity in computer science also poses problems in the design of hardware as well as software. One such problem is the optimal interconnection of parallel processors having large numbers of computational nodes. Networks derived from hypercubes form the basis for architectures of such systems as the BBN Butterfly, the IBM RP3 and GF11, the Intel iPSC, the Connection Machine, and the NCUBE. However, such architectures introduce worst-case communication problems for which the run time scales as the square root of N, where N is the number of processors.
Tom Leighton, of the Massachusetts Institute of Technology, notes that randomly wired interconnection networks represent a useful alternative. Such networks are not wholly random; the randomness is subject to constraints. Even so, they can outperform traditional well-structured networks in several important respects. The worst-case problems disappear; all problems offer run times that scale as ln N. In addition, randomly wired networks have exceptional fault tolerance because they offer multiple redundant paths. They are well suited for both packet-routing and circuit-switching applications.
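The contrast in scaling is easy to demonstrate with a toy comparison. The sketch below is our own illustration, not one of the cited architectures: it measures, by breadth-first search, the diameter of a 2-D mesh (which grows as the square root of N) against that of a ring augmented with a random matching of chords, a simple constrained-random network whose diameter grows roughly as ln N.

```python
import random
from collections import deque

def diameter(adj):
    """Graph diameter via breadth-first search from every node."""
    best = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        assert len(dist) == len(adj), "graph must be connected"
        best = max(best, max(dist.values()))
    return best

def mesh(side):
    """side x side 2-D mesh: diameter 2*(side - 1), i.e. ~sqrt(N)."""
    adj = {}
    for r in range(side):
        for c in range(side):
            u = r * side + c
            adj[u] = []
            if r > 0: adj[u].append(u - side)
            if r < side - 1: adj[u].append(u + side)
            if c > 0: adj[u].append(u - 1)
            if c < side - 1: adj[u].append(u + 1)
    return adj

def ring_with_chords(n, seed=0):
    """Ring (which guarantees connectivity) plus a random matching of chords."""
    adj = {u: [(u - 1) % n, (u + 1) % n] for u in range(n)}
    nodes = list(range(n))
    random.Random(seed).shuffle(nodes)
    for a, b in zip(nodes[::2], nodes[1::2]):
        adj[a].append(b)
        adj[b].append(a)
    return adj

print(diameter(mesh(16)))             # 30 = 2*(16 - 1) for N = 256 nodes
print(diameter(ring_with_chords(256)))  # much smaller, roughly log N
```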