counterpart of the original one, structured in such a manner that any error propagates throughout it. Then, by checking only part of this new proof, one has a very high probability of finding an error; the absence of such an error validates the original proof. Furthermore, the longer one makes the transparent proof, the more broadly an error propagates and the shorter the computation needed to detect it. Indeed, checking a transparent proof can take much less time than merely reading the original. At modest cost in additional checking time, one can reduce the probability of a mistake (failing to catch an error and hence accepting an incorrect proof as valid) to vanishingly small values such as 10^-30.
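The arithmetic behind that claim can be sketched directly. In this illustrative model (the fraction f and the number of checks k are assumed parameters, not figures from the article), an error that propagates to a fraction f of the transparent proof escapes k independent random spot checks with probability (1 - f)^k:

```python
# Toy model of spot-checking a transparent proof: if an error corrupts a
# fraction f of the proof's characters, k random checks all miss it with
# probability (1 - f)^k.
def miss_probability(f, k):
    """Chance that k random spot checks land only on uncorrupted positions."""
    return (1.0 - f) ** k

# With errors spread over half the proof, about 100 checks already push
# the failure probability below 10^-30.
print(miss_probability(0.5, 100))
```

The point of the construction is precisely that propagation keeps f large, so k can stay small.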

An initial formulation, according to Laszlo Babai of the University of Chicago, has the user define two quantities: ε, a measure of the increased length of the transparent proof, and δ, the probability of accepting a false theorem. If the original proof has a length of N characters, the transparent proof has a length of N^(1+ε). The length of its verification is (ln N)^(2/ε) × ln(1/δ), where ln is the natural logarithm. More recent work shows that the length for verification can be a constant, independent of N.
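A quick numerical example makes the trade-off concrete. The values of N, ε, and δ below are assumed for illustration; they are not taken from the article:

```python
import math

# Assumed example values: a proof of N = 10^9 characters, stretch
# parameter epsilon = 0.5, and target error probability delta = 10^-30.
N, eps, delta = 10**9, 0.5, 1e-30

transparent_len = N ** (1 + eps)                     # N^(1+eps) characters
verify_len = math.log(N) ** (2 / eps) * math.log(1 / delta)

# The verification (~10^7) is far shorter than the original proof (10^9),
# let alone the transparent proof (~3 x 10^13).
print(transparent_len, verify_len)
```

With these numbers the transparent proof is enormously longer than the original, yet the verification is roughly a hundred times shorter than simply reading the original proof.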

The complexity of graphic detail is at the center of issues in computer graphics. This field features such applications as real-time interactive graphics used in flight simulation for pilot training; presentation of data in multiple variables or dimensions, including solutions of partial differential equations; and generation of computational grids.

In all these areas a central problem is the rapid division of many-sided polygons into constituent triangles. Such polygons often model shapes or scenes to be portrayed graphically. The computer then renders such objects using texture and a description of the available lighting, displaying them with the help of a hardware z-buffer, which removes hidden lines and surfaces. Triangulated polygons also serve in data display and in representing computational boundaries, such as the wing of an aircraft.
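The z-buffer idea can be shown in a few lines. This is a toy sketch, not production rendering code: the frame size and fragment calls are invented for illustration.

```python
# Minimal z-buffer sketch: for each pixel, store the depth of the nearest
# surface drawn so far; a new fragment is written only if it lies closer
# to the viewer than what the buffer already holds.
WIDTH, HEIGHT = 4, 4
FAR = float("inf")

z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]
frame = [["."] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, depth, color):
    """Write the fragment only if it is nearer than the stored depth."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        frame[y][x] = color

draw_fragment(1, 1, 5.0, "A")   # far surface
draw_fragment(1, 1, 2.0, "B")   # nearer surface overwrites it
draw_fragment(1, 1, 9.0, "C")   # hidden surface: rejected
print(frame[1][1])              # B
```

Hardware performs exactly this depth comparison per pixel, which is why hidden-surface removal costs no extra sorting of the triangles themselves.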

Given a polygon with N vertices, a key problem is the development of rapid algorithms for triangulation. Since 1978 a number of techniques have come into common use that carry out the triangulation in time proportional to N ln N. In 1988 Robert Tarjan of Princeton University and Christopher Van Wyk of AT&T Bell Laboratories introduced an algorithm offering N ln ln N time. Then in 1990 Bernard Chazelle, also of Princeton, achieved an algorithm that runs in time linear in N. Even so, Maria Klawe of the University of British Columbia notes that N ln N algorithms remain the ones in standard use. That is because they are simple.
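To make the task itself concrete, here is a simple ear-clipping triangulation. It runs in O(N^2) time, so it is only an illustrative baseline, far simpler (and slower) than the N ln N, N ln ln N, and linear-time algorithms cited above; it assumes a simple polygon with vertices listed counterclockwise.

```python
# Ear clipping: repeatedly find a convex vertex whose triangle contains no
# other polygon vertex (an "ear"), record that triangle, and remove the
# vertex, until only one triangle remains.
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    """True if p lies inside or on triangle abc (abc counterclockwise)."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def triangulate(poly):
    verts = list(range(len(poly)))
    triangles = []
    while len(verts) > 3:
        for i in range(len(verts)):
            prev, cur, nxt = verts[i - 1], verts[i], verts[(i + 1) % len(verts)]
            a, b, c = poly[prev], poly[cur], poly[nxt]
            if cross(a, b, c) <= 0:          # reflex vertex: not an ear
                continue
            if any(point_in_triangle(poly[v], a, b, c)
                   for v in verts if v not in (prev, cur, nxt)):
                continue                     # another vertex inside: not an ear
            triangles.append((prev, cur, nxt))
            del verts[i]
            break
    triangles.append(tuple(verts))
    return triangles

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
tris = triangulate(square)
print(tris)   # two triangles, given as vertex-index triples
```

The fast algorithms in the literature reach their better bounds through more elaborate data structures (for example, trapezoidal decompositions), not by refining this brute-force search for ears.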

Copyright © National Academy of Sciences. All rights reserved.