Probability and Algorithms

9. Probabilistic Analysis in Linear Programming

Ron Shamir
Tel Aviv University^1

ABSTRACT

The main results on probabilistic analysis of the simplex method and on randomized algorithms for linear programming are reviewed briefly.
9.1 The Linear Programming Problem

The linear programming (LP) problem calls for minimizing a linear objective function subject to linear constraints. Every such problem can be presented in the following form:

(P)   minimize c^T x   subject to   Ax ≥ b,  x ≥ 0.

Here the vectors c^T = (c_1, . . . , c_d), b^T = (b_1, . . . , b_m), and the m × d matrix A constitute the input, and x = (x_1, . . . , x_d) is the vector of variables to be optimized. Usually, one assumes that all input numbers are integers or rationals. The goal is to find an optimal x if such exists, and if not, to determine that the problem is unbounded (i.e., the minimum is −∞) or infeasible (i.e., there is no x satisfying all the constraints).
^1 This chapter was written while the author was a visitor at DIMACS and RUTCOR at Rutgers University. Supported by AFOSR grants 89-0512 and 90-0008 and by NSF grant NSF-STC88-09648.
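As a concrete, if naive, illustration of these definitions (this sketch is not from the chapter; all names and data are invented), the following code solves a tiny two-variable problem of the form min c^T x subject to Ax ≥ b by enumerating the intersection points of pairs of constraint lines and keeping the best feasible one. Nonnegativity constraints, when wanted, can simply be added as rows of A.

```python
from itertools import combinations

def solve_2x2(a1, a2, b1, b2):
    # Intersection of the two lines a1 . x = b1 and a2 . x = b2 (Cramer's rule).
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None  # parallel hyperplanes: no vertex here
    return ((b1 * a2[1] - b2 * a1[1]) / det,
            (a1[0] * b2 - a2[0] * b1) / det)

def brute_force_lp(c, A, b):
    # Enumerate the intersection point of every pair of constraints, keep the
    # feasible ones (Ax >= b), and return the best: an optimal vertex, if any.
    best, best_val = None, float("inf")
    for i, j in combinations(range(len(A)), 2):
        x = solve_2x2(A[i], A[j], b[i], b[j])
        if x is None:
            continue
        if all(A[k][0] * x[0] + A[k][1] * x[1] >= b[k] - 1e-9
               for k in range(len(A))):
            val = c[0] * x[0] + c[1] * x[1]
            if val < best_val:
                best, best_val = x, val
    return best, best_val

# min x1 + x2  subject to  x1 >= 1,  x2 >= 1,  x1 + x2 >= 3
c = (1.0, 1.0)
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [1.0, 1.0, 3.0]
x, val = brute_force_lp(c, A, b)
```

Enumeration of all pairs is of course exponential in the dimension in general; the point here is only to make the vertex terminology of the next paragraphs tangible.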

LP is one of the fundamental problems in operations research and computer science. It was formulated in the 1940s by George Dantzig, who also proposed a method for solving it. Since then, the problem has been studied in thousands of scientific papers and hundreds of books, and interest in it has remained very high. One reason for this is the wide applicability of LP: it is used to model practical, real-life problems in economics, industry, communications, the military, and numerous other areas. It is perhaps the most widely used optimization model in the world and, except for data structures, perhaps the largest single use of computer resources (see Lovász, 1980). Another reason is that LP problems with special structure arise in various theoretical and practical areas. Yet another reason is that some fundamental properties of LP and its algorithms are still not fully understood. One of these is the efficiency of the simplex method, which will be discussed here.
The method that was proposed by Dantzig for solving LP is the simplex method (e.g., Dantzig, 1963). To describe it geometrically, we need some terminology: a point in R^d is feasible if it satisfies all the constraints. The set of feasible points,

X = {x ∈ R^d | Ax ≥ b, x ≥ 0},

is a polyhedron, i.e., the intersection of a finite set of half-spaces. Each half-space is of the form {x | A_{i*}x ≥ b_i}, and its corresponding hyperplane is {x | A_{i*}x = b_i}. (A_{i*} denotes the ith row of A.) For simplicity, we assume that the problem is nondegenerate; i.e., each set of d hyperplanes meets at a distinct single point. In that case, if X is nonempty, the dimension of X is d, and each feasible point at which exactly d of the inequalities are satisfied as equalities is a vertex of X. Vertices are adjacent if they are connected by an edge of X or, equivalently, if their defining sets of equalities have d − 1 identical elements. It is not hard to see that if there is an optimal solution, its value is attained at some vertex of X. Assuming nondegeneracy, the set of feasible points lying on one defining hyperplane is (if nonempty) of dimension d − 1 and is called a facet of the polyhedron.
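The adjacency criterion just stated is purely combinatorial, so it can be checked without any geometry; the fragment below (illustrative only, names invented) tests it on index sets of tight constraints.

```python
def are_adjacent(tight_u, tight_v, d):
    # Two nondegenerate vertices are adjacent exactly when their defining
    # sets of tight constraints share d - 1 indices (a common edge).
    return len(set(tight_u) & set(tight_v)) == d - 1

# In d = 3: vertices defined by constraints {0, 1, 2} and {0, 1, 5} share
# two tight constraints, so they lie on a common edge of the polyhedron.
adj = are_adjacent({0, 1, 2}, {0, 1, 5}, 3)
not_adj = are_adjacent({0, 1, 2}, {0, 4, 5}, 3)
```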
The simplex method starts from a feasible vertex, and moves in each iteration to an adjacent vertex, until eventually either an optimal vertex is reached or the algorithm recognizes at some vertex that the problem is unbounded. The move from one vertex to an adjacent one is called a pivot, and it can be realized mathematically by a simple manipulation of the constraints matrix. Usually, the next vertex is chosen so that the objective
function value will improve after the pivot step. Even so, there is still a lot of freedom in choosing the next vertex. The specific rule of this choice is called a pivoting rule. Numerous pivoting rules have been devised for the simplex method, each giving rise to a different variant of the method. To obtain the initial feasible vertex, one can set up another LP problem (called the phase I problem) to which there is always a known feasible vertex, and apply the simplex method to that problem. This problem is set up in such a way that it is bounded, and when its optimal solution is reached, one either has a starting feasible vertex for the original problem or knows that that problem is infeasible. The scope of this chapter does not allow us to get into more details. We refer the interested reader to the (very incomplete) reference list. Also, several parenthetical comments on duality can be ignored by the reader who is not familiar with it. They are needed for accuracy but not for understanding the essence of the results.
Within a few years of its introduction, LP had become a central—perhaps the central—paradigm of operations research. The simplex method had proved itself to be extremely useful and highly efficient in practice. However, by the early 1970s it became clear that the simplex is not—as far as we know—a theoretically efficient algorithm: the complexity of the simplex method was shown to be finite, but not polynomial in the input length. In fact, on specially structured "bad" examples, many simplex variants were shown to require a number of pivots exponential in the problem dimensions. Hence, as far as we know, the simplex is not a good algorithm from the theoretical point of view. On the other hand, Monte Carlo studies and vast practical experience with real-life problems show that the typical number of pivots is very low: it appears to be a linear or slightly superlinear function of the dimensions. How can we explain the huge gap between the provably bad worst-case bounds and the efficiency of the method in practice? A natural approach is to apply probabilistic analysis: assume that the input data are distributed according to some prespecified distribution and show that on the average the number of pivots is small.
This chapter gives a brief survey of various results from probabilistic analysis of the simplex method. The scope of the chapter forces us to omit many details, to say nothing of proofs. We shall be able only to give the flavor of the results and the different approaches, in the hope of kindling the reader's curiosity. The interested reader may find more detailed surveys in Borgwardt (1987), Todd (1991), and Megiddo (1987) and in Shamir (1987), from which this chapter originated. Much more on the context of the problem can be found in the references therein.

9.2 Probabilistic Analysis
In order to perform probabilistic analysis, one has to specify the simplex variant used, the initialization (phase I) procedure, the form of LP used, and the probabilistic assumption on the distribution of problem inputs. Given those parameters, we denote by ρ(n,d) (or ρ(m, d)) the average number of pivots for problems with n inequalities in d variables (or m inequalities in d nonnegative variables).
All the probabilistic models discussed here generate problems that are nondegenerate with probability one. To avoid being too technical, we do not describe the conditions that guarantee nondegeneracy in each model, but simply assume throughout that all the models deal only with (primal and dual) nondegenerate problems.
9.2.1 Parametric Simplex Variants
The variant of the simplex that has been proved most amenable to probabilistic analysis is the parametric objective simplex algorithm, due to Gass and Saaty (1955). This variant—explicitly or in disguise—plays a central role in most of the studies discussed below. Let us sketch the parametric simplex method and introduce some terminology that is needed later. Suppose we have an LP with two objective functions, the original objective c and a co-objective c'. Given an optimal solution to problem (P) with objective c', the goal is to solve problem (P) with the original objective c. The problem can be presented as one with a parametric objective:

P(λ):   minimize ((1 − λ)c' + λc)^T x   subject to   Ax ≥ b,  x ≥ 0.
Here λ is a scalar parameter, a solution for P(0) is known, and we want to solve problem P(1). In the parametric simplex algorithm, the value of λ is gradually increased. A new problem is obtained for each λ, but for a certain range of λ values the optimal vertex remains the same. When the current optimal vertex ceases to be optimal, a pivot step is performed and the algorithm moves (along an edge) to a new vertex that is optimal for larger λ. The process terminates when the value λ = 1 is reached or when the solution to the problem becomes unbounded at some critical value λ < 1, in which case problem P(1) is unbounded. Under our blanket assumption of nondegeneracy, the set of optimal points visited by the algorithm is a path of vertices and edges. This set,

{x*(λ) | 0 ≤ λ ≤ 1}, where x*(λ) denotes the optimal solution of P(λ),
is called the efficient path. Hence the number of vertices along that path equals the number of pivot steps performed by the parametric simplex algorithm. Another set of points can be obtained by the same algorithm when the value of λ decreases from 0 to −∞. The solution for λ = −∞ corresponds to the solution for maximizing (rather than minimizing) c^T x. The set of solutions for λ ≤ 0 is called the co-optimal path.
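A brute-force illustration of the efficient path (not from the chapter; it assumes the blended objective (1 − λ)c' + λc and represents the feasible set directly by its vertex list, so no pivoting machinery is needed): sweep λ from 0 to 1, record each change of optimal vertex, and the count of distinct vertices is the number of parametric pivot steps.

```python
import math

# Vertices of a small feasible polygon (a hexagon), listed explicitly so the
# example does not need an LP solver; all data here are invented.
verts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
         for k in range(6)]

c_orig = (1.0, 0.0)  # original objective c
c_co = (0.3, 1.0)    # co-objective c'

def opt_vertex(lam):
    # Optimal vertex of P(lam): minimize the blended objective
    # (1 - lam) c' + lam c over the vertex list.
    obj = ((1 - lam) * c_co[0] + lam * c_orig[0],
           (1 - lam) * c_co[1] + lam * c_orig[1])
    return min(verts, key=lambda v: obj[0] * v[0] + obj[1] * v[1])

# Sweep lam from 0 to 1; the distinct optimal vertices, in order, form the
# efficient path, and their number counts the parametric pivots.
path = []
for i in range(1001):
    v = opt_vertex(i / 1000)
    if not path or path[-1] != v:
        path.append(v)
```

On this hexagon the sweep visits only two vertices, i.e., a single pivot step carries the algorithm from the c'-optimal vertex to the c-optimal one.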
A similar idea can be applied to the following problem, in which b and c are parameterized simultaneously:

Π(λ):   minimize ((1 − λ)c' + λc)^T x   subject to   Ax ≥ (1 − λ)b' + λb,  x ≥ 0.

The parametric self-dual (PSD) simplex algorithm, which was introduced by Dantzig (1963), starts from an optimal solution to Π(0). It increases λ gradually and performs a sequence of (primal or dual) pivots that yield solutions for Π(λ) for increasing values of λ. Note that with an initial choice of c'^T = (1, 1, . . . , 1) and −b'^T = (1, 1, . . . , 1), x = 0 is an optimal solution for Π(0), so this algorithm avoids the need for two simplex phases.
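The initialization argument can be checked mechanically. The sketch below (illustrative only, with arbitrary made-up data) verifies that x = 0 is feasible for Π(0) under the stated choice of c' and b'; optimality then follows because the objective 1^T x is nonnegative on the whole nonnegative orthant.

```python
# Pi(0) with c' = (1,...,1) and b' = -(1,...,1):
#     minimize 1^T x   subject to   Ax >= -1 (componentwise),  x >= 0.
A = [[2.0, -1.0], [-3.0, 4.0], [0.5, 0.5]]  # arbitrary illustrative data
x0 = [0.0, 0.0]

# x = 0 is feasible: every component of A x equals 0 >= -1, and x >= 0 holds.
feasible = (all(sum(row[j] * x0[j] for j in range(2)) >= -1.0 for row in A)
            and all(v >= 0.0 for v in x0))

# x = 0 is also optimal: the objective 1^T x is nonnegative for every x >= 0,
# and it attains the value 0 at x = 0.
```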
The convenience in analyzing the parametric simplex is due to its property that, given a vertex of the feasible set, one can determine directly if the simplex path passes through that vertex. "Directly" means by an algebraic closed-form condition depending on the vertex and the input data, so the pivots need not be performed.
9.2.2 Borgwardt's Results
The first major probabilistic analysis results were due to Karl Heinz Borgwardt (1977a,b; 1978). He analyzed problems of the form

maximize c^T x   subject to   Ax ≤ b,

where A is of dimension n × d and b = (1, . . . , 1)^T. Note that such problems are always feasible, since x = 0 is feasible. The basic probabilistic model he used requires that each of the n vectors constituting the rows of A and the vector c assumes independently the same spherically symmetric distribution on R^d − {0}. In other words, the distribution of each such vector is invariant under any rotation around the origin. One can visualize such a distribution as a "cloud" around the origin, with the density of the cloud at each point proportional to the probability of choosing the vector from the origin to that point. In a spherically symmetric distribution, the
density at each point on any sphere centered at the origin is the same, but spheres of different radii may have different densities. Hence the radial part of the distribution is the only part that may vary among such distributions.
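Spherically symmetric vectors are easy to generate: normalize a standard Gaussian vector to obtain a uniform direction on the sphere, then scale by an independent radial draw. The sketch below (illustrative; function names invented) samples rows with all mass on the unit sphere, the special case that reappears below.

```python
import math
import random

def spherical_sample(d, radial):
    # Uniform direction: normalize a standard Gaussian vector; then scale by
    # an independent draw from the radial part of the distribution.
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g))
    r = radial()
    return [r * v / norm for v in g]

random.seed(0)
# The special case with all mass on the unit sphere: radius identically 1.
rows = [spherical_sample(3, lambda: 1.0) for _ in range(5)]
norms = [math.sqrt(sum(v * v for v in row)) for row in rows]
```

Varying only the `radial` callable changes the radial part while leaving the distribution rotation invariant, exactly the freedom the model allows.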
Borgwardt used the parametric simplex. Borgwardt (1978) showed that when c' is also chosen randomly and independently according to the same distribution, the expected number of pivots along the efficient path, for d fixed and n → ∞, is asymptotically bounded by a polynomial in d times n^{1/(d−1)}.
A crucial step in Borgwardt's analysis was to restate the problem in terms of the polar set X* of the feasible set X, which is also a polyhedron (cf. Grünbaum, 1967). The number of vertices along the efficient path equals the number of faces of X* intersected by a two-dimensional cone spanned by the two objectives c and c'. Borgwardt then estimates the expectation of that number by an integral expression, which eventually leads to the results.
Borgwardt (1977a,b) also obtained interesting asymptotic upper and lower bounds for several specific spherically symmetric distributions. For example, when all the mass of the distribution is on the unit sphere, he obtained a lower bound of d^{1.5}n^{1/(d−1)} on the expected number of pivots. He also showed that, whenever d is fixed and n → ∞, similar bounds hold for every distribution with bounded support, and that there exist distributions for which ρ(n, d) grows more slowly than any polynomial in n.
Later, Borgwardt (1981, 1982) obtained a nonasymptotic bound on the expected length of such an efficient path.
In order to obtain a result for a complete (two-phased) simplex method, Borgwardt defined a new algorithm. It generates a sequence of subproblems with increasing numbers of variables and uses the parametric algorithm to solve each subproblem. Borgwardt (1982) proves a polynomial expected bound for that algorithm. This was the first proof that the average number of pivots of a complete simplex variant is polynomial. The bound for Borgwardt's complete algorithm and model was subsequently improved (Borgwardt, 1987).
Recently, K.H. Borgwardt (private communication, 1992) has improved the asymptotic analysis for the complete algorithm (i.e., phase I plus phase II) to const · d^{2.5}n^{1/(d−1)}.

Because of the importance of Borgwardt's work, this section concludes by noting three problems in his model, which may be improved upon in the future: First, the model generates only feasible problems, for which a feasible point is known in advance. Second, the complete algorithm that was "tailored" to facilitate the analysis can be used only on problems whose form fits the model. (For some progress on these two problems, see Borgwardt (1990).) Third, its phase I solves a sequence of n − 1 problems to get a feasible point for phase II, although such a point is known a priori. For experimental studies on the model described above, see Borgwardt et al. (1990).
9.2.3 Asymptotic Results: Smale and Others
Smale (1983a,b) investigated linear programs of the form (P). The probabilistic model in Smale (1983a) requires that the vector (c^T, −b^T) assume a spherically symmetric distribution in R^{d+m} − {0}, and that A assume a spherically symmetric distribution in R^{m×d} − {0}.
The variant analyzed was the PSD simplex. Smale used a theory developed by Eaves and Scarf (1976) that implies a presentation of the PSD algorithm for problem Π(λ) as a set of piecewise-linear equations. Using this presentation, Smale (1983a) showed how to express the probability that a vertex is in the PSD path in terms of the volume of a corresponding cone. A series of estimates for the volume of the special cone structure generated for LP eventually yielded his bound on ρ(m, d), referred to below as (S).
In particular, this result implies that when d is fixed, ρ(m, d) grows with m more slowly than any fixed positive power of m. The dependence on d is, however, exponential.
Smale's model seems to have a certain advantage over Borgwardt's, because it uses a more practical variant and there is no restriction on the form of the linear programs it can solve. Also, the model generates both feasible and infeasible problems. However, the asymptotic results assume that d is fixed and m → ∞; in that situation most problems generated under this model become infeasible, so it is not clear how to interpret such results. We shall return to this point below.
Following Smale, Blair (1986) showed that essentially the same asymptotic result can be obtained by a simpler argument, under somewhat weaker probabilistic assumptions. Smale's estimate (S) is meaningful only when d is fixed and m → ∞. Blair approached that case directly. One way to
restate his observation is that when m → ∞, under Smale's probabilistic model, most of the constraints are "dominated" by other constraints and cannot participate in vertices of the PSD path. (Intuitively, a constraint is dominated if another constraint separates it from the origin in the positive orthant. We omit the formal definition of domination here.) By obtaining an upper bound on the expected number of undominated constraints, he proved that, for fixed d and m → ∞, ρ(m, d) is again bounded by a power of log m.
The exponent of log m is higher than in (S), but the result is essentially the same; namely, for fixed d, ρ(m, d) grows with m more slowly than any polynomial. Blair also showed that the same result holds for two other simplex variants.
Adler et al. (1986) investigated several probabilistic models that generalize Smale's. Their most general model requires, for LPs of the form (P), in addition to some nondegeneracy assumptions, that the data distribution be invariant with respect to reversing the signs of any subset of the rows of the matrix (A, b). They define a family of algorithms that proceed according to a "constraint-by-constraint" (CBC) principle. This principle requires that vertices that satisfy the kth constraint as an equality may be considered only if the subproblem defined by the k − 1 preceding constraints has been shown to be feasible. (A dual form of the CBC method is closely related to the phase I given in Borgwardt (1982).) This facilitates the detection of infeasibility at a relatively early stage, in order to exploit the high probability that a problem is infeasible under this model when m → ∞.
Using the CBC principle, the authors show that, even if full enumeration of both feasible and infeasible intersection points of d hyperplanes is done in every subproblem, the expected number of pivots is bounded by a function of d alone, independent of m. In particular, when d is fixed and m → ∞, the expected number of steps is bounded by a constant. By using more efficient CBC algorithms and stronger probabilistic assumptions, the upper bound is reduced to a smaller function of d, but it still remains exponential. The authors also show that these results hold for a broad family of simplex variants.
Although the bound obtained above is better than the one in Smale (1983a), it does not supersede Smale's result, because the variant that Smale analyzed is not a CBC algorithm. Using the approach developed in Smale (1983a),
Megiddo (1986a) improved the analysis for Smale's model. Furthermore, Megiddo showed that, for any fixed d, ρ(m, d) decreases with m to a limit that depends only on d. However, that limit is superexponential in d.
The results of Blair (1986) and Adler et al. (1986) emphasize a considerable drawback of the asymptotic analyses of sign-invariant (and spherically symmetric) models: Blair showed that Smale's result can be explained in terms of the small expected number of vertices of the feasible set, without really using the properties of the specific parametric simplex variant. Adler et al. showed that upper bounds for ρ(m, d) that depend on d only can be obtained by enumeration procedures, without using any property of the simplex method. Both of these results indicate that those asymptotic bounds reflect the properties of the probabilistic model rather than those of the simplex method. Later studies, as we shall soon see, overcame this drawback by obtaining results that are meaningful for all d and n.
Later, Smale (1983b) extended his results to a more general probabilistic model. The main addition in that model is the replacement of spherical symmetry with invariance of the probability distribution under coordinate permutations of Rd. The results of Adler et al. (1986) and Megiddo (1986a) do not apply to the more general model. Blair's result does apply to Smale's second model, and it indicates a similar drawback in interpreting asymptotic results under it.
9.2.4 Adler, Haimovich: Sign-Invariant Model for Phase II
Adler (1983) and Haimovich (1983) studied independently the expected number of pivots along paths generated by some parametric simplex variants. We present their results with respect to the form P(λ). The probabilistic model they used assumes sign invariance: the input distribution must be invariant under changing the signs of any subset of rows and columns of the data matrix

[ A    b ]
[ c^T  0 ]

(Changing the sign of row i simply replaces the inequality A_{i*}x ≥ b_i by −A_{i*}x ≥ −b_i. Changing the sign of column j is equivalent to changing the constraint x_j ≥ 0 to x_j ≤ 0. In other words, the hyperplane does not change; rather, the other half-space that is supported by it is chosen.) In addition, a
certain nondegeneracy condition is required. This model generalizes Smale's first model, since every spherically symmetric distribution satisfies the sign-invariance condition. Certain discrete distributions and certain problems with sparse matrices also fall under that model.
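The row and column sign flips can be written out directly. The sketch below (illustrative only; names and data invented) applies them to small problem data and checks that flipping the same rows and columns twice restores the original problem, which is the involution property underlying the sign-invariance assumption.

```python
def flip_signs(A, b, c, rows, cols):
    # Flip the chosen constraint rows of (A, b) and the chosen variable
    # columns of A together with the matching entries of c.
    A2 = [[(-v if i in rows else v) for v in row] for i, row in enumerate(A)]
    A2 = [[(-v if j in cols else v) for j, v in enumerate(row)] for row in A2]
    b2 = [(-v if i in rows else v) for i, v in enumerate(b)]
    c2 = [(-v if j in cols else v) for j, v in enumerate(c)]
    return A2, b2, c2

A = [[1.0, 2.0], [3.0, -4.0]]
b = [5.0, -6.0]
c = [7.0, 8.0]
flipped = flip_signs(A, b, c, rows={0}, cols={1})
restored = flip_signs(*flipped, rows={0}, cols={1})  # flipping twice undoes it
```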
The main advantage of this model is that it facilitates analysis in terms of simple counting arguments. Under this model, Adler and Haimovich showed that the average number of pivots along a nonempty co-optimal path in a feasible LP is no more than a linear function of min(m, d).
Similar results were obtained using other parameterizations. In particular, Adler showed the same bound for the PSD simplex with the sign invariance assumptions extended to the additional right-hand side vector b'.
The Adler-Haimovich results were very significant but still less than a complete explanation of the good real-life behavior of the simplex method. These results show that the co-optimal path from a point that minimizes a random objective to a point maximizing that objective is on average short. One problem is that this analysis assumes that a phase II algorithm starts from a random point, and this assumption has not been proved so far for any known variant. Furthermore, the co-objective must be chosen independently of the particular problem. In known phase I algorithms the starting point is not random but is either fixed or dependent on the particular data. Such differences may change the results dramatically. There are striking examples from linear complementarity theory (Megiddo, 1986b; Saigal, 1983) where a path starting from a random point has expected polynomial length and a path starting at a fixed point has expected exponential length.
9.2.5 The Quadratic Results
Toward the end of 1983, three independent investigations obtained an upper bound on ρ(m, d) that is quadratic in min(m, d). Two of the studies, Todd (1986) and Adler and Megiddo (1985), analyzed the PSD simplex. The third, Adler et al. (1987), analyzed a parametric version of the constraint-by-constraint method. The probabilistic model was the same in all three investigations. It was essentially the sign-invariance model described in the previous section, with a stronger nondegeneracy assumption. It is still more general than the model in Smale (1983a).
The PSD simplex was analyzed under a specific choice of so-called lexicographic initialization vector. Todd analyzed that algorithm using tools
from matroid theory, whereas Adler and Megiddo developed further Smale's approach. Adler et al. analyzed a parametric CBC algorithm with a lexicographic co-objective, building on the previous results and ideas of Adler and Haimovich.
After the completion of these three investigations, Megiddo (1985) observed that, although the parametric CBC algorithm and the PSD algorithm are in general quite different, their lexicographic versions execute exactly the same sequence of pivots. Thus, all three investigations were concerned with the same simplex variant, and obtained the result ρ(m, d) = O(min(m², d²)).
Later, Adler and Megiddo (1985) obtained a quadratic lower bound on the average number of pivots. The probabilistic assumptions required for that result are stronger. They require that all coordinates of the data (A, b, c) are independent, identically distributed random variables with a common distribution symmetric about the origin. Their result was obtained by establishing bounds on the correlation between the probabilities for bases to be both primal and dual feasible simultaneously. That result, with the above, establishes that, at least for the stronger model, ρ(m, d) is in fact bounded between two quadratic functions of min(m, d). This also proves that there cannot exist a subquadratic upper bound for the weaker model.
Unlike the Adler-Haimovich results, the quadratic bound was not obtained under conditioning on feasibility. The sign-invariance model manifests the same behavior as Smale's spherically symmetric model; namely, as the ratio n/d grows, the proportion of feasible problems diminishes, and the meaning of averaging the number of pivots over all feasible and infeasible problems is put in question. However, using the fact that the quadratic result holds for all n and d, it was shown that for n = 2d the expected number of pivots in a feasible problem under that model is at most const · d^{2.5}.
9.3 Randomized Algorithms
Suppose we allow a simplex algorithm to choose randomly among several possibilities at certain points along its computation. For example, the algorithm can randomly choose the next vertex to move to among the adjacent vertices that provide a better value to the objective. We can then compute the expected number of pivots required to solve a problem, when the average is taken over the possible random choices. The expected number of pivots of
such a randomized algorithm is the largest average number of pivots on any problem of the same size. (The maximum is over all problems of this size, and the average is over the internal randomizations performed by the algorithm.) Hence in this approach the algorithm is randomized, and the input data are not. In contrast, in the probabilistic analyses described in previous sections the algorithm is completely deterministic, and the randomization is over the problem set.
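To make the contrast concrete, here is a minimal sketch of such a randomized pivoting rule—purely illustrative, not an algorithm from the literature—run on a feasible region represented directly by the cyclic vertex list of a convex polygon: from the current vertex, move to a uniformly random improving neighbor until none exists.

```python
import math
import random

# Feasible region given directly as a convex polygon (vertices in cyclic
# order); consecutive vertices are the adjacent ones. Data are invented.
verts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
         for k in range(8)]
c = (1.0, 0.3)  # linear objective to minimize

def objective(v):
    return c[0] * v[0] + c[1] * v[1]

def random_improving_pivots(start, seed=0):
    # At each step, list the adjacent vertices with a better objective value
    # and move to one of them uniformly at random; stop at a local optimum,
    # which for a linear objective on a polygon is also global.
    rng = random.Random(seed)
    k, pivots = start, 0
    while True:
        better = [j for j in ((k - 1) % 8, (k + 1) % 8)
                  if objective(verts[j]) < objective(verts[k])]
        if not better:
            return k, pivots
        k = rng.choice(better)
        pivots += 1

opt, n_pivots = random_improving_pivots(start=1)
```

The expected pivot count of such a rule, maximized over inputs of a given size, is exactly the quantity discussed in the next paragraphs.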
Randomized pivoting rules have been suggested in the past for LP; see Dantzig (1963). Recently, there have been several important developments in the theory of randomized algorithms for LP. Dyer and Frieze (1989) and Clarkson (1991) described randomized algorithms for LP in the case in which the dimension d is fixed, improving on the deterministic algorithm of Megiddo (1984) for this special case. Clarkson's algorithm has expected complexity O(d²n + d^{d/2+O(1)} log n). Clarkson's method was also exploited to obtain a fast parallel algorithm for LP in fixed dimension (Alon and Megiddo, 1990) and generalized to convex programming (Adler and Shamir, 1989). Seidel (1990, 1991) described another very simple "dual simplex" randomized algorithm for LP in fixed dimension, of expected complexity O(d!n). This has been improved by Sharir and Welzl (1992) and extended to cover a fairly general class of convex programming problems, yielding a dual simplex algorithm with an O(n · 2^d) expected number of pivots. A previous attempt at improving Seidel's algorithm is due to Welzl (1991). It seems to run very fast in practice, but no analysis of its complexity is known yet.
In a remarkable recent development, Kalai (1992b) has given the first randomized simplex algorithm for general LP problems that is subexponential. His algorithm takes on the average at most e^{C√(d log n)} pivots, where C is an absolute constant. Roughly speaking, his algorithm starts from a vertex v, then reaches vertices on r different facets, randomly chooses one facet among them, and works recursively on that facet to find the optimal vertex in it. A judicious choice of r (and a careful analysis) leads to the above bound.
Although this bound is not polynomial, it is a substantial improvement over the exponential upper bounds known previously. Kalai's work was preceded by important results on the diameter of polyhedra. (See Kalai (1991, 1992a), Kalai and Kleitman (1992), and the excellent survey by Klee and Kleinschmidt (1987) for the context of this problem. In that context, Kalai has also shown that if one does not limit the computational
effort per pivot step, n^{log d} pivot steps always suffice.) Shortly after Kalai's result, Matoušek et al. (1992) showed that a careful analysis of the previous randomized simplex algorithm of Sharir and Welzl (1992) leads to the same expected complexity bound.
9.4 The Road Ahead
In retrospect, one can see two parallel lines of research on probabilistic simplex analysis. The first line of research, on the feasible rotation-symmetric model, was carried out single-handedly by Borgwardt. The second started with Smale's model and was later generalized to the sign-invariant model, leading finally to the quadratic results. In both models the analysis advanced in three stages:
1. asymptotic results for fixed dimension,
2. nonasymptotic results for phase II, starting from a vertex optimal with respect to a random objective, and
3. results for a complete simplex algorithm.
In both cases the average complexity and the "naturalness" of the variant had to be sacrificed to some extent between stages 2 and 3.
Although the gap between the good practical behavior of the simplex and its proven bad worst-case bounds has been explained in part by the probabilistic analyses, many related questions are still open. Both models have some undesirable properties that were mentioned above. Analyzing, or even merely defining, a widely acceptable model that generates "real-life" problems is a major challenge. Any analysis of the concentration of the number of pivots around the mean (e.g., the variance) will be new and interesting. More natural simplex variants, especially those used in practice, are also still awaiting analysis.
A short time after the quadratic results were obtained, Karmarkar (1984) announced a new algorithm for LP that is both theoretically and practically efficient. The interest of the research community shifted to this exciting development, and for a while some people believed that the simplex method had met its match and was now obsolete. Now that most of the dust has settled, it looks as if both algorithms have their respective advantages, and each has problems that it solves faster. Moreover, usually it is not possible to tell a priori which algorithm will be faster on a given problem. (One of
the wonderful side benefits of this development is that the existing codes for the simplex method have been improved by almost two orders of magnitude to make them competitive with the new challenger.) As is often the case, new solutions raise new problems, and one of them is related to our theme: Karmarkar's method is much faster in practice than what can be proved theoretically. Explaining this gap by probabilistic analysis is a new great challenge for the LP community.
In randomized algorithms for LP, a major challenge is now to find a randomized simplex algorithm that is polynomial, or to prove that no such algorithm exists.
References
Adler, I. (1983), The expected number of pivots needed to solve parametric linear programs and the efficiency of the self-dual simplex method, Working Paper, Department of Industrial Engineering and Operations Research, University of California at Berkeley.
Adler, I., and N. Megiddo (1985), A simplex algorithm whose average number of steps is bounded between two quadratic functions of the smaller dimension, J. Assoc. Comput. Mach. 32, 871-895.
Adler, I., and R. Shamir (1989), A randomization scheme for speeding up algorithms for linear and convex programming problems with high constraints-to-variables ratio, Technical Report 89-7, DIMACS Center, Rutgers University. To appear in Math. Programming.
Adler, I., R.M. Karp, and R. Shamir (1986), A family of simplex variants solving an m × d linear program in expected number of pivots depending on d only, Math. Oper. Res. 11, 570-590.
Adler, I., R.M. Karp, and R. Shamir (1987), A simplex variant solving an m × d linear program in O(min(m², d²)) expected number of pivot steps, J. Complex. 3, 371-387.
Alon, N., and N. Megiddo (1990), Parallel linear programming in fixed dimension almost surely in constant time, in Proc. 31st IEEE Symposium on the Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, Calif., 574-582. To appear in J. Assoc. Comput. Mach.
Blair, C. (1986), Random linear programs with many variables and few constraints, Math. Programming 34, 62-71.
Borgwardt, K.H. (1977a), Untersuchungen zur Asymptotik der mittleren Schrittzahl von Simplexverfahren in der Linearen Optimierung, Dissertation, Kaiserslautern University, Kaiserslautern, Germany.
Borgwardt, K.H. (1977b), Untersuchungen zur Asymptotik der mittleren Schrittzahl von Simplexverfahren in der Linearen Optimierung, Methods Oper. Res. 28, 332-345.
Borgwardt, K.H. (1978), Zum Rechenaufwand von Simplexverfahren, Methods Oper. Res. 31, 83-97.
Borgwardt, K.H. (1981), The expected number of pivot steps required by a certain variant of the simplex method is polynomial, Methods Oper. Res. 43, 35-41.
Borgwardt, K.H. (1982), The average number of pivot steps required by the simplex-method is polynomial, Z. Oper. Res. 26, 157-177.
Borgwardt, K.H. (1987), The Simplex Method: A Probabilistic Approach , Springer-Verlag, Berlin.
Borgwardt, K.H. (1990), Probabilistic analysis of the simplex method, in Mathematical Developments Arising From Linear Programming, J.C. Lagarias and M.J. Todd, eds., Contemporary Mathematics, no. 114, American Mathematical Society, Providence, R.I., 21-34.
Borgwardt, K.H., R. Datum, R. Donig, and G. Joas (1990), Empirical studies on the average efficiency of Simplex variants under rotation symmetry, Report, Institut für Mathematik, Universität Augsburg, Germany.
Clarkson, K.L. (1991), A Las Vegas algorithm for linear programming when the dimension is small, draft manuscript, AT&T Bell Laboratories, November. (Earlier version with same title in Proc. 20th Annual ACM Symposium on Theory of Computing (1988), ACM Press, New York, 452-456.)
Dantzig, G.B. (1963), Linear Programming and Extensions, Princeton University Press, Princeton, N.J.
Dyer, M.E., and A.M. Frieze (1989), A randomized algorithm for fixed-dimensional linear programming, Math. Programming 44, 202-212.
Eaves, B.C., and H. Scarf (1976), The solution of systems of piecewise linear equations, Math. Oper. Res. 1, 1-27.
Gass, S.I., and T.L. Saaty (1955), The computational algorithm for the parametric objective function, Naval Res. Logist. Quart. 2, 39-45.
Grünbaum, B. (1967), Convex Polytopes, John Wiley & Sons, New York.
Haimovich, M. (1983), The simplex algorithm is very good!—On the expected number of pivot steps and related properties of random linear programs, draft, Columbia University, New York.
Kalai, G. (1991), The diameter problem for convex polytopes and f-vector theory, in The Victor Klee Festschrift, P. Gritzman and B. Sturmfels, eds., American Mathematical Society, Providence, R.I., 387-411.
Kalai, G. (1992a), Upper bounds for the diameter of graphs of convex polytopes, Discrete Comput. Geom. 7, to appear.
Kalai, G. (1992b), A subexponential randomized simplex algorithm (extended abstract), in Proc. 24th Annual ACM Symposium on Theory of Computing, ACM Press, New York, to appear.
Kalai, G., and D.J. Kleitman (1992), A quasi-polynomial bound for the diameter of graphs of polyhedra, Bull. Amer. Math. Soc. 24, in press.
Karmarkar, N. (1984), A new polynomial-time algorithm for linear programming, Combinatorica 4, 373-395.
Klee, V., and P. Kleinschmidt (1987), The d-step conjecture and its relatives, Math. Oper. Res. 12, 718-755.
Lovász, L. (1980), A new linear programming algorithm—better or worse than the simplex method?, Math. Intell. 2, 141-146.
Matoušek, J., M. Sharir, and E. Welzl (1992), A subexponential bound for linear programming, in Proc. 8th Annual Symposium on Computational Geometry, ACM Press, New York, to appear.
Megiddo, N. (1984), Linear programming in linear time when the dimension is fixed, J. Assoc. Comput. Mach. 31, 114-127.
Megiddo, N. (1985), A note on the generality of the self-dual algorithm with various starting points, Methods Oper. Res. 49, 271-275.
Megiddo, N. (1986a), Improved asymptotic analysis of the average number of steps performed by the self dual simplex algorithm, Math. Programming 35, 140-172.
Megiddo, N. (1986b), On the expected number of linear complementarity cones intersected by random and semi-random rays, Math. Programming 35, 225-235.
Megiddo, N. (1987), On the complexity of linear programming, in Advances in Economic Theory, T.F. Bewley, ed., Cambridge University Press, New York, 225-268.
Saigal, R. (1983), On some average results for linear complementarity problems, Manuscript, Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Ill.
Seidel, R. (1990), Linear programming and convex hulls made easy, in Proc. 6th Annual ACM Symposium on Computational Geometry, ACM Press, New York, 211-215.
Seidel, R. (1991), Low dimensional linear programming and convex hulls made easy, Discrete Comput. Geom. 6, 423-434.
Shamir, R. (1987), The efficiency of the simplex method: A survey, Manage. Sci. 33, 301-334.
Sharir, M., and E. Welzl (1992), A combinatorial bound for linear programming and related problems, in Proc. 9th Symposium on the Theoretical Aspects of Computer Science, Lecture Notes in Computer Science no. 577, Springer-Verlag, New York, 569-579.
Smale, S. (1983a), On the average speed of the simplex method of linear programming, Math. Programming 27, 241-262.
Smale, S. (1983b), The problem of the average speed of the simplex method, in Mathematical Programming: The State of the Art, A. Bachem, M. Grötschel, and B. Korte, eds., Springer-Verlag, Berlin, 530-539.
Todd, M.J. (1986), Polynomial expected behavior of a pivoting algorithm for linear complementarity and linear programming problems, Math. Programming 35, 173-192.
Todd, M.J. (1991), Probabilistic models for linear programming, Math. Oper. Res. 16, 671-693.
Welzl, E. (1991), Smallest enclosing disks (balls and ellipsoids), in New Results and New Trends in Computer Science, H. Maurer, ed., Lecture Notes in Computer Science no. 555, Springer-Verlag, New York, 359-370.