Minimizing a polynomial function over a region defined by polynomial inequalities models broad classes of hard problems from combinatorics, geometry and optimization. New algorithmic approaches have emerged recently for computing the global minimum, by combining tools from real algebra (sums of squares of polynomials) and functional analysis (moments of measures) with semidefinite optimization. Sums of squares are used to certify positive polynomials, combining an old idea of Hilbert with the recent algorithmic insight that they can be checked efficiently with semidefinite optimization. The dual approach revisits the classical moment problem and leads to algorithmic methods for checking optimality of semidefinite relaxations and extracting global minimizers. We review some selected features of this general methodology, illustrate how it applies to some combinatorial graph problems, and discuss links with other relaxation methods.
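To make the duality concrete (this is the standard formulation from the literature, not quoted from any single paper below): for $p^* = \inf\{p(x) : x \in K\}$ with $K = \{x \in \mathbb{R}^n : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$, the sum-of-squares side of order $t$ maximizes $\lambda$ subject to the certificate $p - \lambda = \sigma_0 + \sum_{j=1}^m \sigma_j g_j$, where each $\sigma_j$ is a sum of squares and $\deg(\sigma_0), \deg(\sigma_j g_j) \le 2t$, while the dual moment side minimizes $L_y(p) = \sum_\alpha p_\alpha y_\alpha$ over sequences $y$ with $y_0 = 1$, $M_t(y) \succeq 0$ and $M_{t - \lceil \deg g_j/2 \rceil}(g_j\, y) \succeq 0$. Both are semidefinite programs, each yields a lower bound on $p^*$, and under a standard compactness (Archimedean) assumption the bounds converge to $p^*$ as the order $t$ increases.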
In recent years semidefinite programming has become a widely used tool for designing more efficient algorithms for approximating hard combinatorial optimization problems and, more generally, polynomial optimization problems, which deal with optimizing a polynomial objective function over a basic closed semi-algebraic set. The underlying paradigm is that while testing nonnegativity of a polynomial is a hard problem, one can test efficiently whether it can be written as a sum of squares of polynomials by using semidefinite programming. In this note we sketch some of the main mathematical tools that underlie this approach and illustrate its application to some graph problems dealing with maximum cuts, stable sets and graph colouring.
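The "test a sum-of-squares decomposition by semidefinite programming" step is easy to sketch in code. The snippet below is a minimal illustration, assuming the cvxpy modelling library and its bundled SDP solver (neither is mentioned in the texts above): it checks whether $x^4 + 2x^3 + 3x^2 + 2x + 1$ is a sum of squares by searching for a positive semidefinite Gram matrix $Q$ with $p(x) = z^\top Q z$ for $z = (1, x, x^2)$.

```python
import cvxpy as cp

# Gram-matrix SOS test for p(x) = x^4 + 2x^3 + 3x^2 + 2x + 1.
# We look for a symmetric PSD matrix Q with p(x) = z^T Q z, z = (1, x, x^2);
# matching coefficients of 1, x, x^2, x^3, x^4 gives linear constraints on Q.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                       # Q positive semidefinite
    Q[0, 0] == 1,                 # constant term
    2 * Q[0, 1] == 2,             # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 3,   # coefficient of x^2
    2 * Q[1, 2] == 2,             # coefficient of x^3
    Q[2, 2] == 1,                 # coefficient of x^4
]
problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
problem.solve()
print(problem.status)  # "optimal" means a PSD Gram matrix exists, so p is SOS
```

In the optimization setting one instead maximizes a lower bound $\lambda$ subject to $p - \lambda$ admitting such a Gram matrix, which is exactly the semidefinite program behind the bounds discussed in these abstracts.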
Journal of Global Optimization, 2009
We briefly review the duality between moment problems and sums of squares (s.o.s.) representations of positive polynomials, and compare s.o.s. versus nonnegative polynomials. We then describe how to use such results to define convergent semidefinite programming relaxations in polynomial optimization as well as for the two related problems of computing the convex envelope of a rational function and finding all zeros of a system of polynomial equations.
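A classical example behind the "s.o.s. versus nonnegative" comparison, standard in this literature rather than specific to the paper above, is the Motzkin polynomial $M(x, y) = x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1$: it is nonnegative on $\mathbb{R}^2$ by the arithmetic-geometric mean inequality applied to the three terms $x^4 y^2$, $x^2 y^4$ and $1$, yet it is not a sum of squares of polynomials, so the s.o.s. relaxation of an unconstrained problem can fail to be exact.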
2009
We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory of sums of squares of polynomials. We present these hierarchies of approximations and their main properties: asymptotic/finite convergence, optimality certificate, and extraction of global optimum solutions. We review the mathematical tools underlying these properties, in particular, some sums of squares representation results for positive polynomials, some results about moment matrices (in particular, of Curto and Fialkow), and the algebraic eigenvalue method for solving zero-dimensional systems of polynomial equations. We try whenever possible to provide detailed proofs and background.
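The optimality certificate and extraction step can be summarized as follows, in the notation that is standard in this literature (details vary between papers). If an optimal moment sequence $y$ of the order-$t$ relaxation satisfies the Curto-Fialkow flatness condition $\operatorname{rank} M_t(y) = \operatorname{rank} M_{t - d_K}(y)$, where $d_K = \max_j \lceil \deg g_j/2 \rceil$, then the relaxation is exact and $y$ has a representing measure supported on $r = \operatorname{rank} M_t(y)$ global minimizers; these points are recovered by the algebraic eigenvalue method, i.e. as joint eigenvalues of the multiplication matrices built from a column basis of the moment matrix.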
SIAM Journal on Optimization, 2001
We consider the problem of finding the unconstrained global minimum of a real-valued polynomial $p(x): \mathbb{R}^n \to \mathbb{R}$, as well as the global minimum of $p(x)$ in a compact set K defined by polynomial inequalities. It is shown that this problem reduces to solving an (often finite) sequence of convex linear matrix inequality (LMI) problems. A notion of Karush-Kuhn-Tucker polynomials is introduced in a global optimality condition. Some illustrative examples are provided.
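One way to read the global optimality condition mentioned above (a hedged, generic formulation, not quoted from the paper) is as a polynomial Karush-Kuhn-Tucker system: for $K = \{x : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$, a candidate minimizer satisfies $\nabla p(x) - \sum_{j=1}^m \lambda_j \nabla g_j(x) = 0$ and $\lambda_j\, g_j(x) = 0$ for all $j$, together with $\lambda_j \ge 0$ and $g_j(x) \ge 0$. All of these conditions are polynomial equations and inequalities in $(x, \lambda)$, so they can be appended to the original problem and relaxed by the same LMI machinery.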
arXiv:1106.1666, 2011
We make use of a result of Hurwitz and Reznick, and a consequence of this result due to Fidalgo and Kovacec, to determine a new sufficient condition for a polynomial $f\in\mathbb{R}[X_1,...,X_n]$ of even degree to be a sum of squares. This result generalizes a result of Lasserre and a result of Fidalgo and Kovacec, and it also generalizes the improvements of these results given in [6]. We apply this result to obtain a new lower bound $f_{gp}$ for $f$, and we explain how $f_{gp}$ can be computed using geometric programming. The lower bound $f_{gp}$ is generally not as good as the lower bound $f_{sos}$ introduced by Lasserre and Parrilo and Sturmfels, which is computed using semidefinite programming, but a run time comparison shows that, in practice, the computation of $f_{gp}$ is much faster. The computation is simplest when the highest degree term of $f$ has the form $\sum_{i=1}^n a_iX_i^{2d}$, $a_i>0$, $i=1,...,n$. The lower bounds for $f$ established in [6] are obtained by evaluating the objective function of the geometric program at the appropriate feasible points.
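For readers unfamiliar with the second tool mentioned here, a geometric program in standard form (general background, not the specific program that computes $f_{gp}$) minimizes a posynomial $f_0(x) = \sum_t c_t x_1^{a_{1t}} \cdots x_n^{a_{nt}}$ with $c_t > 0$ over $x > 0$, subject to posynomial constraints $f_i(x) \le 1$ and monomial constraints $h_j(x) = 1$; after the change of variables $x_i = e^{u_i}$ and taking logarithms it becomes a convex program, which is consistent with the favourable run times reported above.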
2012
There is a wide variety of mathematical problems in different areas that are classified under the title of the moment problem. We are interested in the moment problem with polynomial data and its relation to real algebra and real algebraic geometry. In this direction, we consider two different variants of the moment problem.
2009
The purpose of this note is to survey a methodology to solve systems of polynomial equations and inequalities. The techniques we discuss use the algebra of multivariate polynomials with coefficients over a field to create large-scale linear algebra or semidefinite programming relaxations of many kinds of feasibility or optimization questions. We are particularly interested in problems arising in combinatorial optimization.
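A concrete instance of the "large-scale linear algebra" relaxations mentioned here, sketched from general background rather than quoted from the note itself: by Hilbert's Nullstellensatz, a system $f_1 = \dots = f_s = 0$ has no solution over the complex numbers if and only if there exist polynomials $\beta_1, \dots, \beta_s$ with $\sum_i \beta_i f_i = 1$. Fixing a degree bound $d$ for the $\beta_i$ and comparing coefficients on both sides turns the search for such a certificate into a (typically very large) system of linear equations in the unknown coefficients of the $\beta_i$, so infeasibility of a combinatorial problem encoded by the $f_i$ can be certified with plain linear algebra; the semidefinite variants replace the $\beta_i$ by sums of squares via Positivstellensatz-type certificates.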
Optimization problems arise in widely varying contexts. The general optimization problem is to find the minimum value of a certain function, the objective, on a certain set, defined by constraints. To make such problems amenable to analysis, further restrictions must be imposed on the kinds of objectives and constraints that may arise. A priori, it might seem useful to require them to be polynomial. After all, the entire toolbox of algebra and calculus is available to deal with polynomials. They can be represented, evaluated, and manipulated easily, and they are often used to approximate more complicated functions. Partly for these reasons, they arise in many kinds of applications. As this course has demonstrated, it so happens that this is not the most fruitful restriction for optimization problems. Even problems with quadratic objectives and quadratic constraints may be difficult to solve. Rather, it is the field of convex optimization which has developed a body of theory and practice leading to computationally effective solution procedures. However, the aforementioned reasons why polynomial optimization would be desirable are still valid. Thus, this paper attempts to explain and demonstrate how to use the techniques of convex optimization to approximately (often, exactly) solve polynomial optimization problems. For concreteness, the problems will be posed as minimization problems. For simplicity, the constraints will be linear, or absent.
Mathematical Programming, 2015
The rapidly growing field of polynomial optimisation (PO) is concerned with optimisation problems in which the objective and constraint functions are all polynomials. There are applications of PO in a surprisingly wide variety of contexts, including, for example, operational research, statistics, applied probability, quantitative finance, theoretical computer science and various branches of engineering and the physical sciences. Not only that, but current research on PO is remarkably inter-disciplinary in nature, involving researchers from all of the above-mentioned disciplines, together with several branches of mathematics including graph theory, numerical analysis, algebraic geometry, commutative algebra and moment theory. This special issue of Mathematical Programming Series B was originally conceived during a 4-week residential programme on PO which took place in July and August 2013 at the Isaac Newton Institute for the Mathematical Sciences, an internationally recognised research institute in Cambridge, United Kingdom.
Lecture Notes in Economics and Mathematical Systems, 1992
It is known that point searching in basic semialgebraic sets and the search for globally minimal points in polynomial optimization tasks can be carried out using $(sd)^{O(n)}$ arithmetic operations, where n and s are the numbers of variables and constraints and d is the maximal degree of the polynomials involved. Subject to certain conditions, we associate to each of these problems an intrinsic system degree, which in the worst case is of order $(nd)^{O(n)}$ and which measures the intrinsic complexity of the task under consideration. We design non-uniform deterministic or uniform probabilistic algorithms of intrinsic, quasi-polynomial complexity which solve these problems.