Optimization problems arise in widely varying contexts. The general optimization problem is to find the minimum value of a certain function, the objective, on a certain set, defined by constraints. To make such problems amenable to analysis, further restrictions must be imposed on the kinds of objectives and constraints that may arise. A priori, it might seem useful to require them to be polynomial. After all, the entire toolbox of algebra and calculus is available to deal with polynomials. They can be represented, evaluated, and manipulated easily, and they are often used to approximate more complicated functions. Partly for these reasons, they arise in many kinds of applications. As this course has demonstrated, it so happens that this is not the most fruitful restriction for optimization problems. Even problems with quadratic objectives and quadratic constraints may be difficult to solve. Rather, it is the field of convex optimization that has developed a body of theory and practice leading to computationally effective solution procedures. However, the aforementioned reasons why polynomial optimization would be desirable are still valid. Thus, this paper attempts to explain and demonstrate how to use the techniques of convex optimization to approximately (and often exactly) solve polynomial optimization problems. For concreteness, the problems will be posed as minimization problems. For simplicity, the constraints will be linear, or absent.
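To make the connection concrete, here is a minimal sketch of the simplest case: the global minimum of a univariate polynomial can be found exactly by maximizing, via a small semidefinite program, the shift t for which p(x) − t is a sum of squares. The sketch assumes the cvxpy modeling package with its default SDP-capable solver.

```python
import cvxpy as cp

# Minimize p(x) = x^4 - 4x^3 + 6x^2 - 4x + 5 = (x - 1)^4 + 4 over R.
# Coefficients of p, from degree 0 up to degree 4.
c = [5.0, -4.0, 6.0, -4.0, 1.0]

# Write p(x) - t = [1, x, x^2] Q [1, x, x^2]^T with Q symmetric PSD.
Q = cp.Variable((3, 3), PSD=True)
t = cp.Variable()

# Match coefficients of p(x) - t degree by degree.
constraints = [
    Q[0, 0] == c[0] - t,            # constant term
    2 * Q[0, 1] == c[1],            # x
    2 * Q[0, 2] + Q[1, 1] == c[2],  # x^2
    2 * Q[1, 2] == c[3],            # x^3
    Q[2, 2] == c[4],                # x^4
]

# The best lower bound is the largest such t; for univariate
# polynomials this sum-of-squares bound is always exact.
prob = cp.Problem(cp.Maximize(t), constraints)
prob.solve()
print(t.value)  # approximately 4.0, the true minimum of p
```

In several variables the same construction gives a lower bound that need not be exact, which is the gap the relaxation hierarchies discussed below are designed to close.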
Journal of Global Optimization, 2010
A polynomial optimization problem whose objective function is represented as a sum of positive and even powers of polynomials, called a polynomial least squares problem, is considered. Methods to transform a polynomial least squares problem into polynomial semidefinite programs in order to reduce the degrees of the polynomials are discussed. The computational efficiency of solving the original polynomial least squares problem and the transformed polynomial semidefinite programs is compared. Numerical results on selected polynomial least squares problems show better computational performance of a transformed polynomial semidefinite program, especially when the degrees of the polynomials are larger.
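The degree reduction at the heart of such transformations can be sketched as follows; this is a standard Schur-complement reformulation, and the paper's own transformations may differ in detail. For a sum of squares of polynomials,

```latex
\min_{x} \sum_i f_i(x)^2
\quad\Longleftrightarrow\quad
\min_{x,\,t} \sum_i t_i
\quad \text{s.t.} \quad
\begin{pmatrix} t_i & f_i(x) \\ f_i(x) & 1 \end{pmatrix} \succeq 0 \quad \forall i,
```

so the matrix entries have degree deg f_i instead of the scalar objective's degree 2 deg f_i; higher even powers can be handled by repeating the squaring step.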
In recent years semidefinite programming has become a widely used tool for designing more efficient algorithms for approximating hard combinatorial optimization problems and, more generally, polynomial optimization problems, which deal with optimizing a polynomial objective function over a basic closed semi-algebraic set. The underlying paradigm is that while testing nonnegativity of a polynomial is a hard problem, one can test efficiently whether it can be written as a sum of squares of polynomials by using semidefinite programming. In this note we sketch some of the main mathematical tools that underlie this approach and illustrate its application to some graph problems dealing with maximum cuts, stable sets and graph colouring.
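For the maximum cut problem mentioned here, the basic semidefinite relaxation (the well-known Goemans–Williamson relaxation) is particularly compact. A minimal sketch, assuming cvxpy and using a 5-cycle as the example graph:

```python
import cvxpy as cp

# Max-cut SDP relaxation on the 5-cycle: maximize the sum over edges
# of (1 - X_ij) / 2, subject to X PSD with unit diagonal.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))
prob = cp.Problem(objective, [cp.diag(X) == 1])
prob.solve()

# The SDP value (about 4.52 here) upper-bounds the true maximum cut
# (4 for the 5-cycle); rounding X with random hyperplanes yields a cut.
print(prob.value)
```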
49th IEEE Conference on Decision and Control (CDC), 2010
We present a new algorithm for solving a polynomial program P based on the recent "joint + marginal" approach of the first author for parametric polynomial optimization. The idea is to first treat the variable x1 as a parameter and solve the associated (n − 1)-variable problem P(x1) in (x2, …, xn), where the parameter x1 is fixed and takes values in some interval Y1 ⊂ R, with some probability ϕ1 uniformly distributed on Y1. One then considers the hierarchy of what we call "joint + marginal" semidefinite relaxations, whose duals provide a sequence of univariate polynomial approximations x1 ↦ p_k(x1) that converges to the optimal value function x1 ↦ J(x1) of problem P(x1) as k increases. With k fixed a priori, one computes x1* ∈ Y1 minimizing the univariate polynomial p_k(x1) on the interval Y1, a convex optimization problem that can be solved via a single semidefinite program. The quality of the approximation depends on how large k can be chosen (in general, for problems of significant size, k = 1 is the only choice). One then iterates the procedure with an (n − 2)-variable problem P(x2), with parameter x2 in some new interval Y2 ⊂ R, and so on, finally obtaining a vector x* ∈ R^n. Preliminary numerical results are provided.
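The inner step of each iteration, minimizing a fixed univariate polynomial on an interval, is elementary. A sketch of that step alone, assuming the coefficients of p_k have already been extracted from the dual of the semidefinite relaxation:

```python
import numpy as np

def minimize_on_interval(coeffs, a, b):
    """Return the minimizer on [a, b] of the polynomial with the given
    coefficients (increasing-degree order, as in numpy.polynomial).

    The minimum is attained at an endpoint or a real critical point.
    """
    p = np.polynomial.Polynomial(coeffs)
    candidates = [a, b]
    for r in p.deriv().roots():
        if abs(r.imag) < 1e-9 and a <= r.real <= b:
            candidates.append(r.real)
    return min(candidates, key=p)

# Example: p(x) = x^3 - x on [-1, 1]; the minimizer is 1/sqrt(3).
print(minimize_on_interval([0.0, -1.0, 0.0, 1.0], -1.0, 1.0))
```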
Minimizing a polynomial function over a region defined by polynomial inequalities models broad classes of hard problems from combinatorics, geometry and optimization. New algorithmic approaches have emerged recently for computing the global minimum, by combining tools from real algebra (sums of squares of polynomials) and functional analysis (moments of measures) with semidefinite optimization. Sums of squares are used to certify positive polynomials, combining an old idea of Hilbert with the recent algorithmic insight that they can be checked efficiently with semidefinite optimization. The dual approach revisits the classical moment problem and leads to algorithmic methods for checking optimality of semidefinite relaxations and extracting global minimizers. We review some selected features of this general methodology, illustrate how it applies to some combinatorial graph problems, and discuss links with other relaxation methods.
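The two sides of this methodology fit together as a dual pair; in the standard formulation, minimizing a polynomial p over a set K defined by polynomial inequalities can be written on the moment side as an optimization over probability measures on K, and bounded from below on the algebraic side by positivity certificates:

```latex
\min_{x \in K} p(x)
\;=\; \inf_{\mu \in \mathcal{P}(K)} \int_K p \, d\mu
\;\geq\; \sup \{\, t \in \mathbb{R} : p - t \in \Sigma(K) \,\},
```

where Σ(K) is a cone of polynomials certified nonnegative on K (sums of squares when K = R^n, or a quadratic module built from the constraint polynomials in general). Truncating both sides by degree turns them into the dual pair of semidefinite programs in the relaxation hierarchy.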
Lectures on Modern Convex Optimization, 2001
In semidefinite programming, one minimizes a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite. Such a constraint is nonlinear and nonsmooth, but convex, so semidefinite programs are convex optimization problems. Semidefinite programming unifies several standard problems (e.g., linear and quadratic programming) and finds many applications in engineering and combinatorial optimization. Although semidefinite programs are much more general than linear programs, they are not much harder to solve. Most interior-point methods for linear programming have been generalized to semidefinite programs. As in linear programming, these methods have polynomial worst-case complexity and perform very well in practice. This paper gives a survey of the theory and applications of semidefinite programs and an introduction to primal-dual interior-point methods for their solution.
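In symbols, the constraint described here is a linear matrix inequality, and the standard form reads (with symmetric matrices F_0, …, F_m as data):

```latex
\min_{x \in \mathbb{R}^m} \; c^{\top} x
\qquad \text{s.t.} \qquad
F(x) \;=\; F_0 + \sum_{i=1}^{m} x_i F_i \;\succeq\; 0.
```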
Journal of the Operations Research Society of Japan, 2008
The SDP (semidefinite programming) relaxation for general POPs (polynomial optimization problems), which was proposed by Lasserre as a method for computing global optimal solutions of POPs, has recently become an active research subject. We propose a new heuristic method exploiting the equality constraints in a given POP, and strengthen the SDP relaxation so as to achieve faster convergence to the global optimum of the POP. We can apply this method to both the dense SDP relaxation, which was originally proposed by Lasserre, and the sparse SDP relaxation, which was later proposed by Kim, Kojima, Muramatsu and Waki. In particular, our heuristic method incorporated into the sparse SDP relaxation method has shown promising performance in numerical experiments on large-scale sparse POPs. Roughly speaking, we induce valid equality constraints from the original equality constraints of the POP, and then use them to convert the dense or sparse SDP relaxation into a new, stronger SDP relaxation. Our method is inspired by some strong theoretical results on the convergence of SDP relaxations for POPs with equality constraints provided by Lasserre, Parrilo and Laurent, but we place the main emphasis on the practical aspect of computing more accurate lower bounds for larger sparse POPs.
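A small generic illustration of inducing valid equalities (not taken from the paper's experiments): every product of an equality constraint with a monomial is again valid, and each such product is linear in the moment variables of a sufficiently high-degree relaxation, so it can be added as an extra linear constraint. For example,

```latex
g(x) = x_1^2 + x_2^2 - 1 = 0
\quad\Longrightarrow\quad
x_1\, g(x) = 0, \qquad x_2\, g(x) = 0, \qquad x_1 x_2\, g(x) = 0, \;\ldots
```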
Optimization Methods and Software, 2003
We introduce a computer program PENNON for the solution of problems of convex Nonlinear and Semidefinite Programming (NLP-SDP). The algorithm used in PENNON is a generalized version of the Augmented Lagrangian method, originally introduced by Ben-Tal and Zibulevsky for convex NLP problems. We present a generalization of this algorithm to convex NLP-SDP problems, as implemented in PENNON, and give details of its implementation. The code can also solve second-order conic programming (SOCP) problems, as well as problems with a mixture of SDP, SOCP and NLP constraints. Results of extensive numerical tests and a comparison with other optimization codes are presented. The test examples show that PENNON is particularly suitable for large sparse problems.
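PENNON's generalized algorithm is not reproduced here, but the classical augmented Lagrangian iteration it builds on can be sketched for a plain equality-constrained problem; a minimal illustration using scipy, not the PENNON method itself:

```python
import numpy as np
from scipy.optimize import minimize

# Textbook augmented Lagrangian for: min f(x) s.t. c(x) = 0.
# Example: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: x[0] + x[1] - 1.0

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(20):
    # Minimize the augmented Lagrangian in x, multiplier held fixed.
    aug = lambda x: f(x) - lam * c(x) + 0.5 * rho * c(x) ** 2
    x = minimize(aug, x).x
    # First-order multiplier update.
    lam -= rho * c(x)
    if abs(c(x)) < 1e-8:
        break

print(x)  # approximately [0.5, 0.5]
```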
In this paper, we consider computational methods for optimizing a multivariate inhomogeneous polynomial function over a general convex set. The focus is on the design and analysis of polynomial-time approximation algorithms. The methods are able to deal with optimization models with polynomial objective functions of any fixed degree. In particular, we first study the problem of maximizing an inhomogeneous polynomial over the Euclidean ball. A polynomial-time approximation algorithm is proposed for this problem with an assured (relative) worst-case performance ratio, which depends only on the dimensions of the model. The method and approximation ratio are then generalized to optimize an inhomogeneous polynomial over the intersection of a finite number of co-centered ellipsoids. Furthermore, the constraint set is extended to a general convex compact set. Specifically, we propose a polynomial-time approximation algorithm with a (relative) worst-case performance ratio for polynomial optimization over some convex compact sets, e.g. a polytope. Finally, numerical results are reported, revealing good practical performance of the proposed algorithms for solving some randomly generated instances.
SIAM Journal on Optimization, 2006
Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite programming (SDP) relaxations are obtained. Numerical results from various test problems are included to show the improved performance of the SOS and SDP relaxations.
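The correlative sparsity pattern graph itself is simple to construct; a minimal sketch assuming networkx, using the common definition in which two variables are adjacent when they appear together in a monomial of the objective or in the support of the same constraint:

```python
import networkx as nx
from itertools import combinations

# Each monomial of the objective and each constraint is given as the
# set of variable indices it involves.
objective_terms = [{0, 1}, {1, 2}, {3}]  # e.g. x0*x1 + x1*x2**2 + x3**4
constraint_supports = [{2, 3}]           # e.g. x2 + x3 <= 1

G = nx.Graph()
G.add_nodes_from(range(4))
for support in objective_terms + constraint_supports:
    G.add_edges_from(combinations(sorted(support), 2))

# Maximal cliques of (a chordal extension of) this graph index the
# smaller moment matrices used in the sparse SOS/SDP relaxation.
print(list(nx.find_cliques(G)))  # e.g. [[0, 1], [1, 2], [2, 3]]
```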