We present a new algorithm for solving a polynomial program P based on the recent "joint + marginal" approach of the first author to parametric polynomial optimization. The idea is to first treat the variable x1 as a parameter and consider the associated (n − 1)-variable problem P(x1) in the variables (x2, ..., xn), where the parameter x1 is fixed and takes values in some interval Y1 ⊂ R, with some probability measure ϕ1 uniformly distributed on Y1. One then considers the hierarchy of what we call "joint + marginal" semidefinite relaxations, whose duals provide a sequence of univariate polynomial approximations x1 → p_k(x1) that converges to the optimal value function x1 → J(x1) of problem P(x1) as k increases. Then, with k fixed a priori, one computes x1* ∈ Y1 which minimizes the univariate polynomial p_k(x1) on the interval Y1, a convex optimization problem that can be solved via a single semidefinite program. The quality of the approximation depends on how large k can be chosen (in general, for problems of significant size, k = 1 is the only practical choice). One then iterates the procedure with an (n − 2)-variable problem P(x2), with parameter x2 in some new interval Y2 ⊂ R, and so on, finally obtaining a vector x* ∈ Rn. Preliminary numerical results are provided.
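The overall scheme thus reduces an n-variable problem to n univariate minimizations. Below is a minimal Python sketch of that outer loop; `joint_marginal_approx` is a hypothetical placeholder for the coefficients of p_k obtained from the dual of the joint+marginal semidefinite relaxation, and the univariate minimization is done here by inspecting stationary points rather than by the single SDP mentioned in the abstract.

```python
import numpy as np

def minimize_on_interval(coeffs, lo, hi):
    """Global minimizer of a univariate polynomial on [lo, hi]: check the
    endpoints and the real stationary points (roots of the derivative)."""
    p = np.poly1d(coeffs)
    crit = [r.real for r in p.deriv().roots
            if abs(r.imag) < 1e-9 and lo <= r.real <= hi]
    cand = np.array([lo, hi] + crit)
    return cand[np.argmin(p(cand))]

def joint_marginal_approx(i, x_fixed):
    """Hypothetical stand-in: in the actual algorithm, the coefficients of
    p_k(x_i) come from the dual of the k-th 'joint+marginal' SDP relaxation
    of P(x_i), with x_1, ..., x_{i-1} already fixed."""
    return np.array([1.0, -1.0, 0.25])  # placeholder: (x - 0.5)^2

n, Y = 3, [(0.0, 1.0)] * 3  # toy intervals Y_i (assumed)
x_hat = []
for i in range(n):
    coeffs = joint_marginal_approx(i, x_hat)
    x_hat.append(minimize_on_interval(coeffs, *Y[i]))
print("candidate point:", x_hat)
```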
Optimization problems arise in widely varying contexts. The general optimization problem is to find the minimum value of a certain function, the objective, on a certain set, defined by constraints. To make such problems amenable to analysis, further restrictions must be imposed on the kinds of objectives and constraints that may arise. A priori, it might seem useful to require them to be polynomial. After all, the entire toolbox of algebra and calculus is available to deal with polynomials. They can be represented, evaluated, and manipulated easily, and they are often used to approximate more complicated functions. Partly for these reasons, they arise in many kinds of applications. As this course has demonstrated, it so happens that this is not the most fruitful restriction for optimization problems. Even problems with quadratic objectives and quadratic constraints may be difficult to solve. Rather, it is the field of convex optimization which has developed a body of theory and practice leading to computationally effective solution procedures. However, the aforementioned reasons why polynomial optimization would be desirable are still valid. Thus, this paper attempts to explain and demonstrate how to use the techniques of convex optimization to approximately (often, exactly) solve polynomial optimization problems. For concreteness, the problems will be posed as minimization problems. For simplicity, the constraints will be linear, or absent.
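For a single univariate polynomial, the link between polynomial and convex optimization is easy to make concrete: p − γ is nonnegative exactly when it is a sum of squares, and finding the largest such γ is a semidefinite program. A minimal sketch with cvxpy (the solver and the example polynomial are illustrative choices, not taken from the paper):

```python
import cvxpy as cp

# p(x) = x^4 - 3x^3 + 2x^2 + 1; c[k] is the coefficient of x^k
c = [1.0, 0.0, 2.0, -3.0, 1.0]

gamma = cp.Variable()               # candidate lower bound on p
Q = cp.Variable((3, 3), PSD=True)   # Gram matrix in the basis [1, x, x^2]

# Match coefficients of p(x) - gamma = [1, x, x^2] Q [1, x, x^2]^T
constraints = [
    Q[0, 0] == c[0] - gamma,        # x^0
    2 * Q[0, 1] == c[1],            # x^1
    2 * Q[0, 2] + Q[1, 1] == c[2],  # x^2
    2 * Q[1, 2] == c[3],            # x^3
    Q[2, 2] == c[4],                # x^4
]
cp.Problem(cp.Maximize(gamma), constraints).solve()
print("SOS lower bound:", gamma.value)  # exact minimum in the univariate case
```

In several variables, nonnegativity and sums of squares no longer coincide, which is why the hierarchies discussed elsewhere on this page converge to the optimum only in the limit rather than in one step.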
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2011
Semidefinite programming has been used successfully to build hierarchies of convex relaxations to approximate polynomial programs. This approach rapidly becomes computationally expensive and is often tractable only for problems of small sizes. We propose an iterative scheme that improves the semidefinite relaxations without incurring exponential growth in their size. The key ingredient is a dynamic scheme for generating valid polynomial inequalities for general polynomial programs. These valid inequalities are then used to construct better approximations of the original problem. As a result, the proposed scheme is in principle scalable to large general combinatorial optimization problems. For binary polynomial programs, we prove that the proposed scheme converges to the global optimal solution for interesting cases of the initial approximation of the problem. We also present examples illustrating the computational behaviour of the scheme and compare it to other methods in the literature.
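To make the flavour of such a scheme concrete, here is a small cvxpy sketch on a max-cut instance: the basic SDP relaxation is iteratively strengthened with violated triangle inequalities. The instance, the cut family, and the separation rule are illustrative stand-ins, not the authors' dynamic inequality-generation scheme.

```python
import itertools
import cvxpy as cp
import numpy as np

# Random weighted graph (illustrative data)
n = 5
rng = np.random.default_rng(0)
W = rng.uniform(0, 1, (n, n))
W = np.triu(W, 1); W = W + W.T

X = cp.Variable((n, n), PSD=True)
base = [cp.diag(X) == 1]                      # X_ii = x_i^2 = 1
obj = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)

cuts = []
for it in range(5):
    prob = cp.Problem(obj, base + cuts)
    prob.solve()
    Xv = X.value
    added = 0
    # Separation: triangle inequalities, valid for all x in {-1, 1}^n
    for i, j, k in itertools.combinations(range(n), 3):
        for a, b, c in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]:
            if a * Xv[i, j] + b * Xv[i, k] + c * Xv[j, k] < -1 - 1e-6:
                cuts.append(a * X[i, j] + b * X[i, k] + c * X[j, k] >= -1)
                added += 1
    print(f"iteration {it}: bound {prob.value:.4f}, cuts added: {added}")
    if added == 0:
        break
```

Each added inequality is valid for every ±1 labelling, so the upper bound can only tighten while the matrix variable keeps its original size.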
Journal of the Operations Research Society of Japan, 2008
The SDP (semidefinite programming) relaxation for general POPs (polynomial optimization problems), which was proposed by Lasserre as a method for computing global optimal solutions of POPs, has recently become an active research subject. We propose a new heuristic method exploiting the equality constraints in a given POP, and strengthen the SDP relaxation so as to achieve faster convergence to the global optimum of the POP. We can apply this method to both the dense SDP relaxation originally proposed by Lasserre and the sparse SDP relaxation later proposed by Kim, Kojima, Muramatsu and Waki. In particular, our heuristic method incorporated into the sparse SDP relaxation method has shown promising performance in numerical experiments on large-scale sparse POPs. Roughly speaking, we derive valid equality constraints from the original equality constraints of the POP, and then use them to convert the dense or sparse SDP relaxation into a new, stronger SDP relaxation. Our method is motivated by strong theoretical results on the convergence of SDP relaxations for POPs with equality constraints provided by Lasserre, Parrilo and Laurent, but we place the main emphasis on the practical aspect of computing more accurate lower bounds for larger sparse POPs.
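The core idea, deriving new valid equalities as polynomial multiples of the original ones, can be illustrated in a few lines of sympy; the constraint and the multiplier degree below are made up for illustration, and the paper's actual heuristic for selecting such products is more refined.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
h = x1**2 + x2**2 - 1           # original equality constraint: h = 0

# Multiply h by all monomials up to degree 1 to induce new valid equalities
multipliers = [sp.Integer(1), x1, x2]
for m in multipliers:
    print(sp.expand(m * h), "= 0")  # each product vanishes on the feasible set
```

Adding such products as explicit constraints gives the relaxation additional linear relations among the moment variables, which is what strengthens the SDP.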
SIAM Journal on Optimization
A bilevel program is an optimization problem whose constraints involve the solution set of another optimization problem parameterized by the upper-level variables. This paper studies bilevel polynomial programs (BPPs), i.e., bilevel programs in which all the functions are polynomials. We reformulate BPPs equivalently as semi-infinite polynomial programs (SIPPs), using Fritz John conditions and Jacobian representations. Combining the exchange technique with Lasserre-type semidefinite relaxations, we propose a numerical method for solving BPPs. For simple BPPs, we prove convergence to global optimal solutions. Numerical experiments are presented to show the efficiency of the proposed algorithm.
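The exchange technique itself is easy to demonstrate on a toy semi-infinite program (the bilevel machinery of Fritz John conditions and Jacobian representations is beyond a short sketch). A scipy example with a made-up constraint g(x, y) ≤ 0 required for all y in [0, 1]:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def g(x, y):  # toy semi-infinite constraint: g(x, y) <= 0 for all y in [0, 1]
    return 4 * y * (1 - y) - x[0] - x[1] * y

Y = [0.0, 1.0]     # finite working set of parameter values
x = np.zeros(2)
for it in range(20):
    # Outer problem: minimize ||x||^2 subject to g(x, y) <= 0 for y in Y
    cons = [{"type": "ineq", "fun": (lambda x, y=y: -g(x, y))} for y in Y]
    x = minimize(lambda x: x @ x, x, constraints=cons).x
    # Separation: find the most violated parameter value in [0, 1]
    sep = minimize_scalar(lambda y: -g(x, y), bounds=(0, 1), method="bounded")
    if g(x, sep.x) <= 1e-8:
        break          # x is feasible for the full semi-infinite constraint
    Y.append(sep.x)    # exchange step: add the violated instance
print(f"solution {x} after {it + 1} iterations, {len(Y)} constraints kept")
```

The finite working set grows only where the infinite constraint family is actually active, which is what makes the exchange approach practical.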
Mathematical Programming, 1982
The purpose of this paper is to present an accelerated algorithm for solving 0-1 positive polynomial (PP) problems of finding a 0-1 vector x that maximizes cx subject to f(x) ≤ b, where f(x) = (fi(x)) is an m-vector of polynomials with nonnegative coefficients. Like our covering relaxation algorithm (1979), the accelerated algorithm is a cutting-plane method.
Journal of Optimization Theory and Applications, 2010
We consider polynomial optimization problems that exhibit a structured sparsity pattern. It has been shown in [1, 2] that the optimal solution of a polynomial programming problem with structured sparsity can be computed by solving a series of semidefinite relaxations that possess the same kind of sparsity. We aim at solving these relaxations with a decomposition-based method, which partitions the relaxations according to their sparsity pattern. The decomposition-based method that we propose is an extension to semidefinite programming of the Benders decomposition for linear programs [3].
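For readers unfamiliar with [3], here is a sketch of classical Benders decomposition on a tiny linear program, using scipy's HiGHS duals; the instance is made up, and the paper's contribution, extending the idea to the semidefinite blocks arising from the sparsity pattern, follows the same master/subproblem pattern.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: min 2*x1 + 3*x2 + y  s.t.  x1 + x2 >= 4 - y,  x1 >= 1,
# x >= 0, 0 <= y <= 4; y is the "complicating" variable kept in the master.

def subproblem(y):
    """min 2*x1 + 3*x2 for fixed y; returns the value and its slope in y."""
    res = linprog(c=[2.0, 3.0],
                  A_ub=[[-1.0, -1.0], [-1.0, 0.0]],   # written as <= rows
                  b_ub=[y - 4.0, -1.0],
                  bounds=[(0, None)] * 2)
    return res.fun, res.ineqlin.marginals[0]          # dv/d(b_ub[0]) = dv/dy

cuts, lb, ub, y = [], -np.inf, np.inf, 0.0
while ub - lb > 1e-8:
    v, g = subproblem(y)
    ub = min(ub, v + y)                               # feasible upper bound
    cuts.append((v, g, y))
    # Master over (y, eta): min y + eta  s.t.  eta >= v_k + g_k * (y - y_k)
    A = [[gk, -1.0] for vk, gk, yk in cuts]
    b = [gk * yk - vk for vk, gk, yk in cuts]
    m = linprog(c=[1.0, 1.0], A_ub=A, b_ub=b,
                bounds=[(0, 4), (None, None)])
    y, lb = m.x[0], m.fun
print("optimal y:", y, "optimal value:", lb)
```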
Mathematics of Operations Research, 2003
Sherali and Adams [SA90], Lovász and Schrijver [LS91] and, more recently, Lasserre [Las01b] have proposed lift-and-project methods for constructing hierarchies of successive linear or semidefinite relaxations of a 0-1 polytope P ⊆ Rn converging to P in n steps. Lasserre's approach uses results about representations of positive polynomials as sums of squares and the dual theory of moments. We present the three methods in a common elementary framework and show that the Lasserre construction provides the tightest relaxations of P. As an application, this gives a direct, simple proof of the convergence of Lasserre's hierarchy. We describe applications to the stable set polytope and to the cut polytope.
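As a concrete instance of the stable set application, the first level of Lasserre's hierarchy can be written directly as a moment-matrix SDP; a small cvxpy sketch for the 5-cycle (the graph and solver choices are illustrative):

```python
import cvxpy as cp

# Stable set on the 5-cycle (stability number 2, LP bound 2.5)
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

# Moment matrix M = [[1, x^T], [x, X]] of the first Lasserre level
M = cp.Variable((n + 1, n + 1), symmetric=True)
cons = [M >> 0, M[0, 0] == 1]
cons += [M[i + 1, i + 1] == M[0, i + 1] for i in range(n)]  # x_i^2 = x_i
cons += [M[i + 1, j + 1] == 0 for i, j in edges]            # x_i * x_j = 0 on edges
prob = cp.Problem(cp.Maximize(sum(M[0, i + 1] for i in range(n))), cons)
prob.solve()
print("first-level Lasserre bound:", prob.value)
```

The printed bound lies between the stability number 2 and Lovász's theta number of the 5-cycle (√5 ≈ 2.236), already strictly better than the LP bound of 2.5.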
Journal of Global Optimization, 1995
We review various relaxations of (0,1)-quadratic programming problems. These include semidefinite programs, parametric trust region problems and concave quadratic maximization. All relaxations that we consider lead to efficiently solvable problems. The main contributions of the paper are the following. Using Lagrangian duality, we prove equivalence of the relaxations in a unified and simple way. Some of these equivalences have been known previously, but our approach leads to short and transparent proofs. Moreover we extend the approach to the case of equality constrained problems by taking the squared linear constraints into the objective function. We show how this technique can be applied to the Quadratic Assignment Problem, the Graph Partition Problem and the Max-Clique Problem. Finally we show our relaxation to be best possible among all quadratic majorants with zero trace.
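The "squared linear constraints" device mentioned above keeps the problem quadratic and is easy to verify numerically; a small numpy check with a made-up instance and penalty weight:

```python
import numpy as np

# Toy (0,1)-quadratic problem with one equality constraint a'x = b.
Q = np.array([[2.0, 1.0], [1.0, -3.0]])
a, b = np.array([1.0, 1.0]), 1.0
sigma = 10.0   # penalty weight (assumed; the paper treats it via duality)

# Folding the squared constraint into the objective keeps it quadratic:
# x'Qx - sigma*(a'x - b)^2 = x'(Q - sigma*aa')x + 2*sigma*b*a'x - sigma*b^2
Qp = Q - sigma * np.outer(a, a)
lin = 2 * sigma * b * a
const = -sigma * b**2

for bits in range(4):                       # brute-force x in {0,1}^2
    x = np.array([(bits >> k) & 1 for k in range(2)], dtype=float)
    penalized = x @ Qp @ x + lin @ x + const
    print(x, "original:", x @ Q @ x, "feasible:", np.isclose(a @ x, b),
          "penalized:", penalized)
# On feasible points the two objectives agree; infeasible points are penalized.
```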