Optimization deals with problems of minimizing or maximizing an objective function of several variables, usually subject to equality and/or inequality constraints. This paper is concerned with the construction of an effective algorithm for finding a global optimal solution (or global optimal solutions) of the constrained nonlinear programming problem. The objective function is assumed to satisfy a Lipschitz condition, and its Lipschitz constant is used to bound the search space effectively and to terminate the search iteration. The Lipschitz algorithm can be regarded as a Branch-and-Bound method, which is a kind of divide-and-conquer method.
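A minimal illustration (not taken from the paper) of how a known Lipschitz constant L bounds the objective over a region and thereby yields a pruning and termination test; the function, constants, and names below are assumptions made for the example.

# Hypothetical illustration: if |f(x) - f(y)| <= L*|x - y|, then over a ball of
# radius r around c we have f(x) >= f(c) - L*r, which bounds the search space
# and gives a stopping test.
import math

def lipschitz_lower_bound(f, center, radius, L):
    """Lower bound of f over the interval [center - radius, center + radius]."""
    return f(center) - L * radius

def can_prune(f, center, radius, L, best_value, eps):
    """Discard the region if even its optimistic bound cannot beat the incumbent."""
    return lipschitz_lower_bound(f, center, radius, L) >= best_value - eps

# Example: f(x) = sin(x) + 0.1*x with |f'(x)| <= 1.1, incumbent value -0.5
f = lambda x: math.sin(x) + 0.1 * x
print(can_prune(f, center=2.5, radius=0.5, L=1.1, best_value=-0.5, eps=1e-6))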
Applied Mathematics, 2012
Constrained nonlinear optimization problems are well known to be very difficult. In this paper, we present a new algorithm for solving such problems. Our proposed algorithm combines the Branch-and-Bound algorithm with the Lipschitz constant to limit the search area effectively; this is essential for solving constrained nonlinear optimization problems. We obtain a more appropriate Lipschitz constant for each divided area by applying a formula manipulation system. Therefore, we obtain a better approximate solution without using a large number of search points. The efficiency of the proposed algorithm is shown by the results of numerical experiments.
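As a rough companion to the abstract above, here is a minimal one-dimensional sketch of a Lipschitz-based Branch-and-Bound loop. It is not the authors' algorithm (it is unconstrained, uses a single global Lipschitz constant, and omits the per-region constant refinement via formula manipulation); all names and constants are illustrative.

# Minimal 1-D Lipschitz branch-and-bound sketch: split intervals, bound each by
# f(center) - L*half_width, and prune intervals that cannot beat the incumbent.
import heapq, math

def lipschitz_bb(f, a, b, L, tol=1e-4, max_iter=10000):
    best_x, best_f = a, f(a)
    c = 0.5 * (a + b)
    heap = [(f(c) - L * 0.5 * (b - a), a, b)]      # (lower_bound, left, right)
    for _ in range(max_iter):
        if not heap:
            break
        lb, lo, hi = heapq.heappop(heap)
        if lb >= best_f - tol:                     # no region can improve the incumbent
            break
        for lo2, hi2 in ((lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)):
            c = 0.5 * (lo2 + hi2)
            fc = f(c)
            if fc < best_f:
                best_x, best_f = c, fc
            child_lb = fc - L * 0.5 * (hi2 - lo2)
            if child_lb < best_f - tol:            # keep only regions that might improve
                heapq.heappush(heap, (child_lb, lo2, hi2))
    return best_x, best_f

# Example: f(x) = sin(x) + 0.1*x on [0, 10] with Lipschitz constant about 1.1
print(lipschitz_bb(lambda x: math.sin(x) + 0.1 * x, 0.0, 10.0, L=1.1))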
Journal of Global Optimization
In this paper, Lipschitz univariate constrained global optimization problems where both the objective function and constraints can be multiextremal are considered. The constrained problem is reduced to a discontinuous unconstrained problem by the index scheme without introducing additional parameters or variables. A Branch-and-Bound method that does not use derivatives for solving the reduced problem is proposed. The method either determines the infeasibility of the original problem or finds lower and upper bounds for the global solution. Not all the constraints are evaluated during every iteration of the algorithm, providing a significant acceleration of the search. Convergence conditions of the new method are established. Test problems and extensive numerical experiments are presented.
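The exact index-scheme construction is given in the paper; the snippet below is only a hedged, simplified illustration of the general idea of folding constraints into a single discontinuous evaluation without penalty parameters or slack variables, with all names invented for the example.

# Simplified illustration (not the paper's construction): evaluate the constraints
# one at a time and stop at the first violated one, so later constraints are not
# evaluated; return the pair (index, value).  Preferring points that satisfy more
# constraints (higher index), with ties broken by the value, reduces the
# constrained problem to comparing these discontinuous pairs.
def index_value(x, constraints, objective):
    """constraints: callables g_i with feasible region g_i(x) <= 0."""
    for i, g in enumerate(constraints):
        v = g(x)
        if v > 0:
            return (i, v)
    return (len(constraints), objective(x))

# Example: minimise x**2 subject to x >= 1 (written as 1 - x <= 0) on [0, 2]
pairs = [(x, index_value(x, [lambda t: 1.0 - t], lambda t: t * t))
         for x in (0.5, 0.9, 1.0, 1.5)]
best = min(pairs, key=lambda p: (-p[1][0], p[1][1]))
print(best)   # picks x = 1.0, the constrained minimiser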
European Journal of Operational Research, 2010
In this paper a linear programming-based optimization algorithm called the Sequential Cutting Plane algorithm is presented. The main features of the algorithm are described, convergence to a Karush-Kuhn-Tucker stationary point is proved, and numerical experience on some well-known test sets is reported. The algorithm is based on an earlier version for convex inequality constrained problems, but here it is extended to general continuously differentiable nonlinear programming problems containing both nonlinear inequality and equality constraints. A comparison with some existing solvers shows that the algorithm is competitive, so this new method based on solving linear programming subproblems is a good alternative for solving nonlinear programming problems efficiently. The algorithm has been used as a subsolver in a mixed integer nonlinear programming algorithm, where the linear problems provide lower bounds on the optimal solutions of the convex, inequality constrained NLP subproblems in the branch and bound tree. Lower bounds are needed in the branch and bound procedure for efficiency reasons; see [21] for more details. Very promising results are reported in [20] for a special set of difficult block optimization problems: the MINLP version of the algorithm found better solutions in one minute than other commercial solvers found in 12 hours.
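For readers unfamiliar with cutting-plane ideas, here is a classical Kelley-style iteration for a convex objective over an interval; it is a much-simplified relative of the Sequential Cutting Plane algorithm above, not the algorithm itself, and it assumes scipy is available.

# Kelley-style cutting planes: the convex objective is replaced by the maximum of
# its linearisations, each LP solution adds a new cut, and the LP minimiser moves
# toward the true minimiser.  1-D for brevity.
import numpy as np
from scipy.optimize import linprog

def kelley(f, grad, lo, hi, iters=30):
    x = 0.5 * (lo + hi)
    cuts = []                                       # (slope, intercept) of each linearisation
    for _ in range(iters):
        cuts.append((grad(x), f(x) - grad(x) * x))
        # LP variables (x, t): minimise t subject to t >= a*x + b for every cut
        c = np.array([0.0, 1.0])
        A_ub = np.array([[a, -1.0] for a, b in cuts])
        b_ub = np.array([-b for a, b in cuts])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(lo, hi), (None, None)])
        x = res.x[0]
    return x, f(x)

# Example: minimise (x - 2)**2 + 1 on [0, 5]
print(kelley(lambda x: (x - 2) ** 2 + 1, lambda x: 2 * (x - 2), 0.0, 5.0))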
Top, 2012
We study a simple, yet unconventional approach to the global optimization of unconstrained nonlinear least-squares problems. Non-convexity of the sum of least-squares objective in parameter estimation problems may often lead to the presence of multiple local minima. Here, we focus on the spatial branch-and-bound algorithm for global optimization and experiment with one of its implementations, BARON (Sahinidis in J. Glob. Optim. 8(2):201–205, 1996), to solve parameter estimation problems. Through the explicit use of first-order optimality conditions, we are able to significantly expedite convergence to global optimality by strengthening the relaxation of the lower-bounding problem that forms a crucial part of the spatial branch-and-bound technique. We analyze the results obtained from 69 test cases taken from the statistics literature and discuss the successes and limitations of the proposed idea. In addition, we discuss software implementation for the automation of our strategy.
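The snippet below only illustrates the first step of the idea: generating the first-order optimality conditions of a least-squares objective symbolically so that they can be appended as redundant constraints to the lower-bounding problem. The model, data, and use of sympy are assumptions for the example; passing the equations to BARON through a modelling layer is not shown.

# Hedged sketch: symbolic gradient of a sum-of-squares objective.  In the
# strategy above, the equations gradient = 0 are added as extra constraints to
# strengthen the spatial branch-and-bound relaxation.  Requires sympy.
import sympy as sp

a, b = sp.symbols('a b')                        # parameters to estimate (illustrative)
data = [(1.0, 2.7), (2.0, 7.4), (3.0, 20.1)]    # toy (x, y) observations

residuals = [y - a * sp.exp(b * x) for x, y in data]
objective = sum(r ** 2 for r in residuals)

# First-order conditions: one equation per parameter
for p in (a, b):
    print(sp.simplify(sp.diff(objective, p)), '= 0')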
Journal of Numerical Analysis and Approximation Theory
In this paper, we propose an algorithm based on the branch and bound method that underestimates the objective function and uses a reductive transformation that converts all multivariable functions into univariable functions. We also propose several quadratic lower-bounding functions and show that they are preferable to other well-known bounds in the literature. Our experimental results show the approach to be more effective across different nonconvex functions.
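The paper's particular quadratic lower bounds are not reproduced here; the sketch below shows the standard one-dimensional construction, which only needs a bound K on how negative the curvature can get, so the reader can see what a quadratic lower-bounding function looks like in this setting.

# Standard 1-D quadratic underestimator: if f''(x) >= -K on the interval, then
# q(x) = f(c) + f'(c)*(x - c) - (K/2)*(x - c)**2 satisfies q(x) <= f(x) there.
import math

def quadratic_underestimator(f, df, c, K):
    fc, gc = f(c), df(c)
    return lambda x: fc + gc * (x - c) - 0.5 * K * (x - c) ** 2

# Example: f(x) = sin(x), f''(x) = -sin(x) >= -1, so K = 1 works everywhere.
q = quadratic_underestimator(math.sin, math.cos, c=2.0, K=1.0)
for x in (1.0, 2.0, 3.0):
    assert q(x) <= math.sin(x) + 1e-12
print(q(1.0), math.sin(1.0))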
Applied Mathematics and Computation, 2011
In this paper we present a new hybrid method, called the SASP method. The purpose of this method is the hybridization of simulated annealing (SA) with the descent method, where the gradient is estimated using simultaneous perturbation. First, the new hybrid method finds a local minimum using the descent method; then SA is executed in order to escape from the currently discovered local minimum to a better one, from which the descent method restarts a new local search, and so on until convergence. The new hybrid method can be widely applied to a class of global optimization problems for continuous functions with constraints. Experiments on 30 benchmark functions, including high-dimensional functions, show that the new method is able to find near-optimal solutions efficiently. In addition, its performance as a viable optimization method is demonstrated by comparing it with other existing algorithms. Numerical results demonstrate the robustness and efficiency of the presented method.
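The SASP schedule itself is not reproduced; the snippet shows only the simultaneous-perturbation gradient estimate that the descent phase relies on, with illustrative names and constants.

# Simultaneous-perturbation gradient estimate: two function evaluations along a
# random +/-1 direction give a stochastic estimate of the whole gradient.
import numpy as np

def spsa_gradient(f, x, c=1e-2, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    diff = f(x + c * delta) - f(x - c * delta)
    return diff / (2.0 * c * delta)     # elementwise; the same two evaluations serve every coordinate

# Example: f(x) = ||x||^2, whose true gradient at (1, -2) is (2, -4)
f = lambda x: float(np.dot(x, x))
print(spsa_gradient(f, np.array([1.0, -2.0])))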
2006
Accurate modelling of real-world problems often requires nonconvex terms to be introduced in the model, either in the objective function or in the constraints. Nonconvex programming is one of the hardest fields of optimization, presenting many challenges in both practical and theoretical aspects. The presence of multiple local minima calls for the application of global optimization techniques. This paper is a mini-course about global optimization techniques in nonconvex programming; it deals with some theoretical aspects of nonlinear programming as well as with some of the current state-of-the-art algorithms in global optimization. The syllabus is as follows. Some examples of Nonlinear Programming Problems (NLPs). General description of two-phase algorithms. Local optimization of NLPs: derivation of KKT conditions. Short notes about stochastic global multistart algorithms with a concrete example (SobolOpt). In-depth study of a deterministic spatial Branch-and-Bound algorithm, and con...
2010 9th IEEE/IAS International Conference on Industry Applications, INDUSCON 2010, 2010
This paper presents a comparison of the effectiveness of different linear programming (LP) solvers when used with a branch-and-bound algorithm to find optimal solutions to mixed integer linear problems (MIP). We compare the performance of an improved implementation of the branch-and-bound algorithm with three different LP solvers in order to evaluate each one. We also compare our implementation with other available solvers in obtaining multiple optimal solutions when they exist. The comparison across these situations demonstrates the robustness and efficiency of the developed algorithm.
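As background for how such a comparison is set up, here is a toy depth-first branch-and-bound for a small pure-integer LP, far simpler than the implementation studied in the paper; scipy's LP solver stands in for the pluggable LP engine, and the model is invented for the example.

# Toy branch-and-bound for maximise c@x s.t. A_ub@x <= b_ub, x >= 0 integer.
# The LP relaxation bounds each node; fractional variables are branched on.
import math
import numpy as np
from scipy.optimize import linprog

def solve_ip(c, A_ub, b_ub, n):
    best_val, best_x = -math.inf, None
    stack = [[(0, None)] * n]                       # per-variable (lower, upper) bounds
    while stack:
        bounds = stack.pop()
        res = linprog(-np.asarray(c, float), A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success or -res.fun <= best_val:
            continue                                # infeasible node, or cannot beat incumbent
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                                # LP solution already integral
            best_val, best_x = -res.fun, np.round(res.x)
            continue
        i, v = frac[0], res.x[frac[0]]              # branch on the first fractional variable
        lo, hi = bounds[i]
        down, up = list(bounds), list(bounds)
        down[i] = (lo, math.floor(v))
        up[i] = (math.ceil(v), hi)
        stack.extend([down, up])
    return best_val, best_x

# Example: maximise 5x + 4y s.t. 6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer
print(solve_ip([5, 4], [[6, 4], [1, 2]], [24, 6], n=2))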
2004
Abstract: In this report we compare two stochastic algorithms for global optimization, PGSL by Raphael & Smith [7] and GLOBAL by Csendes [2], with the deterministic algorithm MCS by Huyer & Neumaier [4] on the test set used by Jones et al. [6], which consists of the seven standard test functions from Dixon & Szegő [3] and two test functions from Yao [8], with the standard box bounds as given in [4]. For simplicity, we refer to this test set as the Jones test set.
IEEE Transactions on Systems, Man, and Cybernetics, 1988
Abstract - The theory and implementation of a new global search method of optimization in n dimensions is presented, inspired by Kushner's method in one dimension. This method is meant to address optimization problems where the function has many extrema, where it may or may not be differentiable, and where it is important to reduce the number of evaluations of the function at the expense of increased computation. Comparisons are made to the performance of other global optimization techniques on a set of standard differentiable test functions. A new class of discrete-valued test functions is introduced, and the performance of the method is determined on a randomly generated set of these functions. Overall, this method contains the power of other Bayesian/sampling techniques without the need of a separate local optimization technique for improved convergence. This yields the ability for the search to operate on unknown functions which may contain one or more discrete components.
In this paper, we present a new method to handle inequality constraints and apply it in NOVEL (Nonlinear Optimization via External Lead), a system we have developed for solving constrained continuous nonlinear optimization problems. In general, in applying Lagrange-multiplier methods to solve these problems, inequality constraints are first converted into equivalent equality constraints. One such conversion method adds a slack variable to each inequality constraint in order to convert it into an equality constraint. The disadvantage of this conversion is that when the search is inside a feasible region, some satisfied constraints may still pose a non-zero weight in the Lagrangian function, leading to possible oscillations and divergence when a local optimum lies on the boundary of a feasible region. We propose a new conversion method called the MaxQ method such that all satisfied constraints in a feasible region always carry zero weight in the Lagrange function; hence, minimizing the Lagrange function in a feasible region always leads to local minima of the objective function. We demonstrate that oscillations do not happen in our method. We also propose methods to speed up convergence when a local optimum lies on the boundary of a feasible region. Finally, we show improved experimental results in applying our proposed method in NOVEL on some existing benchmark problems and compare them to those obtained by applying the method based on slack variables.
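A tiny sketch of the two conversions contrasted above for a single inequality g(x) <= 0; the names and the exponent q = 2 are illustrative choices, not necessarily those of the paper.

# A slack variable turns g(x) <= 0 into g(x) + s^2 = 0, which must be enforced
# even inside the feasible region, whereas a MaxQ-style term max(0, g(x))**q is
# identically zero wherever the constraint is satisfied, so its Lagrangian term
# carries zero weight there.
def slack_equality(g, x, s):
    return g(x) + s * s

def maxq_equality(g, x, q=2):
    return max(0.0, g(x)) ** q

g = lambda x: x - 1.0                   # feasible region: x <= 1
for x in (0.5, 1.0, 1.5):
    print(x, slack_equality(g, x, s=0.7), maxq_equality(g, x))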
2020
The problem of finding global minima of nonlinear discrete functions arises in many fields of practical interest. In recent years, methods based on discrete filled functions have become popular as ways of solving this sort of problem. However, they rely on the steepest descent method for local searches. Here we present an approach that does not depend on a particular local optimization method, together with a new discrete filled function with the useful property that a good continuous global optimization algorithm applied to it leads to an approximation of the solution of the nonlinear discrete problem. Numerical results are given showing the efficiency of the new approach.
Optimization Letters, 2008
The multistart clustering global optimization method called GLOBAL was introduced in the 1980s for bound-constrained global optimization problems with black-box type objective functions. Since then the technological environment has changed much. The present paper briefly describes the revisions and updates made to the involved algorithms to utilize the newer technologies and to improve reliability. We discuss in detail the results of the numerical comparison with the old version and with C-GRASP, a continuous version of the GRASP method. According to these findings, the new version of GLOBAL is both more reliable and more efficient than the old one, and it compares favorably with C-GRASP too.
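GLOBAL itself, in particular its clustering step, is not reproduced here; the following bare multistart skeleton only fixes the overall structure the method builds on, with scipy's local solver standing in for the local search and all constants chosen for the example.

# Bare multistart skeleton: sample random starting points in the box, run a
# local solver from each, and keep the best local minimum found.
import numpy as np
from scipy.optimize import minimize

def multistart(f, lower, upper, n_starts=50, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lower, upper)
        res = minimize(f, x0, bounds=list(zip(lower, upper)))
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun

# Example: a Rastrigin-type function with many local minima
f = lambda x: float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))
print(multistart(f, [-5.12, -5.12], [5.12, 5.12]))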
2006
Mathematica provides a suite of built-in and third-party tools for nonlinear optimization. These tools are tested on a set of hard problems. The built-in Mathematica functions are tested as well as the tools in the MathOptimizer and Global Optimization packages. The problems tested represent classes of problems that cause difficulties for global solvers, including those with local minima, discontinuous and black-box modules, problems with non-real regions, and constrained problems with complicated and wavy constraints. In addition, scaling of performance with problem size is tested. In general, no tool could solve all problems, but all problems could be solved by at least one tool. All of the tools except the Global Optimization tools GlobalSearch and GlobalPenaltyFn were prone to returning infeasible solutions on discontinuous and black-box modules, problems with non-real regions, and constrained problems with complicated and wavy constraints. The GlobalSearch and GlobalPenaltyFn tools were thus the most robust, and were in many cases also the fastest.
2001
Performance comparison of Epperly's method [64] (an interval method), DONLP2 (SQP), and CSA in solving Floudas and Pardalos' continuous constrained NLPs [68]. CSA is based on (Cauchy_1, S-uniform, M) and α = 0.8. All times are in seconds on a Pentium-III 500-MHz computer running Solaris 7. '-' stands for no feasible solution found for Epperly's method. Both SQP and CSA use the same sequence of starting points.
5.7 Comparison results of LANCELOT, DONLP2, and CSA in solving discrete constrained NLPs derived from selected continuous problems from CUTE, using the starting point specified in each problem. CSA is based on (Cauchy_1, S-uniform, M) and α = 0.8. All times are in seconds on a Pentium-III 500-MHz computer running Solaris 7. '-' means that no feasible solution can be found by either the public version (01/05/2000) or the commercial version of LANCELOT (by submitting problems through the Internet, http://www-neos.mcs.anl.gov/neos/solvers/NCO:LANCELOT/), and that no feasible solution can be found by DONLP2. Numbers in bold represent the best solutions among the three methods if they have different solutions.
5.8 Comparison results of LANCELOT, DONLP2, and CSA in solving mixed-integer constrained NLPs derived from selected continuous problems from CUTE, using the starting point specified in each problem. CSA is based on (Cauchy_1, S-uniform, M) and α = 0.8. All times are in seconds on a Pentium-III 500-MHz computer running Solaris 7. '-' means that no feasible solution can be found by either the public version (01/05/2000) or the commercial version of LANCELOT (by submitting problems through the Internet, http://www-neos.mcs.anl.gov/neos/solvers/NCO:LANCELOT/), and that no feasible solution can be found by DONLP2. Numbers in bold represent the best solutions among the three methods if they have different solutions.
5.9 Comparison results of LANCELOT, DONLP2, and CSA in solving selected continuous problems from CUTE, using the starting point specified in each problem. CSA is based on (Cauchy_1, S-uniform, M) and α = 0.8, and does not use derivative information in each run. All times are in seconds on a Pentium-III 500-MHz computer running Solaris 7. '-' means that no feasible solution can be found by either the public version (01/05/2000) or the commercial version of LANCELOT (by submitting problems through the Internet, http://www-neos.mcs.anl.gov/neos/solvers/NCO:LANCELOT/), and that no feasible solution can be found by DONLP2. '*' means that solutions are obtained by the commercial version (no CPU time is available) but cannot be solved by the public version. Numbers in bold represent the best solutions among the three methods if they have different solutions.
5.11 Experimental results of applying LANCELOT on selected CUTE problems that cannot be solved by CSA at this time. All times are in seconds on a Pentium-III 500-MHz computer running Solaris 7. '-' means that no feasible solution can be found by both the public version (01/05/2000
The IMA Volumes in Mathematics and its Applications, 1999
This paper presents computational results of the parallelized version of the αBB global optimization algorithm. Important algorithmic and implementation issues are discussed and their impact on the design of parallel branch and bound methods is analyzed. These issues include selection of the appropriate architecture, communication patterns, frequency of communication, and termination detection. The approach is demonstrated with a variety of computational studies aimed at revealing the various types of behavior that the distributed branch and bound global optimization algorithm can exhibit. These include ideal behavior, speedup, detrimental, and deceleration anomalies.
Journal of Global Optimization, 1995
A branch and bound global optimization method, αBB, for general continuous optimization problems involving nonconvexities in the objective function and/or constraints is presented. The nonconvexities are categorized as being either of special structure or generic. A convex relaxation of the original nonconvex problem is obtained by (i) replacing all nonconvex terms of special structure (i.e. bilinear, fractional, signomial) with customized tight convex lower bounding functions and (ii) utilizing the α parameter as defined in [17] to underestimate nonconvex terms of generic structure. The proposed branch and bound type algorithm attains finite ε-convergence to the global minimum through the successive subdivision of the original region and the subsequent solution of a series of nonlinear convex minimization problems. The global optimization method, αBB, is implemented in C and tested on a variety of example problems.
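For nonconvex terms of generic structure, the α-type underestimator has a simple closed form; the sketch below shows it with an α picked by hand for the example rather than computed from an interval Hessian as in the full method, and the test function is only illustrative.

# alpha-type underestimator on a box: L(x) = f(x) + alpha * sum((xl - x)*(xu - x))
# lies below f on the box (each product is <= 0 there) and is convex once
# 2*alpha dominates the most negative Hessian eigenvalue of f over the box.
import math
import numpy as np

def alpha_underestimator(f, xl, xu, alpha):
    xl, xu = np.asarray(xl, float), np.asarray(xu, float)
    return lambda x: f(x) + alpha * float(np.sum((xl - x) * (xu - x)))

# Example: generic nonconvex term f(x, y) = sin(x) + sin(y) on [0, pi]^2;
# its Hessian eigenvalues are >= -1 there, so alpha = 0.5 suffices.
f = lambda x: float(np.sin(x[0]) + np.sin(x[1]))
L = alpha_underestimator(f, [0.0, 0.0], [math.pi, math.pi], alpha=0.5)
for p in ([0.5, 2.5], [math.pi / 2, math.pi / 2], [0.0, math.pi]):
    x = np.array(p)
    assert L(x) <= f(x) + 1e-12
print(L(np.array([math.pi / 2, math.pi / 2])), f(np.array([math.pi / 2, math.pi / 2])))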
Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), 2017
Global optimization is necessary when we want to achieve the best solution or require a new solution that is better than the old one. However, global optimization is a hard problem. The gradient descent method is a well-known technique for finding a local optimizer, whereas the approximate-solution approach aims to simplify how the global optimization problem is solved. In order to find the global optimizer in a practical way, I propose a so-called descending region (DR) algorithm, which combines the gradient descent method with the approximate-solution approach. The idea of the DR algorithm is that, given a known local minimizer, a better minimizer is searched for only in a so-called descending region below that local minimizer. A descending region starts from a so-called descending point, which is the main subject of the DR algorithm. The descending point, in turn, is the solution of an intersection equation (A). I then prove and provide a simpler linear equation system (B) derived from (A). System (B) is the most important result of this research, because (A) is solved by solving (B) sufficiently many times; in other words, the DR algorithm is refined repeatedly so as to produce such a (B) when searching for the global optimizer. I propose a so-called simulated Newton-Raphson (SNR) algorithm, a simulation of the Newton-Raphson method, to solve (B). The starting point is very important for the SNR algorithm to converge, so I also propose a so-called RTP algorithm, a refined and probabilistic process that partitions the solution space and generates random testing points in order to estimate the starting point of the SNR algorithm. In general, I combine the three algorithms DR, SNR, and RTP to solve the hard problem of global optimization. Although the approach follows a divide-and-conquer methodology in which global optimization is split into local optimization, equation solving, and partitioning, the solution is a synthesis in which DR is the backbone connecting itself with SNR and RTP.
SIAM Journal on Optimization, 2013
This paper deals with two kinds of the one-dimensional global optimization problems over a closed finite interval: (i) the objective function f (x) satisfies the Lipschitz condition with a constant L; (ii) the first derivative of f (x) satisfies the Lipschitz condition with a constant M . In the paper, six algorithms are presented for the case (i) and six algorithms for the case (ii). In both cases, auxiliary functions are constructed and adaptively improved during the search. In the case (i), piece-wise linear functions are constructed and in the case (ii) smooth piece-wise quadratic functions are used. The constants L and M either are taken as values known a priori or are dynamically estimated during the search. A recent technique that adaptively estimates the local Lipschitz constants over different zones of the search region is used to accelerate the search. A new technique called the local improvement is introduced in order to accelerate the search in both cases (i) and (ii). The algorithms are described in a unique framework, their properties are studied from a general viewpoint, and convergence conditions of the proposed algorithms are given. Numerical experiments executed on 120 test problems taken from the literature show quite a promising performance of the new accelerating techniques.
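For case (i), the classical piecewise-linear minorant that such methods build on can be written in a few lines; the adaptive estimation of L and the local improvement step described above are not shown, and the test function is an assumption made for the example.

# Piecewise-linear auxiliary function for case (i): from evaluated points
# (x_k, f(x_k)) and a Lipschitz constant L, F(x) = max_k (f(x_k) - L*|x - x_k|)
# satisfies F(x) <= f(x) everywhere on the interval.
import math

def auxiliary(points, L):
    """points: list of (x_k, f(x_k)); returns the piecewise-linear lower bound F."""
    return lambda x: max(fx - L * abs(x - xk) for xk, fx in points)

f = lambda x: math.sin(x) + 0.1 * x
pts = [(x, f(x)) for x in (0.0, 2.5, 5.0, 7.5, 10.0)]
F = auxiliary(pts, L=1.1)
for x in (1.0, 4.6, 9.3):
    assert F(x) <= f(x) + 1e-12
print(F(4.6), f(4.6))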