A computationally expensive multi-modal optimization problem is considered. After an optimization loop, it is desirable that the optimality gap, i.e., the difference between the best value obtained and the true optimum, is as small as possible. We define the maximum loss as the supremum of the optimality gaps over a set of functions, i.e., the largest possible optimality gap under the assumption that the unknown objective function belongs to a given set of functions. The minimax strategy for global optimization is then, at each iteration, to choose a new evaluation point such that the maximum loss decreases as much as possible. This strategy contrasts with the maximum gain strategy employed by several common global optimization algorithms, and the relation between the two strategies is described. We investigate how to implement the minimax strategy for the space of Lipschitz functions on box-constrained domains. Several difficulties arise; for example, to obtain uniqueness of the set of solutions to the minimax problem it is often necessary to shrink the domain so that the problem becomes more localized. We propose a number of algorithmic schemes, based on sequential linearization, to solve the resulting subproblems. The algorithms are illustrated by numerical examples. We conclude that the minimax strategy is promising for global optimization when the main concern is to guarantee that the resulting solution is near-optimal.
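To make the maximum-loss idea concrete, here is a minimal one-dimensional sketch (not the paper's algorithm; the Lipschitz constant `L`, the grid resolution, and the test function are illustrative assumptions). For a function known to be Lipschitz with constant L, the sampled values imply the pointwise lower bound max_i [f(x_i) - L|x - x_i|], and the maximum loss is the gap between the best sampled value and the smallest value still consistent with the data.

```python
import numpy as np

def lipschitz_lower_bound(x, xs, fs, L):
    """Tightest lower bound at points x consistent with samples (xs, fs) and constant L."""
    return np.max(fs[:, None] - L * np.abs(x[None, :] - xs[:, None]), axis=0)

# Illustrative setup: a multimodal test function and a few evaluations.
f = lambda x: np.sin(3 * x) + 0.5 * x
xs = np.array([0.0, 1.0, 2.0, 3.0])
fs = f(xs)
L = 3.5  # assumed (over)estimate of the Lipschitz constant

grid = np.linspace(0.0, 3.0, 2001)
lb = lipschitz_lower_bound(grid, xs, fs, L)

best = fs.min()                # best value found so far
max_loss = best - lb.min()     # largest optimality gap consistent with the data
print(f"best value {best:.4f}, maximum loss {max_loss:.4f}")
```

Under the minimax strategy described in the abstract, the next evaluation point would be chosen to shrink this quantity as fast as possible, e.g., by probing where the lower bound is smallest.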
Journal of Mathematical Analysis …, 2008
Many real-life problems can be stated as continuous minimax optimization problems. Well-known applications in engineering, finance, optics, and other fields demonstrate the importance of reliable methods for tackling continuous minimax problems. In this paper, a new approach to the solution of continuous minimax problems over the reals is introduced, using tools based on modal intervals. Continuous minimax problems, with global optimization as a particular case, are stated as the computation of semantic extensions of continuous functions, one of the key concepts of modal interval analysis. Modal interval techniques make it possible to compute such semantic extensions in a guaranteed way by means of an efficient algorithm. Several examples illustrate the behavior of the algorithms on unconstrained and constrained minimax problems.
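As a point of reference for the guaranteed-enclosure idea, here is a minimal sketch of classical (not modal) interval arithmetic; the `Interval` type and the example function are assumptions made for illustration, and the modal-interval machinery of the paper is strictly richer than this.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __neg__(self):
        return Interval(-self.hi, -self.lo)

def sqr(x: Interval) -> Interval:
    """Exact range of x**2, treating the two occurrences of x as dependent."""
    lo, hi = sorted((x.lo * x.lo, x.hi * x.hi))
    return Interval(0.0 if x.lo <= 0.0 <= x.hi else lo, hi)

# Natural interval extension of f(x) = x^2 - x over [0, 1].
x = Interval(0.0, 1.0)
fx = sqr(x) + (-x)
print(fx)  # Interval(lo=-1.0, hi=1.0): encloses the true range [-0.25, 0]
```

The enclosure is guaranteed but pessimistic, because the two occurrences of x in x^2 - x are bounded independently; modal intervals attach quantifiers to intervals precisely to make such semantic statements sharper.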
2020
The paper proposes a method for solving computationally time-consuming multidimensional global optimization problems. The developed method combines a nested dimensional reduction scheme with numerical estimates of the objective function derivatives. Derivative information can significantly reduce the cost of solving global optimization problems; however, the use of a nested scheme can cause the derivatives of the reduced functions to become discontinuous. Typical global optimization methods depend heavily on the continuity of the objective function. Thus, to use derivatives in combination with a nested scheme, an optimization method is required that can handle discontinuous functions. The paper presents such a method, together with the results of numerical experiments in which this optimization scheme is compared with other known methods.
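A minimal sketch of the nested reduction idea itself (not the paper's method; the naive grid search used as the one-dimensional solver, the grid size `n`, and the test function are purely illustrative assumptions): a d-dimensional problem is solved as a chain of one-dimensional problems, where the objective seen at level k is itself the optimum of a (d-k)-dimensional subproblem.

```python
import numpy as np

def nested_minimize(f, bounds, n=51):
    """Nested dimension reduction: solve min_x f(x) as
    min_{x1} min_{x2} ... min_{xd} f(x1, ..., xd),
    using a naive grid search as the 1-D solver at every level."""
    def solve(prefix):
        k = len(prefix)
        if k == len(bounds):
            return f(np.array(prefix)), list(prefix)
        lo, hi = bounds[k]
        best_val, best_x = np.inf, None
        for t in np.linspace(lo, hi, n):   # 1-D search over coordinate k
            val, x = solve(prefix + [t])   # value of the reduced subproblem
            if val < best_val:
                best_val, best_x = val, x
        return best_val, best_x
    return solve([])

# Illustrative 2-D multimodal objective.
f = lambda x: np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0]**2 + x[1]**2)
val, x = nested_minimize(f, [(-2, 2), (-2, 2)])
print(val, x)
```

Real implementations replace the grid search with an efficient one-dimensional global method; the recursive structure, and the fact that the reduced function at each level is only piecewise smooth, is the point of the sketch.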
Journal of Numerical Analysis and Approximation Theory
In this paper, we propose an algorithm based on the branch-and-bound method that underestimates the objective function, together with a reductive transformation that converts all multivariable functions into univariable ones. We also propose several quadratic lower-bound functions and show that they are preferable to well-known alternatives from the literature. Our experimental results demonstrate improved effectiveness on a variety of nonconvex functions.
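A minimal sketch of branch and bound with a quadratic underestimator in one dimension (not the paper's specific bounds; the curvature constant `M`, the tolerance, and the test function are assumptions). If f'' >= -M everywhere, then q(x) = f(c) + f'(c)(x - c) - (M/2)(x - c)^2 underestimates f, and since q is concave its minimum over an interval sits at an endpoint, giving a cheap closed-form bound.

```python
import heapq
import numpy as np

def bb_minimize(f, df, a, b, M, tol=1e-6, max_iter=10000):
    """Branch and bound with the quadratic underestimator
    q(x) = f(c) + f'(c)(x - c) - (M/2)(x - c)^2, valid whenever f'' >= -M."""
    def bound(lo, hi):
        c = 0.5 * (lo + hi)
        fc, gc = f(c), df(c)
        q = lambda x: fc + gc * (x - c) - 0.5 * M * (x - c) ** 2
        return min(q(lo), q(hi)), c, fc   # concave q: endpoint minimum

    lb, c, fc = bound(a, b)
    best_x, best_f = c, fc
    heap = [(lb, a, b)]
    for _ in range(max_iter):
        if not heap:
            break
        lb, lo, hi = heapq.heappop(heap)
        if lb >= best_f - tol:            # heap is ordered: nothing can improve
            break
        c = 0.5 * (lo + hi)
        for sub_lo, sub_hi in ((lo, c), (c, hi)):   # branch and re-bound
            sub_lb, sub_c, sub_fc = bound(sub_lo, sub_hi)
            if sub_fc < best_f:
                best_x, best_f = sub_c, sub_fc
            if sub_lb < best_f - tol:
                heapq.heappush(heap, (sub_lb, sub_lo, sub_hi))
    return best_x, best_f

# Illustrative multimodal function; f'' = -9 sin(3x) >= -9, so M = 9 is valid.
f = lambda x: np.sin(3 * x) + 0.3 * x
df = lambda x: 3 * np.cos(3 * x) + 0.3
print(bb_minimize(f, df, -3.0, 3.0, M=9.0))
```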
arXiv: Optimization and Control, 2020
In this paper, a sequential search method for finding the global minimum of an objective function is presented. A gradient descent search is repeated until the global minimum is obtained: the global minimum is located through a process of finding progressively better local minima. We determine the set of points at which the graph of the function intersects the horizontal plane through the best local minimum found so far; the point in this set with the steepest descent slope is then chosen as the initial point for a new gradient descent search. The method has the descent property and converges monotonically. To demonstrate the effectiveness of the proposed sequential descent method, several non-convex multidimensional optimization problems are solved. The numerical examples show that the global minimum can be found by the proposed method of sequential descent.
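A minimal one-dimensional sketch of the escape step (not the paper's multidimensional procedure; the local search, the level-crossing scan, and the test function are illustrative assumptions): descend to a local minimum, scan for points where f crosses the current level f(x*), and restart the descent from the crossing with the steepest slope.

```python
import numpy as np

def descend(f, df, x, lr=1e-2, steps=5000):
    """Plain gradient descent to a nearby local minimum."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

def level_crossings(f, level, lo, hi, n=4000):
    """Bisection-refined points where f(x) == level on [lo, hi]."""
    xs = np.linspace(lo, hi, n)
    g = f(xs) - level
    roots = []
    for i in np.nonzero(np.sign(g[:-1]) * np.sign(g[1:]) < 0)[0]:
        a, b = xs[i], xs[i + 1]
        for _ in range(60):
            m = 0.5 * (a + b)
            if (f(a) - level) * (f(m) - level) <= 0: b = m
            else: a = m
        roots.append(0.5 * (a + b))
    return roots

def sequential_descent(f, df, x0, lo, hi, rounds=20):
    x = descend(f, df, x0)
    for _ in range(rounds):
        crossings = [c for c in level_crossings(f, f(x), lo, hi) if df(c) != 0]
        if not crossings:
            break                            # no crossing left: accept x
        x_new = descend(f, df, max(crossings, key=lambda c: abs(df(c))))
        if f(x_new) >= f(x) - 1e-12:
            break
        x = x_new                            # strictly better local minimum
    return x, f(x)

f = lambda x: np.sin(3 * x) + 0.1 * x ** 2   # illustrative multimodal function
df = lambda x: 3 * np.cos(3 * x) + 0.2 * x
print(sequential_descent(f, df, x0=2.0, lo=-4.0, hi=4.0))
```

Each accepted restart strictly lowers the incumbent value, which is the monotonic descent property the abstract refers to.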
2013
A procedure for generating non-differentiable, continuously differentiable, and twice continuously differentiable classes of test functions for multiextremal multidimensional box-constrained global optimization and a corresponding package of C subroutines are presented. Each test class consists of 100 functions. Test functions are generated by defining a convex quadratic function systematically distorted by polynomials in order to introduce local minima. To determine a class, the user defines the following parameters: (i) problem dimension, (ii) number of local minima, (iii) value of the global minimum, (iv) radius of the attraction region of the global minimizer, (v) distance from the global minimizer to the vertex of the quadratic function. Then, all other necessary parameters are generated randomly for all 100 functions of the class. Full information about each test function including locations and values of all local minima is supplied to the user. Partial derivatives are also generated where possible.
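The construction principle can be shown with a toy generator; this is a sketch only, using smooth Gaussian bumps rather than the paper's polynomial distortions, and every constant below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_test_function(dim, n_minima, box=(-1.0, 1.0), depth=0.5, radius=0.2):
    """Convex quadratic with smooth local distortions added at random points
    (Gaussian bumps here, for simplicity, instead of the paper's polynomials)."""
    vertex = rng.uniform(*box, size=dim)              # minimizer of the quadratic
    centers = rng.uniform(*box, size=(n_minima, dim)) # locations of local minima
    depths = rng.uniform(0.1, depth, size=n_minima)

    def f(x):
        x = np.asarray(x, dtype=float)
        val = np.sum((x - vertex) ** 2)               # convex quadratic part
        d2 = np.sum((x - centers) ** 2, axis=1)       # squared distances to bumps
        return val - np.sum(depths * np.exp(-d2 / (2 * radius ** 2)))

    return f, vertex, centers

f, vertex, centers = make_test_function(dim=2, n_minima=10)
print(f(vertex), f(centers[0]))
```

Because the generator chooses all locations and depths itself, full information about the local minima is available to the user, which is exactly what makes such classes useful for benchmarking.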
Computational Optimization and Applications, 2006
This paper proposes a new algorithm for solving constrained global optimization problems in which both the objective function and the constraints are one-dimensional non-differentiable multiextremal Lipschitz functions. Multiextremal constraints can lead to complex feasible regions consisting of isolated points and intervals of positive length. The case is considered where the order in which the constraints are evaluated is fixed by the nature of the problem, and constraint i is defined only over the set where constraint i − 1 is satisfied. The objective function is defined only over the set where all the constraints are satisfied. In contrast to traditional approaches, the new algorithm does not use any additional parameter or variable. Not all constraints are evaluated during every iteration of the algorithm, which provides a significant acceleration of the search. The new algorithm either finds lower and upper bounds for the global optimum or establishes that the problem is infeasible. Convergence properties and numerical experiments showing the good performance of the new method in comparison with the penalty approach are given.
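The key structural idea, evaluating constraints in their fixed order and stopping at the first violation so that later constraints (and the objective) are never computed outside their domains, can be sketched as follows; this is a minimal illustration, not the authors' algorithm, and the example constraints are hypothetical.

```python
def evaluate(x, constraints, objective):
    """Evaluate constraints in their fixed order; constraint i is only
    defined where constraints 1..i-1 hold, so stop at the first violation.
    Returns (index of first violated constraint, value) or (None, f(x))."""
    for i, g in enumerate(constraints):
        v = g(x)
        if v > 0:               # g_i(x) <= 0 is the feasibility convention here
            return i, v         # later constraints and f are never evaluated
    return None, objective(x)

# Illustrative one-dimensional problem.
constraints = [lambda x: 0.5 - x,          # x >= 0.5
               lambda x: x - 2.0]          # x <= 2.0 (defined only if x >= 0.5)
objective = lambda x: (x - 1.0) ** 2
print(evaluate(0.2, constraints, objective))   # (0, 0.3): infeasible at g_1
print(evaluate(1.0, constraints, objective))   # (None, 0.0): feasible
```

This evaluation order is what lets the method skip constraint (and objective) evaluations, which is the source of the acceleration reported in the abstract.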
2000
Many optimization problems in engineering and science require solutions that are globally optimal. These optimization problems are characterized by the nonconvexity of the feasible domain or the objective function and may involve continuous and/or discrete variables. In this paper we highlight some recent results and discuss current research trends on deterministic and stochastic global optimization and global continuous approaches to discrete optimization.
2006
Accurate modelling of real-world problems often requires nonconvex terms to be introduced in the model, either in the objective function or in the constraints. Nonconvex programming is one of the hardest fields of optimization, presenting many challenges in both practical and theoretical aspects. The presence of multiple local minima calls for the application of global optimization techniques. This paper is a mini-course about global optimization techniques in nonconvex programming; it deals with some theoretical aspects of nonlinear programming as well as with some of the current state-of-the-art algorithms in global optimization. The syllabus is as follows. Some examples of Nonlinear Programming Problems (NLPs). General description of two-phase algorithms. Local optimization of NLPs: derivation of KKT conditions. Short notes about stochastic global multistart algorithms with a concrete example (SobolOpt). In-depth study of a deterministic spatial Branch-and-Bound algorithm, and con...
Many deterministic minimization algorithms can be seen as discrete dynamical systems arising from the discretization of first- or second-order Cauchy problems. On the other hand, global optimization can be viewed as an over-determined Boundary Value Problem (BVP) for these same equations. For instance, the steepest descent method leads to the following BVP:
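The displayed formula is missing from the extracted text. A plausible reconstruction, assuming the classical steepest-descent flow with an over-determining global-optimality condition at the terminal time, is

$$\dot{x}(t) = -\nabla f\bigl(x(t)\bigr), \quad t \in (0, T), \qquad x(0) = x_0, \qquad f\bigl(x(T)\bigr) \le f(y) \quad \forall\, y \in \Omega,$$

where the terminal condition makes the initial value problem over-determined, which is precisely the BVP reading of global optimization described above.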
Applied and Computational Mathematics, Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, 2017
We always strive for the best results, but how can we achieve them? Mathematical optimization is a good answer to this question whenever the problem can be expressed as a mathematical model. The most common model is an analytic function, in which case optimization amounts to finding the extreme points of that function. This issue focuses on global optimization, that is, on finding the global peak over the whole function. The problem is interesting because it arises in two realistic situations: (1) we want the best solution, such that no better one exists; and (2) given a good solution, we want to find a better one. Global optimization is also complicated because it is connected to other mathematical subjects, such as solution existence and approximation, which the issue addresses as well. Note that the issue concentrates on algorithms and applied methods for solving global optimization problems; theoretical aspects related to functional analysis receive only brief mention.