2020, arXiv: Optimization and Control
In this paper, a sequential search method for finding the global minimum of an objective function is presented. The gradient descent search is repeated until the global minimum is obtained. The global minimum is located through a process of finding progressively better local minima. We determine the set of points of intersection between the curve of the function and the horizontal plane that contains the previously found local minima. A point in this set with the greatest descent slope is then chosen as the initial point for a new gradient descent search. The method has the descent property, and convergence is monotonic. To demonstrate the effectiveness of the proposed sequential descent method, several non-convex multidimensional optimization problems are solved. Numerical examples show that the global minimum can be found by the proposed method of sequential descent.
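The level-set restart step the abstract describes can be sketched in one dimension: after each converged descent, scan a grid for points lying below the level of the current minimum and restart from the point with the steepest slope. This is a minimal illustration of the idea, not the authors' implementation; the grid-based intersection search, step sizes, and iteration counts are all assumptions.

```python
def grad_descent(f, df, x0, lr=0.01, iters=2000):
    """Plain gradient descent to the nearest local minimum."""
    x = x0
    for _ in range(iters):
        x -= lr * df(x)
    return x

def sequential_descent(f, df, lo, hi, n_grid=2000, rounds=10):
    """Toy 1-D version of the sequential scheme: after each descent,
    scan the level set of the current minimum for points lying below
    it and restart from the point with the greatest descent slope."""
    x_min = grad_descent(f, df, (lo + hi) / 2.0)
    for _ in range(rounds):
        level = f(x_min)
        xs = [lo + (hi - lo) * i / n_grid for i in range(n_grid + 1)]
        # grid points strictly below the horizontal plane at the current level
        candidates = [x for x in xs if f(x) < level - 1e-9]
        if not candidates:
            break  # no lower region found: accept the current minimum
        start = max(candidates, key=lambda x: abs(df(x)))
        x_new = grad_descent(f, df, start)
        if f(x_new) < f(x_min):
            x_min = x_new
    return x_min
```

On an asymmetric quartic, a descent started in the wrong basin is pulled to the global basin by the level-set restart, and the sequence of minima is monotonically decreasing as the abstract claims.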
Journal of Global Optimization, 2004
In this paper, a hybrid descent method, consisting of a simulated annealing algorithm and a gradient-based method, is proposed. The simulated annealing algorithm is used to locate descent points for previously converged local minima. The combined method has the descent property, and convergence is monotonic. To demonstrate the effectiveness of the proposed hybrid descent method, several multi-dimensional non-convex optimization problems are solved. Numerical examples show that the global minimum can be found via this hybrid descent method.
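The alternation described here (descend to a local minimum, use simulated annealing to locate a strictly lower descent point, descend again) can be sketched as a 1-D toy. This is not the paper's algorithm: the cooling schedule, proposal step, and stopping rule are arbitrary illustrative choices.

```python
import math
import random

def hybrid_descent(f, df, x0, lr=0.01, gd_iters=2000,
                   sa_iters=2000, T0=1.0, step=1.5, seed=0):
    """Toy 1-D hybrid: gradient descent to a local minimum, then
    simulated annealing to locate a descent point (a point strictly
    below the current minimum), then descend again, until SA fails."""
    rng = random.Random(seed)

    def descend(x):
        for _ in range(gd_iters):
            x -= lr * df(x)
        return x

    x_min = descend(x0)
    while True:
        x, T, found = x_min, T0, None
        for _ in range(sa_iters):
            cand = x + rng.gauss(0.0, step)
            dE = f(cand) - f(x)
            if dE < 0 or rng.random() < math.exp(-dE / T):
                x = cand
            T *= 0.99
            if f(x) < f(x_min) - 1e-9:
                found = x  # descent point located
                break
        if found is None:
            return x_min        # SA found nothing lower: stop
        x_min = descend(found)  # monotone improvement
```

The descent property holds by construction: the sequence of accepted minima is strictly decreasing, and the loop terminates when SA fails to find a lower point.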
2003
A new efficient global optimization technique, named Coupled Local Minimizers (CLM), is presented in the paper. The CLM method uses a set of search points, initially spread over the search space. At each search point the function and derivative values are calculated and used to direct the search process. But instead of performing separate, independent searches from each of these points (i.e. multi-start local optimization), the set of optimizers is coupled during the search process in order to create interaction between them, which results in a cooperative search mechanism. The combination of fast convergence (due to the derivative information used) with the capability of finding the global minimum (resulting from the parallel strategy) yields an efficient global optimization algorithm. The paper proposes an implementation based on the second-order Newton method in order to increase the convergence speed. The CLM method and its implementation are described extensively in the paper and are illustrated with a test function containing several local minima. The paper focuses on low-dimensional optimization problems only.
2020
The paper proposes a method for solving computationally time-consuming multidimensional global optimization problems. The developed method combines a nested dimensionality reduction scheme with numerical estimates of the objective function derivatives. Derivatives significantly reduce the cost of solving global optimization problems; however, the nested scheme can cause the derivatives of the reduced function to become discontinuous. Typical global optimization methods depend strongly on the continuity of the objective function. Thus, to use derivatives in combination with a nested scheme, an optimization method is required that can handle discontinuous functions. The paper presents such a method, along with the results of numerical experiments in which this optimization scheme is compared with other known methods.
Journal of Global Optimization, 1999
Inspired by a method of Jones et al. (1993), we present a global optimization algorithm based on multilevel coordinate search. It is guaranteed to converge if the function is continuous in the neighborhood of a global minimizer. By starting a local search from certain good points, an improved convergence result is obtained. We discuss implementation details and give some numerical results.
Top, 1998
The development of efficient algorithms that provide all the local minima of a function is crucial to solve certain subproblems in many optimization methods. A "multi-local" optimization procedure using inexact line searches is presented, and numerical experiments are also reported. An application of the method to a semi-infinite programming procedure is included.
Journal of Global Optimization, 2006
This paper presents a general approach that combines global search strategies with local search in an attempt to find a global minimum of a real-valued function of n variables. It assumes that derivative information is unreliable; consequently, it deals with derivative-free algorithms, although derivative information can be easily incorporated. The paper presents a nonmonotone derivative-free algorithm and shows numerically that it may converge to a better minimum starting from a local nonglobal minimum. This property is then incorporated into a random population to globalize the algorithm. Convergence to a zero-order stationary point is established for nonsmooth convex functions, and convergence to a first-order stationary point is established for strictly differentiable functions. Preliminary numerical results are encouraging. A Java implementation that can be run directly from the Web allows the interested reader to gain better insight into the performance of the algorithm on several standard functions. The general framework proposed here allows the user to incorporate variants of well-known global search strategies.
A computationally expensive multi-modal optimization problem is considered. After an optimization loop it is desirable that the optimality gap, i.e., the difference between the best value obtained and the true optimum, is as small as possible. We define the concept of maximum loss as the supremum of the optimality gaps over a set of functions, i.e., the largest possible optimality gap assuming that the unknown objective function belongs to a certain set of functions. The minimax strategy for global optimization is then to choose, at each iteration, a new evaluation point such that the maximum loss is decreased as much as possible. This strategy contrasts with the maximum gain strategy utilized in several common global optimization algorithms, and the relation between these strategies is described. We investigate how to implement the minimax strategy for the Lipschitz space of functions on box-constrained domains. Several problems are revealed. For example, to obtain uniqueness of the set of solutions to the minimax problem it is often necessary to shrink the domain so that the problem becomes more localized. We propose a number of algorithmic schemes, based on sequential linearization, to solve the different subproblems that appear. The algorithms are illustrated by numerical examples. We conclude that the minimax strategy is promising for global optimization when the main concern is to guarantee that the resulting solution is near-optimal.
Optimization Letters, 2018
In this study, we introduce a new global optimization technique for multi-dimensional unconstrained optimization problems. First, we present a new smoothing auxiliary function. Second, we use this auxiliary function to transform the multi-dimensional problem into a one-dimensional one, reducing the number of local minimizers, and find the global minimizer of the one-dimensional problem. Finally, we find the global minimizer of the multi-dimensional smooth objective function with the help of a new algorithm.
Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), 2017
Global optimization is needed when we want the best possible solution or a solution better than a known one; however, it is a hard problem. The gradient descent method is a well-known technique for finding a local optimizer, whereas the approximation-solution approach aims to simplify how the global optimization problem is solved. To find the global optimizer in a practical way, I propose a so-called descending region (DR) algorithm, which combines the gradient descent method with the approximation-solution approach. The idea of the DR algorithm is that, given a known local minimizer, a better minimizer is searched for only in a so-called descending region below that local minimizer. A descending region begins at a so-called descending point, which is the main subject of the DR algorithm. A descending point, in turn, is a solution of an intersection equation (A). Finally, I prove and provide a simpler linear equation system (B) derived from (A). (B) is the most important result of this research, because (A) is solved by solving (B) sufficiently many times. In other words, the DR algorithm is refined repeatedly so as to produce such a (B) for searching for the global optimizer. I also propose a so-called simulated Newton–Raphson (SNR) algorithm, a simulation of the Newton–Raphson method, to solve (B). The starting point is crucial for the SNR algorithm to converge. Therefore, I further propose a so-called RTP algorithm, a refined probabilistic process that partitions the solution space and generates random testing points in order to estimate the starting point for the SNR algorithm. In general, I combine the three algorithms DR, SNR, and RTP to solve the hard problem of global optimization.
Although the approach follows a divide-and-conquer methodology, in which global optimization is split into local optimization, equation solving, and partitioning, the solution is a synthesis in which DR is the backbone connecting itself with SNR and RTP.
Journal of Global Optimization, 2010
A large number of algorithms introduced in the literature for finding the global minimum of a real function rely on iterative execution of local minimum searches. Multistart, tunneling, and some versions of simulated annealing are methods that produce well-known procedures. A crucial point of these algorithms is deciding whether or not to perform a new local search. In this paper we look for the optimal probability value to be set at each iteration so that the average number of function evaluations, evals, needed to move from a local minimum to a new one is minimal. We find that this probability has to be 0 or 1, depending on the number of function evaluations required by the local search and on the size of the level set at the current point. An implementation based on this result is introduced. The values required to calculate evals are estimated from the history of the algorithm at run time. The algorithm has been tested both on sample problems constructed with the GKLS package and on problems often used in the literature. The outcome is compared with recent results.
SIAM Journal on Optimization, 2002
This paper presents sequential and parallel derivative-free algorithms for finding a local minimum of smooth and nonsmooth functions of practical interest. It is proved that, under mild assumptions, a sufficient decrease condition holds for a nonsmooth function. Based on this property, the algorithms explore a set of search directions and move to a point with a sufficiently lower function value. If the function is strictly differentiable at its limit points, a (sub)sequence of points generated by the algorithm converges to a first-order stationary point (∇f(x) = 0). If the function is convex around its limit points, convergence (of a subsequence) to a point with nonnegative directional derivatives on a set of search directions is ensured. Preliminary numerical results on the sequential algorithms show that they compare favorably with the recently introduced pattern search methods.
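A minimal sketch of a derivative-free search over a set of directions with a sufficient decrease condition, in the spirit of the algorithms described above. The coordinate directions ±e_i, the quadratic forcing term, and the step-halving rule are illustrative assumptions, not the paper's exact method.

```python
def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Directional direct search over the coordinate directions +/-e_i.
    A move is taken only under a sufficient decrease condition; when no
    direction qualifies, the step size is halved."""
    x = list(x0)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                cand = x[:]
                cand[i] += s
                # sufficient decrease: improvement must exceed step**2
                if f(cand) < f(x) - step**2:
                    x = cand
                    improved = True
                    break
        if not improved:
            step *= 0.5  # refine the mesh
    return x
```

Requiring the decrease to exceed step**2 (rather than any decrease) is what gives convergence guarantees for this family of methods: the step can only shrink when no direction offers substantial progress.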
Soft Computing, 2011
IEEE Transactions on Systems, Man, and Cybernetics, 1991
A numerical method for finding the global minimum of nonconvex functions is presented. The method is based on the principles of simulated annealing but handles continuously valued variables in a natural way. The method is completely general and optimizes functions of up to 30 variables. Several examples are presented. A general-purpose program, INTEROPT, is described, which finds the minimum of arbitrary functions with user-friendly, quasi-natural-language input.
Journal of Global Optimization, 2014
Locating and identifying points as global minimizers is, in general, a hard and time-consuming task. Difficulties increase when the derivatives of the functions defining the problem cannot be used. In this work, we propose a new class of methods suited for global derivative-free constrained optimization. Using direct search of directional type, the algorithm alternates between a search step, where potentially good regions are located, and a poll step, where the previously located promising regions are explored. This exploitation is carried out by launching several instances of directional direct searches, one in each region of interest. Unlike a simple multistart strategy, direct searches merge when they come sufficiently close. The goal is to end with as many direct searches as there are local minimizers, which would easily allow locating the global extreme value. We describe the algorithmic structure considered, present the corresponding convergence analysis, and report numerical results showing that the proposed method is competitive with commonly used global derivative-free optimization solvers.
CEJM, 2008
An algorithm for univariate optimization using a linear lower bounding function is extended to the nonsmooth case by using the generalized gradient instead of the derivative. A convergence theorem is proved under the condition of semismoothness. This approach gives globally superlinear convergence of the algorithm, which is a generalized Newton-type method.
Gradient-based optimization algorithms are probably the most efficient option for solving a local optimization problem. These methods are intrinsically limited to the search for a local optimum of the objective function: if a global optimum is sought, the application of local optimization algorithms can still be successful if the algorithm is initialized from a large number of different points in the design space (multistart algorithms). As a drawback, the cost of the exploration increases linearly with the number of starting points. Consequently, multistart local optimization is rarely adopted, mainly for two reasons: (i) the large computational cost and (ii) the absence of a guarantee about the success of the search (in fact, there is no general indication of the minimum number of starting points needed to guarantee the success of global optimization).
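A basic multistart scheme as described above can be sketched in one dimension; the test function, learning rate, and number of starts are illustrative choices, and the cost is linear in n_starts exactly as noted.

```python
import random

def multistart_minimize(f, df, lo, hi, n_starts=30, lr=0.01,
                        iters=2000, seed=0):
    """Multistart local optimization: gradient descent from many
    random starting points; keep the best local minimum found.
    The total cost grows linearly with n_starts."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for _ in range(iters):
            x -= lr * df(x)  # descend to the nearest local minimum
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

There is no guarantee the global minimum is hit: success depends on some random start landing in the global basin of attraction, which is exactly the caveat raised above.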
Journal of Mathematical Analysis and Applications, 2007
This paper presents variable neighborhood search (VNS) for the problem of finding the global minimum of a nonconvex function. Variable neighborhood search, which systematically changes neighborhood structures while searching for a better solution, is used to guide a set of standard improvement heuristics. The algorithm was tested on standard test functions, with successful results. Its performance was compared with that of other algorithms and observed to be better.
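A bare-bones VNS loop (shake the incumbent within neighborhoods of increasing radius, improve locally, return to the smallest neighborhood after every improvement) can be sketched as follows. The neighborhood radii and the simple fixed-step improvement heuristic are assumptions for illustration, not the heuristics used in the paper.

```python
import random

def vns(f, x0, radii=(0.5, 1.0, 2.0, 4.0), outer=50, seed=0):
    """Bare-bones 1-D variable neighborhood search: shake the incumbent
    within neighborhoods of increasing radius, improve locally, and
    return to the smallest neighborhood after every improvement."""
    rng = random.Random(seed)

    def local_improve(x, step=0.05, iters=400):
        # simple derivative-free improvement by fixed-step moves
        for _ in range(iters):
            for cand in (x - step, x + step):
                if f(cand) < f(x):
                    x = cand
        return x

    best = local_improve(x0)
    for _ in range(outer):
        k = 0
        while k < len(radii):
            shaken = best + rng.uniform(-radii[k], radii[k])
            trial = local_improve(shaken)
            if f(trial) < f(best):
                best = trial
                k = 0      # improvement: restart from the first neighborhood
            else:
                k += 1     # no improvement: enlarge the neighborhood
    return best
```

The systematic change of neighborhood size is what lets the search escape a local minimum whose basin is wider than the smallest shake radius.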
2007
This paper introduces a modified version of the well-known line search method for global optimization. The modifications concern the way in which the direction and the steps are determined. The modified line search technique (MLS) is applied to several global optimization problems; functions with a high number of dimensions (50 in this case) are considered.
Applied Mathematics and Computation, 2011
In this paper we present a new hybrid method, called the SASP method. The purpose of this method is the hybridization of simulated annealing (SA) with the descent method, where the gradient is estimated using simultaneous perturbation. First, the hybrid method finds a local minimum using the descent method; then SA is executed in order to escape from the currently discovered local minimum to a better one, from which the descent method restarts a new local search, and so on until convergence. The new hybrid method can be widely applied to a class of global optimization problems for continuous functions with constraints. Experiments on 30 benchmark functions, including high-dimensional functions, show that the new method is able to find near-optimal solutions efficiently. In addition, its performance as a viable optimization method is demonstrated by comparing it with other existing algorithms. Numerical results demonstrate the robustness and efficiency of the presented method.
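The simultaneous-perturbation gradient estimate at the core of the descent phase uses only two function evaluations per iteration, regardless of dimension. The sketch below shows that estimator and a plain descent loop built on it; it is an SPSA-style illustration under assumed step sizes, not the SASP method itself (the SA escape phase is omitted).

```python
import random

def spsa_gradient(f, x, c=1e-3, rng=random):
    """Simultaneous-perturbation gradient estimate: two function
    evaluations per call, regardless of the dimension of x."""
    delta = [rng.choice((-1.0, 1.0)) for _ in x]
    xp = [xi + c * di for xi, di in zip(x, delta)]
    xm = [xi - c * di for xi, di in zip(x, delta)]
    diff = (f(xp) - f(xm)) / (2.0 * c)
    return [diff / di for di in delta]

def spsa_descent(f, x0, lr=0.05, iters=3000, seed=0):
    """Descent phase driven by the SPSA gradient estimate instead of
    an analytic gradient."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        g = spsa_gradient(f, x, rng=rng)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

Because only f-values are needed, this descent phase applies even when the analytic gradient is unavailable, which is the motivation for using simultaneous perturbation in the hybrid.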