1990, The Journal of the Australian Mathematical Society. Series B. Applied Mathematics
The design of an interior point method for linear programming is discussed, and the results of a simulation study are reported. Emphasis is placed on guessing the optimal vertex at as early a stage as possible.
Computational Optimization and Applications, 2019
Numerical experiments using the Netlib test set are carried out, which show that this approach is competitive when compared to well-established solvers, such as PCx.
Journal of Mathematics and System Science, 2015
In this paper we present a new method combining interior and exterior approaches to solve linear programming problems. Assuming that a feasible interior solution to the input system is known, the algorithm uses it, together with appropriate constraints of the system, to construct a sequence of so-called station cones whose vertices converge rapidly to the solution sought. Computational experiments show that the number of iterations of the new algorithm is significantly smaller than that of the second phase of the simplex method. Moreover, as the number of variables and constraints of the problem grows, the number of iterations of the new algorithm grows more slowly than that of the simplex method.
Mathematical Programming, 1998
The layered-step interior-point algorithm was introduced by Vavasis and Ye. The algorithm accelerates the path-following interior-point algorithm, and its arithmetic complexity depends only on the coefficient matrix A. The main drawback of the algorithm is the use of an unknown large constant χ̄_A both in computing the search direction and in initializing the algorithm. We propose a modified layered-step interior-point algorithm which does not use the large constant in computing the search direction. The constant is required only for initialization when a well-centered feasible solution is not available, and it is not required at all if an upper bound on the norm of a primal-dual optimal solution is known in advance. The complexity of the simplified algorithm is the same as that of Vavasis and Ye.
Applied Mathematics & Information Sciences
In this paper, we describe a new method for finding search directions for interior point methods (IPMs) in linear optimization (LO). The theoretical complexity of the new algorithms is calculated, and we prove that the iteration bound is O(√n log(n/ε)) in this case too.
Interfaces, 1990
The world of mathematical programming has seen a remarkable surge of activity following the publication of Karmarkar's projective algorithm in May 1984.
Computer Science and Operations Research, 1992
In this paper we describe a unified algorithmic framework for the interior point method (IPM) of solving linear programs (LPs) which allows us to adapt it over a range of high-performance computer architectures. We set out the reasons why IPM makes better use of high-performance computer architecture than the sparse simplex method. In the inner iteration of the IPM, a search direction is computed using Newton or higher-order methods. Computationally this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of coarse-grain parallel and massively parallel computer architectures, are considered in detail. Finally, we present experimental results of solving NETLIB test problems on examples of these architectures and put forward arguments as to why integration of the system within sparse simplex is beneficial.

2. Sparse Simplex and Interior Point Method: Hardware Platforms

Progress in the solution of large LPs has been achieved in three ways, namely hardware, software and algorithmic developments. Most of the developments during the 70's and early 80's in the sparse simplex method were based on serial computer architecture. The main thrust of these developments was towards exploiting sparsity and finding methods which either reduced simplex iterations or reduced the computational work in each iteration [BIXBY91, MITAMZ91]. In general, these algorithmic and software developments of the sparse simplex method cannot be readily extended to parallel computers. In contrast, the interior point methods, which have proven to be robust and competitive, appear to be better positioned to make use of newly emerging high-performance architectures. The primal-dual algorithm converges to the optimal solution in at most O(n^{1/2} L) iterations [MONADL89], where n denotes the dimension of the problem and L the input size.
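The SSPD system mentioned above arises by eliminating the primal and dual steps from the Newton (KKT) system, leaving the normal equations A D Aᵀ Δy = rhs with D = diag(x/s). A minimal dense sketch of this elimination (illustrative only; a real IPM solver would use a sparse Cholesky factorization of the SSPD matrix):

```python
import numpy as np

def affine_scaling_direction(A, b, c, x, y, s):
    """One Newton (affine-scaling) direction for a primal-dual IPM.

    Eliminates dx and ds from the KKT system, reducing it to the
    SSPD normal equations  A D A^T dy = rhs,  D = diag(x/s).
    """
    rp = b - A @ x                   # primal residual
    rd = c - A.T @ y - s             # dual residual
    d = x / s                        # diagonal of D (requires x, s > 0)
    M = A @ np.diag(d) @ A.T         # SSPD normal-equations matrix
    rhs = rp + A @ (x + d * rd)
    dy = np.linalg.solve(M, rhs)     # in practice: sparse Cholesky
    ds = rd - A.T @ dy               # back-substitute dual step
    dx = -x - d * ds                 # back-substitute primal step
    return dx, dy, ds
```

The returned triple satisfies A dx = rp, Aᵀ dy + ds = rd, and s∘dx + x∘ds = −x∘s, which is exactly the linearized system the text refers to.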
Oxford University Press eBooks, 1996
Analele Stiintifice ale Universitatii Ovidius Constanta, Seria Matematica
In this paper we treat numerical computation methods for linear programming. Starting from an analysis of the efficiency and deficiencies of the simplex procedure, we present the new possibilities offered by interior-point methods, which arose from practical necessity, from the need for efficient means of solving large-scale problems. We implement Karmarkar's method in Java.
Mathematical Methods of Operations Research, 2016
In this paper, we present a proposal for a variation of the predictor-corrector interior point method with multiple centrality corrections. The new method uses the continued iteration to compute a new search direction for the predictor-corrector method. The purpose of incorporating the continued iteration is to reduce the overall computational cost required to solve a linear programming problem. The computational results constitute evidence of the improvement obtained with the use of this technique combined with the interior point method.
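For context, a standard Mehrotra-style corrector (a common form of the predictor-corrector idea; the paper's continued-iteration direction is a different refinement) changes only the right-hand side of the complementarity equation, so the factorization of the normal-equations matrix can be reused. A minimal sketch, where `sigma` is an illustrative centering parameter not taken from the paper:

```python
import numpy as np

def corrector_rhs(x, s, dx_aff, ds_aff, sigma=0.1):
    """Right-hand side of the linearized complementarity equation for a
    Mehrotra-style corrector step:  sigma*mu*e - X S e - dX_aff dS_aff e.

    dx_aff, ds_aff are the affine-scaling (predictor) components.
    """
    mu = x @ s / len(x)                # current duality measure
    e = np.ones_like(x)
    return sigma * mu * e - x * s - dx_aff * ds_aff
```

Because only this right-hand side changes between the predictor and corrector solves, the expensive SSPD factorization is computed once per iteration and reused.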
In this paper we use the interior point methodology to cover the main issues in linear programming: duality theory, parametric and sensitivity analysis, and algorithmic and computational aspects. The aim is to provide a global view on the subject matter.
Algorithmica, 1991
Since Karmarkar published his algorithm for linear programming, several different interior directions have been proposed and much effort was spent on the problem transformations needed to apply these new techniques. This paper examines several search directions in a common framework that does not need any problem transformation. These directions prove to be combinations of two problem-dependent vectors, and can all be improved by a bidirectional search procedure.
Handbook of Combinatorial Optimization, 1998
Research on using interior point algorithms to solve combinatorial optimization and integer programming problems is surveyed. This paper discusses branch and cut methods for integer programming problems, a potential reduction method based on transforming an integer programming problem to an equivalent nonconvex quadratic programming problem, interior point methods for solving network flow problems, and methods for solving multicommodity flow problems, including an interior point column generation algorithm.
2003
In this paper we present an infeasible path-following interior-point algorithm for solving linear programs using a relaxed notion of the central path, called the quasicentral path, as a central region. The algorithm starts from an infeasible point at which the norm of the dual condition is less than the norm of the primal condition. We use weighted sets as proximity measures of the quasicentral path, and a new merit function for making progress toward this central region. We test the algorithm on a set of NETLIB problems, obtaining promising numerical results.
European Journal of Operational Research, 2012
Interior point methods for optimization have been around for more than 25 years now. Their presence has shaken up the field of optimization. Interior point methods for linear and (convex) quadratic programming display several features which make them particularly attractive for very large scale optimization. Among the most impressive of them are their low-degree polynomial worst-case complexity and an unrivalled ability to deliver optimal solutions in an almost constant number of iterations which depends very little, if at all, on the problem dimension. Interior point methods are competitive when dealing with small problems of dimensions below one million constraints and variables, and are beyond competition when applied to large problems of dimensions going into millions of constraints and variables.
Progress in Mathematical Programming, 1989
This chapter presents an algorithm that works simultaneously on primal and dual linear programming problems and generates a sequence of pairs of their interior feasible solutions. Along the sequence generated, the duality gap converges to zero at least linearly with a global convergence ratio (1 − η/n); each iteration reduces the duality gap by at least η/n. Here n denotes the size of the problems and η a positive number depending on the initial interior feasible solutions of the problems. The algorithm is based on an application of the classical logarithmic barrier function method to primal and dual linear programs, which has recently been proposed and studied by Megiddo.
Applied Mathematics and Computation, 2008
We propose in this study a practical modification of Karmarkar's projective algorithm for linear programming problems. This modification leads to a considerable reduction in cost and in the number of iterations. This claim is confirmed by numerous numerical experiments.
Yugoslav Journal of Operations Research, 2009
The aim of this paper is to present a new simplex-type algorithm for the linear programming problem. The Primal-Dual method is a simplex-type pivoting algorithm that generates two paths in order to converge to the optimal solution: the first path is primal feasible, while the second is dual feasible for the original problem. Specifically, we use a three-phase implementation. The first two phases construct the required primal and dual feasible solutions, using the Primal Simplex algorithm. Finally, in the third phase the Primal-Dual algorithm is applied. Moreover, a computational study has been carried out, using randomly generated sparse linear problems with known optimal solutions, to compare its computational efficiency with the Primal Simplex algorithm and with MATLAB's Interior Point Method implementation. The algorithm appears to be very promising, since it clearly shows its superiority to the Primal Simplex algorithm as well as its robustness compared with the IPM algorithm.
SIAM Journal on Optimization, 2003
In this paper we present a variant of Vavasis and Ye's layered-step path-following primal-dual interior-point algorithm for linear programming. Our algorithm is a predictor-corrector type algorithm which uses from time to time the layered least squares (LLS) direction in place of the affine scaling direction. It has the same iteration-complexity bound as Vavasis and Ye's algorithm, namely O(n^{3.5} log(χ̄_A + n)), where n is the number of nonnegative variables and χ̄_A is a certain condition number associated with the constraint matrix A. Vavasis and Ye's algorithm requires explicit knowledge of χ̄_A (which is very hard to compute or even estimate) in order to compute the layers for the LLS direction. In contrast, our algorithm uses the affine scaling direction at the current iterate to determine the layers for the LLS direction, and hence does not require knowledge of χ̄_A. A variant with similar properties and with the same complexity has been developed by Megiddo, Mizuno and Tsuchiya. However, their algorithm needs to compute n LLS directions on every iteration, while ours computes at most one LLS direction on any given iteration.
1999
The paper studies numerical stability problems arising in the application of interior-point methods to primal degenerate linear programs. A stabilization procedure based on Gaussian elimination is proposed and it is shown that it stabilizes all path following methods, original and modified Dikin's method, Karmarkar's method, etc.
Mathematics of Operations Research, 1997
In the adaptive step primal-dual interior point method for linear programming, polynomial algorithms are obtained by computing Newton directions towards targets on the central path, and restricting the iterates to a neighborhood of this central path. In this paper, the adaptive step methodology is extended, by considering targets in a certain central region, which contains the usual central path, and subsequently generating iterates in a neighborhood of this region. The size of the central region can vary from the central path to the whole feasible region by choosing a certain parameter. An 𝒪(√nL) iteration bound is obtained under mild conditions on the choice of the target points. In particular, we leave plenty of room for experimentation with search directions. The practical performance of the new primal-dual interior point method is measured on part of the Netlib test set for various sizes of the central region.
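The "size of the central region" idea can be made concrete with a standard 2-norm proximity test (a generic sketch of a central-path neighborhood, not the paper's weighted-target measure): a point is accepted when the componentwise products x∘s, scaled by their average μ, stay within θ of the all-ones vector.

```python
import numpy as np

def in_neighborhood(x, s, theta=0.5):
    """Standard 2-norm central-path neighborhood test:
       || (x*s)/mu - e ||_2 <= theta,  with mu = (x.s)/n.
    theta = 0 forces exact centrality; larger theta widens the region
    toward the whole positive orthant."""
    mu = x @ s / len(x)                      # duality measure
    e = np.ones_like(x)
    return np.linalg.norm(x * s / mu - e) <= theta
```

Restricting iterates to such a neighborhood is what keeps the Newton steps well defined and yields O(√n L)-type iteration bounds; widening θ trades centrality for longer steps, which mirrors the parameterized central region described above.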