Papers by Valeria Ruggiero

Communications in Nonlinear Science and Numerical Simulation, Jul 1, 2016
In this paper we address the numerical minimization of a variational approximation of the Blake-Zisserman functional given by Ambrosio, Faina and March. Our approach exploits a compact matricial formulation of the objective functional and its decomposition into quadratic sparse convex sub-problems. This structure is well suited for a block-coordinate descent method that cyclically determines a descent direction with respect to a block of variables by a few iterations of a preconditioned conjugate gradient algorithm. We prove that the computed search directions are gradient related and that, with suitable stepsizes, any limit point of the generated sequence is a stationary point of the objective functional. Extensive experimentation on different datasets, including real and synthetic images and digital surface models, enables us to conclude that: (1) the numerical method has satisfying performance in terms of accuracy and computational time; (2) a minimizer of the proposed discrete functional preserves the expected good geometrical properties of the Blake-Zisserman functional, i.e., it is able to detect first- and second-order edge boundaries in images; (3) the method allows the segmentation of large images.
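
As a concrete illustration of the block-coordinate scheme described above, here is a minimal sketch in Python (the interface and all names are illustrative, not the paper's implementation): each block of variables is updated cyclically by a few conjugate gradient iterations on its quadratic subproblem; the preconditioner is omitted for brevity.

    import numpy as np

    def cg(A, b, x0, iters=5):
        # A few (unpreconditioned, for brevity) CG iterations for the SPD
        # system A x = b, returning an approximate solution.
        x = x0.copy()
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(iters):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-12:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    def block_coordinate_descent(blocks, x, n_cycles=20, inner_iters=5):
        # `blocks` is a list of triples (A_i, rhs_i, idx_i): A_i is the SPD
        # matrix of the quadratic subproblem in the variables x[idx_i], and
        # rhs_i(x) builds its right-hand side from the current full iterate
        # (the coupling between blocks enters through rhs_i).
        for _ in range(n_cycles):
            for A_i, rhs_i, idx in blocks:
                x[idx] = cg(A_i, rhs_i(x), x[idx], iters=inner_iters)
        return x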

SIAM Journal on Scientific Computing, 2018
One of the most popular approaches for the minimization of a convex functional given by the sum of a differentiable term and a nondifferentiable one is the forward-backward method with extrapolation. The main reason this method is so appealing for a wide range of applications is that it achieves an O(1/k^2) convergence rate in the objective function values, which is optimal for a first-order method. Recent contributions on this topic concern the convergence of the iterates to a minimizer and the possibility of adopting a variable metric in the proximal step. Moreover, it has also been proved that the objective function convergence rate is actually o(1/k^2). However, these results are obtained under the assumption that the minimization subproblem involved in the backward step is computed exactly, which is clearly not realistic in a variety of relevant applications. In this paper, we analyze the convergence properties when both a variable metric and inexact computation of the backward step are allowed. To do this, we adopt a suitable inexactness criterion and devise implementable conditions on both the accuracy of the inexact backward step computation and the variable metric selection, so that the o(1/k^2) rate and the convergence of the iterates are preserved. The effectiveness of the proposed approach is also validated by numerical experiments showing the effects of combining inexactness with variable metric techniques.
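
A minimal sketch of this family of iterations, for the model problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 (names illustrative): the l1 prox is computed exactly below, whereas the paper's focus is on the inexact case, and the metric is kept constant, whereas the paper allows it to vary under suitable conditions.

    import numpy as np

    def soft_threshold(z, thresh):
        # Componentwise proximal operator of thresh*|.|.
        return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

    def fista_variable_metric(A, b, lam, n_iter=200):
        n = A.shape[1]
        x = np.zeros(n)
        y = x.copy()
        t = 1.0
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad f
        for _ in range(n_iter):
            d = np.full(n, L)                # diagonal of the metric D_k (kept
                                             # constant here for simplicity)
            grad = A.T @ (A @ y - b)
            z = y - grad / d                 # forward step in the metric D_k
            x_new = soft_threshold(z, lam / d)   # backward (prox) step in D_k
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # extrapolation
            x, t = x_new, t_new
        return x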

arXiv (Cornell University), Jun 9, 2015
Forward-backward methods are a very useful tool for the minimization of a functional given by the sum of a differentiable term and a nondifferentiable one, and they have been the subject of intensive research over the last decade. In this paper we focus on the convex case and, inspired by recent approaches for accelerating first-order iterative schemes, we develop a scaled inertial forward-backward algorithm based on a metric that changes at each iteration and on a suitable extrapolation step. Unlike standard forward-backward methods with extrapolation, our scheme is able to handle functions whose domain is not the entire space. Both an O(1/k^2) convergence rate estimate on the objective function values and the convergence of the sequence of iterates are proved. Numerical experiments on several test problems arising from image processing, compressed sensing and statistical inference show the effectiveness of the proposed method in comparison to well-performing state-of-the-art algorithms.
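
In symbols, a generic step of a scaled inertial forward-backward scheme of this type can be written as follows (notation illustrative):

    y^{(k)}   = x^{(k)} + \beta_k \bigl( x^{(k)} - x^{(k-1)} \bigr)
    x^{(k+1)} = \operatorname{prox}^{D_k}_{\lambda_k g}\Bigl( y^{(k)} - \lambda_k D_k^{-1} \nabla f\bigl(y^{(k)}\bigr) \Bigr)

where beta_k is the extrapolation parameter, lambda_k the steplength, and D_k the symmetric positive definite matrix defining the variable metric; the proximal operator is taken in the norm induced by D_k.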
Journal of Scientific Computing
This file contains a revised version of the proofs of Theorems 3 and 4 of the paper [1]. In particular, a corrected argument is employed to obtain inequality (A11) from (A10), provided that a stronger hypothesis on the sequence {ε_k} is included. The practical implementation of the algorithm (Section 3) remains unchanged, and all the numerical experiments (Section 4) are still valid, since the stronger hypothesis on {ε_k} was already satisfied by the selected setting of the hyperparameters.

Parallel Computing, 2003
This paper concerns a parallel inexact interior-point (IP) method for solving linear and quadratic programs with a special structure in the constraint matrix and in the objective function. In order to exploit these features, a preconditioned conjugate gradient (PCG) algorithm is used to approximately solve the normal equations or the reduced KKT system obtained from the linear inner system arising at each iteration of the IP method. A suitable adaptive termination rule for the PCG method makes it possible to save computing time in the early steps of the outer scheme and, at the same time, assures the global and local superlinear convergence of the whole method. We analyse a parallel implementation of the method with reference to some meaningful classes of large-scale problems. In particular, we discuss the data allocation and the workload distribution among the processors. The results of numerical experiments carried out on a Cray T3E and an SGI Origin 3800 show good scalability of the parallel code and confirm the effectiveness of the method for problems with special structure.
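
A minimal sketch of an adaptive forcing-term rule of the kind described (the paper's exact rule differs; names illustrative): the inner PCG tolerance stays loose while the outer duality measure mu is large and tightens as mu decreases, which is what preserves superlinear local convergence without wasting inner iterations early on.

    def inner_tolerance(mu, rhs_norm, c=0.5):
        # Forcing term for the inner PCG solve: loose when the outer
        # interior-point iterate is far from the solution (mu large),
        # tight near the solution (mu small). The PCG iteration is then
        # stopped as soon as the residual norm drops below this value.
        return min(c, mu) * rhs_norm
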
Parallel Computing, 1989
This paper is concerned with the development, analysis and implementation, on a computer consisting of two vector processors, of the arithmetic mean method for solving numerically large sparse sets of linear ordinary differential equations. This method has second-order accuracy in time and is stable. The special class of differential equations that arise in solving the diffusion problem by the method of lines is considered. In this case, the proposed method has been tested on the CRAY X-MP/48 utilizing two CPUs. The numerical results are largely in keeping with the theory; a speedup factor of nearly two is obtained.
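
A minimal sketch (formulation illustrative, not taken verbatim from the paper) of one time step of an averaged splitting scheme of this type for u' = A u with A = A1 + A2: two independent implicit solves, one per processor, followed by an arithmetic mean.

    import numpy as np

    def arithmetic_mean_step(A1, A2, u, h):
        n = len(u)
        I = np.eye(n)
        v1 = np.linalg.solve(I - h * A1, (I + h * A2) @ u)  # first processor
        v2 = np.linalg.solve(I - h * A2, (I + h * A1) @ u)  # second processor
        return 0.5 * (v1 + v2)                              # arithmetic mean
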
Journal of Optimization Theory and Applications, 2004
The aim of this paper is to show that the theorem on the global convergence of the Newton interior-point (IP) method presented in Ref. 1 can be proved under weaker assumptions. Indeed, we assume the boundedness of the sequences of multipliers related to nontrivial constraints, instead of the hypothesis that the gradients of the inequality constraints corresponding to slack variables not bounded away from zero are linearly independent. By numerical examples, we show that, in the implementation of the Newton IP method, loss of boundedness in the iteration sequence of the multipliers signals that the algorithm does not converge from the chosen starting point.
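
The practical consequence can be turned into a simple safeguard (hypothetical names; the threshold is illustrative): monitor the multiplier iterates and flag divergence when their norm blows up.

    import numpy as np

    def multipliers_diverging(multiplier_iterates, cap=1e8):
        # Returns True when the most recent multiplier estimate has left
        # any reasonable bounded region, signalling that the Newton IP
        # method will not converge from the chosen starting point.
        return np.linalg.norm(multiplier_iterates[-1], np.inf) > cap
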
Computational Optimization and Applications, 2014
In this work we analyze a first-order method especially tailored for smooth saddle point problems, based on an alternating extragradient scheme. The proposed method is based on three successive projection steps, which can also be computed with respect to non-Euclidean metrics. The stepsize parameter can be adaptively computed, so that the method can be considered a black-box algorithm for general smooth saddle point problems. We develop the global convergence analysis in the framework of non-Euclidean proximal distance functions, under mild local Lipschitz conditions, also proving an O(1/k) rate of convergence on the primal-dual gap. Finally, we analyze the practical behavior of the method and its effectiveness on some applications arising from different fields.
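
For orientation, here is a minimal sketch of a classical (two-projection, Euclidean, fixed-stepsize) extragradient step for min_x max_y L(x, y); the paper's alternating scheme uses three successive projections, possibly non-Euclidean proximal distances and an adaptive stepsize, all omitted here.

    def extragradient_step(x, y, grad_x, grad_y, proj_X, proj_Y, tau):
        # Prediction: project a gradient step from the current point.
        xb = proj_X(x - tau * grad_x(x, y))
        yb = proj_Y(y + tau * grad_y(x, y))
        # Correction: re-project using the gradients at the predicted point.
        x_new = proj_X(x - tau * grad_x(xb, yb))
        y_new = proj_Y(y + tau * grad_y(xb, yb))
        return x_new, y_new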

ANNALI DELL'UNIVERSITA' DI FERRARA
When Ilio Galligani passed away in December 2020, the Italian community of Numerical Analysis lost one of its most outstanding exponents. Ilio was among the first Italian mathematicians to dedicate his studies to the numerical solution of mathematical problems. Over his numerous research and coordination appointments, he strongly supported the many research themes and activities that have arisen within the Italian community of computational mathematicians, encouraging and valuing them with unfailing energy. His profound commitment, inexhaustible enthusiasm, and brilliant intuition made a decisive contribution to the development of Numerical Analysis in Italy. This special issue is a tribute to his extraordinary figure as a researcher and leader. We want to mention the warm welcome of all the colleagues consulted on the opportunity of this initiative. The number of submitted manuscripts exceeded all our expectations, leaving us pleasantly surprised.
A preconditioner for solving large scale variational inequality problems
Journal of Computational and Applied Mathematics, 2021
Variable metric techniques are a crucial ingredient in many first-order optimization algorithms. In practice, they consist in a rule for computing, at each iteration, a suitable symmetric positive definite scaling matrix to be applied to the gradient vector. Besides quasi-Newton BFGS techniques, which have represented the state of the art since the 1970s, new …
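
For context, the basic step these techniques modify is the following (a minimal sketch with a diagonal scaling matrix; names illustrative):

    import numpy as np

    def scaled_gradient_step(x, grad, d, alpha):
        # d > 0 elementwise is the diagonal of the symmetric positive
        # definite scaling matrix D_k; the step direction is -D_k @ grad.
        return x - alpha * d * grad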
Journal of Physics: Conference Series, 2017
In this paper we propose an ε-subgradient method for solving a constrained minimization problem arising in super-resolution imaging applications. Compared to state-of-the-art methods for single-image super-resolution on some test problems, the method proves to be very efficient in terms of both reconstruction quality and computational time.
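
A minimal sketch (names illustrative, not the paper's scheme) of a projected ε-subgradient iteration for min_{x in C} f(x): any ε_k-subgradient of f at x_k may be used, with diminishing stepsizes and ε_k -> 0.

    def eps_subgradient_method(x0, eps_subgrad, proj_C, n_iter=100):
        x = x0
        for k in range(1, n_iter + 1):
            g = eps_subgrad(x, eps=1.0 / k)   # an eps_k-subgradient of f at x
            x = proj_C(x - g / k)             # diminishing stepsize 1/k
        return x
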
Parametric imaging of nuclear medicine data exploits dynamic functional images in order to reconstruct maps of kinetic parameters related to the metabolism of a specific tracer injected into the biological tissue. From a computational viewpoint, the realization of parametric images requires the pixel-wise numerical solution of compartmental inverse problems that are typically ill-posed and nonlinear. In the present paper we introduce a fast numerical optimization scheme for parametric imaging relying on a regularized version of the standard affine-scaling trust region method. The approach is validated in a simulation framework for brain imaging, and its performance is compared with that of a regularized Gauss-Newton scheme and a standard nonlinear least-squares algorithm.
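
A minimal sketch (not the paper's affine-scaling trust-region scheme; this is the regularized Gauss-Newton building block used as a comparison) of one step for the pixel-wise problem min_k 0.5*||r(k)||^2 + 0.5*reg*||k||^2, with J the Jacobian of the residual r at the current kinetic parameters k.

    import numpy as np

    def regularized_gauss_newton_step(k, r, J, reg):
        # Regularized Gauss-Newton model Hessian and gradient.
        H = J.T @ J + reg * np.eye(len(k))
        step = np.linalg.solve(H, -(J.T @ r + reg * k))
        return k + step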
Applied Mathematics and Computation, 2017
ANNALI DELL'UNIVERSITA' DI FERRARA, 2003
ABSTRACT - In this work we describe a variant of the Newton interior-point method in [8] for nonlinear programming problems. In this scheme, the perturbation parameter can be chosen within a range of values, and an iterative method can be used to approximately solve the reduced linear system arising at each step. We identify the inner termination rule that guarantees the global convergence of this Newton interior-point method. We remark that the required assumptions are weaker than those stated in [8], as shown by some numerical examples.
Applied Mathematics and Computation, 1997
In several recent works, the Arithmetic Mean Method for solving large sparse linear systems has been introduced and analysed. Each iteration of this method consists of solving two independent systems. When we obtain two approximate solutions of these systems by a prefixed number of steps of an iterative scheme, we generate an inner/outer procedure, called the Two-Stage Arithmetic Mean Method. General convergence theorems are proved for M-matrices and for symmetric positive definite matrices. In particular, we analyze a version of the Two-Stage Arithmetic Mean Method for T(q, r) matrices, deriving the convergence conditions. The method is well suited for implementation on a parallel computer. Numerical experiments carried out on a Cray T3D allow us to evaluate the effectiveness of the Two-Stage Arithmetic Mean Method.
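
A minimal sketch of one outer step of a two-stage scheme of this kind (the splitting A = H1 + H2 and the acceleration parameter rho are illustrative, not the paper's exact formulation): the two outer systems are independent, and each is solved only approximately by a fixed number of inner sweeps.

    import numpy as np

    def jacobi_sweeps(M, rhs, x, inner):
        # A fixed number of Jacobi sweeps as the inner iterative scheme.
        D = np.diag(np.diag(M))
        R = M - D
        for _ in range(inner):
            x = np.linalg.solve(D, rhs - R @ x)
        return x

    def two_stage_am_step(H1, H2, b, x, rho, inner=3):
        I = np.eye(len(b))
        y1 = jacobi_sweeps(H1 + rho * I, (rho * I - H2) @ x + b, x, inner)
        y2 = jacobi_sweeps(H2 + rho * I, (rho * I - H1) @ x + b, x, inner)
        return 0.5 * (y1 + y2)   # arithmetic mean of the two approximations
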
Computational Optimization and Applications