On the regularizing behavior of recent gradient methods in the solution of linear ill-posed problems

We analyze the regularization properties of two recently proposed gradient methods applied to discrete linear inverse problems. By studying their filter factors, we show that the tendency of these methods to eliminate first the components of the gradient corresponding to the large singular values makes it possible to reconstruct the most significant part of the solution, thus yielding … Read more
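The abstract is truncated, so as a generic illustration of the filter-factor viewpoint it invokes, here is a minimal sketch for the classical Landweber iteration (not necessarily one of the methods analyzed in the paper). After k steps from a zero start, each SVD component of the iterate is damped by a filter factor f_i = 1 - (1 - ω σ_i²)^k, which approaches 1 fastest for the largest singular values:

```python
import numpy as np

# Landweber iteration x_{k+1} = x_k + omega * A.T @ (b - A @ x_k), x_0 = 0.
# Its k-step filter factors are f_i = 1 - (1 - omega * sigma_i**2)**k, so
# components tied to large singular values are recovered first.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 30)))
V, _ = np.linalg.qr(rng.standard_normal((30, 30)))
sigma = np.logspace(0, -3, 30)                # decaying singular values
A = U @ np.diag(sigma) @ V.T                  # test matrix with known SVD

omega = 1.0 / sigma[0] ** 2                   # stability: 0 < omega < 2/sigma_max^2
for k in (1, 10, 100, 1000):
    f = 1.0 - (1.0 - omega * sigma**2) ** k   # filter factors after k steps
    print(k, np.round(f[:5], 3))              # leading components reach 1 first
```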

Distributed Optimization Methods for Large Scale Optimal Control

This thesis aims to develop and implement both nonlinear and linear distributed optimization methods that are applicable to, but not restricted to, the optimal control of distributed systems. Such systems are typically large scale, so the well-established centralized solution strategies may be computationally too expensive or even infeasible, and the application of alternative control algorithms becomes necessary. … Read more
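The thesis text is cut off here, so purely as an illustration of what "distributed optimization" means in this setting, below is a toy consensus-ADMM sketch (my own example, not the methods developed in the thesis) in which several agents minimize a sum of local objectives by agreeing on a shared variable:

```python
import numpy as np

# Toy consensus ADMM: N agents minimize sum_i 0.5*(x - c[i])**2 subject to
# agreeing on a common variable z; the optimum is z* = mean(c).
rng = np.random.default_rng(1)
c = rng.standard_normal(5)                 # local data, one scalar per agent
rho = 1.0                                  # ADMM penalty parameter (assumed)
x = np.zeros(5); u = np.zeros(5); z = 0.0

for _ in range(100):
    x = (c + rho * (z - u)) / (1.0 + rho)  # local proximal steps, parallelizable
    z = np.mean(x + u)                     # coordination / averaging step
    u += x - z                             # dual (disagreement) updates

print(z, c.mean())                         # z converges to mean(c)
```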

An efficient gradient method using the Yuan steplength

We propose a new gradient method for quadratic programming, named SDC, which alternates some steepest descent (SD) iterates with gradient iterates that use a constant steplength computed through the Yuan formula. The SDC method exploits the asymptotic spectral behaviour of the Yuan steplength to foster a selective elimination of the components of the gradient along the … Read more
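Since the abstract is truncated, here is a hedged sketch of an SDC-style iteration for f(x) = ½ xᵀAx − bᵀx: h exact SD (Cauchy) steps, then m steps that reuse the most recent Yuan steplength as a constant step. This is not the authors' code; the sweep lengths h, m and the Yuan formula below reflect our reading of Yuan (2006) and should be checked against the paper:

```python
import numpy as np

def sdc(A, b, x, h=4, m=4, max_iter=80):
    """Hedged SDC-style sketch; h, m and the Yuan formula are assumptions."""
    g = A @ x - b
    a_prev = g_prev = a_yuan = None
    k = 0
    while k < max_iter:
        for _ in range(h):                        # exact steepest-descent sweep
            a = (g @ g) / (g @ (A @ g))           # Cauchy steplength
            if a_prev is not None:
                # Yuan steplength built from two consecutive SD steplengths
                t = np.hypot(1 / a_prev - 1 / a,
                             2 * np.linalg.norm(g)
                             / (a_prev * np.linalg.norm(g_prev)))
                a_yuan = 2.0 / (t + 1 / a_prev + 1 / a)
            x = x - a * g
            a_prev, g_prev = a, g
            g = A @ x - b
            k += 1
        for _ in range(m):                        # constant-steplength sweep
            x = x - a_yuan * g                    # damps large-eigenvalue modes
            g = A @ x - b
            k += 1
    return x

A = np.diag(np.linspace(1.0, 100.0, 50)); b = np.ones(50)
x = sdc(A, b, np.zeros(50))
print(np.linalg.norm(A @ x - b))                  # residual after the run
```

The Yuan steplength is never larger than either of the two SD steplengths it is built from, and asymptotically it approaches the reciprocal of the largest eigenvalue, which is what drives the selective elimination the abstract mentions.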

A new and improved quantitative recovery analysis for iterative hard thresholding algorithms in compressed sensing

We present a new recovery analysis for a standard compressed sensing algorithm, Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2008), which considers the fixed points of the algorithm. In the context of arbitrary measurement matrices, we derive a sufficient condition for convergence of IHT to a fixed point and a necessary condition for the existence … Read more
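For reference, the IHT iteration of Blumensath and Davies is the standard hard-thresholded gradient step below; the steplength choice and problem sizes in this sketch are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

def iht(A, y, s, iters=200, mu=None):
    """Iterative Hard Thresholding: x <- H_s(x + mu * A.T @ (y - A @ x)),
    where H_s keeps the s largest-magnitude entries."""
    m, n = A.shape
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2      # a safe (assumed) steplength
    x = np.zeros(n)
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)            # gradient step on ||y - Ax||^2
        x[np.argsort(np.abs(x))[:-s]] = 0.0       # hard threshold to s entries
    return x

rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)      # roughly normalized columns
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true
print(np.linalg.norm(iht(A, y, s) - x_true))      # small for this easy instance
```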

Trace-Penalty Minimization for Large-scale Eigenspace Computation

The Rayleigh-Ritz (RR) procedure, including orthogonalization, constitutes a major bottleneck in computing relatively high-dimensional eigenspaces of large sparse matrices. Although operations involved in RR steps can be parallelized to a certain level, their parallel scalability, which is limited by some inherent sequential steps, is lower than that of dense matrix-matrix multiplication. The primary motivation of this … Read more
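The abstract stops before stating the model, so the following is a hedged sketch of a trace-penalty objective of the kind the title suggests, min_X ½ tr(XᵀAX) + (μ/4)‖XᵀX − I‖_F², optimized by plain gradient descent with no per-iteration orthogonalization and a single RR step at the end; the model, μ, and the spectrum are assumptions for illustration:

```python
import numpy as np

n, p, mu = 200, 5, 30.0
# Diagonal test matrix with a clear gap after the p smallest eigenvalues.
A = np.diag(np.concatenate([np.linspace(1.0, 2.0, p),
                            np.linspace(10.0, 20.0, n - p)]))

rng = np.random.default_rng(0)
X = 0.1 * rng.standard_normal((n, p))
for _ in range(3000):
    G = A @ X + mu * X @ (X.T @ X - np.eye(p))   # gradient of the penalty model
    X -= 0.005 * G                               # no orthogonalization here

Q, _ = np.linalg.qr(X)                           # one final RR step
w = np.linalg.eigvalsh(Q.T @ A @ Q)
print(np.round(w, 4))                            # ~ the p smallest eigenvalues
```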

First-order Methods of Smooth Convex Optimization with Inexact Oracle

We introduce the notion of inexact first-order oracle and analyze the behaviour of several first-order methods of smooth convex optimization used with such an oracle. This notion of inexact oracle naturally appears in the context of smoothing techniques, Moreau-Yosida regularization, Augmented Lagrangians and many other situations. We derive complexity estimates for primal, dual and fast … Read more
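As a toy stand-in for the inexact-oracle setting (the paper's precise (δ, L)-oracle definition is not visible here; the bounded-error model and constants below are assumptions), the sketch runs gradient descent with an oracle whose gradient carries an error of norm δ and shows the iterates stalling at a δ-proportional level:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.diag(np.linspace(1.0, 10.0, 20)); b = np.ones(20)
L, delta = 10.0, 1e-3                            # Lipschitz constant, oracle error

def inexact_oracle(x):
    g = Q @ x - b                                # exact gradient
    noise = rng.standard_normal(g.shape)
    return g + delta * noise / np.linalg.norm(noise)  # ||error|| = delta

x = np.zeros(20); x_star = np.linalg.solve(Q, b)
for _ in range(500):
    x -= (1.0 / L) * inexact_oracle(x)           # standard 1/L gradient step
print(np.linalg.norm(x - x_star))                # stalls at a delta-scale floor
```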

Processor Speed Control with Thermal Constraints

We consider the problem of adjusting the speeds of multiple computer processors sharing the same thermal environment, such as a chip or multi-chip package. We assume that the speed of each processor (and associated variables, such as power supply voltage) can be controlled, and we model the dissipated power of a processor as a positive and strictly … Read more
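Here is a hedged toy version of that setup (not the paper's formulation: the cubic power law, the thermal coupling matrix R, and all numbers are illustrative assumptions): maximize total speed subject to an elementwise temperature cap, with temperatures coupling linearly through the dissipated powers:

```python
import numpy as np
from scipy.optimize import minimize

n, T_amb, T_max = 4, 40.0, 85.0
# Assumed thermal coupling: each processor heats itself and its neighbors.
R = 2.0 * np.eye(n) + 0.5 * (np.ones((n, n)) - np.eye(n))

def neg_total_speed(s):
    return -np.sum(s)                        # maximize total throughput

def temp_slack(s):
    # Power grows convexly with speed (cubic, an assumption); require
    # T_amb + R @ power <= T_max elementwise.
    return T_max - (T_amb + R @ s**3)

res = minimize(neg_total_speed, x0=np.full(n, 1.0),
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "ineq", "fun": temp_slack}],
               method="SLSQP")
print(res.x)    # by symmetry, all processors settle at the same speed
```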

New Adaptive Stepsize Selections in Gradient Methods

This paper deals with gradient methods for minimizing n-dimensional strictly convex quadratic functions. Two new adaptive stepsize selection rules are presented and some key properties are proved. Practical insights into the effectiveness of the proposed techniques are given by a numerical comparison with the Barzilai-Borwein (BB) method, the cyclic/adaptive BB methods and two recent monotone … Read more
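For context on the baseline used in that comparison, here is a minimal sketch of the classical BB1 iteration on f(x) = ½ xᵀAx − bᵀx (the standard method, not the paper's new rules; the test matrix and the first steplength are assumptions):

```python
import numpy as np

# Barzilai-Borwein (BB1): alpha_k = s.T s / s.T y with s = x_k - x_{k-1}
# and y = g_k - g_{k-1} (= A s for a quadratic).
A = np.diag(np.linspace(1.0, 100.0, 50)); b = np.ones(50)
x = np.zeros(50); g = A @ x - b
alpha = 1.0 / 100.0                      # safe first step: 1/lambda_max
for _ in range(100):
    x_new = x - alpha * g
    g_new = A @ x_new - b
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)            # BB1 steplength
    x, g = x_new, g_new
print(np.linalg.norm(g))                 # gradient norm after 100 BB steps
```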