A regularized limited-memory BFGS method for unconstrained minimization problems

The limited-memory BFGS (L-BFGS) algorithm is a popular method for solving large-scale unconstrained minimization problems. Since L-BFGS conducts a line search with the Wolfe conditions, it may require many function evaluations for ill-posed problems. To overcome this difficulty, we propose a method that combines L-BFGS with the regularized Newton method. The computational cost for a … Read more
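As a rough illustration of the idea, the sketch below combines the standard L-BFGS two-loop recursion with a regularization parameter mu that shifts the initial Hessian scaling and is adjusted by a simple acceptance test instead of a Wolfe line search. The mu-update rules, the acceptance test, and all parameter values are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def two_loop_direction(grad, s_list, y_list, mu):
    """Return -H*grad, where H is the L-BFGS inverse Hessian approximation
    built from the stored (s, y) pairs with a mu-shifted initial scaling."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Initial scaling gamma, shifted by mu: H0 = ((1/gamma) + mu)^(-1) * I.
    gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1]) if s_list else 1.0
    r = q * gamma / (1.0 + mu * gamma)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        beta = rho * (y @ r)
        r += (a - beta) * s
    return -r

def reg_lbfgs(f, grad_f, x0, m=5, mu=1.0, iters=200, tol=1e-8):
    x = x0.astype(float)
    s_list, y_list = [], []
    for _ in range(iters):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        x_trial = x + two_loop_direction(g, s_list, y_list, mu)
        if f(x_trial) < f(x):
            # Simple acceptance replaces the Wolfe line search; mu is relaxed.
            s, y = x_trial - x, grad_f(x_trial) - g
            if s @ y > 1e-12:                      # keep only safe curvature pairs
                s_list.append(s); y_list.append(y)
                if len(s_list) > m:
                    s_list.pop(0); y_list.pop(0)
            x, mu = x_trial, max(0.5 * mu, 1e-8)
        else:
            mu *= 4.0                              # step rejected: regularize more
    return x

# Tiny usage example on a strictly convex quadratic.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
g = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
print(reg_lbfgs(f, g, np.zeros(2)))               # approaches (1, -2)
```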

Robust Block Coordinate Descent

In this paper we present a novel randomized block coordinate descent method for the minimization of a convex composite objective function. The method uses (approximate) partial second-order (curvature) information, so that the algorithm's performance is more robust when applied to highly nonseparable or ill-conditioned problems. We call the method Robust Coordinate Descent (RCD). At … Read more
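The sketch below illustrates the general flavour of block coordinate descent with partial second-order information on a convex quadratic: one block of variables is chosen at random and updated with a damped block-Newton step. The block partition, damping value, and quadratic test problem are illustrative assumptions, not the RCD method itself.

```python
import numpy as np

def rcd_quadratic(A, b, block_size=2, iters=500, damping=1e-8, seed=0):
    """Randomized block coordinate descent on f(x) = 0.5*x'Ax - b'x (A pos. def.)."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    blocks = [np.arange(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    for _ in range(iters):
        idx = blocks[rng.integers(len(blocks))]   # pick one block uniformly at random
        g_blk = A[idx] @ x - b[idx]               # gradient restricted to the block
        H_blk = A[np.ix_(idx, idx)]               # (partial) curvature of the block
        # Damped block-Newton step: only a |block| x |block| system is solved.
        x[idx] -= np.linalg.solve(H_blk + damping * np.eye(len(idx)), g_blk)
    return x

A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 0.5, 0.0],
              [0.0, 0.5, 2.0, 0.3],
              [0.0, 0.0, 0.3, 1.5]])
b = np.array([1.0, -2.0, 0.5, 3.0])
print(rcd_quadratic(A, b))                        # approaches np.linalg.solve(A, b)
```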

A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees

A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid between a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, and together with the adaptive GS strategy … Read more
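For context, the sketch below shows a basic gradient sampling step: gradients are sampled in a ball around the iterate, and the search direction is the negative of the minimum-norm element of their convex hull, obtained from a small QP. The sampling radius, sample count, and QP solver choice are illustrative assumptions; the hybrid BFGS-GS logic of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def gs_direction(grad_f, x, radius=0.1, n_samples=10, seed=0):
    """One gradient sampling (GS) direction for a (possibly nonsmooth) objective."""
    rng = np.random.default_rng(seed)
    pts = [x] + [x + radius * rng.standard_normal(x.size) for _ in range(n_samples)]
    G = np.array([grad_f(p) for p in pts])          # sampled gradients (rows)
    m = G.shape[0]
    # Minimum-norm element of conv{g_1, ..., g_m}: min ||G^T w||^2 over the simplex.
    obj = lambda w: 0.5 * np.linalg.norm(G.T @ w) ** 2
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m,
                   constraints=cons, method='SLSQP')
    return -(G.T @ res.x)

# Usage on the nonsmooth function f(x) = |x_1| + 2|x_2|.
grad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])
print(gs_direction(grad, np.array([0.3, -0.2])))
```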

A Second-Order Method for Compressed Sensing Problems with Coherent and Redundant Dictionaries

In this paper we are interested in the solution of Compressed Sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. CS problems of this type are convex with a non-smooth and non-separable regularization term; therefore, a specialized solver is required. We propose a primal-dual Newton Conjugate Gradients (pdNCG) method. … Read more
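As background, the sketch below implements the textbook conjugate gradient iteration that a Newton-CG scheme uses to solve the Newton system H d = -g using only matrix-vector products. It is not pdNCG itself, and the small dense example is purely illustrative.

```python
import numpy as np

def cg(matvec, rhs, tol=1e-10, max_iter=200):
    """Solve matvec(x) = rhs for a symmetric positive definite operator."""
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage: inexact Newton direction d solving H d = -g for a given Hessian H.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, -1.0])
d = cg(lambda v: H @ v, -g)
print(d, np.allclose(H @ d, -g))
```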

Generalized Inexact Proximal Algorithms: Habit’s/Routine’s Formation with Resistance to Change, following Worthwhile Changes

This paper shows how, in a quasi-metric space, an inexact proximal algorithm with a generalized perturbation term appears to be a nice tool for the Behavioral Sciences (Psychology, Economics, Management, Game Theory, …). More precisely, the new perturbation term represents an index of resistance to change, defined as a “curved enough” function of the quasi-distance … Read more
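A very rough sketch of the underlying proximal-point mechanism is given below: each iterate minimizes the objective plus a perturbation term that grows with the distance to the current point. The quadratic perturbation, the bounded scalar solver, and the test function are illustrative assumptions and do not capture the quasi-metric, worthwhile-change framework of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def proximal_point(f, x0, lam=1.0, iters=30):
    """Proximal-point iteration with a distance-based perturbation term (1D sketch)."""
    x = x0
    for _ in range(iters):
        # Proximal step: minimize f(u) + lam * (perturbation in the distance to x).
        sub = lambda u: f(u) + lam * (u - x) ** 2
        x = minimize_scalar(sub, bounds=(x - 10, x + 10), method='bounded').x
    return x

f = lambda u: (u - 3.0) ** 2 + np.abs(u)        # simple nonsmooth objective
print(proximal_point(f, x0=0.0))                # approaches the minimizer of f (2.5)
```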

On solving symmetric systems of linear equations in an unnormalized Krylov subspace framework

In an unnormalized Krylov subspace framework for solving symmetric systems of linear equations, the orthogonal vectors that are generated by a Lanczos process are not necessarily in the form of gradients. Associating each orthogonal vector with a triple, and using only the three-term recurrences of the triples, we give conditions on whether a symmetric system … Read more
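For reference, the sketch below runs the standard symmetric Lanczos process whose three-term recurrence generates the orthogonal vectors mentioned above. The variable names and the small test matrix are illustrative, and the triples of the unnormalized framework are not modelled.

```python
import numpy as np

def lanczos(A, b, k):
    """Return the first k Lanczos vectors (columns of Q) and the tridiagonal T."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    q_prev = np.zeros(n)
    q = b / np.linalg.norm(b)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        # Three-term recurrence: orthogonalize only against the last two vectors.
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0.0)
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-14:           # invariant Krylov subspace reached
            break
        q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return Q, T

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
Q, T = lanczos(A, np.array([1.0, 0.0, 1.0]), k=3)
print(np.allclose(Q.T @ A @ Q, T))    # Q'AQ reproduces the tridiagonal T
```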

A modified limited-memory BNS method for unconstrained minimization based on the conjugate directions idea

A modification of the limited-memory variable metric BNS method for large-scale unconstrained optimization is proposed, which consists of corrections (derived from the idea of conjugate directions) to the difference vectors used, for better satisfaction of previous quasi-Newton conditions. In comparison with [16], where a similar approach is used, correction vectors from more previous iterations … Read more
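The sketch below illustrates the issue such corrections target: after a sequence of standard (inverse) BFGS updates, only the most recent quasi-Newton (secant) condition H y = s holds exactly, while the conditions for older difference-vector pairs are generally violated. The random pairs and dimensions are illustrative; the paper's correction scheme is not implemented.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Standard inverse BFGS update; afterwards H @ y == s holds exactly."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

rng = np.random.default_rng(0)
n = 4
H = np.eye(n)
pairs = [(rng.standard_normal(n), rng.standard_normal(n)) for _ in range(3)]
pairs = [(s, y) if s @ y > 0 else (s, -y) for s, y in pairs]   # keep s'y > 0
for s, y in pairs:
    H = bfgs_inverse_update(H, s, y)
for i, (s, y) in enumerate(pairs):
    print(f"secant residual for pair {i}: {np.linalg.norm(H @ y - s):.2e}")
# Only the last residual is (numerically) zero; the earlier ones are not.
```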

A globally convergent trust-region algorithm for unconstrained derivative-free optimization

In this work we present a derivative-free trust-region algorithm for unconstrained optimization based on the method proposed by Powell (Computational Optimization and Applications 53: 527–555, 2012). The objective function is approximated by quadratic models obtained by polynomial interpolation. The number of points in the interpolation set is fixed. In each iteration only one interpolation point … Read more
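The sketch below shows the model-building ingredient of such methods: a quadratic model fitted by interpolation at a fixed set of sample points (a full quadratic basis in two variables, hence six points). The point set and the dense linear solve are illustrative assumptions; Powell's update of one interpolation point per iteration is not reproduced.

```python
import numpy as np

def quad_basis(p):
    """Monomial basis of a quadratic model in two variables."""
    x, y = p
    return np.array([1.0, x, y, 0.5 * x * x, x * y, 0.5 * y * y])

def fit_quadratic_model(f, points):
    """Interpolate f on the given points; return (c, g, H) of the quadratic model."""
    A = np.array([quad_basis(p) for p in points])
    coef = np.linalg.solve(A, np.array([f(p) for p in points]))
    c = coef[0]
    g = coef[1:3]
    H = np.array([[coef[3], coef[4]], [coef[4], coef[5]]])
    return c, g, H

f = lambda p: (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2
pts = [np.array(p) for p in
       [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1)]]
c, g, H = fit_quadratic_model(f, pts)
# For a quadratic objective the model is exact: its minimizer is (1, -0.5).
print(np.linalg.solve(H, -g))
```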

A Stochastic Quasi-Newton Method for Large-Scale Optimization

The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust … Read more
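The sketch below illustrates one way such methods decouple curvature estimation from the noisy stochastic gradient steps: curvature pairs (s, y) are formed from averaged iterates and a subsampled Hessian-vector product rather than from differences of minibatch gradients. The least-squares model, batch sizes, and step size are illustrative assumptions, and the limited-memory update that would consume the pairs is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 1000, 5
A = rng.standard_normal((n_samples, dim))
x_true = rng.standard_normal(dim)
b = A @ x_true + 0.01 * rng.standard_normal(n_samples)

def stoch_grad(x, idx):
    """Minibatch gradient of the least-squares loss 0.5*||A_S x - b_S||^2 / |S|."""
    return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

def subsampled_hess_vec(v, idx):
    """Minibatch Hessian-vector product: (A_S' A_S / |S|) v."""
    return A[idx].T @ (A[idx] @ v) / len(idx)

x = np.zeros(dim)
x_bar_prev, x_bar = None, np.zeros(dim)
pairs, L, step = [], 20, 0.05
for k in range(1, 2001):
    batch = rng.choice(n_samples, size=32, replace=False)
    x -= step * stoch_grad(x, batch)          # plain SGD step (sketch only)
    x_bar += x
    if k % L == 0:                            # every L steps, form a curvature pair
        x_bar /= L
        if x_bar_prev is not None:
            s = x_bar - x_bar_prev
            big_batch = rng.choice(n_samples, size=256, replace=False)
            y = subsampled_hess_vec(s, big_batch)   # low-noise curvature estimate
            pairs.append((s, y))
        x_bar_prev, x_bar = x_bar, np.zeros(dim)

print(len(pairs), np.linalg.norm(x - x_true))  # pairs collected, distance to x_true
```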

On Efficiently Combining Limited Memory and Trust-Region Techniques

Limited-memory quasi-Newton methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, limited-memory methods are usually combined with a line search. We show how to efficiently … Read more
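For orientation, the sketch below shows the generic trust-region machinery (a dogleg step plus the usual ratio test and radius update) that limited-memory trust-region methods build on, using a small dense Hessian for clarity. The thresholds and test function are illustrative assumptions; the efficient limited-memory subproblem solvers discussed in the paper are not reproduced.

```python
import numpy as np

def dogleg(g, B, delta):
    """Approximate minimizer of the quadratic model within a ball of radius delta."""
    p_newton = np.linalg.solve(B, -g)
    if np.linalg.norm(p_newton) <= delta:
        return p_newton
    p_cauchy = -(g @ g) / (g @ B @ g) * g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)
    d = p_newton - p_cauchy
    # Solve ||p_cauchy + tau * d|| = delta for tau in (0, 1).
    a, b_, c = d @ d, 2 * (p_cauchy @ d), p_cauchy @ p_cauchy - delta ** 2
    tau = (-b_ + np.sqrt(b_ ** 2 - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d

def trust_region(f, grad, hess, x0, delta=1.0, iters=50):
    x = x0.copy()
    for _ in range(iters):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < 1e-8:
            break
        p = dogleg(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)          # predicted decrease
        rho = (f(x) - f(x + p)) / pred             # actual / predicted decrease
        if rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta *= 2.0                           # good step on the boundary: expand
        elif rho < 0.25:
            delta *= 0.25                          # poor agreement: shrink
        if rho > 0.1:
            x = x + p
    return x

# Usage on a smooth strictly convex test function.
f = lambda x: np.exp(x[0]) + np.exp(-x[1]) + 0.5 * (x[0] ** 2 + x[1] ** 2)
grad = lambda x: np.array([np.exp(x[0]) + x[0], -np.exp(-x[1]) + x[1]])
hess = lambda x: np.array([[np.exp(x[0]) + 1.0, 0.0],
                           [0.0, np.exp(-x[1]) + 1.0]])
print(trust_region(f, grad, hess, np.array([2.0, -2.0])))
```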