Low-Rank Matrix Completion using Nuclear Norm with Facial Reduction

Minimization of the nuclear norm is often used as a convex surrogate for finding the minimum-rank completion (recovery) of a partial matrix. The minimum nuclear norm problem can be solved as a trace-minimization semidefinite programming problem (SDP). The SDP and its dual are regular in the sense that they both satisfy strict …
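
As a rough illustration of the nuclear-norm relaxation (not the facial-reduction approach of the paper), the following numpy sketch solves the regularized completion problem $\min_X \frac{1}{2}\|P_\Omega(X - M)\|_F^2 + \lambda\|X\|_*$ by proximal gradient with singular value thresholding; the parameters lam and iters are illustrative choices.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M, mask, lam=0.1, iters=500):
    """Proximal gradient for 0.5*||mask*(X - M)||_F^2 + lam*||X||_*.
    mask is a 0/1 array marking observed entries; a unit step size is
    valid because the data-fit gradient is 1-Lipschitz."""
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        G = mask * (X - M)          # gradient of the data-fit term
        X = svt(X - G, lam)         # proximal step on the nuclear norm
    return X
```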

Backward Step Control for Global Newton-type Methods

We present and analyze a new damping approach, called backward step control, for globalizing the convergence of Newton-type methods for the numerical solution of nonlinear root-finding problems. We provide and discuss reasonable assumptions that imply convergence of backward step control on the basis of generalized Newton paths in conjunction with a backward analysis …
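
For context, the sketch below is a plain residual-monotonicity damped Newton method for $F(x) = 0$, i.e., a standard globalization baseline rather than the backward step control scheme of the paper; the backtracking constants are illustrative.

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=100):
    """Damped Newton for F(x) = 0: full steps are shortened until the
    residual norm decreases (a simple monotonicity test)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        nrm = np.linalg.norm(Fx)
        if nrm < tol:
            break
        dx = np.linalg.solve(J(x), -Fx)   # full Newton direction
        t = 1.0
        while np.linalg.norm(F(x + t * dx)) > (1.0 - 0.5 * t) * nrm and t > 1e-12:
            t *= 0.5                      # damp until the residual decreases
        x = x + t * dx
    return x
```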

Robust Numerical Calibration for Implied Volatility Expansion Models

Implied volatility expansions allow calibration of sophisticated volatility models. They provide an accurate fit and parametrization of implied volatility surfaces that is consistent with empirical observations. Fine-grained higher-order expansions offer a better fit but pose the challenge of finding a robust, stable, and computationally tractable calibration procedure due to the large number of market …
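
As a toy version of such a calibration, the following sketch fits a generic quadratic implied-volatility expansion in log-moneyness k and maturity t by ordinary least squares; the basis functions are an assumption for illustration, not the expansion family studied in the paper.

```python
import numpy as np

def fit_vol_expansion(k, t, iv):
    """Ordinary least-squares fit of the surface
    sigma(k, t) ~ a0 + a1*k + a2*k**2 + a3*t + a4*k*t,
    where k is log-moneyness, t is time to maturity, and iv holds the
    observed implied volatilities (all 1-D arrays of equal length)."""
    A = np.column_stack([np.ones_like(k), k, k**2, t, k * t])
    coef, *_ = np.linalg.lstsq(A, iv, rcond=None)
    return coef
```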

New Improved Penalty Methods for Sparse Reconstruction Based on Difference of Two Norms

In this paper, we further establish two types of DC (Difference of Convex functions) programming for $l_0$ sparse reconstruction. Our DC objective functions are specified as the difference of two norms. One is the difference of the $l_1$ and $l_{\sigma_q}$ norms (DC $l_1$-$l_{\sigma_q}$ for short), where $l_{\sigma_q}$ is the $l_1$ norm of the $q$-term ($q \geq 1$) best …
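
A minimal sketch of the DCA iteration for the $l_1$-$l_{\sigma_q}$ penalty is given below, assuming a least-squares data-fit term: each outer step linearizes the concave part $-\lambda\|x\|_{\sigma_q}$ at the current iterate and solves the resulting convex $l_1$-regularized subproblem by ISTA. All parameters are illustrative.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def top_q_subgrad(x, q):
    """Subgradient of the largest-q-term norm ||x||_{sigma_q}:
    sign(x_i) on the q entries of largest magnitude, 0 elsewhere."""
    g = np.zeros_like(x)
    idx = np.argsort(-np.abs(x))[:q]
    g[idx] = np.sign(x[idx])
    return g

def dca_l1_minus_topq(A, b, lam=0.1, q=5, outer=20, inner=200):
    """DCA for min 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_{sigma_q})."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the LS gradient
    for _ in range(outer):
        y = lam * top_q_subgrad(x, q)      # linearize the concave part
        for _ in range(inner):             # ISTA on the convex subproblem
            grad = A.T @ (A @ x - b) - y
            x = soft(x - grad / L, lam / L)
    return x
```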

Alternating direction methods for nonconvex optimization with applications to second-order least-squares and risk parity portfolio selection

In this paper we mainly focus on the optimization of sums of squares of quadratic functions, which we refer to as second-order least-squares problems, subject to convex constraints. Our motivation arises from applications in risk parity portfolio selection. We generalize the setting further by considering a class of nonlinear, nonconvex functions which admit a (non …
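
To make the risk parity objective concrete, the sketch below minimizes a second-order least-squares formulation, $\sum_i (x_i(\Sigma x)_i - \theta)^2$ over the simplex, with a generic SLSQP local solver; this is a baseline illustration, not the alternating direction method developed in the paper, and the initialization is a heuristic assumption.

```python
import numpy as np
from scipy.optimize import minimize

def risk_parity_ls(Sigma):
    """Least-squares risk parity: equalize the per-asset risk
    contributions x_i*(Sigma x)_i to a common level theta."""
    n = Sigma.shape[0]

    def obj(z):
        x, theta = z[:n], z[n]
        rc = x * (Sigma @ x)               # per-asset risk contributions
        return np.sum((rc - theta) ** 2)

    z0 = np.concatenate([np.ones(n) / n, [1.0 / n**2]])
    cons = {"type": "eq", "fun": lambda z: np.sum(z[:n]) - 1.0}
    bnds = [(0.0, None)] * n + [(None, None)]
    res = minimize(obj, z0, method="SLSQP", bounds=bnds, constraints=[cons])
    return res.x[:n]
```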

A Preconditioner for a Primal-Dual Newton Conjugate Gradients Method for Compressed Sensing Problems

In this paper we are concerned with the solution of Compressed Sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. We extend a primal-dual Newton Conjugate Gradients (pdNCG) method for CS problems. We provide an inexpensive and provably effective preconditioning technique for the linear systems arising in pdNCG. Numerical results …
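
The paper's CS-specific preconditioner is not reproduced here, but the following generic preconditioned conjugate gradients routine shows where such a preconditioner enters: Minv applies the inverse of the preconditioner to the residual (a Jacobi/diagonal choice is indicated as an example).

```python
import numpy as np

def pcg(A, b, Minv, tol=1e-8, max_iter=200):
    """Preconditioned CG for the SPD system A x = b."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = Minv(r)                    # apply the preconditioner inverse
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example Jacobi (diagonal) preconditioner for a dense SPD matrix A:
#   Minv = lambda r: r / np.diag(A)
```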

Levenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation

The Levenberg-Marquardt algorithm is one of the most popular algorithms for the solution of nonlinear least-squares problems. Motivated by the problem structure in data assimilation, we consider in this paper the extension of the classical Levenberg-Marquardt algorithm to scenarios where the linearized least-squares subproblems are solved inexactly and/or the gradient model is …
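
For reference, the classical Levenberg-Marquardt iteration with exact gradients and exact subproblem solves looks as follows; the probabilistic gradient models and inexact solves analyzed in the paper are not reproduced, and the damping update constants are illustrative.

```python
import numpy as np

def levenberg_marquardt(r, J, x0, mu=1e-2, iters=100, tol=1e-10):
    """Classical LM for min 0.5*||r(x)||^2: solve the damped normal
    equations and adjust the damping parameter mu by step success."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        rx, Jx = r(x), J(x)
        g = Jx.T @ rx                       # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        dx = np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), -g)
        if np.linalg.norm(r(x + dx)) < np.linalg.norm(rx):
            x, mu = x + dx, mu * 0.5        # accept step, relax damping
        else:
            mu *= 2.0                       # reject step, increase damping
    return x
```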

An Efficient Gauss-Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations

We derive and study a Gauss-Newton method for computing a symmetric low-rank product that is closest to a given symmetric matrix in the Frobenius norm. Our Gauss-Newton method, which has a particularly simple form, shares the same order of iteration complexity as a gradient method when the size of the desired eigenspace is small, but can be …
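
The gradient method mentioned above can be sketched directly: for $f(V) = \frac{1}{2}\|VV^T - C\|_F^2$ with symmetric $C$, the gradient is $2(VV^T - C)V$. This is the comparison baseline, not the paper's Gauss-Newton iteration, and the step size below is a heuristic assumption.

```python
import numpy as np

def lowrank_product_gd(C, k, step=None, iters=500, seed=0):
    """Gradient descent for min_V 0.5*||V V^T - C||_F^2, V of size n-by-k."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    V = rng.standard_normal((n, k)) / np.sqrt(n)
    if step is None:
        step = 0.25 / np.linalg.norm(C, 2)   # heuristic step size (assumption)
    for _ in range(iters):
        R = V @ V.T - C                      # symmetric residual
        V -= step * 2.0 * (R @ V)            # gradient of 0.5*||R||_F^2
    return V
```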

A Fast Active Set Block Coordinate Descent Algorithm for l1-regularized least squares

The problem of finding sparse solutions to underdetermined systems of linear equations arises in several real-world problems (e.g., signal and image processing, compressive sensing, statistical inference). A standard tool for dealing with sparse recovery is the l1-regularized least-squares approach, which has recently been attracting the attention of many researchers. In this paper, we describe an …
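
A plain cyclic coordinate descent for the l1-regularized least-squares problem (without the active-set selection that the paper adds) can be written in a few lines; lam and the number of sweeps are illustrative.

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def cd_lasso(A, b, lam=0.1, sweeps=50):
    """Cyclic coordinate descent for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                             # running residual
    col_sq = np.sum(A * A, axis=0)            # squared column norms
    for _ in range(sweeps):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            aj = A[:, j]
            rho = aj @ r + col_sq[j] * x[j]   # partial residual correlation
            x_new = soft(rho, lam) / col_sq[j]
            r += aj * (x[j] - x_new)          # incremental residual update
            x[j] = x_new
    return x
```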

Quasi-Newton updates with weighted secant equations

We provide a formula for variational quasi-Newton updates with multiple weighted secant equations. The derivation of the formula leads to a Sylvester equation in the correction matrix. Examples are given.

Citation: Report naXys-09-2013, Namur Centre for Complex Systems, University of Namur, Namur, Belgium.
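
As background for the weighted multi-secant setting, the classical BFGS update enforces a single secant equation $B_{+}s = y$; the paper's generalization to multiple weighted secant equations is not reproduced here.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of a Hessian approximation B: the returned
    matrix B_new satisfies the single secant equation B_new @ s = y."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```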