Fixed point and Bregman iterative methods for matrix rank minimization

The linearly constrained matrix rank minimization problem arises widely in fields such as control, signal processing, and system identification. Linearly constrained nuclear norm minimization is a convex relaxation of this problem. Although it can be cast as a semidefinite programming problem, the nuclear norm minimization problem is expensive to solve when the …
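
Although the abstract is truncated, the fixed-point method it refers to is built around a matrix shrinkage operator: singular value soft-thresholding, the proximal mapping of the nuclear norm. Below is a minimal numpy sketch of such a fixed-point shrinkage loop on a matrix completion instance; the function names, step size delta, and weight mu are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def svt(Y, tau):
        # Singular value soft-thresholding: shrink each singular value by tau,
        # i.e. the proximal mapping of tau * (nuclear norm).
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def fixed_point_complete(M, mask, mu=0.01, delta=1.0, iters=500):
        # Fixed-point iteration for min mu*||X||_* + 0.5*||mask*(X - M)||_F^2:
        # a gradient step on the quadratic term followed by matrix shrinkage.
        X = np.zeros_like(M)
        for _ in range(iters):
            X = svt(X - delta * mask * (X - M), delta * mu)
        return X

    # Example: recover a rank-2 matrix from roughly half of its entries.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
    X = fixed_point_complete(M, rng.random(M.shape) < 0.5)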

Incremental-like Bundle Methods with Application to Energy Planning

An important field of application of non-smooth optimization is the decomposition of large-scale or complex problems by Lagrangian duality. In this setting, the dual problem consists of maximizing a concave non-smooth function defined as a sum of sub-functions. Evaluating each sub-function requires solving a specific optimization sub-problem, with its own computational complexity. …
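
To fix ideas about the structure being exploited (standard Lagrangian decomposition notation, not necessarily the paper's): the dual function splits into a sum

$$\theta(\lambda) = \sum_{i=1}^{m} \theta_i(\lambda), \qquad \theta_i(\lambda) = \min_{x_i \in X_i} \, f_i(x_i) + \lambda^\top g_i(x_i),$$

where evaluating $\theta_i$ means solving the $i$-th sub-problem, and the constraint value $g_i(x_i(\lambda))$ supplies a subgradient. Incremental-like bundle methods exploit the fact that a useful model of $\theta$ can be updated without solving all $m$ sub-problems at every iteration.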

Proximal-like contraction methods for monotone variational inequalities in a unified framework

Approximate proximal point algorithms (abbreviated as APPAs) are classical approaches for convex optimization problems and monotone variational inequalities. To solve the subproblems of these algorithms, projection methods take iterations of the form $u^{k+1} = P_{\Omega}[u^k-\alpha_k d^k]$. Interestingly, many of them can be paired so that there exists $\tilde{u}^k$ with $\tilde{u}^k = P_{\Omega}[u^k - \beta_k F(v^k)] = …
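
One classical member of this prediction-correction family is the extragradient method, which fits the displayed pattern exactly. Below is a minimal numpy sketch for an affine monotone variational inequality; the operator, feasible set, and step size rule are illustrative assumptions.

    import numpy as np

    def extragradient(F, proj, u0, beta, iters=1000):
        # Iterations of the form u^{k+1} = P_Omega[u^k - beta * d^k].
        u = u0
        for _ in range(iters):
            u_tilde = proj(u - beta * F(u))      # prediction step
            u = proj(u - beta * F(u_tilde))      # correction: d^k = F(u_tilde)
        return u

    # Monotone affine operator F(u) = M u + q on Omega = nonnegative orthant.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    M = A - A.T + np.eye(5)                      # symmetric part is positive definite
    q = rng.standard_normal(5)
    u = extragradient(lambda v: M @ v + q,
                      lambda v: np.maximum(v, 0.0),
                      np.zeros(5), beta=0.9 / np.linalg.norm(M, 2))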

A full-Newton step infeasible interior-point algorithm for linear programming based on a kernel function

This paper proposes a full-Newton step infeasible interior-point algorithm for linear programming, extending the work of Roos (SIAM J. Optim., 16(4):1110–1136, 2006). We introduce a kernel function into the algorithm. For $p\in[0,1)$, polynomial complexity can be proved, and the result coincides with the best known result for infeasible interior-point methods, …
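
For readers unfamiliar with the kernel-function framework: a univariate kernel $\psi$ generates a proximity measure $\Psi(v)=\sum_{i=1}^{n}\psi(v_i)$ to the central path, and the complexity analysis bounds the decrease of $\Psi$ per full-Newton step. The classical example (an illustration, not necessarily this paper's choice) is the logarithmic kernel

$$\psi(t) = \frac{t^2-1}{2} - \log t,$$

which vanishes, together with its derivative, at $t=1$, i.e. on the central path; the kernel introduced in this paper, parameterized by $p$, is a different member of this family.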

Proximal Methods for Nonlinear Programming: Double Regularization and Inexact Subproblems

This paper describes the first phase of a project attempting to construct an efficient general-purpose nonlinear optimizer using an augmented Lagrangian outer loop with a relative error criterion, and an inner loop employing a state-of-the-art conjugate gradient solver. The outer loop can also employ double regularized proximal kernels, a fairly recent theoretical development that …
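
A hedged sketch of this outer/inner structure, for an equality-constrained convex quadratic: an augmented Lagrangian (method of multipliers) outer loop with the inner minimization handed to a conjugate gradient solver. The problem data, penalty parameter, and iteration counts are illustrative; the paper's relative error criterion and double regularized kernels are not reproduced here.

    import numpy as np
    from scipy.sparse.linalg import cg

    def augmented_lagrangian_qp(Q, q, A, b, c=10.0, outer=50):
        # min 0.5 x'Qx + q'x  s.t.  Ax = b, by the method of multipliers.
        # Inner loop: minimizing L_c(x, lam) amounts to solving
        #   (Q + c A'A) x = -q - A'(lam - c b),  done here (inexactly) with CG.
        lam = np.zeros(A.shape[0])
        x = np.zeros(A.shape[1])
        H = Q + c * A.T @ A
        for _ in range(outer):
            x, _ = cg(H, -q - A.T @ (lam - c * b), x0=x)   # inexact inner solve
            lam = lam + c * (A @ x - b)                    # multiplier update
        return x, lam

    # Example: a random strongly convex QP with three equality constraints.
    rng = np.random.default_rng(0)
    R = rng.standard_normal((8, 8))
    x, lam = augmented_lagrangian_qp(R.T @ R + np.eye(8), rng.standard_normal(8),
                                     rng.standard_normal((3, 8)), rng.standard_normal(3))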

Implementing Algorithms for Signal and Image Reconstruction on Graphical Processing Units

Several highly effective algorithms that have been proposed recently for compressed sensing and image processing applications can be implemented efficiently on commodity graphical processing units (GPUs). The properties of algorithms and applications that make for efficient GPU implementation are discussed, and computational results for several algorithms are presented that show large speedups over CPU implementations. …
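
One generic illustration (not taken from the paper) of what makes such algorithms GPU-friendly: their inner loops are dominated by dense matrix-vector products and independent elementwise operations such as the shrinkage step below, both of which parallelize across thousands of GPU threads.

    import numpy as np

    def soft_threshold(x, tau):
        # Elementwise shrinkage, a core step of many compressed sensing solvers.
        # Every entry is independent, so the whole array maps to one data-parallel
        # kernel; a numpy-compatible GPU array library runs the same line on device.
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)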

Duality-Based Algorithms for Total-Variation-Regularized Image Restoration

Image restoration models based on total variation (TV) have become popular since their introduction by Rudin, Osher, and Fatemi (ROF) in 1992. The dual formulation of this model has a quadratic objective with separable constraints, making projections onto the feasible set easy to compute. This paper proposes the application of gradient projection (GP) algorithms to the …
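
A minimal numpy sketch of gradient projection on the dual ROF model, in the spirit the abstract describes; the discretization, scaling convention, and step size are illustrative assumptions. The dual variable is a pixelwise vector field constrained to unit balls, so the projection is a cheap pixelwise rescaling.

    import numpy as np

    def grad(u):
        # Forward differences with Neumann boundary conditions.
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        return gx, gy

    def div(px, py):
        # Discrete divergence, the negative adjoint of grad.
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
        dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
        return dx + dy

    def tv_denoise_gp(f, lam=0.1, iters=200):
        # Dual of min_u lam*TV(u) + 0.5*||u - f||^2:
        #   min_p 0.5*||f - lam*div(p)||^2  s.t.  ||p_ij||_2 <= 1 at every pixel.
        tau = 1.0 / (8.0 * lam)                   # conservative step size
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(iters):
            gx, gy = grad(f - lam * div(px, py))  # (1/lam) x gradient of the dual objective
            px, py = px - tau * gx, py - tau * gy
            nrm = np.maximum(1.0, np.hypot(px, py))
            px, py = px / nrm, py / nrm           # projection onto the unit balls
        return f - lam * div(px, py)              # primal recovery u = f - lam*div(p)

    # Usage: u = tv_denoise_gp(noisy_image), where noisy_image is a 2-D float array.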

Lipschitz behavior of the robust regularization

To minimize or upper-bound the value of a function “robustly”, we might instead minimize or upper-bound its “epsilon-robust regularization”, defined as the map from a point to the maximum value of the function over a ball of radius epsilon around that point. This regularization may be easy to compute: convex quadratics lead to semidefinite-representable regularizations, for example, and the spectral radius …
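
On a grid the definition can be evaluated directly: the epsilon-robust regularization is a moving maximum over an epsilon-ball. A small illustrative sketch (the example function, grid spacing, and radius are assumptions):

    import numpy as np
    from scipy.ndimage import maximum_filter

    h, eps = 0.01, 0.05
    x = np.arange(-2.0, 2.0, h)
    f = (x**2 - 1.0)**2                     # an arbitrary nonconvex example
    # max of f over a window of radius eps around each grid point:
    f_eps = maximum_filter(f, size=2 * int(eps / h) + 1)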

Chance-constrained optimization via randomization: feasibility and optimality

In this paper we study the link between a semi-infinite chance-constrained optimization problem and its randomized version, i.e., the problem obtained by sampling a finite number of its constraints. Extending previous results on the feasibility of randomized convex programs, we establish here the feasibility of the solution obtained after the elimination of a portion of …
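
A hedged sketch of the randomized version for a toy chance-constrained linear program; the distribution, sample size, and data are illustrative, and the constraint-elimination step studied in the paper is not shown.

    import numpy as np
    from scipy.optimize import linprog

    # Chance constraint: P( a(delta)' x <= 1 ) >= 1 - eps, with a(delta) random.
    # Randomized version: enforce the N sampled constraints a_i' x <= 1.
    rng = np.random.default_rng(0)
    N = 500
    A_ub = rng.uniform(0.0, 1.0, size=(N, 2))     # one sampled a(delta) per row
    res = linprog(c=[-1.0, -1.0],                 # maximize x1 + x2
                  A_ub=A_ub, b_ub=np.ones(N),
                  bounds=[(0, None), (0, None)])
    x_scenario = res.x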

On Verifiable Sufficient Conditions for Sparse Signal Recovery via L1 Minimization

We propose novel necessary and sufficient conditions for a sensing matrix to be “s-good”, that is, to allow exact L1-recovery of sparse signals with s nonzero entries when no measurement noise is present. Then we express the error bounds for imperfect L1-recovery (nonzero measurement noise, nearly s-sparse signal, near-optimal solution of the optimization problem yielding …
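
For orientation, the property that such conditions certify has a standard exact characterization via the null space of $A$ (this is the classical statement, not the paper's verifiable condition): $A$ is s-good if and only if

$$\|w_S\|_1 < \tfrac{1}{2}\,\|w\|_1 \quad \text{for every } w \in \mathrm{Ker}(A)\setminus\{0\} \text{ and every index set } S \text{ with } |S| \le s.$$

The difficulty is that this characterization is in general intractable to check directly for a given matrix, which is what motivates efficiently verifiable sufficient conditions.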