Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization

Interpolation-based trust-region methods are an important class of algorithms for Derivative-Free Optimization which rely on locally approximating an objective function by quadratic polynomial interpolation models, frequently built from fewer points than there are basis components. Often, in practical applications, the contribution of the problem variables to the objective function is such that many pairwise correlations … Read more
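
As a point of reference (the notation below is a generic sketch of the standard setup, not taken from the paper), a quadratic model $m(x) = c + g^\top x + \frac{1}{2} x^\top H x$ on $\mathbb{R}^n$ has $(n+1)(n+2)/2$ basis components, so interpolation conditions

\[
m(y^i) = f(y^i), \quad i = 1, \dots, p,
\]

with $p < (n+1)(n+2)/2$ sample points leave the model underdetermined, and some additional criterion is needed to single out one interpolant.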

On partially sparse recovery

In this paper we consider the problem of recovering a partially sparse solution of an underdetermined system of linear equations by minimizing the l1-norm of the part of the solution vector which is known to be sparse. Such a problem is closely related to the classical problem in Compressed Sensing where the l1-norm of the … Read more
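
In a generic formulation of this setting (the splitting and symbols here are illustrative, not the paper's notation), write the unknown as $x = (x_1, x_2)$ with only $x_2$ assumed sparse, so the recovery problem reads

\[
\min_{x_1,\, x_2} \ \|x_2\|_1 \quad \text{subject to} \quad B x_1 + C x_2 = b,
\]

where $[B \ C]$ is the underdetermined system matrix and the $\ell_1$ objective acts only on the part of the solution known to be sparse.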

Penalty Decomposition Methods for l0-Norm Minimization

In this paper we consider general l0-norm minimization problems, that is, problems in which the l0-norm appears in either the objective function or a constraint. In particular, we first reformulate the l0-norm constrained problem as an equivalent rank minimization problem and then apply the penalty decomposition (PD) method proposed in [33] to solve the latter problem. By utilizing … Read more
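
As a hedged sketch of the problem class (illustrative notation), the l0-norm constrained variant is

\[
\min_x \ f(x) \quad \text{subject to} \quad \|x\|_0 \le s,
\]

where $\|x\|_0$ counts the nonzero entries of $x$. Penalty decomposition schemes of this kind typically introduce a copy $y$ of $x$ and alternately minimize a penalized objective such as $f(x) + \frac{\rho}{2}\|x - y\|_2^2$ over $x$ and over $\{y : \|y\|_0 \le s\}$, the latter step reducing to hard thresholding (keeping the $s$ largest-magnitude entries).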

A Linearly Convergent Algorithm for Solving a Class of Nonconvex/Affine Feasibility Problems

We introduce a class of nonconvex/affine feasibility problems, called (NCF), that consists of finding a point in the intersection of affine constraints with a nonconvex closed set. This class captures some interesting fundamental and NP-hard problems arising in various application areas such as sparse recovery of signals and affine rank minimization that we briefly … Read more
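
Concretely (in illustrative notation), (NCF) asks for a point

\[
x \in C \cap \{x : Ax = b\},
\]

where $C$ is a nonconvex closed set. Taking $C = \{x : \|x\|_0 \le s\}$ gives sparse signal recovery, while taking $C$ to be a set of bounded-rank matrices, with a linear map in place of $A$, gives affine rank minimization.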

A Unified Approach for Minimizing Composite Norms

We propose a first-order augmented Lagrangian algorithm (FALC) to solve the composite norm minimization problem $\min_X \|\sigma(F(X)-G)\|_\alpha + \|C(X)-d\|_\beta$ subject to $A(X)-b \in Q$, where $\sigma(X)$ denotes the vector of singular values of $X$, the matrix norm $\|\sigma(X)\|_\alpha$ denotes either the Frobenius, the nuclear, or the $L_2$-operator norm of $X$, the vector norm $\|\cdot\|_\beta$ … Read more
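
For orientation, one illustrative special case of this template (a reading of the truncated abstract, not a claim from the paper): with $F$ the identity, $G = 0$, the nuclear norm chosen for $\|\cdot\|_\alpha$, and the $\|C(X)-d\|_\beta$ term dropped, the problem reduces to nuclear-norm minimization

\[
\min_X \ \|X\|_* \quad \text{subject to} \quad A(X) - b \in Q,
\]

a standard convex surrogate for affine rank minimization.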

L1 Minimization via Randomized First Order Algorithms

In this paper we propose randomized first-order algorithms for solving bilinear saddle point problems. Our developments are motivated by the need for sublinear-time algorithms to solve large-scale parametric bilinear saddle point problems where cheap online assessment of solution quality is crucial. We present theoretical efficiency estimates of our algorithms and discuss a number … Read more
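
A generic bilinear saddle point problem (illustrative notation) has the form

\[
\min_{x \in X} \max_{y \in Y} \ \langle y, \mathcal{A}x - b \rangle,
\]

with $X$ and $Y$ convex and compact. Since $\|u\|_\infty = \max_{\|y\|_1 \le 1} \langle y, u \rangle$, minimizing the residual $\|\mathcal{A}x - b\|_\infty$ over an $\ell_1$-ball in $x$ is one standard way an L1 minimization model becomes a problem of this form.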

A First-Order Augmented Lagrangian Method for Compressed Sensing

We propose a First-order Augmented Lagrangian algorithm (FAL) for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of L1-regularized least squares sub-problems. These sub-problems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to “shrinkage” or constrained “shrinkage”. We show that FAL … Read more
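
Since each FAL update is reported to reduce to “shrinkage”, a minimal sketch may help: a generic NumPy illustration of the soft-thresholding operator and a plain proximal gradient loop for an L1-regularized least-squares sub-problem. The function and parameter names below are made up for illustration; this is not the FAL implementation.

import numpy as np

def shrink(x, t):
    # Soft-thresholding ("shrinkage"): the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_ls_prox_grad(A, b, lam, num_iters=500):
    # Proximal gradient method for min_x 0.5*||A x - b||_2^2 + lam*||x||_1.
    # Step size 1/L, where L = ||A||_2^2 is a Lipschitz constant of the gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)           # gradient of the least-squares term
        x = shrink(x - grad / L, lam / L)  # gradient step followed by shrinkage
    return x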

Sparse optimization with least-squares constraints

The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a … Read more
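
The prototypical instance with a least-squares constraint is the basis pursuit denoising problem, stated here in standard notation for orientation rather than taken from the truncated abstract:

\[
\min_x \ \|x\|_1 \quad \text{subject to} \quad \|Ax - b\|_2 \le \sigma,
\]

where $\sigma \ge 0$ bounds the allowed data misfit; formulations exploiting other types of sparsity replace $\|x\|_1$ with a different sparsity-inducing norm.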

On Low Rank Matrix Approximations with Applications to Synthesis Problem in Compressed Sensing

We consider the synthesis problem of Compressed Sensing: given $s$ and an $M\times n$ matrix $A$, extract from $A$ an $m\times n$ submatrix $A_m$, with $m$ as small as possible, which is $s$-good, that is, every signal $x$ with at most $s$ nonzero entries can be recovered from observation $A_m x$ by $\ell_1$ minimization: $x … Read more
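
One standard way to complete the definition the abstract cuts off (stated here for orientation): $A_m$ is $s$-good when every $x$ with $\|x\|_0 \le s$ satisfies

\[
x = \operatorname*{arg\,min}_z \ \{\, \|z\|_1 : A_m z = A_m x \,\},
\]

i.e., $x$ is the unique minimizer of the $\ell_1$-norm among all signals consistent with the observation $A_m x$.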