Group sparsity via linear-time projection

We present an efficient spectral projected-gradient algorithm for optimization subject to a group one-norm constraint. Our approach is based on a novel linear-time algorithm for Euclidean projection onto the one-norm and group one-norm balls. Numerical experiments on large data sets suggest that the proposed method is substantially more efficient and scalable than existing methods. … Read more
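
The paper's projection routine runs in linear time; for reference, here is a minimal sketch of the well-known O(n log n) sort-based projection onto the plain one-norm ball. This is a simpler standard variant, not the authors' algorithm, and the function name and NumPy dependency are illustrative:

```python
import numpy as np

def project_l1_ball(v, tau=1.0):
    """Euclidean projection of v onto {x : ||x||_1 <= tau}.

    Sort-based O(n log n) variant; the paper's algorithm computes
    the same projection in (expected) linear time.
    """
    if np.sum(np.abs(v)) <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    # largest index k with u[k] * (k+1) > css[k] - tau
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)    # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

For the group one-norm ball, the analogous step thresholds the vector of group two-norms rather than individual magnitudes.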

Branching proofs of infeasibility in low density subset sum problems

We prove that the subset sum problem has a polynomial time computable certificate of infeasibility for all weight vectors $a$ with density at most $1/(2n)$ and for almost all integer right hand sides. The certificate is obtained by branching on a hyperplane, i.e. by a methodology dual to the one explored by Lagarias and Odlyzko; Frieze; Furst … Read more
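
For reference, the objects in the abstract written out (the density measure follows Lagarias and Odlyzko):

\[
a^\top x = \beta, \quad x \in \{0,1\}^n, \qquad d(a) = \frac{n}{\log_2 \max_i a_i},
\]

so a density of at most $1/(2n)$ corresponds to very large weights, $\max_i a_i \ge 2^{2n^2}$.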

On Theory of Compressive Sensing via L1-Minimization

Compressive (or compressed) sensing (CS) is an emerging methodology in computational signal processing that has recently attracted intense research activity. At present, the basic CS theory includes recoverability and stability: the former quantifies the central fact that a sparse signal of length n can be exactly recovered from far fewer than n measurements via L1-minimization … Read more
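
A minimal worked instance of the L1-minimization (basis pursuit) recovery the abstract refers to, posed as a linear program via the standard split $x = u - v$ with $u, v \ge 0$; the problem sizes and the SciPy solver are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3                       # signal length, measurements, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                            # m < n noiseless measurements

# min ||x||_1  s.t.  Ax = b,  via  x = u - v,  u, v >= 0:
# min 1^T u + 1^T v  s.t.  [A, -A][u; v] = b
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```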

An Infeasible Interior-Point Algorithm with Full-Newton Step for Linear Optimization

In this paper we present an infeasible interior-point algorithm for solving linear optimization problems. This algorithm is obtained by modifying the search direction in the algorithm of [C. Roos, A full-Newton step ${O}(n)$ infeasible interior-point algorithm for linear optimization, SIAM Journal on Optimization, 16(4), 2006, pp. 1110-1136]. The analysis of our algorithm is much simpler than that of Roos's algorithm … Read more

Full Nesterov-Todd Step Primal-Dual Interior-Point Methods for Second-Order Cone Optimization

After a brief introduction to Jordan algebras, we present a primal-dual interior-point algorithm for second-order conic optimization that uses full Nesterov-Todd steps; no line searches are required. The number of iterations of the algorithm is $O(\sqrt{N}\log({N}/{\varepsilon}))$, where $N$ stands for the number of second-order cones in the problem formulation and $\varepsilon$ is the desired accuracy. … Read more
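
The stated bound follows the usual short-step accounting: if each full NT step reduces the barrier parameter by a fixed factor $1-\theta$ with $\theta = \Theta(1/\sqrt{N})$, and the duality gap is proportional to $N\mu$ (a sketch of the standard argument, not the paper's exact constants), then

\[
(1-\theta)^k N\mu_0 \le \varepsilon
\quad\Longleftarrow\quad
k \;\ge\; \frac{1}{\theta}\,\log\frac{N\mu_0}{\varepsilon}
\;=\; O\!\left(\sqrt{N}\,\log\frac{N}{\varepsilon}\right)
\]

for $\mu_0 = O(1)$.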

Closed-form solutions to static-arbitrage upper bounds on basket options

We provide a closed-form solution to the problem of computing the sharpest static-arbitrage upper bound on the price of a European basket option, given the prices of vanilla call options on the underlying securities. Unlike previous approaches to this problem, our solution technique is entirely based on linear programming. This also allows us to obtain … Read more

Iteration-complexity of first-order penalty methods

This paper considers a special but broad class of convex programming (CP) problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation. We study two first-order penalty methods for solving the above class of problems, namely: the quadratic penalty method and … Read more
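
A generic quadratic-penalty loop for the affine-equality special case, just to fix ideas; the inner L-BFGS solver, the penalty schedule, and the helper name are illustrative, and the sketch does not reproduce the paper's first-order methods or complexity analysis:

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_penalty(f, grad_f, A, b, x0, rho0=1.0, growth=10.0,
                      tol=1e-6, max_outer=8):
    """Minimize f(x) s.t. Ax = b by the quadratic penalty method:
    solve  min_x f(x) + (rho/2)||Ax - b||^2  for increasing rho."""
    x, rho = x0, rho0
    for _ in range(max_outer):
        pen = lambda x: f(x) + 0.5 * rho * np.sum((A @ x - b) ** 2)
        gpen = lambda x: grad_f(x) + rho * A.T @ (A @ x - b)
        x = minimize(pen, x, jac=gpen, method="L-BFGS-B").x
        if np.linalg.norm(A @ x - b) <= tol:
            break
        rho *= growth
    return x

# toy instance: min ||x||^2 s.t. sum(x) = 1  (solution: x_i = 1/n)
n = 5
A, b = np.ones((1, n)), np.array([1.0])
x = quadratic_penalty(lambda x: x @ x, lambda x: 2 * x, A, b, np.zeros(n))
print(x)  # close to 0.2 * ones(5)
```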

The Submodular Knapsack Polytope

The submodular knapsack set is the discrete lower level set of a submodular function. The modular case reduces to the classical linear 0-1 knapsack set. One motivation for studying the submodular knapsack polytope is to address 0-1 programming problems with uncertain coefficients. Under various assumptions, a probabilistic constraint on 0-1 variables can be modeled as … Read more
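
One reformulation of this kind (standard in this line of work; the distributional assumptions here are illustrative): if the coefficients $a_i$ are independent with means $\mu_i$ and variances $\sigma_i^2$, a normal-style chance constraint on binary $x$ becomes

\[
\Pr(a^\top x \le b) \ge 1-\epsilon
\quad\leadsto\quad
\mu^\top x + \Omega\sqrt{\sum_i \sigma_i^2 x_i} \;\le\; b,
\qquad \Omega = \Phi^{-1}(1-\epsilon),
\]

and the left-hand side, a modular term plus a concave function of a modular term, is submodular on $\{0,1\}^n$, so the feasible set is a submodular knapsack set.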

Nonlinear optimization for matroid intersection and extensions

We address optimization of nonlinear functions of the form $f(Wx)$, where $f:\mathbb{R}^d\rightarrow \mathbb{R}$ is a nonlinear function, $W$ is a $d\times n$ matrix, and feasible $x$ are in some large finite set $\mathcal{F}$ of integer points in $\mathbb{R}^n$. Generally, such problems are intractable, so we obtain positive algorithmic results by looking at broad natural classes … Read more
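
To fix the problem shape, a brute-force baseline for tiny instances (here $\mathcal{F}$ is the set of bases of a toy uniform matroid; nothing of the paper's polynomial-time methods is reproduced, and all names are illustrative):

```python
import itertools
import numpy as np

def brute_force_max(f, W, feasible):
    """max f(Wx) over x in an explicit finite feasible set."""
    return max(feasible, key=lambda x: f(W @ np.asarray(x, dtype=float)))

n, r = 6, 3                                    # ground set size, rank
# bases of the uniform matroid U(r, n): all r-subsets as 0/1 vectors
bases = [tuple(1 if i in S else 0 for i in range(n))
         for S in itertools.combinations(range(n), r)]
W = np.random.default_rng(1).integers(-2, 3, size=(2, n)).astype(float)
f = lambda y: -np.linalg.norm(y - np.array([1.0, 1.0]))  # nonlinear objective
print(brute_force_max(f, W, bases))
```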

On fast integration to steady state and earlier times

The integration to steady state of many initial value ODEs and PDEs using the forward Euler method can alternatively be considered as gradient descent for an associated minimization problem. Greedy algorithms such as steepest descent for determining the step size are as slow to reach steady state as is forward Euler integration with the best … Read more
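
The correspondence the abstract builds on, in a minimal sketch: integrating the gradient flow $\dot{x} = -\nabla\phi(x)$ by forward Euler with step size $h$ is exactly gradient descent with step $h$ on $\phi$ (the quadratic $\phi$ below is an illustrative choice):

```python
import numpy as np

# gradient flow dx/dt = -grad(phi), with phi(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

h, x = 0.1, np.zeros(2)
for _ in range(200):
    x = x - h * grad(x)          # one forward Euler step == one gradient step
print(x, np.linalg.solve(A, b))  # both approach the steady state A^{-1} b
```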