Efficient Subgradient Methods for General Convex Optimization

A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly feasible point is known. A feasible sequence of iterates is generated that converges to within user-specified error of optimality. Feasibility is maintained with a line search at each iteration, avoiding the need for orthogonal projections onto the feasible region …
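
The mechanism can be illustrated with a short sketch. The following is a minimal reading of the idea, not the paper's exact method: after each subgradient step, a bisection line search along the segment back to the known strictly feasible point e restores feasibility, replacing an orthogonal projection. The callables f, f_sub, and g are illustrative assumptions standing in for the objective, a subgradient oracle, and a convex constraint function with g(e) < 0.

```python
import numpy as np

def line_search_feasible(x, e, g, tol=1e-10):
    """Closest point to x on the segment [e, x] with g <= 0 (bisection)."""
    if g(x) <= 0:
        return x
    lo, hi = 0.0, 1.0                    # t=0 gives e (feasible), t=1 gives x
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        if g(e + t * (x - e)) <= 0:
            lo = t
        else:
            hi = t
    return e + lo * (x - e)

def feasible_subgradient(f, f_sub, g, e, x0, steps=1000):
    x = line_search_feasible(x0, e, g)
    best = x
    for k in range(1, steps + 1):
        s = f_sub(x)
        x = x - (1.0 / np.sqrt(k)) * s / (np.linalg.norm(s) + 1e-16)
        x = line_search_feasible(x, e, g)    # stay feasible, no projection
        if f(x) < f(best):                   # subgradient methods are not monotone
            best = x
    return best
```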

Algorithms for stochastic optimization with expectation constraints

This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with an expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with the expectation constraint on decision variables. We show that …
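
A hedged sketch of the cooperative-SA idea follows; the step sizes, threshold, and averaging rule are simplifications for illustration, not the paper's exact choices. The oracles obj_sgrad, con_val, con_sgrad, and project are assumed callables: a stochastic subgradient of the objective, a stochastic estimate of the constraint value, a stochastic subgradient of the constraint, and projection onto the closed convex set.

```python
import numpy as np

def csa(obj_sgrad, con_val, con_sgrad, project, x0, steps=1000, eta=1e-2):
    """Minimize E[F(x)] over X subject to E[G(x)] <= 0 (illustrative only)."""
    x = project(np.asarray(x0, dtype=float))
    good = []                          # iterates where the objective was used
    for k in range(1, steps + 1):
        gamma = 1.0 / np.sqrt(k)       # step size
        if con_val(x) <= eta:          # constraint estimate acceptable:
            good.append(x.copy())      # take an objective step
            d = obj_sgrad(x)
        else:                          # otherwise reduce the violation
            d = con_sgrad(x)
        x = project(x - gamma * d)
    return np.mean(good, axis=0) if good else x
```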

An optimal subgradient algorithm with subspace search for costly convex optimization problems

This paper presents an acceleration of the optimal subgradient algorithm OSGA \cite{NeuO} for solving convex optimization problems where the objective function involves costly affine and cheap nonlinear terms. We combine OSGA with a multidimensional subspace search technique, which leads to a low-dimensional problem that can be solved efficiently. Numerical results concerning some applications are reported. A …
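
The benefit of the subspace search is easiest to see when the costly affine part is a linear map A: for f(x) = phi(A x) with phi cheap, one batch of products A @ D over a few directions D makes the whole low-dimensional search free of further applications of A. The names below (phi, A, D) and the use of a generic inner solver are illustrative assumptions, not OSGA's actual subspace step.

```python
import numpy as np
from scipy.optimize import minimize

def subspace_step(phi, A, x, Ax, D):
    """Minimize f over the affine subspace x + range(D), reusing Ax = A @ x."""
    AD = A @ D                            # the only costly products, done once
    res = minimize(lambda c: phi(Ax + AD @ c), np.zeros(D.shape[1]))
    c = res.x
    return x + D @ c, Ax + AD @ c         # new iterate and its image under A
```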

A Framework for Applying Subgradient Methods to Conic Optimization Problems (version 2)

A framework is presented whereby a general convex conic optimization problem is transformed into an equivalent convex optimization problem whose only constraints are linear equations and whose objective function is Lipschitz continuous. Virtually any subgradient method can be applied to solve the equivalent problem. Two methods are analyzed. (In version 2, the development of algorithms …
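
This setting is convenient because, once the only constraints are linear equations A x = b, the orthogonal projection onto the feasible set is affine and reusable after one factorization, so essentially any subgradient method runs at near-unconstrained cost. The sketch below illustrates that observation (it is not the paper's transformation itself), assuming A has full row rank.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def affine_projector(A, b):
    """Projection onto {x : A x = b}; the Gram matrix is factored once."""
    chol = cho_factor(A @ A.T)
    def project(x):
        return x - A.T @ cho_solve(chol, A @ x - b)
    return project

def projected_subgradient(f_sub, project, x0, steps=500):
    x = project(x0)
    for k in range(1, steps + 1):
        g = f_sub(x)
        x = project(x - (1.0 / np.sqrt(k)) * g)
    return x
```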

An optimal subgradient algorithm for large-scale bound-constrained convex optimization

This paper shows that the OSGA algorithm, which uses first-order information to solve convex optimization problems with optimal complexity, can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. …
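
For context, bound constraints l <= x <= u are the simplest case for first-order methods because projection onto the box is a componentwise clip, as in the sketch below. Note that the paper goes further and solves OSGA's bound-constrained rational subproblem, which this clip does not capture.

```python
import numpy as np

def project_box(x, l, u):
    """Closed-form projection onto the box {x : l <= x <= u}."""
    return np.minimum(np.maximum(x, l), u)

def projected_step(x, g, t, l, u):
    """One first-order step followed by the box projection."""
    return project_box(x - t * g, l, u)
```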

Optimal subgradient algorithms with application to large-scale linear inverse problems

This study addresses algorithms for solving structured unconstrained convex optimization problems using first-order information, where the underlying function involves high-dimensional data. The primary aim is to develop an implementable algorithmic framework for solving problems with multi-term composite objective functions involving linear mappings, using the optimal subgradient algorithm OSGA proposed by {\sc Neumaier} in \cite{NeuO}. …
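
A small sketch of the first-order oracle such a framework needs, under the assumed form f(x) = sum_i f_i(A_i x) with linear mappings A_i: values and subgradients combine by the chain rule, g = sum_i A_i^T g_i, after which any subgradient scheme (OSGA included) can be driven by this single oracle. The term-list layout is an assumption for illustration.

```python
import numpy as np

def composite_oracle(terms, x):
    """terms: iterable of (A_i, f_i, f_i_sub). Returns f(x) and a subgradient."""
    val, grad = 0.0, np.zeros_like(x, dtype=float)
    for A, f, f_sub in terms:
        y = A @ x                      # the only touch of the (large) data A_i
        val += f(y)
        grad += A.T @ f_sub(y)         # chain rule: A_i^T g_i
    return val, grad
```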

Primal-dual subgradient method for Huge-Scale Linear Conic Problems

In this paper we develop a {\em primal-dual} subgradient method for solving huge-scale linear conic optimization problems. Our main assumption is that the primal cone is formed as a direct product of many small-dimensional convex cones, and that the matrix $A$ of the corresponding linear operator is {\em uniformly sparse}. In this case, our method can …
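
The sparsity assumption pays off because one sparse change to x lets the product y = A x be refreshed by touching only the nonzeros of the affected columns. A minimal sketch of that update (illustrating the mechanism, not the paper's full primal-dual method), with A assumed to be a SciPy sparse matrix in CSC format and y a NumPy vector:

```python
def update_product(A_csc, y, changes):
    """y holds A @ x; changes maps column j to the increment applied to x[j].
    Cost is proportional to the nonzeros of the touched columns only."""
    for j, dx in changes.items():
        lo, hi = A_csc.indptr[j], A_csc.indptr[j + 1]
        y[A_csc.indices[lo:hi]] += dx * A_csc.data[lo:hi]
    return y
```

For a uniformly sparse A with millions of columns, each such refresh costs a handful of memory touches instead of a full matrix-vector product.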

Subgradient methods for huge-scale optimization problems

We consider a new class of huge-scale problems: problems with {\em sparse subgradients}. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations whose total cost depends {\em logarithmically} on the dimension. This technique is …
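
For f(x) = max_i <a_i, x>, the logarithmic cost comes from keeping the m inner products in a binary max-tree: a sparse step changes only a few of them, and each change propagates to the root in O(log m). Below is a standard segment-tree sketch of that bookkeeping (the idea, not necessarily the paper's exact data structure).

```python
class MaxTree:
    """Maintains the max of m values under point updates in O(log m) each."""

    def __init__(self, values):
        self.m = m = len(values)
        self.t = [float("-inf")] * (2 * m)
        self.t[m:] = list(values)              # leaves: the values <a_i, x>
        for i in range(m - 1, 0, -1):
            self.t[i] = max(self.t[2 * i], self.t[2 * i + 1])

    def update(self, i, value):
        i += self.m
        self.t[i] = value
        while i > 1:                           # propagate to the root
            i //= 2
            self.t[i] = max(self.t[2 * i], self.t[2 * i + 1])

    def max(self):
        return self.t[1]
```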

Solving Basis Pursuit: Heuristic Optimality Check and Solver Comparison

The problem of finding a minimum $\ell_1$-norm solution to an underdetermined linear system is an important problem in compressed sensing, where it is also known as basis pursuit. We propose a heuristic optimality check as a general tool for $\ell_1$-minimization, which often allows for early termination by “guessing” a primal-dual optimal pair based on an …
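
A sketch of how such a check can work, with details simplified relative to the paper: guess the support from the current iterate, fit a candidate primal-dual pair by least squares, and test the basis pursuit optimality conditions (primal feasibility, dual feasibility $\|A^T y\|_\infty \le 1$, and zero duality gap $b^T y = \|x\|_1$); if they hold, the solver can stop early. The threshold and tolerances are illustrative assumptions.

```python
import numpy as np

def heuristic_check(A, b, x, thresh=1e-6, tol=1e-8):
    S = np.abs(x) > thresh                                  # guessed support
    if not S.any():
        return False, None
    AS = A[:, S]
    xS = np.linalg.lstsq(AS, b, rcond=None)[0]              # candidate primal on S
    y = np.linalg.lstsq(AS.T, np.sign(xS), rcond=None)[0]   # candidate dual
    xhat = np.zeros_like(x)
    xhat[S] = xS
    ok = (np.linalg.norm(A @ xhat - b) <= tol               # primal feasibility
          and np.max(np.abs(A.T @ y)) <= 1.0 + tol          # dual feasibility
          and abs(b @ y - np.abs(xhat).sum()) <= tol)       # zero duality gap
    return ok, xhat
```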

Hedge Algorithm and Subgradient Methods

We show that the Hedge Algorithm, a method that is widely used in Machine Learning, can be interpreted as a particular subgradient algorithm for minimizing a well-chosen convex function, namely as a Mirror Descent scheme. Using this reformulation, we establish three modifications and extensions of the Hedge Algorithm that are better or at least as …
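
The correspondence is compact enough to state in code: Hedge's multiplicative-weights update is entropic mirror descent on the probability simplex, where renormalization plays the role of the Bregman projection. A minimal sketch, assuming losses in [0, 1] and a fixed learning rate:

```python
import numpy as np

def hedge(losses, eta=0.5):
    """losses: array of shape (T, n); returns the final weight vector."""
    n = losses.shape[1]
    w = np.full(n, 1.0 / n)              # uniform start on the simplex
    for loss in losses:
        w = w * np.exp(-eta * loss)      # mirror-descent step (neg-entropy map)
        w /= w.sum()                     # Bregman projection = renormalization
    return w
```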