New Penalized Stochastic Gradient Methods for Linearly Constrained Strongly Convex Optimization

For minimizing a strongly convex objective function subject to linear inequality constraints, we consider a penalty approach that allows one to utilize stochastic methods for problems with a large number of constraints and/or objective function terms. We provide upper bounds on the distance between the solutions to the original constrained problem and the penalty reformulations, …
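Since the abstract is truncated, the following is only a generic sketch of the idea it describes: a quadratic penalty turns the inequality constraints into a smooth term whose gradient can be sampled one constraint at a time, so a stochastic step never touches all m rows. The toy objective, the penalty parameter rho, and the stepsize schedule below are illustrative assumptions, not the paper's choices.

```python
# Sketch only: quadratic penalty + stochastic gradient for
#   min_x 0.5*||x - b||^2   s.t.  A x <= c,
# replaced by  min_x 0.5*||x - b||^2 + (rho/2) * mean_i max(0, a_i.x - c_i)^2.
# Sampling one constraint row gives an unbiased gradient estimate.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 1000                          # variables, many constraints
A = rng.standard_normal((m, n))
c = rng.random(m) + 1.0
b = rng.standard_normal(n)

rho = 10.0                               # penalty parameter (hypothetical choice)
x = np.zeros(n)
for k in range(1, 20001):
    i = rng.integers(m)                  # sample one constraint uniformly
    viol = max(A[i] @ x - c[i], 0.0)     # hinge violation of row i
    g = (x - b) + rho * viol * A[i]      # unbiased estimate of the penalized gradient
    x -= g / (k + 10.0)                  # diminishing stepsize (strong convexity)
```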

An inexact ADMM with proximal-indefinite term and larger stepsize

In this paper, an inexact Alternating Direction Method of Multipliers (ADMM) is proposed for solving two-block separable convex optimization problems subject to linear equality constraints. The first resulting subproblem is solved inexactly under a relative error criterion, while the other subproblem, called the regularization problem, is solved inexactly by introducing an indefinite proximal term. Meanwhile, the …
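For context, here is a minimal sketch of the classical exact two-block ADMM on a lasso-type splitting; the paper's contributions (relative-error inexactness, the indefinite proximal term, the larger stepsize) are deliberately not reproduced, and the test problem is an assumption.

```python
# Classical two-block ADMM on  min_x 0.5*||D x - d||^2 + lam*||z||_1
# subject to  x - z = 0  (a standard splitting, for orientation only).
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 30
D = rng.standard_normal((m, n))
d = rng.standard_normal(m)
lam, beta = 0.1, 1.0                     # regularization and penalty parameters

x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
DtD, Dtd = D.T @ D, D.T @ d
for _ in range(200):
    # x-subproblem: strongly convex quadratic, solved exactly here
    x = np.linalg.solve(DtD + beta * np.eye(n), Dtd + beta * (z - u))
    # z-subproblem: prox of lam*||.||_1 (soft-thresholding)
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / beta, 0.0)
    # multiplier update for the constraint x - z = 0
    u = u + x - z
```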

Affine invariant convergence rates of the conditional gradient method

We show that the conditional gradient method for the convex composite problem \[\min_x\{f(x) + \Psi(x)\}\] generates primal and dual iterates with a duality gap converging to zero provided a suitable growth property holds and the algorithm makes a judicious choice of stepsizes. The rate of convergence of the duality gap to zero ranges from sublinear …
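As a concrete instance, the sketch below runs the classical conditional gradient (Frank-Wolfe) method with its standard duality-gap certificate on a quadratic over the l1-ball, i.e. with \(\Psi\) taken as the indicator of the ball; the test problem and the open-loop stepsize are illustrative assumptions.

```python
# Frank-Wolfe on  min_x f(x)  over the unit l1-ball, with the standard
# duality gap  g(x) = <grad f(x), x - s>  as stopping certificate.
import numpy as np

rng = np.random.default_rng(2)
n = 40
Q = rng.standard_normal((n, n)); Q = Q.T @ Q + np.eye(n)
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

x = np.zeros(n)
for k in range(500):
    g = grad(x)
    i = np.argmax(np.abs(g))
    s = np.zeros(n); s[i] = -np.sign(g[i])   # linear minimizer over the l1-ball
    gap = g @ (x - s)                        # duality gap; upper-bounds f(x) - f*
    if gap < 1e-6:
        break
    x += 2.0 / (k + 2) * (s - x)             # classical open-loop stepsize
```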

Unmatched Preconditioning of the Proximal Gradient Algorithm

This work addresses the resolution of penalized least-squares problems using the proximal gradient algorithm (PGA). It is known that PGA can be accelerated by preconditioning strategies. However, typical effective choices of preconditioners may correspond to intricate matrices that are not easily inverted, leading to increased complexity in the computation of the proximity step. …
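A minimal sketch of what a simple preconditioner buys: with a diagonal metric, the weighted prox of the l1 norm remains a cheap per-coordinate soft-threshold. The "unmatched" setting the paper studies (prox metric differing from the gradient preconditioner) is not reproduced here, and the Gershgorin-type diagonal majorizer is an illustrative choice.

```python
# Diagonal-preconditioned proximal gradient for
#   min_x 0.5*||D x - d||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 40
D = rng.standard_normal((m, n))
d = rng.standard_normal(m)
lam = 0.05

H = D.T @ D
P = 1.0 / np.sum(np.abs(H), axis=1)      # diag(1/P) majorizes H (Gershgorin bound)
x = np.zeros(n)
for _ in range(300):
    g = D.T @ (D @ x - d)                # gradient of the least-squares term
    v = x - P * g                        # preconditioned forward step
    x = np.sign(v) * np.maximum(np.abs(v) - lam * P, 0.0)  # prox in the weighted metric
```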

A nested primal–dual FISTA-like scheme for composite convex optimization problems

We propose a nested primal–dual algorithm with extrapolation on the primal variable suited for minimizing the sum of two convex functions, one of which is continuously differentiable. The proposed algorithm can be interpreted as an inexact inertial forward–backward algorithm equipped with a prefixed number of inner primal–dual iterations for the proximal evaluation and a “warm–start” …
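A minimal sketch of the nested structure under stated assumptions: an inertial (FISTA-like) outer loop whose backward step, the prox of \(\lambda\|A\cdot\|_1\), is approximated by a prefixed number of warm-started inner dual projected-gradient iterations. The inner solver stands in for the paper's primal–dual loop, and all parameters are illustrative.

```python
# Nested FISTA-like scheme for  min_x 0.5*||D x - d||^2 + lam*||A x||_1.
import numpy as np

rng = np.random.default_rng(4)
m, n, p = 40, 30, 25
D = rng.standard_normal((m, n)); d = rng.standard_normal(m)
A = rng.standard_normal((p, n)); lam = 0.1

L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth gradient
tA = 1.0 / np.linalg.norm(A, 2) ** 2     # inner stepsize
x = x_old = np.zeros(n)
y = np.zeros(p)                          # warm-started inner dual variable
t = 1.0
for _ in range(200):
    t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
    w = x + (t - 1) / t_new * (x - x_old)    # extrapolation on the primal variable
    v = w - (D.T @ (D @ w - d)) / L          # forward (gradient) step
    for _ in range(10):                      # prefixed number of inner iterations
        y = np.clip(y + tA * (A @ (v - A.T @ y)), -lam, lam)
    x_old, x, t = x, v - A.T @ y, t_new      # inexact backward (prox) step
```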

Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization

For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging. Even when the objective is smooth at every iterate, the corresponding local models are unstable and the number of cutting planes invoked by traditional remedies is …
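For reference, the linear-rate guarantee the first sentence refers to is the standard one: for an \(L\)-smooth, \(\mu\)-strongly convex objective with stepsize \(1/L\), gradient descent contracts linearly in the number of gradient evaluations.

```latex
\[
  x_{k+1} = x_k - \tfrac{1}{L}\,\nabla f(x_k)
  \quad\Longrightarrow\quad
  \|x_k - x^\ast\|^2 \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k}\,\|x_0 - x^\ast\|^2 .
\]
```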

Nonlinear conjugate gradient for smooth convex functions

The method of nonlinear conjugate gradients (NCG) is widely used in practice for unconstrained optimization, but it satisfies only weak complexity bounds when applied to smooth convex functions. In contrast, Nesterov’s accelerated gradient (AG) method is optimal up to constant factors for this class. However, when specialized to quadratic functions, conjugate gradient is optimal …
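For reference, a minimal sketch of an NCG iteration (Polak-Ribière+ with Armijo backtracking and a steepest-descent safeguard); on a quadratic with exact line search this reduces to linear CG, which is the optimality the abstract alludes to. The test problem and all parameters are illustrative assumptions.

```python
# Nonlinear conjugate gradient (PR+ variant) on a smooth convex function.
import numpy as np

rng = np.random.default_rng(5)
n = 30
Q = rng.standard_normal((n, n)); Q = Q.T @ Q + np.eye(n)
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

x = np.zeros(n)
g = grad(x); p = -g
for _ in range(200):
    if np.linalg.norm(g) < 1e-8:
        break
    if g @ p >= 0:                       # safeguard: restart with steepest descent
        p = -g
    a = 1.0                              # Armijo backtracking line search
    while f(x + a * p) > f(x) + 1e-4 * a * (g @ p):
        a *= 0.5
    x_new = x + a * p
    g_new = grad(x_new)
    beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # Polak-Ribiere+ formula
    p = -g_new + beta * p
    x, g = x_new, g_new
```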

Duality aspects in convex conic programming

In this paper we study strong duality aspects in convex conic programming over general convex cones. It is known that duality in convex optimization is linked with specific theorems of alternatives. We formulate and prove strong alternatives to the existence of a relative interior point in the primal (dual) feasible set. We analyze the …
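For orientation, the generic conic primal–dual pair behind such alternatives, together with the classical interior-point (generalized Slater) sufficient condition, can be stated as follows; this is a standard template, and the paper's relative-interior refinements go beyond it.

```latex
\begin{align*}
  \text{(P)}\quad & \inf_{x}\ \langle c, x\rangle
      \quad\text{s.t.}\quad Ax = b,\ x \in K,\\
  \text{(D)}\quad & \sup_{y}\ \langle b, y\rangle
      \quad\text{s.t.}\quad c - A^{*}y \in K^{*}.
\end{align*}
% Classical sufficient condition: if (P) has a feasible point in the
% interior of K and a finite optimal value, then the optimal values of
% (P) and (D) coincide and the dual optimum is attained.
```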

On Componental Operators in Hilbert Space

We consider a Hilbert space that is a product of a finite number of Hilbert spaces and operators that are represented by “componental operators” acting on the Hilbert spaces that form the product space. We attribute operatorial properties to the componental operators rather than to the full operators. The operatorial properties that we discuss include …
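A toy illustration of the setup, under assumed componental maps T1 and T2: the full operator on H = H1 × H2 is assembled from componental operators, and a property such as nonexpansiveness is checked per component rather than for the full operator.

```python
# Componental operators on a product space (illustrative choices only).
import numpy as np

rng = np.random.default_rng(6)

T1 = lambda x1, x2: 0.5 * (x1 + x2[:3])  # componental map into H1 = R^3
T2 = lambda x1, x2: np.tanh(x2)          # componental map into H2 = R^5 (1-Lipschitz)

def T(x1, x2):
    """The full operator on H1 x H2, assembled from its components."""
    return T1(x1, x2), T2(x1, x2)

# numeric check: T2 is nonexpansive in its own component, x1 held fixed
x1 = rng.standard_normal(3)
a, b = rng.standard_normal(5), rng.standard_normal(5)
assert np.linalg.norm(T2(x1, a) - T2(x1, b)) <= np.linalg.norm(a - b) + 1e-12
```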

Strong duality of a conic optimization problem with a single hyperplane and two cone constraints

Strong (Lagrangian) duality of general conic optimization problems (COPs) has long been studied, and its profound and complicated results appear in different forms across a wide range of the literature. As a result, characterizing the known and unknown results can be difficult. The aim of this article is to provide a unified and geometric view …
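Reading the title literally, the problem class plausibly takes the following shape (hedged: the paper's exact notation and normalization may differ).

```latex
% A template matching the title: one hyperplane, two cone constraints.
\[
  \zeta^{\ast} \;=\; \inf_{x}\ \langle c, x \rangle
  \quad\text{s.t.}\quad \langle h, x \rangle = 1, \qquad x \in K_1 \cap K_2,
\]
% with Lagrangian dual taken over the hyperplane constraint only:
\[
  \sup_{t \in \mathbb{R}}\ t
  \quad\text{s.t.}\quad c - t\,h \in (K_1 \cap K_2)^{*}.
\]
```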