Optimal Primal-Dual Methods for a Class of Saddle Point Problems

We present a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPP). The basic idea of this algorithm is to incorporate a multi-step acceleration scheme into the primal-dual method without smoothing the objective function. For deterministic SPP, the APD method achieves the same optimal rate of convergence …
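
To make the mechanism concrete, here is a minimal Python sketch of one accelerated primal-dual loop for the bilinear special case min_x max_y <Kx, y> + f(x) - g(y), assuming Euclidean proximal operators are available as callables; the step sizes tau, sigma and the extrapolation weight theta are illustrative placeholders, not the paper's stepsize policy (which varies them over iterations).

```python
import numpy as np

def apd_bilinear(K, prox_f, prox_g, x0, y0, tau=0.1, sigma=0.1, theta=1.0, iters=500):
    """Accelerated primal-dual sketch for min_x max_y <Kx, y> + f(x) - g(y).

    prox_f(v, tau) and prox_g(v, sigma) are Euclidean proximal operators.
    Illustrative loop only: the extrapolation of the primal point plays the
    role of the multi-step acceleration described in the abstract.
    """
    x, y = x0.copy(), y0.copy()
    x_bar = x0.copy()                                # extrapolated primal point
    for _ in range(iters):
        y = prox_g(y + sigma * (K @ x_bar), sigma)   # dual ascent step
        x_new = prox_f(x - tau * (K.T @ y), tau)     # primal descent step
        x_bar = x_new + theta * (x_new - x)          # multi-step extrapolation
        x = x_new
    return x, y
```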

Optimizing Placement of Stationary Monitors

We examine the problem of placing stationary monitors in a continuous space, with the goal of minimizing an adversary’s maximum probability of traversing an origin-destination route without being detected. The problem arises, for instance, in defending against the transport of illicit material through some area of interest. In particular, we consider the deployment of monitors …
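
As a rough illustration of the adversary's side of the problem, the sketch below computes the maximum evasion probability on a hypothetical discretized network, assuming independent detection events per edge with probabilities strictly below one; the paper's continuous-space model is more general, and the names `edges` and `detect_prob` are made up for this example.

```python
import heapq
import math

def best_evasion_probability(edges, detect_prob, origin, dest):
    """Adversary's maximum probability of traversing origin->dest undetected.

    Hypothetical discretized model: `edges` maps node -> list of neighbors,
    `detect_prob[(u, v)]` is the (independent, < 1) detection probability on
    edge (u, v) induced by a monitor placement.  Maximizing the product of
    (1 - p) along a path is a shortest-path problem in -log(1 - p) weights.
    """
    dist = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            return math.exp(-d)            # back-transform to a probability
        if d > dist.get(u, math.inf):
            continue                       # stale heap entry
        for v in edges.get(u, []):
            w = -math.log(1.0 - detect_prob[(u, v)])
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return 0.0                             # destination unreachable
```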

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms

In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented by the authors. Moreover, we …
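
A minimal sketch of the shrinking idea, assuming an `inner_solver` routine that runs an (accelerated) SA method over a given ball; the stage count and the radius halving rule are illustrative stand-ins, not the paper's exact parameters.

```python
import numpy as np

def shrinking_sa(inner_solver, x0, R0, stages=6):
    """Multi-stage SA sketch with a domain shrinking procedure.

    `inner_solver(center, radius)` is assumed to run an SA method over the
    ball B(center, radius) and return an approximate solution.  Under strong
    convexity the optimum stays within a fraction of the current radius, so
    each stage can restart on a shrunken domain, which is the device that
    tightens the large-deviation bounds.
    """
    x, R = np.asarray(x0, dtype=float), float(R0)
    for _ in range(stages):
        x = inner_solver(x, R)   # approximately solve on the current ball
        R *= 0.5                 # shrink the search domain for the next stage
    return x
```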

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this …
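
The defining device of an RSG-type method is that the output is an iterate selected at random rather than the last one, which is what makes a stationarity guarantee possible without convexity. A minimal sketch under that reading, with a uniform selection rule (a stepsize-weighted rule is also standard) and a user-supplied stochastic gradient oracle:

```python
import numpy as np

def rsg(stoch_grad, x0, stepsize, n_iters, rng=None):
    """Randomized stochastic gradient (RSG) sketch for nonconvex problems.

    `stoch_grad(x)` returns an unbiased stochastic gradient at x.  The loop
    is plain SA; the distinguishing step is the randomly chosen output.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_iters):
        x = x - stepsize * stoch_grad(x)
        iterates.append(x.copy())
    return iterates[rng.integers(n_iters)]   # randomly selected output iterate
```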

On the Convergence Properties of Non-Euclidean Extragradient Methods for Variational Inequalities with Generalized Monotone Operators

In this paper, we study a class of generalized monotone variational inequality (GMVI) problems whose operators are not necessarily monotone (e.g., pseudo-monotone). We present non-Euclidean extragradient (N-EG) methods for computing an approximate strong solution of these problems, and demonstrate how their iteration complexities depend on the global Lipschitz or Hölder continuity properties of their operators …
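
For reference, here is a minimal Euclidean sketch of the extragradient template underlying such methods; a non-Euclidean variant would replace the two projections with Bregman prox mappings, and the fixed stepsize eta is an illustrative stand-in for the Lipschitz/Hölder-dependent choices analyzed in the paper.

```python
import numpy as np

def extragradient_vi(F, project, x0, eta=0.1, iters=1000):
    """Extragradient sketch for the VI: find x* with <F(x*), x - x*> >= 0.

    `project` is the Euclidean projection onto the feasible set; replacing
    both projection steps with mirror (Bregman) prox mappings gives the
    non-Euclidean version.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - eta * F(x))       # extrapolation (prediction) step
        x = project(x - eta * F(y))       # correction step using F(y)
    return x
```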

Level methods uniformly optimal for composite and structured nonsmooth convex optimization

The main goal of this paper is to develop uniformly optimal first-order methods for large-scale convex programming (CP). By uniform optimality we mean that the first-order methods themselves do not require the input of any problem parameters, but can still achieve the best possible iteration complexity bounds. To this end, we provide a substantial generalization …
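
For context, "best possible" refers to the classical lower bounds in the first-order oracle model, which a uniformly optimal method must match without being told which regime applies (here $L$ is the gradient Lipschitz constant, $M$ a bound on the subgradients, and $\varepsilon$ the target accuracy); Hölder-smooth problems interpolate between the two extremes.

```latex
% Optimal first-order iteration complexity bounds, by regime:
\underbrace{\mathcal{O}\bigl(\sqrt{L/\varepsilon}\bigr)}_{\text{smooth CP}}
\qquad\text{and}\qquad
\underbrace{\mathcal{O}\bigl(M^{2}/\varepsilon^{2}\bigr)}_{\text{nonsmooth CP}}
```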

Bundle-type methods uniformly optimal for smooth and nonsmooth convex optimization

The bundle-level method and certain of its variants are known to exhibit an optimal rate of convergence, i.e., ${\cal O}(1/\sqrt{t})$, as well as excellent practical performance, for solving general nonsmooth convex programming (CP) problems. However, this rate of convergence is significantly worse than the optimal one for solving smooth CP problems, i.e., ${\cal O}(1/t^2)$. In this paper, …
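
A minimal sketch of the classical bundle-level loop described above, with its two subproblems abstracted as callables: `solve_model_min` (an LP minimizing the cutting-plane model over the feasible set) and `project_level` (a QP projecting onto the model's level set). These helper names are placeholders for the example, and lam = 0.5 is a typical level parameter.

```python
def bundle_level(f, subgrad, solve_model_min, project_level, x0,
                 lam=0.5, tol=1e-6, max_iters=200):
    """Bundle-level sketch for nonsmooth convex minimization.

    The bundle of cuts (x_i, f(x_i), g_i) defines the model
    m(x) = max_i f(x_i) + <g_i, x - x_i>, a lower approximation of f.
    """
    x, best = x0, x0
    bundle = []
    upper = float('inf')
    for _ in range(max_iters):
        fx, gx = f(x), subgrad(x)
        if fx < upper:
            upper, best = fx, x            # update the upper bound
        bundle.append((x, fx, gx))
        lower = solve_model_min(bundle)    # LP: min of cutting-plane model
        if upper - lower <= tol:
            break
        level = lower + lam * (upper - lower)  # level between the two bounds
        x = project_level(x, bundle, level)    # QP: project onto {m <= level}
    return best
```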

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: a Generic Algorithmic Framework

In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While the classical stochastic approximation (SA) algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with proper stepsize policies, can achieve optimal or …
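
A minimal Euclidean sketch of the three-sequence accelerated SA template, assuming an unbiased stochastic gradient oracle; the weights alpha and stepsize schedule gamma(t) shown are generic accelerated-gradient choices, whereas the paper's framework derives the stepsize policies that make the scheme optimal.

```python
import numpy as np

def ac_sa(stoch_grad, x0, n_iters, gamma):
    """Accelerated stochastic approximation (AC-SA) sketch, Euclidean case.

    Three coupled sequences: a 'middle' point where the stochastic gradient
    is taken, a search point updated by the SA step, and an aggregated
    output point.  `gamma(t)` is a user-supplied stepsize schedule.
    """
    x = x_ag = np.asarray(x0, dtype=float)
    for t in range(1, n_iters + 1):
        alpha = 2.0 / (t + 1)
        x_md = (1 - alpha) * x_ag + alpha * x     # middle (gradient) point
        g = stoch_grad(x_md)                      # unbiased stochastic gradient
        x = x - gamma(t) * g                      # SA step on the search point
        x_ag = (1 - alpha) * x_ag + alpha * x     # aggregated output point
    return x_ag
```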

Iteration-complexity of first-order augmented Lagrangian methods for convex programming

This paper considers a special class of convex programming (CP) problems whose feasible regions consist of a simple compact convex set intersected with an affine manifold. We present first-order methods for this class of problems based on an inexact version of the classical augmented Lagrangian (AL) approach, where the subproblems are approximately solved by means …
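
A minimal sketch of the inexact AL outer loop, assuming an `inner_solver` that approximately minimizes the augmented Lagrangian over the simple set X by a first-order method (controlling this inner accuracy is where the paper's complexity analysis lives); the fixed penalty rho and outer iteration count are illustrative choices.

```python
import numpy as np

def inexact_augmented_lagrangian(inner_solver, A, b, x0, rho=1.0, outer_iters=20):
    """First-order inexact AL sketch for min f(x) s.t. Ax = b, x in X.

    `inner_solver(lam, rho, x_start)` is assumed to approximately minimize
    the augmented Lagrangian
        f(x) + <lam, Ax - b> + (rho / 2) * ||Ax - b||^2
    over the simple compact convex set X.
    """
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(A.shape[0])
    for _ in range(outer_iters):
        x = inner_solver(lam, rho, x)   # inexact primal minimization over X
        lam = lam + rho * (A @ x - b)   # dual (multiplier) update
    return x, lam
```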