Limited-memory Common-directions Method for Large-scale Optimization: Convergence, Parallelization, and Distributed Optimization

In this paper, we present a limited-memory common-directions method for smooth optimization that interpolates between first- and second-order methods. At each iteration, a subspace of limited dimension is constructed from first-order information gathered at previous iterations, and an efficient Newton method is deployed to find an approximate minimizer within this subspace. With …
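
As a rough illustration of the idea (a sketch under assumed details, not the authors' implementation), the loop below keeps a small memory of recent gradients, restricts the local quadratic model to their span, and solves the resulting small projected Newton system. The function names, the memory size `m`, and the Hessian-vector oracle `hess_vec` are all assumptions made for this example.

```python
import numpy as np

def limited_memory_common_directions(grad, hess_vec, x0, m=5, iters=50, tol=1e-8):
    x = x0.copy()
    dirs = []                          # memory of recent first-order directions
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        dirs = (dirs + [g])[-m:]       # keep only the last m directions
        P = np.column_stack(dirs)      # n x k basis of the search subspace
        HP = np.column_stack([hess_vec(x, p) for p in dirs])
        # Projected Newton system: (P^T H P) t = -P^T g, solved at k x k size
        t = np.linalg.solve(P.T @ HP + 1e-10 * np.eye(P.shape[1]), -P.T @ g)
        x = x + P @ t
    return x

# Toy usage on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20)); A = A @ A.T + np.eye(20)
b = rng.standard_normal(20)
x_star = limited_memory_common_directions(lambda x: A @ x - b,
                                          lambda x, v: A @ v,
                                          np.zeros(20))
print(np.linalg.norm(A @ x_star - b))  # should be near zero
```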

Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization

It has been shown in \cite{Lan13-1} that the accelerated prox-level (APL) method and its variant, the uniform smoothing level (USL) method, have optimal iteration complexity for solving black-box and structured convex programming problems without requiring the input of any smoothness information. However, these algorithms require the assumption that the feasible set is bounded and …
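
One building block shared by level-type methods is the projection of the current iterate onto a level set of the cutting-plane model. The sketch below shows only that projection step under an assumed bundle representation (past points, function values, subgradients); it is not the full APL or USL method.

```python
import numpy as np
from scipy.optimize import minimize

def prox_level_step(x, bundle, level):
    # Project x onto { y : f(x_i) + g_i^T (y - x_i) <= level for all cuts }
    cons = [{'type': 'ineq',
             'fun': lambda y, fi=fi, gi=gi, xi=xi: level - (fi + gi @ (y - xi))}
            for (xi, fi, gi) in bundle]
    res = minimize(lambda y: 0.5 * np.sum((y - x) ** 2), x,
                   constraints=cons, method='SLSQP')
    return res.x

# Hypothetical bundle for f(y) = ||y||^2 with cuts collected at two past points
pts = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
bundle = [(p, p @ p, 2 * p) for p in pts]     # (x_i, f(x_i), grad f(x_i))
print(prox_level_step(np.array([1.0, 1.0]), bundle, level=0.5))
```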

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms

In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented by the authors. Moreover, we …
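
The shrinking idea can be conveyed with a much simpler multi-stage scheme: run a projected stochastic-gradient stage on a ball around the current center, then halve the radius. Everything below (the plain-SGD inner loop, the stage counts, the 1/(mu*t) stepsize) is an assumed stand-in for illustration, not the paper's AC-SA algorithm.

```python
import numpy as np

def shrinking_sgd(stoch_grad, x0, R0, mu, stages=6, iters_per_stage=200):
    x, R = x0.copy(), R0
    for _ in range(stages):
        center = x.copy()
        for t in range(1, iters_per_stage + 1):
            x = x - stoch_grad(x) / (mu * t)   # strongly convex stepsize
            d = x - center                     # project back onto the ball
            n = np.linalg.norm(d)
            if n > R:
                x = center + d * (R / n)
        R *= 0.5                               # shrink the search domain
    return x
```

Restarting on ever-smaller balls is the mechanism the abstract credits for the improved large-deviation behavior; the accelerated inner solver of the paper is what the plain SGD stage above stands in for.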

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: a Generic Algorithmic Framework

In this paper we present a generic algorithmic framework, namely, the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While the classical stochastic approximation (SA) algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with proper stepsize policies, can achieve optimal or …
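
A minimal sketch of a three-sequence accelerated SA loop (middle point, SA step from the middle point, aggregated iterate) is given below. The stepsize choices `alpha` and `gamma` are simple placeholders; the paper's contribution lies precisely in the stepsize policies that make such a scheme optimal.

```python
import numpy as np

def ac_sa_sketch(stoch_grad, x0, L, iters=1000):
    x = x_ag = x0.copy()
    for t in range(1, iters + 1):
        alpha = 2.0 / (t + 1)
        gamma = t / (4.0 * L)                  # placeholder policy, not the paper's
        x_md = (1 - alpha) * x_ag + alpha * x  # middle (search) point
        g = stoch_grad(x_md)                   # noisy gradient oracle
        x = x - gamma * g                      # SA step from the middle point
        x_ag = (1 - alpha) * x_ag + alpha * x  # aggregated output iterate
    return x_ag
```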

Optimal steepest descent algorithms for unconstrained convex problems: fine tuning Nesterov’s method

We modify the first-order algorithm for convex programming proposed by Nesterov. The resulting algorithm keeps the optimal complexity obtained by Nesterov without requiring a known Lipschitz constant for the gradient, and it performs better on practically all examples in a set of test problems.

Citation: Technical Report, Federal University of Santa Catarina, 2008.
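
A generic way to avoid a known Lipschitz constant is to backtrack on the local descent condition and let the estimate of L adapt. The FISTA-style sketch below follows that pattern under assumed details; it is not necessarily the authors' fine-tuned variant.

```python
import numpy as np

def nesterov_backtracking(f, grad, x0, L0=1.0, iters=100):
    x = y = x0.copy()
    L, t = L0, 1.0
    for _ in range(iters):
        fy, gy = f(y), grad(y)
        while True:                       # backtrack until the descent test holds
            x_new = y - gy / L
            if f(x_new) <= fy + gy @ (x_new - y) + 0.5 * L * np.sum((x_new - y) ** 2):
                break
            L *= 2.0                      # current Lipschitz estimate too small
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
        L *= 0.9                          # let the estimate decrease again
    return x
```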

Efficient Methods for Stochastic Composite Optimization

This paper considers an important class of convex programming problems whose objective function $\Psi$ is given by the sum of a smooth component and a non-smooth component. Further, it is assumed that the only information available to the numerical scheme for solving these problems is the subgradient of $\Psi$ contaminated by stochastic noise. Our contribution mainly consists …
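
To make the oracle model concrete, the sketch below runs a plain averaged stochastic subgradient method that uses only noisy subgradients of $\Psi$; it is a baseline for this problem class, not the accelerated scheme developed in the paper.

```python
import numpy as np

def averaged_stoch_subgrad(noisy_subgrad, x0, steps=5000, c=0.1):
    x = x0.copy()
    x_avg = np.zeros_like(x0)
    for t in range(1, steps + 1):
        x = x - (c / np.sqrt(t)) * noisy_subgrad(x)  # 1/sqrt(t) stepsize
        x_avg += (x - x_avg) / t                     # running iterate average
    return x_avg
```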