Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms

In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented by the authors. Moreover, we …
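As a rough sketch of the two ingredients just named, the code below runs a generic accelerated SA stage inside a domain-shrinking outer loop; the step-size schedule, the halving factor, and the helpers grad_oracle and proj are illustrative assumptions, not the authors' exact AC-SA.

```python
import numpy as np

def ac_sa(grad_oracle, proj, x0, mu, L, N, rng):
    """One stage of a generic accelerated stochastic approximation loop
    for a mu-strongly convex objective with L-Lipschitz gradient.
    grad_oracle(x, rng) returns an unbiased stochastic gradient; proj
    maps a point back into the current feasible ball.  The schedule
    below is an illustrative assumption, not the paper's."""
    x = x_ag = np.asarray(x0, dtype=float)
    for t in range(1, N + 1):
        alpha = 2.0 / (t + 1)
        gamma = mu + 4.0 * L / (t * (t + 1))     # assumed step rule
        x_md = (1 - alpha) * x_ag + alpha * x    # "middle" point
        x = proj(x_md * 0 + x - grad_oracle(x_md, rng) / gamma)
        x_ag = (1 - alpha) * x_ag + alpha * x    # aggregate point
    return x_ag

def shrinking_ac_sa(grad_oracle, x0, R0, mu, L, stages, N, seed=0):
    """Domain-shrinking wrapper: each stage restarts AC-SA on a ball
    recentred at the previous output with halved radius."""
    rng = np.random.default_rng(seed)
    x, R = np.asarray(x0, dtype=float), R0
    for _ in range(stages):
        c, r = x.copy(), R
        def proj(y, c=c, r=r):
            d = np.linalg.norm(y - c)
            return y if d <= r else c + (r / d) * (y - c)
        x = ac_sa(grad_oracle, proj, x, mu, L, N, rng)
        R *= 0.5                                 # shrink the domain
    return x
```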

Newton-Like Methods for Sparse Inverse Covariance Estimation

We propose two classes of second-order optimization methods for solving the sparse inverse covariance estimation problem. The first approach, which we call the Newton-LASSO method, minimizes a piecewise quadratic model of the objective function at every iteration to generate a step. We employ the fast iterative shrinkage-thresholding algorithm (FISTA) to solve this subproblem. The …
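For reference, a minimal FISTA sketch on the model problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$, which has the same piecewise-quadratic-plus-$\ell_1$ structure as the Newton-LASSO subproblem (with the Hessian model in place of $A^T A$); constants and iteration counts here are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """Minimal FISTA sketch for min_x 0.5*||Ax-b||^2 + lam*||x||_1:
    a gradient step on the smooth part, a soft-thresholding prox
    step, and Nesterov momentum on the extrapolation point y."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz const of grad
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ y - b)              # gradient of smooth part
        x_new = soft_threshold(y - g / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum
        x, t = x_new, t_new
    return x
```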

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this …
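A minimal sketch of the randomized-iterate device, under assumed helpers (stoch_grad as an unbiased stochastic gradient oracle) and with the output drawn proportionally to the step sizes, which simplifies the exact sampling distribution analyzed in the paper:

```python
import numpy as np

def rsg(stoch_grad, x0, gammas, rng=None):
    """Randomized stochastic gradient sketch: run plain SGD with step
    sizes gammas, record the iterates, and return one of them drawn
    at random with probability proportional to its step size; the
    random output index is what yields guarantees at an approximate
    stationary point in the (possibly nonconvex) setting."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    iterates = []
    for gamma in gammas:
        iterates.append(x.copy())
        x = x - gamma * stoch_grad(x, rng)   # plain SGD step
    probs = np.asarray(gammas, dtype=float)
    probs /= probs.sum()
    return iterates[rng.choice(len(iterates), p=probs)]
```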

Greedy approximation in convex optimization

We study sparse approximate solutions to convex optimization problems. In many engineering applications, researchers are interested in an approximate solution of an optimization problem given as a linear combination of elements from a given system (dictionary) of elements. There is increasing interest in building such sparse approximate solutions using various greedy-type algorithms. …
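As a schematic illustration of the greedy-type idea (not any specific algorithm from the paper), the sketch below repeatedly picks the dictionary element best aligned with the negative gradient and line-searches along it, so the iterate after $m$ steps is a combination of at most $m$ elements:

```python
import numpy as np

def greedy_sparse_min(f, grad, dictionary, m):
    """Schematic greedy scheme: dictionary is an array whose rows are
    the candidate elements; at each step pick the row best aligned
    with the negative gradient, then do a crude grid line search
    along it, so the output uses at most m elements."""
    steps = np.linspace(0.0, 1.0, 101)
    x = np.zeros(dictionary.shape[1])
    for _ in range(m):
        scores = dictionary @ (-grad(x))        # alignment with -grad
        e = dictionary[int(np.argmax(scores))]
        lam = min(steps, key=lambda s: f(x + s * e))
        x = x + lam * e
    return x

# toy usage: approximate a target as a 2-sparse combination of basis rows
target = np.array([0.7, 0.2, 0.0])
f = lambda x: float(np.sum((x - target) ** 2))
grad = lambda x: 2.0 * (x - target)
x_sparse = greedy_sparse_min(f, grad, np.eye(3), m=2)   # -> [0.7, 0.2, 0.0]
```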

Greedy expansions in convex optimization

This paper is a follow-up to the author's previous paper on convex optimization. In that paper we began the process of adapting greedy-type algorithms from nonlinear approximation to the task of finding sparse solutions of convex optimization problems. There we modified the three greedy algorithms most popular in nonlinear approximation in Banach spaces — the Weak Chebyshev Greedy …

Convergence and Perturbation Resilience of Dynamic String-Averaging Projection Methods

We consider the convex feasibility problem (CFP) in Hilbert space and concentrate on the study of string-averaging projection (SAP) methods for the CFP, analyzing their convergence and their perturbation resilience. In the past, SAP methods were formulated with a single predetermined set of strings and a single predetermined set of weights. Here we extend the …
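As a rough illustration of the basic (static) SAP step, here is a sketch; sap_step and halfspace_proj are our own illustrative names, and the dynamic version studied in the paper would additionally let the strings and weights vary from iteration to iteration.

```python
import numpy as np

def sap_step(x, projections, strings, weights):
    """One string-averaging projection step: along each string, apply
    the indexed projections sequentially starting from x, then form
    the weighted average of the strings' end points."""
    ends = []
    for string in strings:
        y = x
        for i in string:                 # successive projections
            y = projections[i](y)
        ends.append(y)
    return sum(w * e for w, e in zip(weights, ends))

def halfspace_proj(a, b):
    """Metric projection onto the halfspace {x : a.x <= b}."""
    a = np.asarray(a, dtype=float)
    return lambda x: x if a @ x <= b else x - ((a @ x - b) / (a @ a)) * a

# toy usage: feasibility for two halfspaces in R^2
P = [halfspace_proj([1.0, 0.0], 1.0), halfspace_proj([0.0, 1.0], 1.0)]
x = np.array([3.0, 3.0])
for _ in range(10):
    x = sap_step(x, P, strings=[(0, 1), (1, 0)], weights=[0.5, 0.5])
```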

Time Consistent Decisions and Temporal Decomposition of Coherent Risk Functionals

It is well known that most risk measures (risk functionals) are time inconsistent in the following sense: It may happen that today some loss distribution appears to be less risky than another, but looking at the conditional distribution at a later time, the opposite relation holds. In this article we demonstrate that this time inconsistency …
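To make this concrete, here is a small numerical illustration of our own (not taken from the article): on a two-stage tree, the Average Value-at-Risk at level $1/2$ ranks loss A below loss B today, even though A is riskier than B conditionally at every time-1 node.

```python
import numpy as np

def avar(losses, probs, alpha=0.5):
    """Average Value-at-Risk of a discrete loss distribution:
    expected loss over the worst alpha-fraction of probability."""
    order = np.argsort(losses)[::-1]            # largest losses first
    l = np.asarray(losses, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    cum_before = np.concatenate(([0.0], np.cumsum(p)[:-1]))
    take = np.minimum(p, np.maximum(alpha - cum_before, 0.0))
    return float(take @ l) / alpha

# Two time-1 nodes (prob 1/2 each), two equally likely leaves per node.
A = {"n1": [10.0, 0.0], "n2": [9.0, 0.0]}
B = {"n1": [9.8, 9.6], "n2": [0.0, 0.0]}
quarter, half = [0.25] * 4, [0.5, 0.5]

print(avar(A["n1"] + A["n2"], quarter))  # 9.5 -> A looks LESS risky today
print(avar(B["n1"] + B["n2"], quarter))  # 9.7
print(avar(A["n1"], half), avar(B["n1"], half))  # 10.0 vs 9.8: A riskier at node 1
print(avar(A["n2"], half), avar(B["n2"], half))  # 9.0 vs 0.0:  A riskier at node 2
```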

Einstein-Hessian barriers on convex cones

On the interior of a regular convex cone $K \subset \mathbb R^n$ there exist two canonical Hessian metrics, the one generated by the logarithm of the characteristic function, and the Cheng-Yau metric. The former is associated with a self-concordant logarithmically homogeneous barrier on $K$ with parameter of order $O(n)$, the universal barrier. This barrier is …
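For context, the first of these two metrics comes from the standard characteristic function of the cone (this display is textbook material, stated here for the reader rather than taken from the paper):
$$ \varphi_K(x) \;=\; \int_{K^*} e^{-\langle x, s\rangle}\, ds, \qquad F(x) \;=\; \log \varphi_K(x), $$
where $K^*$ is the dual cone; $F$ satisfies $F(tx) = F(x) - n\log t$ for $t > 0$, and its Hessian $\nabla^2 F$ defines the first canonical Hessian metric on the interior of $K$.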

Sparse Approximation via Penalty Decomposition Methods

In this paper we consider sparse approximation problems, that is, general $l_0$ minimization problems in which the $l_0$-“norm” of a vector appears in the constraints or in the objective function. In particular, we first study the first-order optimality conditions for these problems. We then propose penalty decomposition (PD) methods for solving them, in which a sequence of …
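A sketch of the penalty decomposition idea for the $l_0$-constrained form $\min f(x)$ s.t. $\|x\|_0 \le K$ (one of the problem forms covered); the inner gradient loop, step sizes, and penalty growth factor are illustrative choices:

```python
import numpy as np

def penalty_decomposition(grad_f, x0, K, rho0=1.0, growth=2.0,
                          outer=20, inner=50, lr=0.01):
    """Penalty decomposition sketch: split the variable into (x, y),
    penalize ||x - y||^2, minimize over x by gradient steps, minimize
    over y exactly by keeping the K largest-magnitude entries of x
    (hard thresholding), then increase the penalty rho."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    rho = rho0
    for _ in range(outer):
        for _ in range(inner):                    # x-block: smooth part
            x = x - lr * (grad_f(x) + rho * (x - y))
        y = np.zeros_like(x)                      # y-block: exact solve
        keep = np.argsort(np.abs(x))[-K:]         # K largest entries
        y[keep] = x[keep]
        rho *= growth                             # tighten the coupling
    return y
```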

Tilt stability, uniform quadratic growth, and strong metric regularity of the subdifferential

We prove that uniform second-order growth, tilt stability, and strong metric regularity of the subdifferential — three notions that have appeared in entirely different settings — are all essentially equivalent for any lower-semicontinuous, extended-real-valued function.

Citation: School of Operations Research and Information Engineering, Cornell University, 206 Rhodes Hall, Ithaca, NY 14853. May …
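For orientation, the uniform quadratic growth condition at the heart of the equivalence can be written as follows; the constant $\kappa$ and the neighborhood are generic here, a standard formulation rather than the paper's exact statement:
$$ f(x) \;\ge\; f(\bar{x}) + \frac{\kappa}{2}\,\|x - \bar{x}\|^2 \qquad \text{for all } x \text{ near } \bar{x}, $$
required to hold uniformly, with the same $\kappa$, for the minimizers of all small tilt perturbations $f - \langle v, \cdot\rangle$; tilt stability then asks that these perturbed minimizers depend on $v$ in a single-valued, Lipschitz way, and strong metric regularity is the corresponding localization property of $(\partial f)^{-1}$.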