Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization

It has been shown in \cite{Lan13-1} that the accelerated prox-level (APL) method and its variant, the uniform smoothing level (USL) method, achieve optimal iteration complexity for solving black-box and structured convex programming problems without requiring any smoothness information as input. However, these algorithms require the feasible set to be bounded and …
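
For orientation, here is a minimal sketch of the classical level-bundle iteration that APL-type methods build on, not the APL/USL algorithms themselves: a cutting-plane model of f supplies a lower bound, and the next iterate is the projection of the current one onto a level set of that model. The ball constraint, the use of SciPy's general-purpose SLSQP solver for the (nonsmooth) subproblems, and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def level_method(f, subgrad, x0, R, lam=0.5, tol=1e-4, max_iter=50):
    """Classical level-bundle method on the Euclidean ball ||x|| <= R.

    Cuts f(x_i) + g_i^T (x - x_i) build a lower model of f; each new
    iterate is the projection onto a level set of that model."""
    x = np.asarray(x0, float).copy()
    cuts = []                       # list of (f(x_i), g_i, x_i)
    f_best = np.inf
    for _ in range(max_iter):
        fx, g = f(x), subgrad(x)
        f_best = min(f_best, fx)
        cuts.append((fx, g, x.copy()))

        model = lambda z: max(fi + gi @ (z - xi) for fi, gi, xi in cuts)
        ball = {'type': 'ineq', 'fun': lambda z: R**2 - z @ z}

        # Lower bound: minimize the cutting-plane model over the ball.
        res = minimize(model, x, method='SLSQP', constraints=[ball])
        f_low = res.fun
        if f_best - f_low <= tol:
            break
        level = lam * f_low + (1 - lam) * f_best

        # Project x onto {z : model(z) <= level} intersected with the ball.
        res = minimize(lambda z: (z - x) @ (z - x), x, method='SLSQP',
                       constraints=[ball,
                                    {'type': 'ineq',
                                     'fun': lambda z: level - model(z)}])
        x = res.x
    return x, f_best

# Example: minimize the nonsmooth f(x) = ||x - u||_1 over the unit ball.
u = np.array([0.6, -0.3])
x, f_best = level_method(lambda x: np.abs(x - u).sum(),
                         lambda x: np.sign(x - u),
                         np.zeros(2), R=1.0)
print(x, f_best)
```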

Coordinate descent algorithms

Coordinate descent algorithms solve optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This paper describes the fundamentals of the coordinate …
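
A minimal sketch of the idea, assuming for concreteness the least-squares objective f(x) = ½‖Ax − b‖², for which the one-dimensional minimizations along each coordinate have closed form:

```python
import numpy as np

def cyclic_coordinate_descent(A, b, n_epochs=100):
    """Minimize f(x) = 0.5 * ||Ax - b||^2 by exact minimization
    along one coordinate at a time, visited in cyclic order."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                      # running residual Ax - b
    col_sq = (A ** 2).sum(axis=0)      # ||A[:, i]||^2 for each column
    for _ in range(n_epochs):
        for i in range(n):
            # Exact line search along coordinate i:
            # argmin_t f(x + t e_i)  =>  t = -A[:,i]^T r / ||A[:,i]||^2
            t = -(A[:, i] @ r) / col_sq[i]
            x[i] += t
            r += t * A[:, i]           # update the residual in O(m)
    return x

# Example: recover a random solution of an overdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
x = cyclic_coordinate_descent(A, A @ x_true, n_epochs=200)
print(np.linalg.norm(x - x_true))  # ~0 up to numerical error
```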

Interior-point algorithms for convex optimization based on primal-dual metrics

We propose and analyse primal-dual interior-point algorithms for convex optimization problems in conic form. The families of algorithms we analyse are so-called short-step algorithms, and they match the current best iteration complexity bounds of the primal-dual symmetric interior-point algorithms of Nesterov and Todd for symmetric cone programming problems with given self-scaled barriers. Our results apply to …
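
The conic, self-scaled setting of the paper is involved; as a hedged illustration only, here is the short-step primal-dual template in its simplest instance, linear programming from a strictly feasible start. The centering parameter, step rule, and toy instance are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def short_step_ipm(A, b, c, x, y, s, n_iter=200):
    """Short-step primal-dual path-following for the LP
        min c^T x  s.t.  Ax = b, x >= 0,
    from a strictly feasible (x, y, s). Each iteration takes one full
    Newton step toward the central path with sigma = 1 - 0.4/sqrt(n)."""
    m, n = A.shape
    sigma = 1 - 0.4 / np.sqrt(n)
    for _ in range(n_iter):
        mu = x @ s / n
        # Newton system for the perturbed KKT conditions.
        K = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([
            c - A.T @ y - s,          # dual residual (zero if feasible)
            b - A @ x,                # primal residual (zero if feasible)
            sigma * mu - x * s,       # relaxed complementarity target
        ])
        d = np.linalg.solve(K, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        x, y, s = x + dx, y + dy, s + ds
    return x, y, s

# Toy instance built around a known strictly feasible point.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))
x0, s0 = np.ones(6), np.ones(6)
y0 = rng.standard_normal(3)
b, c = A @ x0, A.T @ y0 + s0
x, y, s = short_step_ipm(A, b, c, x0, y0, s0)
print("duality gap:", x @ s)   # decreases by roughly sigma per step
```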

Error Bounds and Hölder Metric Subregularity

The Hölder setting of the metric subregularity property of set-valued mappings between general metric or Banach/Asplund spaces is investigated in the framework of the theory of error bounds for extended real-valued functions of two variables. A classification scheme for the general Hölder metric subregularity criteria is presented. The criteria are formulated in terms of several …
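
For context, the property in question admits the following common formulation (a minimal statement; the paper works in the more general two-variable error-bound framework):

```latex
% Hölder metric subregularity of order q (one common formulation;
% q = 1 recovers ordinary metric subregularity).
% Here F : X \rightrightarrows Y and (\bar x, \bar y) \in \operatorname{gph} F.
\[
  \exists\, \tau > 0,\ \exists\ \text{a neighbourhood } U \text{ of } \bar x:
  \qquad
  d\bigl(x, F^{-1}(\bar y)\bigr)
  \;\le\;
  \tau \, \bigl[\, d\bigl(\bar y, F(x)\bigr) \,\bigr]^{q}
  \quad \text{for all } x \in U, \ q \in (0, 1].
\]
% For q = 1 this is equivalent to a local error bound for the
% function x \mapsto d(\bar y, F(x)) at \bar x.
```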

Sequential Threshold Control in Descent Splitting Methods for Decomposable Optimization Problems

We suggest a modification of the descent splitting methods for decomposable composite optimization problems which maintains the basic convergence properties but reduces the computational expense per iteration and allows the computations to be carried out in a distributed manner. It consists of coordinate-wise steps combined with a special threshold control.

Citation: Kazan Federal University, Kazan 420008, …
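
The abstract does not spell out the threshold rule, so the following is only a generic sketch of the "coordinate-wise steps gated by a sequentially reduced threshold" pattern, not the paper's specific method; the gradient gating rule, step size, and shrink factor are all assumptions made for illustration.

```python
import numpy as np

def threshold_coordinate_descent(grad, x0, step=0.1, tau0=1.0,
                                 shrink=0.5, tol=1e-6):
    """Generic sketch: take a step only in coordinates whose partial
    derivative exceeds the current threshold tau; when no coordinate
    qualifies, shrink tau and continue (sequential threshold control)."""
    x, tau = np.asarray(x0, float).copy(), tau0
    while tau > tol:
        g = grad(x)
        active = np.abs(g) > tau          # coordinates worth updating
        if not active.any():
            tau *= shrink                 # reduce the threshold
            continue
        x[active] -= step * g[active]     # cheap partial update
    return x

# Example: a separable quadratic with grad(x) = q * x (elementwise).
q = np.array([1.0, 10.0, 0.1])
x = threshold_coordinate_descent(lambda x: q * x,
                                 np.array([5.0, -3.0, 2.0]))
print(x)   # ≈ 0, the minimizer
```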

On the optimal order of worst case complexity of direct search

The worst case complexity of direct-search methods has been recently analyzed when they use positive spanning sets and impose a sufficient decrease condition to accept new iterates. Assuming that the objective function is smooth, it is now known that such methods require at most $O(n^2 \epsilon^{-2})$ function evaluations to compute a gradient of norm below …
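
A minimal sketch of a directional direct-search method of this type, using the positive spanning set {±e_1, …, ±e_n} and a sufficient decrease condition; the quadratic forcing function c·alpha² and the halving factor are illustrative choices:

```python
import numpy as np

def direct_search(f, x0, alpha=1.0, c=1e-4, alpha_min=1e-8):
    """Directional direct search with the positive spanning set
    {+e_1, ..., +e_n, -e_1, ..., -e_n}. A trial point is accepted only
    under the sufficient decrease condition
        f(x + alpha d) < f(x) - c * alpha**2,
    otherwise the step size alpha is halved."""
    x = np.asarray(x0, float).copy()
    n = x.size
    D = np.vstack([np.eye(n), -np.eye(n)])   # positive spanning set
    fx = f(x)
    while alpha > alpha_min:
        for d in D:
            if f(x + alpha * d) < fx - c * alpha**2:
                x = x + alpha * d            # successful poll step
                fx = f(x)
                break
        else:
            alpha *= 0.5                     # unsuccessful poll: shrink
    return x

# Example: minimize a smooth quadratic without using derivatives.
x = direct_search(lambda z: (z[0] - 1)**2 + 10 * (z[1] + 2)**2,
                  np.zeros(2))
print(x)   # ≈ [1, -2]
```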

The direct extension of ADMM for three-block separable convex minimization models is convergent when one function is strongly convex

The alternating direction method of multipliers (ADMM) is a benchmark for solving a two-block linearly constrained convex minimization model whose objective function is the sum of two functions without coupled variables. Meanwhile, it is known that the convergence is not guaranteed if the ADMM is directly extended to a multiple-block convex minimization model whose objective …
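
To make the scheme concrete, here is a sketch of the directly extended three-block ADMM on a toy problem in which every block objective is strongly convex (so in particular one of them is, as the title requires); the quadratic choice of f_i, the penalty rho, and the instance are illustrative assumptions.

```python
import numpy as np

def admm_three_block(As, a_s, b, rho=1.0, n_iter=500):
    """Direct three-block extension of ADMM for
        min sum_i 0.5 * ||x_i - a_i||^2
        s.t. A1 x1 + A2 x2 + A3 x3 = b.
    Blocks are updated sequentially (Gauss-Seidel sweep over the
    augmented Lagrangian), then the multiplier y is updated."""
    xs = [np.zeros_like(a) for a in a_s]
    y = np.zeros(b.size)
    for _ in range(n_iter):
        for i, (A, a) in enumerate(zip(As, a_s)):
            # Constraint residual contributed by the other blocks.
            r = sum(Aj @ xj for j, (Aj, xj) in enumerate(zip(As, xs))
                    if j != i) - b
            # Closed-form minimizer of the augmented Lagrangian in x_i:
            # (I + rho A^T A) x_i = a - A^T y - rho A^T r.
            lhs = np.eye(a.size) + rho * A.T @ A
            rhs = a - A.T @ y - rho * A.T @ r
            xs[i] = np.linalg.solve(lhs, rhs)
        y = y + rho * (sum(A @ x for A, x in zip(As, xs)) - b)
    return xs

rng = np.random.default_rng(2)
As = [rng.standard_normal((4, 3)) for _ in range(3)]
a_s = [rng.standard_normal(3) for _ in range(3)]
b = rng.standard_normal(4)
xs = admm_three_block(As, a_s, b)
print(np.linalg.norm(sum(A @ x for A, x in zip(As, xs)) - b))  # small
```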

On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions

In this paper we present a variant of the proximal forward-backward splitting method for solving nonsmooth optimization problems in Hilbert spaces, when the objective function is the sum of two nondifferentiable convex functions. The proposed iteration, which will be called the Proximal Subgradient Splitting Method, extends the classical projected subgradient iteration for important classes of …
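
A hedged sketch of the natural form of such an iteration: a subgradient step on one nonsmooth function followed by the proximal map of the other, with a diminishing step size as in the projected subgradient method it extends. The concrete f, g, and step rule below are illustrative, not necessarily the paper's.

```python
import numpy as np

def proximal_subgradient_splitting(subgrad_f, prox_g, x0, n_iter=2000):
    """Sketch of the iteration
        x_{k+1} = prox_{a_k g}(x_k - a_k u_k),  u_k in subdiff f(x_k),
    with diminishing steps a_k = 1/sqrt(k+1)."""
    x = np.asarray(x0, float).copy()
    for k in range(n_iter):
        a = 1.0 / np.sqrt(k + 1)          # diminishing step size
        x = prox_g(x - a * subgrad_f(x), a)
    return x

# Example: f(x) = ||Ax - b||_1  (a subgradient is A^T sign(Ax - b)),
#          g(x) = lam * ||x||_1 (its prox is soft-thresholding).
rng = np.random.default_rng(3)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
subgrad_f = lambda x: A.T @ np.sign(A @ x - b)
prox_g = lambda v, a: np.sign(v) * np.maximum(np.abs(v) - a * lam, 0.0)
x = proximal_subgradient_splitting(subgrad_f, prox_g, np.zeros(5))
print(x)
```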

Stochastic Compositional Gradient Descent: Algorithms for Minimizing Compositions of Expected-Value Functions

Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function involving expected values or a composition of two expected-value functions, i.e., problems of the form $\min_x \mathbb{E}_v\big[f_v\big(\mathbb{E}_w[g_w(x)]\big)\big]$. In order to solve this stochastic composition problem, we propose a class …
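
A hedged sketch of the basic compositional update pattern (one common form; the paper analyzes a class of such methods with particular step-size schedules): an auxiliary variable tracks the inner expectation by running averaging, while x takes quasi-gradient steps through the sampled Jacobian. The toy problem and step-size exponents are assumptions.

```python
import numpy as np

def scgd(sample_g, sample_jac_g, sample_grad_f, x0, y0, n_iter=20000):
    """One common form of the compositional update:
        y_{k+1} = (1 - b_k) y_k + b_k * g_w(x_k)     # track E_w[g_w(x_k)]
        x_{k+1} = x_k - a_k * J_w(x_k)^T grad f_v(y_{k+1})
    with a_k decaying faster than b_k."""
    x, y = np.asarray(x0, float).copy(), np.asarray(y0, float).copy()
    for k in range(1, n_iter + 1):
        a, bk = 0.1 / k ** 0.75, 1.0 / k ** 0.5
        g, J = sample_g(x), sample_jac_g(x)
        y = (1 - bk) * y + bk * g              # running inner estimate
        x = x - a * J.T @ sample_grad_f(y)     # quasi-gradient step
    return x

# Example: minimize 0.5 * || E[A_w] x - c ||^2 with noisy linear g.
rng = np.random.default_rng(4)
A_bar, c = np.eye(3), np.array([1.0, -2.0, 0.5])
noisy_A = lambda: A_bar + 0.1 * rng.standard_normal((3, 3))
x = scgd(sample_g=lambda x: noisy_A() @ x,
         sample_jac_g=lambda x: noisy_A(),
         sample_grad_f=lambda y: y - c,
         x0=np.zeros(3), y0=np.zeros(3))
print(x)   # ≈ c, since E[A_w] = I
```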

On the Information-Adaptive Variants of the ADMM: an Iteration Complexity Perspective

Designing algorithms for an optimization model often amounts to striking a balance between the amount of information requested from the model on the one hand and the computational speed to be expected on the other. Naturally, the more information is available, the faster one can expect the algorithm to converge. The popular algorithm of …