Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models

The evaluation complexity of general nonlinear, possibly nonconvex, constrained optimization is analyzed. It is shown that, under suitable smoothness conditions, an $\epsilon$-approximate first-order critical point of the problem can be computed in order $O(\epsilon^{1-2(p+1)/p})$ evaluations of the problem’s functions and their first $p$ derivatives. This is achieved by using a two-phase algorithm inspired by Cartis, Gould, … Read more
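
For concreteness, the exponent in this bound can be instantiated for small values of $p$ (this is a direct evaluation of the formula above, not an additional result):
\[
p=1:\; O\big(\epsilon^{1-2(1+1)/1}\big)=O(\epsilon^{-3}), \qquad
p=2:\; O\big(\epsilon^{1-2(2+1)/2}\big)=O(\epsilon^{-2}).
\]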

An optimal randomized incremental gradient method

In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of $m$ ($\ge 1$) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization … Read more
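
A typical instance of this problem class reads as follows; the exact composite structure treated in the paper may differ, so this is background notation only:
\[
\min_{x \in X}\; \psi(x) := \sum_{i=1}^{m} f_i(x) + h(x),
\]
where each $f_i$ is smooth and convex and $h$ is a relatively simple (for example, proximal-friendly) convex term.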

Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order $p$ (for $p\geq 1$) and to assume Lipschitz continuity of the $p$-th derivative, then an $\epsilon$-approximate first-order critical point can be computed in at most … Read more
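
As background (and not necessarily the exact construction of the paper), a standard $p$-th order regularized model minimized at iterate $x_k$ is the Taylor expansion plus a regularization term,
\[
m_k(s) = f(x_k) + \sum_{j=1}^{p} \frac{1}{j!}\, \nabla^j f(x_k)[s]^j + \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
\]
where $\nabla^j f(x_k)[s]^j$ denotes the $j$-th derivative tensor applied $j$ times to $s$ and $\sigma_k>0$ is the regularization weight.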

New results on subgradient methods for strongly convex optimization problems with a unified analysis

We develop subgradient- and gradient-based methods for minimizing strongly convex functions under a notion that generalizes standard Euclidean strong convexity. We propose a unifying framework for subgradient methods which yields two kinds of methods, namely, the Proximal Gradient Method (PGM) and the Conditional Gradient Method (CGM), unifying several existing methods. The unifying framework provides … Read more
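
As a concrete point of reference, the following is a minimal sketch of a plain subgradient method with the classical $2/(\mu(k+1))$ step size and weighted iterate averaging for a $\mu$-strongly convex function. It is a generic textbook scheme rather than the unified PGM/CGM framework of the paper, and the test objective is an illustrative assumption.

```python
# Minimal sketch of a subgradient method for a mu-strongly convex objective
# with the classical 2/(mu*(k+1)) step size and weighted iterate averaging.
# Generic textbook scheme; the test problem f(x) = (mu/2)*||x||^2 + ||x||_1
# is made up for illustration.
import numpy as np

def subgradient_method(subgrad, x0, mu, iters=1000):
    x = x0.copy()
    x_avg = np.zeros_like(x0)
    weight_sum = 0.0
    for k in range(1, iters + 1):
        step = 2.0 / (mu * (k + 1))
        x = x - step * subgrad(x)
        # weighted averaging (weights proportional to k) gives the O(1/k) rate
        x_avg = (weight_sum * x_avg + k * x) / (weight_sum + k)
        weight_sum += k
    return x_avg

if __name__ == "__main__":
    mu = 1.0
    # subgradient of (mu/2)*||x||^2 + ||x||_1
    f_subgrad = lambda x: mu * x + np.sign(x)
    x_avg = subgradient_method(f_subgrad, np.full(5, 10.0), mu)
    print(x_avg)  # should be close to the minimizer at the origin
```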

Global convergence of the Heavy-ball method for convex optimization

This paper establishes global convergence and provides global bounds of the convergence rate of the Heavy-ball method for convex optimization problems. When the objective function has a Lipschitz-continuous gradient, we show that the Cesàro average of the iterates converges to the optimum at a rate of $O(1/k)$, where $k$ is the number of iterations. When … Read more
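
A minimal sketch of the heavy-ball iteration and of the Cesàro averaging discussed above is given below; the toy quadratic and the parameter choices $\alpha = 1/L$, $\beta = 0.5$ are illustrative assumptions, not the paper's analysis.

```python
# Minimal sketch of the heavy-ball iteration
#   x_{k+1} = x_k - alpha * grad_f(x_k) + beta * (x_k - x_{k-1}),
# together with the Cesaro average of the iterates, on a toy quadratic.
import numpy as np

def heavy_ball(grad_f, x0, alpha, beta, iters=500):
    x_prev, x = x0.copy(), x0.copy()
    cesaro = np.zeros_like(x0)
    for k in range(1, iters + 1):
        x_next = x - alpha * grad_f(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
        cesaro += (x - cesaro) / k      # running Cesaro average of the iterates
    return x, cesaro

if __name__ == "__main__":
    A = np.diag([1.0, 10.0])            # gradient Lipschitz constant L = 10
    grad_f = lambda x: A @ x            # gradient of f(x) = 0.5 * x^T A x
    x_last, x_avg = heavy_ball(grad_f, np.array([5.0, -3.0]), alpha=0.1, beta=0.5)
    print(x_last, x_avg)                # both approach the minimizer at the origin
```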

On an open question about the complexity of a dynamic spectrum management problem

In this paper we discuss the complexity of a dynamic spectrum management problem within a multi-user communication system with $K$ users and $N$ available tones. In this problem a common utility function is optimized. In particular, so-called min-rate, harmonic mean and geometric mean utility functions are considered. The complexity of the optimization problems with … Read more

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that only stochastic information of the gradients of the objective function is available via a stochastic first-order oracle (SFO). Firstly, we propose a general framework of stochastic quasi-Newton methods for solving nonconvex stochastic optimization. The proposed framework extends the classic … Read more
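
The sketch below shows one generic way to combine stochastic gradients from an SFO with a BFGS-style update of an inverse-Hessian approximation; the curvature-skipping rule and the least-squares test problem are illustrative assumptions and not the specific framework proposed in the paper.

```python
# Generic sketch of a stochastic quasi-Newton iteration: stochastic gradients
# from an SFO drive a BFGS-style update of an inverse-Hessian approximation H.
# Practical schemes often reuse the same sample for both gradients in the
# difference; this simplified version does not.
import numpy as np

rng = np.random.default_rng(0)

def sfo(x, A, b, batch=8):
    """Stochastic first-order oracle for the least-squares loss 0.5*||A x - b||^2 / n."""
    idx = rng.integers(0, A.shape[0], size=batch)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

def stochastic_bfgs(A, b, x0, alpha=0.05, iters=500):
    x = x0.copy()
    H = np.eye(x.size)                  # inverse-Hessian approximation
    g = sfo(x, A, b)
    for _ in range(iters):
        x_new = x - alpha * H @ g
        g_new = sfo(x_new, A, b)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-8:                # skip the update if curvature is not positive
            rho = 1.0 / (s @ y)
            V = np.eye(x.size) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    A = rng.standard_normal((200, 5))
    x_true = rng.standard_normal(5)
    b = A @ x_true
    print(stochastic_bfgs(A, b, np.zeros(5)))   # approaches x_true
```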

Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients

The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined … Read more
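
For reference, the gradient of $f$ is Hölder continuous with exponent $\beta \in (0,1]$ and constant $L_\beta$ if
\[
\|\nabla f(x) - \nabla f(y)\| \le L_\beta \|x-y\|^{\beta} \quad \text{for all } x, y,
\]
which reduces to the usual Lipschitz-continuous-gradient assumption when $\beta = 1$.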

A note on sample complexity of multistage stochastic programs

We derive a \emph{lower bound} for the \emph{sample complexity} of the Sample Average Approximation method for a certain class of multistage stochastic optimization problems. In previous works, \emph{upper bounds} for such problems were derived. We show that the dependence of the \emph{lower bound} on the complexity parameters and the problem’s data is comparable … Read more

Conditional Gradient Sliding for Convex Optimization

In this paper, we present a new conditional gradient type method for convex optimization by utilizing a linear optimization (LO) oracle to minimize a series of linear functions over the feasible set. Different from the classic conditional gradient method, the conditional gradient sliding (CGS) algorithm developed herein can skip the computation of gradients from time … Read more
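
For background, the sketch below implements the classic conditional gradient (Frank-Wolfe) step that CGS builds on, in which each iteration calls a linear optimization (LO) oracle over the feasible set instead of computing a projection; the unit simplex, the quadratic objective, and the $2/(k+2)$ step size are illustrative assumptions, and CGS itself additionally skips gradient evaluations between oracle calls.

```python
# Minimal sketch of the classic conditional gradient (Frank-Wolfe) method:
# each step calls a linear optimization (LO) oracle over the feasible set.
import numpy as np

def lo_oracle_simplex(g):
    """argmin over the unit simplex of <g, x>: a vertex of the simplex."""
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

def conditional_gradient(grad_f, x0, iters=200):
    x = x0.copy()
    for k in range(1, iters + 1):
        v = lo_oracle_simplex(grad_f(x))   # LO oracle call (no projection needed)
        gamma = 2.0 / (k + 2)              # standard open-loop step size
        x = (1 - gamma) * x + gamma * v
    return x

if __name__ == "__main__":
    target = np.array([0.2, 0.5, 0.3])
    grad_f = lambda x: x - target          # gradient of 0.5 * ||x - target||^2
    x0 = np.array([1.0, 0.0, 0.0])
    print(conditional_gradient(grad_f, x0))  # approaches target, which lies in the simplex
```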