Iteration Complexity of Fixed-Step Methods by Nesterov and Polyak for Convex Quadratic Functions

This note considers the momentum method of Polyak and the accelerated gradient method of Nesterov, both without line search but with fixed step length, applied to strictly convex quadratic functions, assuming that exact gradients are used and that appropriate upper and lower bounds for the extreme eigenvalues of the Hessian matrix are known. Simple 2-d examples show …
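As a concrete illustration of this setting, the sketch below runs both fixed-step methods on a 2-d strictly convex quadratic, using the textbook parameter choices derived from eigenvalue bounds \( \mu \) and \( L \); the note's own step lengths and examples may differ, and the problem data here are made up.

```python
import numpy as np

# Illustrative 2-d quadratic f(x) = 0.5 * x^T A x with extreme eigenvalues
# mu and L assumed known exactly (an assumption of this sketch).
mu, L = 1.0, 100.0
A = np.diag([mu, L])
grad = lambda x: A @ x

def heavy_ball(x0, steps):
    """Polyak's momentum method with the classical fixed parameters."""
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(steps):
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    return x

def nesterov(x0, steps):
    """Nesterov's accelerated gradient with the fixed momentum for strongly convex f."""
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(steps):
        y = x + beta * (x - x_prev)
        x, x_prev = y - grad(y) / L, x
    return x

x0 = np.array([1.0, 1.0])
print(np.linalg.norm(heavy_ball(x0, 200)), np.linalg.norm(nesterov(x0, 200)))
```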

Robust Two-Stage Optimization with Covariate Data

We consider a generalization of two-stage decision problems in which the second-stage decision may be a function of a predictive signal but cannot adapt fully to the realized uncertainty. We show how such problems can be learned from sample data by considering a family of regularized sample average formulations. Furthermore, our regularized data-driven formulations …
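One hedged way to read the "regularized sample average formulation" (a schematic only; the paper's precise model, notation, and regularizer are not shown in this snippet) is to parameterize the second-stage decision as a function \( y_\theta(\cdot) \) of the signal and regularize over the samples:
\[
\min_{x,\;\theta}\; \frac{1}{N}\sum_{i=1}^{N} \Big[ c_1(x) + c_2\big(x,\, y_\theta(s_i),\, \xi_i\big) \Big] \;+\; \lambda\, \Omega(\theta),
\]
where the \( s_i \) are observed signals, the \( \xi_i \) the realized uncertainties, and \( \Omega \) a regularizer; all symbols here are illustrative placeholders.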

Inertial Krasnoselskii-Mann Iterations

We establish the weak convergence of inertial Krasnoselskii-Mann iterations towards a common fixed point of a family of quasi-nonexpansive operators, along with worst-case estimates for the rate at which the residuals vanish. Strong and linear convergence are obtained in the quasi-contractive setting. In both cases, we highlight the relationship with the non-inertial case, and …
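For reference, the inertial Krasnoselskii-Mann scheme for an operator \( T \) is commonly written as
\[
y_k = x_k + \alpha_k (x_k - x_{k-1}), \qquad x_{k+1} = (1 - \lambda_k)\, y_k + \lambda_k\, T y_k ,
\]
with inertial parameters \( \alpha_k \) and relaxation parameters \( \lambda_k \). This is the standard form only; the paper's exact conditions on \( \alpha_k \) and \( \lambda_k \), and how the family of operators enters, are not visible in this snippet.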

The projective exact penalty method for general constrained optimization

A new projective exact penalty function method is proposed for the equivalent reduction of constrained optimization problems to nonsmooth unconstrained ones. In the method, the original objective function is extended to infeasible points by summing its value at the projection of an infeasible point onto the feasible set with the distance to the projection. The …
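Written out, the construction described above takes a point \( x \), projects it onto the feasible set \( S \), and adds the projection distance as a penalty (a direct transcription of the description; the paper may additionally weight the distance term):
\[
F(x) = f\big(P_S(x)\big) + \operatorname{dist}(x, S), \qquad \operatorname{dist}(x, S) = \|x - P_S(x)\|,
\]
so that \( F \) coincides with \( f \) on \( S \) and is defined at infeasible points as well.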

Optimized convergence of stochastic gradient descent by weighted averaging

Under mild assumptions, stochastic gradient methods asymptotically achieve an optimal rate of convergence if the arithmetic mean of all iterates is returned as an approximate optimal solution. However, in the absence of stochastic noise, the arithmetic mean of all iterates converges considerably more slowly to the optimal solution than the iterates themselves. Also in the …
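The noiseless effect mentioned above is easy to reproduce; the following sketch (an illustration of the phenomenon, not the paper's weighting scheme) compares the last gradient-descent iterate with the arithmetic mean of all iterates on a 1-d quadratic:

```python
import numpy as np

# On the noiseless quadratic f(x) = 0.5 * x^2, gradient descent converges
# linearly, while the arithmetic mean of all iterates decays only like O(1/k).
step, x = 0.5, 1.0
iterates = []
for _ in range(50):
    x -= step * x          # exact gradient step, no stochastic noise
    iterates.append(x)
print("last iterate:", abs(iterates[-1]))
print("running mean:", abs(np.mean(iterates)))
```

The last iterate is driven to machine precision while the running mean stays dominated by the early iterates.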

Convergence rate analysis of the gradient descent-ascent method for convex-concave saddle-point problems

In this paper, we study the gradient descent-ascent method for convex-concave saddle-point problems. We derive a new non-asymptotic global convergence rate in terms of distance to the solution set by using the semidefinite programming performance estimation method. The given convergence rate incorporates most parameters of the problem and is exact for a large class …
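For a smooth convex-concave \( f(x, y) \), the simultaneous gradient descent-ascent iteration analyzed in this line of work is typically
\[
x_{k+1} = x_k - \gamma \nabla_x f(x_k, y_k), \qquad y_{k+1} = y_k + \gamma \nabla_y f(x_k, y_k);
\]
whether the paper uses a single step size \( \gamma \), separate step sizes for the two blocks, or an alternating update is not determined by this snippet.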

AN-SPS: Adaptive Sample Size Nonmonotone Line Search Spectral Projected Subgradient Method for Convex Constrained Optimization Problems

Using Taylor-Approximated Gradients to Improve the Frank-Wolfe Method for Empirical Risk Minimization

The Frank-Wolfe method has become increasingly useful in statistical and machine learning applications, due to the structure-inducing properties of the iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of Empirical Risk Minimization — one of the fundamental optimization problems in statistical …
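For orientation, a minimal vanilla Frank-Wolfe iteration on the probability simplex looks as follows (illustrative only; the paper's idea of replacing exact gradients with Taylor approximations is not reproduced here, and the problem data below are made up):

```python
import numpy as np

# Vanilla Frank-Wolfe for 0.5*||Ax - b||^2 over the probability simplex:
# linear minimization over the simplex just picks the vertex with the most
# negative gradient coordinate, which is cheaper than projection.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
f_grad = lambda x: A.T @ (A @ x - b)

x = np.ones(10) / 10                       # start at the simplex barycenter
for k in range(100):
    g = f_grad(x)
    s = np.zeros(10)
    s[np.argmin(g)] = 1.0                  # vertex minimizing <g, s> over the simplex
    x += 2.0 / (k + 2) * (s - x)           # classical step size 2/(k+2)
print(0.5 * np.linalg.norm(A @ x - b) ** 2)
```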

Finite convergence of the inexact proximal gradient method to sharp minima

Attractive properties of subgradient methods, such as robust stability and linear convergence, have been emphasized when they are used to solve nonsmooth optimization problems with sharp minima [12, 13]. In this letter we extend the robustness results to composite convex models and show that the basic proximal gradient algorithm in the presence of a …
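In the notation typically used for such results, an inexact proximal gradient step on the composite model \( \min_x f(x) + g(x) \) can be written as
\[
x_{k+1} = \operatorname{prox}_{\alpha g}\!\big(x_k - \alpha\,(\nabla f(x_k) + e_k)\big),
\]
where \( e_k \) collects the gradient error; the symbol \( e_k \) and this particular inexactness model are placeholders, since the letter's precise assumptions are not shown in the snippet.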

On Optimal Universal First-Order Methods for Minimizing Heterogeneous Sums

This work considers minimizing a convex sum of functions, each with potentially different structure ranging from nonsmooth to smooth, Lipschitz to non-Lipschitz. Nesterov’s universal fast gradient method provides an optimal black-box first-order method for minimizing a single function that takes advantage of any continuity structure present without requiring prior knowledge. In this paper, we show …
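The "continuity structure" exploited by universal methods is usually quantified by Hölder continuity of the gradient,
\[
\|\nabla f(x) - \nabla f(y)\| \le L_\nu \|x - y\|^{\nu}, \qquad \nu \in [0, 1],
\]
with \( \nu = 1 \) recovering Lipschitz-smooth functions and \( \nu = 0 \) covering nonsmooth Lipschitz functions; whether this paper states its heterogeneity assumptions in exactly this form is not visible from the snippet.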