Parameter-free proximal bundle methods with adaptive stepsizes for hybrid convex composite optimization problems

This paper develops a parameter-free adaptive proximal bundle method with two important features: 1) an adaptive choice of variable prox stepsizes that “closely fits” the instance under consideration; and 2) an adaptive criterion that makes the occurrence of serious steps easier. Computational experiments show that our method performs substantially fewer consecutive null steps (i.e., a shorter cycle) … Read more
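To make the serious/null-step machinery concrete, here is a minimal sketch of a generic proximal bundle loop: a cutting-plane model of a toy nonsmooth convex objective, the prox subproblem solved through its simplex dual, and a placeholder adaptive stepsize rule. The objective, the acceptance test constant, and the stepsize updates are illustrative assumptions, not the paper's parameter-free rules.

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonsmooth convex objective (stand-in for a hybrid composite instance):
def f(x):
    return np.abs(x).sum() + 0.5 * x @ x

def subgrad(x):
    return np.sign(x) + x

def solve_bundle_dual(G, e, lam):
    """Dual of the prox subproblem:
       min over the simplex of  lam/2 * ||G.T @ a||^2 + e @ a."""
    m = len(e)
    obj = lambda a: 0.5 * lam * np.linalg.norm(G.T @ a) ** 2 + e @ a
    cons = [{"type": "eq", "fun": lambda a: a.sum() - 1.0}]
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=cons, method="SLSQP")
    return res.x

def proximal_bundle(x0, lam=1.0, beta=0.1, tol=1e-6, max_iter=200):
    xc = x0.copy()                                  # current stability center
    cuts = [(x0.copy(), f(x0), subgrad(x0))]        # bundle: (point, value, subgradient)
    for _ in range(max_iter):
        G = np.array([g for (_, _, g) in cuts])
        e = np.array([max(f(xc) - fy - g @ (xc - y), 0.0) for (y, fy, g) in cuts])
        a = solve_bundle_dual(G, e, lam)
        agg = G.T @ a                               # aggregate subgradient
        x_new = xc - lam * agg                      # prox step on the cutting-plane model
        delta = lam * (agg @ agg) + e @ a           # standard predicted decrease (illustrative)
        if delta <= tol:
            break
        if f(x_new) <= f(xc) - beta * delta:        # serious step: move the center
            xc = x_new
            lam *= 1.5                              # placeholder adaptive rule: be bolder
        else:                                       # null step: keep center, enrich bundle
            lam *= 0.7                              # placeholder adaptive rule: be safer
        cuts.append((x_new, f(x_new), subgrad(x_new)))
        cuts = cuts[-10:]                           # crude bundle compression
    return xc

print(proximal_bundle(np.array([3.0, -2.0])))
```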

Adaptive Algorithms for Robust Phase Retrieval

This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem. We propose two first-order algorithms with adaptive step sizes: the subgradient algorithm (AdaSubGrad) and the inexact proximal linear algorithm (AdaIPL). Our contribution lies in the novel design of adaptive step sizes based on quantiles of the absolute … Read more
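For orientation, a minimal sketch of the underlying problem: the robust (least-absolute-deviation) phase retrieval objective attacked with a plain normalized subgradient method and a geometrically decaying stepsize. The data generation, warm start, and decay factor are assumptions for illustration; the paper's quantile-based adaptive stepsizes (AdaSubGrad/AdaIPL) are not reproduced here.

```python
import numpy as np

# Robust phase retrieval:  min_x (1/m) * sum_i | (a_i^T x)^2 - b_i |
rng = np.random.default_rng(0)
n, m = 20, 200
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = (A @ x_true) ** 2
b[rng.choice(m, m // 10, replace=False)] = rng.exponential(5.0, m // 10)  # gross outliers

def loss(x):
    return np.abs((A @ x) ** 2 - b).mean()

def subgradient(x):
    r = (A @ x) ** 2 - b
    return (2.0 / m) * (np.sign(r) * (A @ x)) @ A

x = x_true + 0.3 * rng.standard_normal(n)     # warm start near the signal
step = 0.1
for _ in range(500):
    g = subgradient(x)
    if np.linalg.norm(g) > 0:
        x -= step * g / np.linalg.norm(g)     # normalized subgradient step
    step *= 0.995                              # placeholder decay, not the paper's rule

print("final robust loss:", loss(x))
```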

A Two Stepsize SQP Method for Nonlinear Equality Constrained Stochastic Optimization

We develop a Sequential Quadratic Optimization (SQP) algorithm for minimizing a stochastic objective function subject to deterministic equality constraints. The method utilizes two different stepsizes: one that exclusively scales the component of the step corrupted by the variance of the stochastic gradient estimates, and a second that scales the entire step. We prove that this … Read more
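A hedged sketch of the kind of step decomposition such a method works with: for min f(x) subject to c(x) = 0, the SQP direction (with an identity Hessian model) splits into a normal part v driven only by the deterministic constraints and a tangential part u driven by the noisy gradient estimate. The two-stepsize update below, where alpha scales the whole step and beta additionally scales the noise-corrupted tangential part, is only an illustration of the idea on a toy problem, not the paper's actual update rule or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_grad(x, noise=0.1):              # stochastic gradient of f(x) = 0.5*||x||^2
    return x + noise * rng.standard_normal(x.shape)

def c(x):                              # single equality constraint: sum(x) = 1
    return np.array([x.sum() - 1.0])

def c_jac(x):
    return np.ones((1, x.size))

x = rng.standard_normal(5)
alpha, beta = 0.5, 0.3                 # illustrative fixed stepsizes
for _ in range(200):
    g, J, ck = f_grad(x), c_jac(x), c(x)
    JJt_inv = np.linalg.inv(J @ J.T)
    v = -J.T @ (JJt_inv @ ck)                   # normal (deterministic) component
    u = -(g - J.T @ (JJt_inv @ (J @ g)))        # tangential (stochastic) component
    x = x + alpha * (v + beta * u)              # two-stepsize update

print("constraint violation:", abs(c(x))[0], " x:", np.round(x, 3))
```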

Stochastic Gauss-Newton Algorithms for Online PCA

In this paper, we propose a stochastic Gauss-Newton (SGN) algorithm for the online principal component analysis (OPCA) problem, which is formulated using the symmetric low-rank product (SLRP) model for dominant eigenspace calculation. Compared with existing OPCA solvers, SGN is more robust with respect to varying input data and algorithm parameters. In … Read more
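A minimal sketch of the SLRP formulation behind the method: the dominant k-dimensional eigenspace of a covariance matrix A = E[a aᵀ] is sought via min over X in R^{n×k} of (1/4)‖X Xᵀ − A‖_F², with A replaced by streaming rank-one samples. The update below is a plain online stochastic gradient step on this model with an assumed stepsize, not the paper's stochastic Gauss-Newton iteration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 3
# ground-truth covariance with a dominant k-dimensional eigenspace
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigvals = np.concatenate([np.full(k, 5.0), np.ones(n - k)])
L = U * np.sqrt(eigvals)                       # Cov(a) = L @ L.T = U diag(eigvals) U^T

X = 0.1 * rng.standard_normal((n, k))
eta = 0.01
for _ in range(10000):
    a = L @ rng.standard_normal(n)             # streaming sample
    grad = X @ (X.T @ X) - np.outer(a, a.T @ X)   # stochastic gradient of the SLRP loss
    X -= eta * grad

# check subspace recovery against the true dominant eigenspace
Q, _ = np.linalg.qr(X)
print("alignment with true top-k eigenspace:",
      np.linalg.norm(Q.T @ U[:, :k]))          # close to sqrt(k) when aligned
```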

Non-stationary Douglas-Rachford and alternating direction method of multipliers: adaptive stepsizes and convergence

We revisit the classical Douglas-Rachford (DR) method for finding a zero of the sum of two maximal monotone operators. Since the practical performance of the DR method crucially depends on the stepsizes, we aim to develop an adaptive stepsize rule. To that end, we take a closer look at a linear case of the problem … Read more
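For reference, a minimal sketch of the classical DR iteration with a fixed stepsize gamma, on the toy splitting min_x lam·‖x‖₁ + 0.5·‖x − b‖². The loop over several gamma values only illustrates how strongly the baseline iteration's speed depends on the stepsize; the adaptive rule developed in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
b = rng.standard_normal(50)
lam = 0.3

def prox_f(v, gamma):            # prox of gamma * lam * ||.||_1  (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

def prox_g(v, gamma):            # prox of gamma * 0.5 * ||. - b||^2
    return (v + gamma * b) / (1.0 + gamma)

def douglas_rachford(gamma, iters=200):
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_f(z, gamma)
        y = prox_g(2 * x - z, gamma)
        z = z + y - x                      # DR update of the governing sequence
    return prox_f(z, gamma)                # x is the solution estimate

for gamma in (0.1, 1.0, 10.0):             # performance depends strongly on gamma
    x = douglas_rachford(gamma)
    obj = lam * np.abs(x).sum() + 0.5 * ((x - b) ** 2).sum()
    print(f"gamma={gamma:>5}: objective {obj:.6f}")
```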