Faster stochastic cubic regularized Newton methods with momentum

Cubic regularized Newton (CRN) methods have attracted significant research interest because they offer stronger solution guarantees and lower iteration complexity than first-order methods. With the rise of big data, there is growing interest in developing stochastic cubic regularized Newton (SCRN) methods that do not require exact gradient and Hessian evaluations. In this paper, we propose faster SCRN …
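
For context (standard background, not from this excerpt): the classical CRN step of Nesterov and Polyak minimizes a cubic model of the objective, and stochastic variants replace the exact gradient and Hessian with sampled estimates \(g_k \approx \nabla f(x_k)\) and \(H_k \approx \nabla^2 f(x_k)\):
\[
x_{k+1} = x_k + \arg\min_{s}\Big\{ \langle g_k, s\rangle + \tfrac{1}{2}\langle H_k s, s\rangle + \tfrac{M}{6}\|s\|^3 \Big\},
\]
where \(M > 0\) is the cubic regularization parameter. How the momentum term modifies this update is part of the full paper and is not shown here.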

On the Acceleration of Proximal Bundle Methods

The proximal bundle method (PBM) is a fundamental and computationally effective algorithm for solving nonsmooth optimization problems. In this paper, we present the first variant of the PBM for smooth objectives that achieves an accelerated iteration complexity of \(O\big(\frac{1}{\sqrt{\epsilon}}\log\frac{1}{\epsilon}\big)\), where \(\epsilon\) is the desired accuracy. Our approach addresses an open question regarding the convergence guarantee of …
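
For reference, a standard PBM iteration (prior to any acceleration) minimizes a cutting-plane model \(\hat f_k\) of \(f\) plus a proximal term centered at the current stability center \(z_k\):
\[
x_{k+1} = \arg\min_{x}\Big\{ \hat f_k(x) + \tfrac{1}{2t_k}\|x - z_k\|^2 \Big\}, \qquad \hat f_k(x) = \max_{j \in J_k}\big\{ f(x_j) + \langle g_j, x - x_j\rangle \big\},
\]
where \(g_j \in \partial f(x_j)\); a serious step moves \(z_k\) to \(x_{k+1}\) when sufficient decrease is achieved, and a null step otherwise refines the model. The accelerated variant announced in the abstract necessarily departs from this template in ways the excerpt does not specify.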

The Proximal Bundle Algorithm Under a Frank-Wolfe Perspective: An Improved Complexity Analysis

The proximal bundle algorithm (PBA) is a fundamental and computationally effective algorithm for solving optimization problems with nonsmooth components. We investigate its convergence rate, focusing on composite settings where one function is smooth and the other is piecewise linear. We interpret a sequence of null steps of the PBA as a Frank-Wolfe algorithm on the …
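
To make the Frank-Wolfe connection concrete: the dual of the proximal bundle subproblem is a quadratic program over the simplex \(\Delta\) of cut multipliers, and a generic Frank-Wolfe iteration on \(\min_{\lambda \in \Delta}\varphi(\lambda)\) reads
\[
s_k = \arg\min_{s \in \Delta}\,\langle \nabla \varphi(\lambda_k), s\rangle, \qquad \lambda_{k+1} = (1 - \gamma_k)\,\lambda_k + \gamma_k s_k,
\]
with step sizes \(\gamma_k \in [0,1]\). The precise correspondence between null steps and these iterations is developed in the full paper.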

Relaxed Proximal Point Algorithm: Tight Complexity Bounds and Acceleration without Momentum

In this paper, we focus on the relaxed proximal point algorithm (RPPA) for solving convex (possibly nonsmooth) optimization problems. We conduct a comprehensive study of three types of relaxation schedules: (i) the constant schedule with relaxation parameter \(\alpha_k\equiv \alpha \in (0, \sqrt{2}]\), (ii) the dynamic schedule put forward by Teboulle and Vaisbourd [TV23], and (iii) the …
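
For readers unfamiliar with the scheme: given a step size \(\lambda > 0\), the RPPA interpolates between the current iterate and its proximal point,
\[
x_{k+1} = (1 - \alpha_k)\,x_k + \alpha_k \operatorname{prox}_{\lambda f}(x_k), \qquad \operatorname{prox}_{\lambda f}(x) = \arg\min_{u}\Big\{ f(u) + \tfrac{1}{2\lambda}\|u - x\|^2 \Big\},
\]
so \(\alpha_k \equiv 1\) recovers the classical proximal point algorithm and \(\alpha_k > 1\) over-relaxes the step.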

Universal subgradient and proximal bundle methods for convex and strongly convex hybrid composite optimization

This paper develops two parameter-free methods for solving convex and strongly convex hybrid composite optimization problems, namely, a composite subgradient-type method and a proximal bundle-type method. Both functional and stationary complexity bounds for the two methods are established in terms of the unknown strong convexity parameter. To the best of our knowledge, the …
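
As a point of reference (a generic template, not necessarily the paper's exact rule): for a hybrid composite objective \(f + h\) with \(f\) convex and \(h\) simple, a prototypical composite subgradient step takes
\[
x_{k+1} = \arg\min_{x}\Big\{ \langle s_k, x\rangle + h(x) + \tfrac{1}{2\lambda_k}\|x - x_k\|^2 \Big\}, \qquad s_k \in \partial f(x_k),
\]
with step sizes \(\lambda_k\) that a parameter-free method must choose without knowing the strong convexity parameter.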

An Adaptive Proximal ADMM for Nonconvex Linearly Constrained Composite Programs

This paper develops an adaptive proximal alternating direction method of multipliers (ADMM) for solving linearly constrained, composite optimization problems under the assumption that the smooth component of the objective is weakly convex, while the nonsmooth component is convex and block-separable. The proposed method is adaptive to all problem parameters, including smoothness and weak convexity constants, …
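
For orientation, a generic proximal ADMM for \(\min\{f(x) + g(z) : Ax + Bz = b\}\) alternates
\[
\begin{aligned}
x_{k+1} &= \arg\min_{x}\Big\{\mathcal{L}_\beta(x, z_k, p_k) + \tfrac{1}{2}\|x - x_k\|_Q^2\Big\},\\
z_{k+1} &= \arg\min_{z}\Big\{\mathcal{L}_\beta(x_{k+1}, z, p_k) + \tfrac{1}{2}\|z - z_k\|_P^2\Big\},\\
p_{k+1} &= p_k + \beta\,(A x_{k+1} + B z_{k+1} - b),
\end{aligned}
\]
where \(\mathcal{L}_\beta(x, z, p) = f(x) + g(z) + \langle p, Ax + Bz - b\rangle + \tfrac{\beta}{2}\|Ax + Bz - b\|^2\) and \(Q, P \succeq 0\) are proximal weight matrices. The adaptive rules for \(\beta\) and the proximal terms are the paper's contribution and are not reproduced in this excerpt.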

Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method

This paper proposes a stochastic proximal point method to solve a stochastic convex composite optimization problem. High-probability results in stochastic optimization typically hinge on restrictive assumptions on the stochastic gradient noise, for example, sub-Gaussian distributions. Assuming only weak conditions such as bounded variance of the stochastic gradient, this paper establishes a low sample complexity …
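
To illustrate the basic step (on a toy instance, not the paper's general setting): the stochastic proximal point update solves
\[
x_{k+1} = \arg\min_{x}\Big\{ f(x; \xi_k) + \tfrac{1}{2\gamma}\|x - x_k\|^2 \Big\},
\]
and for a sampled least-squares term \(f(x;\xi) = \tfrac{1}{2}(a^\top x - b)^2\) this admits the closed form
\[
x_{k+1} = x_k - \frac{\gamma\,(a^\top x_k - b)}{1 + \gamma\|a\|^2}\,a,
\]
which, unlike an explicit stochastic gradient step, does not diverge for large \(\gamma\).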

A low-rank augmented Lagrangian method for large-scale semidefinite programming based on a hybrid convex-nonconvex approach

This paper introduces HALLaR, a new first-order method for solving large-scale semidefinite programs (SDPs) with bounded domain. HALLaR is an inexact augmented Lagrangian (AL) method where the AL subproblems are solved by a novel hybrid low-rank (HLR) method. The recipe behind HLR is based on two key ingredients: 1) an adaptive inexact proximal point method …
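
For context, an AL method for an SDP \(\min\{\langle C, X\rangle : \mathcal{A}(X) = b,\ X \in \Delta\}\) over a bounded spectrahedral set \(\Delta\) works with
\[
\mathcal{L}_\beta(X, p) = \langle C, X\rangle + \langle p, \mathcal{A}(X) - b\rangle + \tfrac{\beta}{2}\|\mathcal{A}(X) - b\|^2,
\]
inexactly minimizing \(\mathcal{L}_\beta(\cdot, p_k)\) over \(\Delta\) and then setting \(p_{k+1} = p_k + \beta(\mathcal{A}(X_{k+1}) - b)\). The low-rank aspect comes from searching over factorizations \(X = YY^\top\) with few columns in \(Y\), in the spirit of Burer-Monteiro; the specifics of the HLR solver are in the full paper.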

A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization

In this paper we consider finding an approximate second-order stationary point (SOSP) of a general nonconvex conic optimization problem, namely, the minimization of a twice differentiable function subject to nonlinear equality constraints and a convex conic constraint. In particular, we propose a Newton-conjugate gradient (Newton-CG) based barrier-augmented Lagrangian method for finding an approximate SOSP of this problem. Under …
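
To fix terminology (unconstrained case only; the conic-constrained definition used in the paper is more involved): an \((\epsilon_g, \epsilon_H)\)-approximate SOSP of \(\min_x f(x)\) is a point \(x\) satisfying
\[
\|\nabla f(x)\| \le \epsilon_g, \qquad \lambda_{\min}\big(\nabla^2 f(x)\big) \ge -\epsilon_H.
\]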

A Newton-CG based augmented Lagrangian method for finding a second-order stationary point of nonconvex equality constrained optimization with complexity guarantees

In this paper we consider finding a second-order stationary point (SOSP) of nonconvex equality constrained optimization when a nearly feasible point is known. In particular, we first propose a new Newton-CG method for finding an approximate SOSP of unconstrained optimization and show that it enjoys substantially better complexity than the Newton-CG method of [56]. We …
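
As background, AL methods for \(\min\{f(x) : c(x) = 0\}\) approximately minimize the augmented Lagrangian
\[
\mathcal{L}_\rho(x, \lambda) = f(x) + \lambda^\top c(x) + \tfrac{\rho}{2}\|c(x)\|^2
\]
for a growing penalty \(\rho\), updating \(\lambda \leftarrow \lambda + \rho\,c(x)\) after each inner solve; in the scheme sketched by this abstract, the inner minimization is handled by the proposed Newton-CG method, which is what supplies the second-order guarantees.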