Extended-Krylov-subspace methods for trust-region and norm-regularization subproblems

We present an effective new method for solving trust-region and norm-regularization problems that arise as subproblems in many optimization applications. We show that, as a function of the controlling parameter (trust-region radius or regularization weight), the solutions to such subproblems lie on a manifold of approximately very low rank. Based on this, we build a … Read more
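The listing truncates the abstract, but the core mechanics of an extended-Krylov-subspace approach can be illustrated: build a basis from applications of both $H$ and $H^{-1}$ to the gradient, project the trust-region subproblem onto it, and solve the small projected problem exactly. The sketch below is our own toy illustration under those assumptions, not the paper's algorithm; `extended_krylov_tr`, the dense linear algebra, and the bisection-based subproblem solve are simplifications, and the hard case is ignored.

```python
import numpy as np

def extended_krylov_tr(H, g, delta, m=4):
    """Toy sketch: solve min_p 0.5 p'Hp + g'p s.t. ||p|| <= delta by projecting
    onto the extended Krylov subspace span{g, H^{-1}g, Hg, H^{-2}g, ...}."""
    vecs, u_pos, u_neg = [g], g, np.linalg.solve(H, g)
    vecs.append(u_neg)
    for _ in range(m - 1):
        u_pos = H @ u_pos
        u_neg = np.linalg.solve(H, u_neg)
        vecs += [u_pos, u_neg]
    V = []
    for w in vecs:                                 # modified Gram-Schmidt
        w = w.astype(float)
        for v in V:
            w -= (v @ w) * v
        if np.linalg.norm(w) > 1e-10:
            V.append(w / np.linalg.norm(w))
    V = np.column_stack(V)
    T, c = V.T @ H @ V, V.T @ g                    # small projected subproblem
    w_eig, Q = np.linalg.eigh(T)
    ct = Q.T @ c
    ynorm = lambda lam: np.linalg.norm(ct / (w_eig + lam))
    if w_eig[0] > 0 and ynorm(0.0) <= delta:
        return V @ (-Q @ (ct / w_eig))             # interior Newton step
    lo = max(0.0, -w_eig[0]) + 1e-12               # bisect the secular equation
    hi = lo + np.linalg.norm(c) / delta            # ||y(lam)|| = delta
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if ynorm(lam) > delta else (lo, lam)
    return V @ (-Q @ (ct / (w_eig + hi)))
```

In practice one would factor $H$ once and reuse the factorization for every $H^{-1}$ application; `np.linalg.solve` appears here only to keep the sketch short.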

Lyapunov-based Analysis on First Order Method for Composite Strong-Weak Convex Functions

Nesterov’s accelerated gradient (NAG) method improves on the classical gradient descent algorithm by raising the convergence rate from $\mathcal{O}\left(\frac{1}{t}\right)$ to $\mathcal{O}\left(\frac{1}{t^2}\right)$ in convex optimization. This study examines the proximal gradient framework for additively separable composite functions with smooth and non-smooth components. We demonstrate that Nesterov’s accelerated proximal gradient (NAPG$_\alpha$) method attains a convergence rate of … Read more
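For reference, here is a minimal accelerated proximal gradient loop for a composite objective $f + h$, in the standard FISTA-style form; it is not necessarily the paper's NAPG$_\alpha$ (whose $\alpha$-parameterized momentum the truncated abstract does not specify), and the `napg` name and the LASSO example are our own illustrative choices.

```python
import numpy as np

def napg(grad_f, prox_h, x0, L, steps=500):
    """Accelerated proximal gradient (FISTA-style): grad_f is the gradient of
    the smooth part, prox_h(v, t) the prox of t*h, L the Lipschitz constant."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        x_new = prox_h(y - grad_f(y) / L, 1.0 / L)       # forward-backward step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x

# Example: LASSO, f(x) = 0.5*||Ax - b||^2, h(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(40, 100)), rng.normal(size=40), 0.1
L = np.linalg.norm(A, 2) ** 2                            # Lipschitz const of grad f
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
x = napg(lambda x: A.T @ (A @ x - b), soft, np.zeros(100), L)
```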

Optimization in Theory and Practice

Algorithms for continuous optimization problems have a rich history of design and innovation over the past several decades, in which mathematical analysis of their convergence and complexity properties plays a central role. Beyond their theoretical properties, optimization algorithms are also of interest as practical computational tools for solving real-world problems. There are often … Read more

Continuous-time Analysis of a Stochastic ADMM Method for Nonconvex Composite Optimization

In this paper, we focus on nonconvex composite optimization, where the objective is the sum of a smooth but possibly nonconvex function and the composition of a weakly convex function with a linear operator. By leveraging a smoothing technique based on the Moreau envelope, we propose a stochastic proximal linearized ADMM algorithm (SPLA). To understand its … Read more
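The Moreau-envelope smoothing that SPLA leverages replaces the weakly convex term $h$ by $h_\mu(z) = \min_y h(y) + \frac{1}{2\mu}\|z - y\|^2$, which is differentiable with $\nabla h_\mu(z) = (z - \mathrm{prox}_{\mu h}(z))/\mu$. Below is a minimal illustration of that building block only, not the paper's algorithm, using the simple choice $h = |\cdot|$, whose prox is soft-thresholding.

```python
import numpy as np

def moreau_grad(prox_h, mu):
    """Given z -> prox_{mu*h}(z), return the gradient of the Moreau envelope
    h_mu, i.e. grad h_mu(z) = (z - prox_{mu h}(z)) / mu."""
    return lambda z: (z - prox_h(z, mu)) / mu

# Example with h = |.|: prox_{t*h} is soft-thresholding at level t.
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
g = moreau_grad(soft, mu=0.1)
print(g(np.array([2.0, 0.05, -1.0])))   # ~[1.0, 0.5, -1.0]: smooth near 0
```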

Active-set Newton-MR methods for nonconvex optimization problems with bound constraints

This paper presents active-set methods for minimizing nonconvex twice continuously differentiable functions subject to bound constraints. Within the faces of the feasible set, we employ descent methods with Armijo line search, using approximate Newton directions obtained through the Minimum Residual (MINRES) method. To escape the faces, we investigate the use of the Spectral Projected Gradient (SPG) … Read more
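A minimal sketch of the within-face step under simple assumptions: variables sitting at a bound with the gradient pushing outward are frozen, and MINRES is applied to the remaining free block. The function names, the dense-array setup, and the projected Armijo backtracking are our own simplifications; the paper's SPG escape phase is not shown.

```python
import numpy as np
from scipy.sparse.linalg import minres

def face_newton_direction(grad, hess, x, lo, hi):
    """Freeze variables at active bounds; solve H_ff d_f = -g_f with MINRES."""
    g = grad(x)
    active = ((x <= lo) & (g > 0)) | ((x >= hi) & (g < 0))
    free = ~active
    d = np.zeros_like(x)
    if free.any():
        d[free], _ = minres(hess(x)[np.ix_(free, free)], -g[free])
    return d

def armijo_projected(f, grad, x, d, lo, hi, c=1e-4, beta=0.5, tries=50):
    """Backtracking Armijo search along d, projecting iterates onto the box."""
    g, alpha = grad(x), 1.0
    for _ in range(tries):
        x_new = np.clip(x + alpha * d, lo, hi)
        if f(x_new) <= f(x) + c * (g @ (x_new - x)):
            return x_new
        alpha *= beta
    return x
```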

A linesearch-based derivative-free method for noisy black-box problems

In this work we consider unconstrained optimization problems in which the objective function is known only through a zeroth-order stochastic oracle that returns an estimate of the true objective value. To solve these problems, we propose a derivative-free algorithm based on extrapolation techniques. Under reasonable assumptions we are able to prove convergence properties for the proposed algorithm. … Read more
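Extrapolation-based derivative-free linesearch methods typically probe coordinate directions, accept a step achieving sufficient decrease, and then keep doubling the step while the decrease persists. Here is a deterministic skeleton of that scheme, our own sketch; the paper's handling of the stochastic oracle (e.g., averaging repeated noisy evaluations) is not shown.

```python
import numpy as np

def df_linesearch(F, x0, alpha0=1.0, gamma=1e-6, theta=0.5, iters=100):
    """Skeleton of a linesearch-based derivative-free method: probe +/- e_i,
    extrapolate on success, shrink the step on failure."""
    x = np.asarray(x0, float)
    alpha = np.full(x.size, alpha0)
    for _ in range(iters):
        for i in range(x.size):
            d, moved = np.zeros(x.size), False
            for s in (1.0, -1.0):
                d[i], a, fx = s, alpha[i], F(x)
                if F(x + a * d) <= fx - gamma * a * a:   # sufficient decrease
                    while a < 1e8 and F(x + 2 * a * d) <= fx - gamma * (2 * a) ** 2:
                        a *= 2.0                          # extrapolation
                    x, alpha[i], moved = x + a * d, a, True
                    break
            if not moved:
                alpha[i] *= theta                         # both directions failed
    return x
```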

Faster stochastic cubic regularized Newton methods with momentum

Cubic regularized Newton (CRN) methods have attracted significant research interest because they offer stronger solution guarantees (convergence to approximate second-order stationary points) and lower iteration complexity than first-order methods. With the rise of the big-data era, there is growing interest in developing stochastic cubic regularized Newton (SCRN) methods that do not require exact gradient and Hessian evaluations. In this paper, we propose faster SCRN … Read more
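For context, one CRN step minimizes the cubic model $m(s) = g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{\sigma}{3}\|s\|^3$, whose global minimizer satisfies $(H + \sigma \|s\| I)s = -g$. The dense-algebra sketch below is our own; in the stochastic setting, $g$ and $H$ would be mini-batch estimates, with the paper's momentum applied to the gradient estimate, which we only indicate in a comment.

```python
import numpy as np

def cubic_step(g, H, sigma):
    """Global minimizer of m(s) = g's + 0.5 s'Hs + (sigma/3)||s||^3 via the
    scalar secular equation r = ||(H + sigma*r*I)^{-1} g|| (hard case ignored)."""
    w, Q = np.linalg.eigh(H)
    gt = Q.T @ g
    snorm = lambda r: np.linalg.norm(gt / (w + sigma * r))
    lo = max(0.0, -w[0] / sigma) + 1e-12
    hi = lo + np.sqrt(np.linalg.norm(g) / sigma)
    for _ in range(100):                 # bisection: snorm(r) decreases in r
        r = 0.5 * (lo + hi)
        lo, hi = (r, hi) if snorm(r) > r else (lo, r)
    r = 0.5 * (lo + hi)
    return -Q @ (gt / (w + sigma * r))

# A stochastic variant would replace g, H by mini-batch estimates and apply
# momentum to the gradient estimate, e.g. d_k = (1-beta)*d_{k-1} + beta*g_k,
# then take x_{k+1} = x_k + cubic_step(d_k, H_k, sigma).
```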

Sub-sampled Trust-Region Methods with Deterministic Worst-Case Complexity Guarantees

In this paper, we develop and analyze sub-sampled trust-region methods for solving finite-sum optimization problems. These methods employ subsampling strategies to approximate the gradient and Hessian of the objective function, significantly reducing the overall computational cost. We propose a novel adaptive procedure for deterministically adjusting the sample size used for gradient (or gradient and Hessian) … Read more
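The sub-sampled estimators themselves are simple: draw index sets and average the corresponding per-sample gradients and Hessians. A minimal sketch, assuming a finite sum $f(x) = \frac{1}{n}\sum_i f_i(x)$; the paper's deterministic adaptive rule for choosing the sample sizes is its contribution and is not reproduced here.

```python
import numpy as np

def subsampled_grad_hess(grad_i, hess_i, x, n, sg, sh, rng):
    """Mini-batch gradient/Hessian estimates for f(x) = (1/n) sum_i f_i(x).
    grad_i(i, x) and hess_i(i, x) evaluate one term; sg, sh are sample sizes."""
    Ig = rng.choice(n, size=sg, replace=False)
    Ih = rng.choice(n, size=sh, replace=False)
    g = sum(grad_i(i, x) for i in Ig) / sg
    H = sum(hess_i(i, x) for i in Ih) / sh
    return g, H   # fed to a standard trust-region step and acceptance ratio test
```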

Complexity of normalized stochastic first-order methods with momentum under heavy-tailed noise

In this paper, we propose practical normalized stochastic first-order methods with Polyak momentum, multi-extrapolated momentum, and recursive momentum for solving unconstrained optimization problems. These methods employ dynamically updated algorithmic parameters and do not require explicit knowledge of problem-dependent quantities such as the Lipschitz constant or noise bound. We establish first-order oracle complexity results for finding … Read more
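For a concrete picture, here is a minimal normalized stochastic gradient loop with Polyak-style momentum: the update direction is rescaled to unit norm, which is what keeps the step length insensitive to heavy-tailed gradient noise. The fixed `eta` and `beta` are our simplification; the paper's methods update these parameters dynamically, and its multi-extrapolated and recursive momentum variants are not shown.

```python
import numpy as np

def normalized_sgd_polyak(sgrad, x0, eta=0.1, beta=0.9, steps=1000):
    """Normalized SGD with Polyak (heavy-ball style) momentum: every update
    has norm eta regardless of the magnitude of the stochastic gradient."""
    x = np.asarray(x0, float)
    m = np.zeros_like(x)
    for _ in range(steps):
        m = beta * m + (1.0 - beta) * sgrad(x)          # momentum average
        x = x - eta * m / (np.linalg.norm(m) + 1e-12)   # normalized step
    return x
```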