Newtonian Methods with Wolfe Linesearch in Nonsmooth Optimization and Machine Learning

This paper introduces and develops coderivative-based Newton methods with Wolfe linesearch conditions to solve various classes of problems in nonsmooth optimization and machine learning. We first propose a generalized regularized Newton method with Wolfe linesearch (GRNM-W) for unconstrained $C^{1,1}$ minimization problems (which are second-order nonsmooth) and establish global as well as local superlinear convergence of …
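As background on the linesearch ingredient, the (weak) Wolfe conditions require a step size that gives sufficient decrease (Armijo) and sufficient curvature along the search direction. The sketch below is a standard bracketing/bisection search for such a step in Python/NumPy; it illustrates only the generic Wolfe conditions, not the GRNM-W method of the paper, and the parameter defaults `c1=1e-4`, `c2=0.9` are conventional choices, not taken from the abstract.

```python
import numpy as np

def wolfe_linesearch(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Bracketing/bisection search for a step satisfying the weak Wolfe conditions:
       f(x + a d) <= f(x) + c1 * a * g'd   (sufficient decrease / Armijo)
       g(x + a d)'d >= c2 * g'd            (curvature condition)."""
    fx = f(x)
    slope = grad(x) @ d          # directional derivative; negative for a descent direction
    assert slope < 0, "d must be a descent direction"
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * slope:
            hi = alpha           # Armijo fails: step too long, shrink
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) @ d < c2 * slope:
            lo = alpha           # curvature fails: step too short, grow
            alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
        else:
            return alpha         # both Wolfe conditions hold
    return alpha
```

For example, minimizing $f(x)=\|x\|^2$ from $x=1$ along $d=-1$, the unit step already satisfies both conditions and is returned unchanged.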

A novel approach for bilevel programs based on Wolfe duality

This paper considers bilevel programs, which have many applications in practice. To develop effective numerical algorithms, it is generally necessary to transform the bilevel program into a single-level optimization problem. The most popular approach is to replace the lower-level program by its KKT conditions, after which the bilevel program can be transformed into a …
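To make the KKT reformulation concrete, consider a bilevel program $\min_x F(x,y)$ subject to $y \in S(x)$, where $S(x) = \operatorname{argmin}_y \{\, f(x,y) : g(x,y) \le 0 \,\}$ is the lower-level solution set. Assuming the lower-level problem is convex in $y$ and a constraint qualification holds, replacing $y \in S(x)$ by the lower-level KKT system yields the single-level problem (a sketch of the standard reformulation, not the paper's Wolfe-duality-based one):

$$
\min_{x,\,y,\,\lambda} \; F(x,y)
\quad \text{s.t.} \quad
\nabla_y f(x,y) + \sum_i \lambda_i \nabla_y g_i(x,y) = 0,
\qquad
0 \le \lambda \perp -g(x,y) \ge 0,
$$

where $\perp$ denotes componentwise complementarity. The complementarity constraints make this a mathematical program with complementarity constraints (MPCC), which is the source of the numerical difficulties the single-level transformation must contend with.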

Discrete Approximation Scheme in Distributionally Robust Optimization

Discrete approximation, which has been the prevailing scheme in stochastic programming over the past decade, has recently been extended to distributionally robust optimization (DRO). In this paper we conduct a rigorous quantitative stability analysis of discrete approximation schemes for DRO, which measures the approximation error in terms of the discretization sample size. For the ambiguity set defined through …

Variational analysis perspective on linear convergence of some first-order methods for nonsmooth convex optimization problems

We study linear convergence of some first-order methods, such as the proximal gradient method (PGM), the proximal alternating linearized minimization (PALM) algorithm, and the randomized block coordinate proximal gradient method (R-BCPGM), for minimizing the sum of a smooth convex function and a nonsmooth convex function from a variational analysis perspective. We introduce a new analytic …
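As background on the PGM mentioned above: for $\min_x f(x) + g(x)$ with $f$ smooth and $g$ nonsmooth convex, each iteration takes a gradient step on $f$ followed by the proximal operator of $g$. A minimal sketch for the lasso instance ($f = \tfrac12\|Ax-b\|^2$, $g = \lambda\|x\|_1$, whose prox is soft-thresholding) is below; this is only an illustrative special case, not the general setting or the analysis of the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1, applied componentwise
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    """PGM (ISTA) for the lasso: min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2     # step 1/L, L = Lipschitz const of grad f
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)            # gradient step on the smooth part f
        x = soft_threshold(x - t * grad, t * lam)  # prox step on the nonsmooth part g
    return x
```

With $A = I$ the lasso solution is known in closed form, $x^* = \operatorname{soft}(b, \lambda)$, which gives a quick sanity check of the iteration.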