Using Taylor-Approximated Gradients to Improve the Frank-Wolfe Method for Empirical Risk Minimization

The Frank-Wolfe method has become increasingly useful in statistical and machine learning applications, due to the structure-inducing properties of its iterates, and especially in settings where linear minimization over the feasible set is more computationally efficient than projection. In the setting of Empirical Risk Minimization — one of the fundamental optimization problems in statistical … Read more
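Since the abstract is truncated, here is a minimal sketch of the classical Frank-Wolfe iteration (not the paper's Taylor-approximated variant) on the probability simplex, where the linear minimization oracle reduces to picking the vertex with the smallest gradient coordinate; the quadratic test objective and all names are illustrative.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Classical Frank-Wolfe on the probability simplex.

    Each iteration solves the linear minimization oracle
    min_{s in simplex} <grad f(x), s>, whose solution is a single
    vertex: the coordinate with the smallest partial derivative.
    """
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # LMO: best simplex vertex
        gamma = 2.0 / (k + 2.0)        # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy ERM-style quadratic: f(x) = 0.5 * ||A x - b||^2 (illustrative)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
grad = lambda x: A.T @ (A @ x - b)
x_star = frank_wolfe_simplex(grad, np.full(10, 0.1))
print(x_star.sum())  # iterates stay on the simplex: sums to 1
```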

Polynomial worst-case iteration complexity of quasi-Newton primal-dual interior point algorithms for linear programming

Quasi-Newton methods are well-known techniques for large-scale numerical optimization. They use an approximation of the Hessian in optimization problems, or of the Jacobian in systems of nonlinear equations. In the interior point context, quasi-Newton algorithms compute low-rank updates of the matrix associated with the Newton systems, instead of computing it from scratch at every iteration. … Read more
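As a generic illustration of the low-rank updates the abstract refers to, here is the standard rank-two BFGS update of a Hessian approximation; this is a textbook sketch with illustrative test matrices, not the paper's primal-dual interior point algorithm.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard rank-two BFGS update of a Hessian approximation B:
    B+ = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s).
    Cost is O(n^2), versus recomputing and refactorizing the
    exact Newton matrix from scratch.
    """
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Illustrative check: after an update with a curvature pair
# (s, y = H s), the secant equation B+ s = y holds.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
H = M @ M.T + 5 * np.eye(5)   # SPD "true" Hessian
B = np.eye(5)                 # initial approximation
s = rng.standard_normal(5)
y = H @ s
B = bfgs_update(B, s, y)
print(np.allclose(B @ s, y))  # True: secant condition satisfied
```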

Stochastic nested primal-dual method for nonconvex constrained composition optimization

In this paper we study nonconvex constrained composition optimization, in which the objective contains a composition of two expected-value functions whose exact values are typically expensive to compute. We propose a STochastic nEsted Primal-dual (STEP) method for such problems. In each iteration, with an auxiliary variable introduced to track the inner-layer function … Read more
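The auxiliary tracking variable described here is in the spirit of classical stochastic compositional gradient methods, where a running average tracks the inner expected-value function. The sketch below isolates that generic tracking recursion under assumed names and a toy inner function; it is not the STEP method itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def inner_sample(x):
    """Noisy sample of an inner expected-value function g(x) = x
    (zero-mean noise); stands in for an expensive expectation."""
    return x + 0.5 * rng.standard_normal(x.shape)

x = np.ones(3)
u = inner_sample(x)            # auxiliary variable tracking g(x)
beta = 0.02                    # tracking/averaging step size
for k in range(2000):
    # In a full method, x would also move via a primal-dual step;
    # here x is frozen to isolate the tracking recursion.
    u = (1 - beta) * u + beta * inner_sample(x)

print(np.round(u, 2))  # close to g(x) = [1, 1, 1], within sampling error
```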

A momentum-based linearized augmented Lagrangian method for nonconvex constrained stochastic optimization

Nonconvex constrained stochastic optimization has emerged in many important application areas. It minimizes the sum of an expectation function and a nonsmooth regularizer, subject to general functional constraints. The main challenges arise from the stochasticity of the random integrand and the possibly nonconvex functional constraints. To address these issues we propose a momentum-based linearized augmented … Read more
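A generic single-loop sketch of the two ingredients named in the title, a momentum (moving-average) stochastic gradient estimator combined with a linearized augmented Lagrangian step for an equality constraint, is given below; the toy problem, step sizes, and names are assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([1.0, 1.0])

def stoch_grad(x):
    """Stochastic gradient of f(x) = E[0.5*||x - xi||^2], xi ~ N(mu, I)."""
    return x - (mu + rng.standard_normal(2))

c  = lambda x: x.sum() - 1.0   # equality constraint c(x) = 0
Jc = np.ones(2)                # its (constant) Jacobian

x, lam, d = np.zeros(2), 0.0, np.zeros(2)
alpha, eta, rho = 0.05, 0.05, 1.0
for k in range(5000):
    d = (1 - alpha) * d + alpha * stoch_grad(x)   # momentum estimator
    # Gradient step on the (linearized) augmented Lagrangian at x:
    x = x - eta * (d + (lam + rho * c(x)) * Jc)
    lam = lam + eta * rho * c(x)                  # dual ascent step

print(np.round(x, 2), round(c(x), 3))  # near (0.5, 0.5), c(x) ~ 0
```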

The Hyperbolic Augmented Lagrangian Algorithm

The hyperbolic augmented Lagrangian algorithm (HALA) is introduced in the area of continuous optimization for solving nonlinear programming problems. Under mild assumptions, such as convexity, Slater’s constraint qualification, and differentiability, we prove convergence of the proposed algorithm. We also study the duality theory for the case of the hyperbolic augmented Lagrangian function. Finally, in order … Read more

A Strengthened SDP Relaxation for Quadratic Optimization Over the Stiefel Manifold

We study semidefinite programming (SDP) relaxations for the NP-hard problem of globally optimizing a quadratic function over the Stiefel manifold. We introduce a strengthened relaxation based on two recent ideas in the literature: (i) a tailored SDP for objectives with a block-diagonal Hessian; and (ii) the use of the Kronecker matrix product to construct SDP relaxations. Using synthetic instances on … Read more
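For orientation, the sketch below shows the basic Shor-style lifting that such relaxations strengthen, applied to a quadratic over the unit sphere (the Stiefel manifold with a single column); it uses cvxpy on an illustrative random instance, not the paper's block-diagonal or Kronecker construction.

```python
import numpy as np
import cvxpy as cp

# Shor-style SDP relaxation of min x^T A x s.t. ||x|| = 1
# (the Stiefel manifold St(n, 1) is just the unit sphere).
# Lift X = x x^T, drop rank(X) = 1, keep trace(X) = 1 and X >= 0.
rng = np.random.default_rng(4)
n = 6
M = rng.standard_normal((n, n))
A = 0.5 * (M + M.T)

X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(A @ X)),
                  [cp.trace(X) == 1, X >> 0])
prob.solve()

# For a single column the relaxation is tight: it recovers
# the smallest eigenvalue of A.
print(round(prob.value, 4), round(np.linalg.eigvalsh(A)[0], 4))
```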

Superiorization as a novel strategy for linearly constrained inverse radiotherapy treatment planning

Objective: We apply the superiorization methodology to the intensity-modulated radiation therapy (IMRT) treatment planning problem. In superiorization, linear voxel dose inequality constraints are the fundamental modeling tool, within which a feasibility-seeking projection algorithm seeks a feasible point. This algorithm is then perturbed with gradient descent steps to reduce a nonlinear objective function. Approach: Within … Read more
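The superiorization template described here, a feasibility-seeking sweep of projections perturbed by diminishing objective-reducing steps, can be sketched generically; the half-space constraints, step-size schedule, and quadratic objective below are illustrative stand-ins, not the IMRT planning setup.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto {z : a^T z <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def superiorize(A, b, grad_f, x, iters=500, gamma=1.0, r=0.99):
    """Superiorization: perturb a feasibility-seeking sweep of
    half-space projections with diminishing descent steps on f."""
    for k in range(iters):
        g = grad_f(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x - (gamma * r**k) * g / norm   # objective perturbation
        for a_i, b_i in zip(A, b):              # feasibility sweep
            x = project_halfspace(x, a_i, b_i)
    return x

# Toy instance: box-like linear inequalities, quadratic objective.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0])              # |x_i| <= 1
grad_f = lambda x: 2 * (x - np.array([2.0, 0.0]))
x = superiorize(A, b, grad_f, np.zeros(2))
print(np.round(x, 3))  # feasible point with reduced f, near (1, 0)
```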

An Improved Unconstrained Approach for Bilevel Optimization

In this paper, we focus on the nonconvex-strongly-convex bilevel optimization problem (BLO), in which the objective function of the upper-level problem is nonconvex and possibly nonsmooth, and the lower-level problem is smooth and strongly convex with respect to the underlying variable $y$. We show that the feasible region of the BLO is a Riemannian manifold. … Read more
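For context, the sketch below shows the standard implicit-function-theorem hypergradient for a nonconvex-strongly-convex bilevel problem with a quadratic lower level (so $y^*(x)$ has a closed form); it illustrates the classical approach, not the paper's Riemannian reformulation, and all names are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)        # SPD: lower level strongly convex
y_t = rng.standard_normal(n)

# Lower level: y*(x) = argmin_y g(x, y), g = 0.5*y'Hy - x'y => y* = H^{-1}x
# Upper level: F(x) = f(x, y*(x)),      f = 0.5*||y - y_t||^2
y_star = lambda x: np.linalg.solve(H, x)

def hypergradient(x):
    """Implicit-function-theorem gradient of F(x) = f(x, y*(x)):
    dF = grad_x f - grad_xy g @ (grad_yy g)^{-1} @ grad_y f.
    Here grad_x f = 0, grad_xy g = -I, and grad_yy g = H.
    """
    grad_y_f = y_star(x) - y_t
    return np.linalg.solve(H, grad_y_f)

# Finite-difference check of the hypergradient:
x = rng.standard_normal(n)
F = lambda x: 0.5 * np.linalg.norm(y_star(x) - y_t) ** 2
eps, e0 = 1e-6, np.eye(n)[0]
fd = (F(x + eps * e0) - F(x - eps * e0)) / (2 * eps)
print(np.isclose(fd, hypergradient(x)[0]))  # True
```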

An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems

This work presents an adaptive superfast proximal augmented Lagrangian (AS-PAL) method for solving linearly-constrained smooth nonconvex composite optimization problems. Each iteration of AS-PAL inexactly solves a possibly nonconvex proximal augmented Lagrangian (AL) subproblem, obtained by an aggressive/adaptive choice of prox stepsize aimed at substantially improving computational performance, followed by a full Lagrangian … Read more
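A bare-bones proximal augmented Lagrangian loop, an inexact gradient-based solve of the prox-AL subproblem followed by a full multiplier update, is sketched below under assumed names on a toy equality-constrained quadratic; it mirrors the generic template, not AS-PAL's adaptive stepsize rule.

```python
import numpy as np

# Generic proximal augmented Lagrangian loop (illustrative names):
#   subproblem: min_x f(x) + lam'c(x) + 0.5*rho*||c(x)||^2
#                      + 0.5/tau * ||x - x_k||^2   (prox term)
#   solved inexactly by a few gradient steps, then a full
#   multiplier update lam <- lam + rho*c(x).
f_grad = lambda x: x                       # f(x) = 0.5*||x||^2
c      = lambda x: np.array([x.sum() - 1]) # c(x) = 0
Jc     = np.ones((1, 2))

x, lam = np.zeros(2), np.zeros(1)
rho, tau, eta = 10.0, 1.0, 0.05
for outer in range(50):
    x_k = x.copy()
    for inner in range(20):                # inexact subproblem solve
        g = f_grad(x) + Jc.T @ (lam + rho * c(x)) + (x - x_k) / tau
        x = x - eta * g
    lam = lam + rho * c(x)                 # full multiplier update
print(np.round(x, 3))                      # near (0.5, 0.5)
```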

Asymptotic Consistency for Nonconvex Risk-Averse Stochastic Optimization with Infinite Dimensional Decision Spaces

Optimal values and solutions of empirical approximations of stochastic optimization problems can be viewed as statistical estimators of their true values. From this perspective, it is important to understand the asymptotic behavior of these estimators as the sample size goes to infinity, which is of both theoretical and practical interest. This area of … Read more
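A one-screen illustration of this consistency phenomenon: for $\min_x \mathbb{E}[(x-\xi)^2]$ the empirical (sample average approximation) minimizer is the sample mean, and its error shrinks as the sample size grows. The instance below is illustrative.

```python
import numpy as np

# Sample average approximation (SAA): the empirical problem
#   min_x (1/N) * sum_i (x - xi_i)^2
# has minimizer x_N = sample mean, estimating the true solution
# x* = E[xi] of min_x E[(x - xi)^2]. Consistency: x_N -> x*.
rng = np.random.default_rng(6)
theta = 2.0                       # true solution x* = E[xi]
for N in (10, 100, 10_000, 1_000_000):
    xi = theta + rng.standard_normal(N)
    print(N, round(abs(xi.mean() - theta), 4))  # error ~ 1/sqrt(N)
```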