The Hyperbolic Augmented Lagrangian Algorithm

The hyperbolic augmented Lagrangian algorithm (HALA) is introduced in the area of continuous optimization for solving nonlinear programming problems. Under mild assumptions, such as convexity, Slater's constraint qualification, and differentiability, the convergence of the proposed algorithm is proved. We also study the duality theory for the case of the hyperbolic augmented Lagrangian function. Finally, in order …
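
The excerpt does not state the hyperbolic penalty itself, so as a hedged illustration only, here is the generic shape of a penalty-based augmented Lagrangian iteration for $\min f(x)$ subject to $g_i(x) \le 0$, with $P$ standing in for a hyperbolic-type penalty (the exact function used by HALA is in the full paper):

```latex
% Generic penalty-based augmented Lagrangian scheme (our notation);
% P is a placeholder for the hyperbolic penalty, not HALA's exact formula.
\begin{aligned}
x^{k+1} &\in \arg\min_{x}\ f(x) + \sum_{i=1}^{m} P\!\big(g_i(x);\,\lambda_i^{k}, r_k\big),\\
\lambda_i^{k+1} &= \frac{\partial P}{\partial t}\big(g_i(x^{k+1});\,\lambda_i^{k}, r_k\big),
\qquad i = 1,\dots,m,
\end{aligned}
```

Updating each multiplier as the derivative of the penalty with respect to the constraint value is the standard pattern in nonlinear rescaling methods; $r_k$ is the penalty parameter.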

A Strengthened SDP Relaxation for Quadratic Optimization Over the Stiefel Manifold

We study semidefinite programming (SDP) relaxations for the NP-hard problem of globally optimizing a quadratic function over the Stiefel manifold. We introduce a strengthened relaxation based on two recent ideas in the literature: (i) a tailored SDP for objectives with a block-diagonal Hessian, and (ii) the use of the Kronecker matrix product to construct SDP relaxations. Using synthetic instances on …
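
As a rough sketch of the Kronecker-product idea in (ii), under the assumption that the quadratic is written in vectorized form (our notation, not necessarily the paper's): for $X$ on the Stiefel manifold $\mathrm{St}(n,p) = \{X \in \mathbb{R}^{n\times p} : X^{\top}X = I_p\}$, set $x = \mathrm{vec}(X)$ and lift $Z \approx xx^{\top}$, which yields the Shor-type relaxation

```latex
% Shor-type SDP relaxation of min x^T C x + c^T x with x = vec(X);
% the linear constraints on Z encode X^T X = I_p.
\begin{aligned}
\min_{x,\,Z}\quad & \langle C, Z\rangle + c^{\top}x\\
\text{s.t.}\quad & \sum_{k=1}^{n} Z_{(i-1)n+k,\;(j-1)n+k} = \delta_{ij},
\qquad 1 \le i \le j \le p,\\
& \begin{pmatrix} 1 & x^{\top}\\ x & Z \end{pmatrix} \succeq 0 .
\end{aligned}
```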

Superiorization as a novel strategy for linearly constrained inverse radiotherapy treatment planning

Objective: We apply the superiorization methodology to the intensity-modulated radiation therapy (IMRT) treatment planning problem. In superiorization, linear voxel dose inequality constraints are the fundamental modeling tool within which a feasibility-seeking projection algorithm will seek a feasible point. This algorithm is then perturbed with gradient descent steps to reduce a nonlinear objective function. Approach: Within …
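
To make the two ingredients concrete, here is a minimal sketch of the superiorization pattern, assuming half-space dose constraints $a_i^{\top}x \le b_i$ and a differentiable objective; function names and parameters are illustrative, not the paper's:

```python
# Hedged sketch of superiorization: cyclic projections onto half-spaces
# a_i @ x <= b_i (feasibility seeking), interleaved with shrinking
# objective-reducing perturbation steps.
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {z : a @ z <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def superiorize(x, A, b, grad_phi, n_sweeps=100, shrink=0.5):
    """A, b: stacked constraints a_i @ x <= b_i; grad_phi: gradient
    of the nonlinear objective phi to be reduced."""
    beta = 1.0
    for _ in range(n_sweeps):
        # Perturbation: a small step that reduces the objective phi.
        g = grad_phi(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x - beta * (g / norm)
        beta *= shrink  # summable step sizes keep perturbations bounded
        # Feasibility-seeking sweep: one cyclic pass of projections.
        for a_i, b_i in zip(A, b):
            x = project_halfspace(x, a_i, b_i)
    return x
```

The shrinking (summable) perturbation sizes are what keep the feasibility-seeking behavior of the projection sweeps intact.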

An Improved Unconstrained Approach for Bilevel Optimization

In this paper, we focus on the nonconvex-strongly-convex bilevel optimization problem (BLO). In this BLO, the objective function of the upper-level problem is nonconvex and possibly nonsmooth, and the lower-level problem is smooth and strongly convex with respect to the underlying variable $y$. We show that the feasible region of BLO is a Riemannian manifold. …
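
For context, the standard form of this problem class and the manifold observation can be sketched as follows (our notation; the paper's precise statement may differ):

```latex
% Nonconvex-strongly-convex BLO and the set the abstract identifies
% as a Riemannian manifold (standard formulation, our notation).
\begin{aligned}
\min_{x,\,y}\quad & F(x,y) \qquad\text{(nonconvex, possibly nonsmooth)}\\
\text{s.t.}\quad & y \in \arg\min_{y'} f(x,y') \qquad\text{(smooth, strongly convex in } y'\text{)},
\end{aligned}
\qquad
\mathcal{M} \;=\; \{(x,y)\ :\ \nabla_{y} f(x,y) = 0\}.
```

Strong convexity makes $\nabla_y f(x,y) = 0$ equivalent to lower-level optimality, and with $\nabla^2_{yy} f \succ 0$ the implicit function theorem gives $\mathcal{M}$ a smooth manifold structure.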

An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems

This work presents an adaptive superfast proximal augmented Lagrangian (AS-PAL) method for solving linearly-constrained smooth nonconvex composite optimization problems. At each iteration, AS-PAL inexactly solves a possibly nonconvex proximal augmented Lagrangian subproblem with a prox stepsize chosen aggressively large so as to speed up its termination. An adaptive accelerated composite gradient (ACG) variant of FISTA, namely R-FISTA, is then …
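
The subproblem in question has the following generic shape (our notation, with prox stepsize $\lambda$ and penalty parameter $c$; AS-PAL's adaptive rules for choosing them are the paper's contribution):

```latex
% One outer iteration of a proximal AL method (our notation):
% inexact prox-AL subproblem followed by a multiplier update.
\begin{aligned}
x_{k} &\approx \arg\min_{x}\ \Big\{\ \lambda\big[f(x) + h(x)\big]
  + \lambda\,\langle p_{k-1},\, Ax - b\rangle
  + \tfrac{\lambda c}{2}\,\|Ax - b\|^{2}
  + \tfrac{1}{2}\,\|x - x_{k-1}\|^{2}\ \Big\},\\
p_{k} &= p_{k-1} + c\,(A x_{k} - b).
\end{aligned}
```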

Asymptotic Consistency for Nonconvex Risk-Averse Stochastic Optimization with Infinite Dimensional Decision Spaces

Optimal values and solutions of empirical approximations of stochastic optimization problems can be viewed as statistical estimators of their true values. From this perspective, it is important to understand the asymptotic behavior of these estimators as the sample size goes to infinity, which is of both theoretical and practical interest. This area of …
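
As a minimal illustration of the estimators in question, in standard sample average approximation notation (the paper works with risk-averse objectives over infinite-dimensional decision spaces, which this sketch elides):

```latex
% Sample average approximation: the empirical optimal value as an
% estimator of the true one; consistency asks whether it converges
% as the sample size N goes to infinity.
\vartheta^{*} = \min_{x \in X}\; \mathbb{E}\big[F(x,\xi)\big],
\qquad
\hat{\vartheta}_{N} = \min_{x \in X}\; \frac{1}{N}\sum_{i=1}^{N} F(x,\xi^{i}),
\qquad
\hat{\vartheta}_{N} \xrightarrow[N \to \infty]{} \vartheta^{*}
\ \ \text{(consistency)}.
```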

Efficient composite heuristics for integer bound constrained noisy optimization

This paper discusses a composite algorithm for bound constrained noisy derivative-free optimization problems with integer variables. This algorithm is an integer variant of the matrix adaptation evolution strategy. An integer derivative-free line search strategy along affine scaling matrix directions is used to generate candidate points. Each affine scaling matrix direction is a product of the …
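
A hedged sketch of how integer candidate points might be generated along affine scaling directions; the sampling, rounding, and zero-step handling below are illustrative assumptions, not the paper's exact rules:

```python
# Illustrative candidate generation in the spirit of an integer
# matrix adaptation evolution strategy (details differ in the paper).
import numpy as np

rng = np.random.default_rng(0)

def integer_candidates(x, M, sigma, n_offspring=8):
    """x: current integer point (ndarray); M: affine scaling matrix;
    sigma: step-size scale. Returns rounded integer candidate points."""
    d = len(x)
    cands = []
    for _ in range(n_offspring):
        z = rng.standard_normal(d)         # isotropic sample
        step = np.rint(sigma * (M @ z))    # scale along M, round to integers
        if not step.any():                 # avoid a zero step: nudge one coordinate
            step[rng.integers(d)] = rng.choice([-1, 1])
        cands.append(x + step.astype(int))
    return cands
```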

Worst-case evaluation complexity of a derivative-free quadratic regularization method

This short paper presents a derivative-free quadratic regularization method for unconstrained minimization of a smooth function with Lipschitz continuous gradient. At each iteration, trial points are computed by minimizing a quadratic regularization of a local model of the objective function. The models are based on forward finite-difference gradient approximations. By using a suitable acceptance condition …
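
A minimal sketch of one such iteration, assuming forward differences with step $h$ and regularization parameter $\sigma$; the acceptance condition below is a placeholder for the paper's:

```python
# One derivative-free quadratic regularization step: forward finite
# differences approximate the gradient g, and the trial point
# minimizes m(s) = f(x) + g^T s + (sigma/2)||s||^2, i.e. s = -g/sigma.
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward finite-difference gradient approximation."""
    fx = f(x)
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def dfqr_step(f, x, sigma):
    """Return (next point, next sigma) for one regularized step."""
    g = fd_gradient(f, x)
    x_trial = x - g / sigma
    # Placeholder acceptance condition (simple decrease): accept the
    # trial point, or increase sigma and retry from the same point.
    return (x_trial, sigma) if f(x_trial) < f(x) else (x, 2.0 * sigma)
```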

Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution

This work characterizes the effect of depth on the optimization landscape of linear regression, showing that, despite their nonconvexity, deeper models have a more desirable optimization landscape. We consider a robust and over-parameterized setting, where a subset of the measurements is grossly corrupted with noise and the true linear model is captured via an $N$-layer linear neural …
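
The model referenced in the excerpt can be sketched as follows (our notation; the loss $\ell$ and the corruption model are assumptions, with robust settings often using an $\ell_1$-type loss):

```latex
% N-layer linear model fit under grossly corrupted measurements:
% \bar{W} is the true linear model and s_i is nonzero only on the
% corrupted subset of measurements.
\min_{W_{1},\dots,W_{N}}\ \sum_{i=1}^{m}
\ell\big(\,y_{i} - W_{N}W_{N-1}\cdots W_{1}\,x_{i}\,\big),
\qquad
y_{i} = \bar{W}x_{i} + s_{i}.
```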

The superiorization method with restarted perturbations for split minimization problems with an application to radiotherapy treatment planning

In this paper we study the split minimization problem, which consists of two constrained minimization problems posed in two separate spaces that are connected by a linear operator mapping one space into the other. To handle the data of such a problem we develop a superiorization approach that can reach a feasible point with reduced …
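
A common way to state such a split minimization problem (our notation; $C$ and $Q$ are the constraint sets and $A$ the linear operator):

```latex
% Split minimization: solve a constrained problem in one space so
% that the image under A solves a constrained problem in the other.
\text{find } x^{*} \in \arg\min_{x \in C} f(x)
\quad \text{such that} \quad
A x^{*} \in \arg\min_{y \in Q} g(y).
```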