Polynomial worst-case iteration complexity of quasi-Newton primal-dual interior point algorithms for linear programming

Quasi-Newton methods are well-known techniques for large-scale numerical optimization. They use an approximation of the Hessian in optimization problems or of the Jacobian in systems of nonlinear equations. In the interior point context, quasi-Newton algorithms compute low-rank updates of the matrix associated with the Newton systems instead of computing it from scratch at every iteration. …
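The low-rank update idea can be illustrated with a generic Broyden rank-one correction (a schematic sketch, not the paper's specific quasi-Newton scheme for the Newton system matrix):

```python
import numpy as np

def broyden_update(B, s, y):
    """Rank-one Broyden update of a Jacobian approximation B.

    s = x_new - x_old (the step), y = F(x_new) - F(x_old) (residual change).
    Costs O(n^2) instead of rebuilding the matrix from scratch, and the
    updated matrix satisfies the secant condition B_new @ s = y.
    """
    return B + np.outer(y - B @ s, s) / (s @ s)
```

In an interior point method the same principle applies to the matrix of the Newton system, which typically changes slowly between iterations.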

Stochastic nested primal-dual method for nonconvex constrained composition optimization

In this paper we study nonconvex constrained composition optimization, in which the objective contains a composition of two expected-value functions whose accurate information is normally expensive to calculate. We propose a STochastic nEsted Primal-dual (STEP) method for such problems. In each iteration, with an auxiliary variable introduced to track inner layer function values, we …
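The auxiliary-variable device can be sketched as an exponential-average tracker of the inner expected value (a schematic recursion; the step size tau and the setup are illustrative, not the paper's exact STEP update):

```python
import numpy as np

rng = np.random.default_rng(0)

def track_inner_value(sample_g, u0, tau=0.05, n_iter=2000):
    """Exponential-average tracker for an inner expected value:
    u <- (1 - tau) * u + tau * g_sample.  Each iteration uses one cheap
    stochastic sample instead of evaluating the inner expectation exactly.
    """
    u = u0
    for _ in range(n_iter):
        u = (1 - tau) * u + tau * sample_g()
    return u

# Track E[g] for a noisy oracle with true mean 3.0
true_mean = 3.0
u = track_inner_value(lambda: true_mean + rng.standard_normal(), u0=0.0)
```

The tracked value u then stands in for the expensive inner function value when the outer (primal-dual) update is formed.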

A momentum-based linearized augmented Lagrangian method for nonconvex constrained stochastic optimization

Nonconvex constrained stochastic optimization has emerged in many important application areas. With general functional constraints, it minimizes the sum of an expectation function and a convex nonsmooth regularizer. The main challenges arise from the stochasticity of the random integrand and the possibly nonconvex functional constraints. To cope with these issues we propose a …

The Hyperbolic Augmented Lagrangian Algorithm

The hyperbolic augmented Lagrangian algorithm (HALA) is introduced in the area of continuous optimization for solving nonlinear programming problems. Under mild assumptions, such as convexity, Slater's constraint qualification, and differentiability, the convergence of the proposed algorithm is proved. We also study the duality theory for the case of the hyperbolic augmented Lagrangian function. Finally, in order …

A Strengthened SDP Relaxation for Quadratic Optimization Over the Stiefel Manifold

We study semidefinite programming (SDP) relaxations for the NP-hard problem of globally optimizing a quadratic function over the Stiefel manifold. We introduce a strengthened relaxation based on two recent ideas in the literature: (i) a tailored SDP for objectives with a block-diagonal Hessian; and (ii) the use of the Kronecker matrix product to construct SDP relaxations. Using synthetic instances on …
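The Kronecker-product construction rests on the vectorization identity tr(X^T A X B) = vec(X)^T (B^T ⊗ A) vec(X), which turns a quadratic in the matrix variable X into a quadratic in vec(X). A quick numerical check of the identity (the relaxation itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))
X = rng.standard_normal((n, p))      # in the Stiefel case, X^T X = I_p

vecX = X.flatten(order="F")          # column-stacking vectorization

lhs = np.trace(X.T @ A @ X @ B)      # quadratic in the matrix variable
rhs = vecX @ np.kron(B.T, A) @ vecX  # same quadratic in vec(X)
```

An SDP relaxation then replaces the rank-one matrix vec(X)vec(X)^T by a positive semidefinite matrix variable subject to linear constraints.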

Superiorization as a novel strategy for linearly constrained inverse radiotherapy treatment planning

Objective: We apply the superiorization methodology to the intensity-modulated radiation therapy (IMRT) treatment planning problem. In superiorization, linear voxel dose inequality constraints are the fundamental modeling tool within which a feasibility-seeking projection algorithm will seek a feasible point. This algorithm is then perturbed with gradient descent steps to reduce a nonlinear objective function. Approach: Within …
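Schematically, the superiorization pattern interleaves objective-descent perturbations with a feasibility-seeking sweep of projections onto the linear inequality constraints (a toy sketch on half-spaces, not the clinical IMRT pipeline):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection onto the half-space {x : a @ x <= b}."""
    viol = a @ x - b
    return x - (viol / (a @ a)) * a if viol > 0 else x

def superiorize(x, constraints, grad, n_iter=300, beta=0.1, decay=0.99):
    """Cyclic projections (feasibility-seeking), perturbed before each
    sweep by a shrinking step down the objective's gradient."""
    for _ in range(n_iter):
        x = x - beta * grad(x)          # superiorization perturbation
        beta *= decay                   # shrinking perturbation sizes
        for a, b in constraints:        # feasibility-seeking sweep
            x = project_halfspace(x, a, b)
    return x

# Toy problem: reduce ||x||^2 subject to x[0] >= 1 (written as -x[0] <= -1)
constraints = [(np.array([-1.0, 0.0]), -1.0)]
x = superiorize(np.array([3.0, 3.0]), constraints, grad=lambda z: 2 * z)
```

The perturbations lower the objective without sacrificing the convergence of the underlying feasibility-seeking algorithm, which is the key asymmetry of the methodology.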

An Improved Unconstrained Approach for Bilevel Optimization

In this paper, we focus on the nonconvex-strongly-convex bilevel optimization problem (BLO). In this BLO, the objective function of the upper-level problem is nonconvex and possibly nonsmooth, and the lower-level problem is smooth and strongly convex with respect to the underlying variable $y$. We show that the feasible region of BLO is a Riemannian manifold. …

An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems

This work presents an adaptive superfast proximal augmented Lagrangian (AS-PAL) method for solving linearly-constrained smooth nonconvex composite optimization problems. At each iteration, AS-PAL inexactly solves a possibly nonconvex proximal augmented Lagrangian subproblem with prox stepsize chosen aggressively large so as to speed up its termination. An adaptive ACG variant of FISTA, namely R-FISTA, is then …
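The ACG inner solver is a FISTA-type method; a plain FISTA sketch for min f(x) + g(x) gives the flavor (generic; the paper's R-FISTA adds restart and adaptive features not reproduced here):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(grad_f, prox_g, x0, L, n_iter=100):
    """FISTA for min f(x) + g(x), where grad f is L-Lipschitz and
    prox_g(v, step) is the proximal map of step * g at v."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Tiny composite problem: f(x) = 0.5||x - b||^2, g(x) = 0.5 * ||x||_1
b = np.array([2.0, -0.3, 1.0])
x_star = fista(grad_f=lambda x: x - b,
               prox_g=lambda v, step: soft_threshold(v, 0.5 * step),
               x0=np.zeros(3), L=1.0)
```

In AS-PAL this inner solve is applied to the proximal augmented Lagrangian subproblem rather than to a fixed composite objective as above.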

Asymptotic Consistency for Nonconvex Risk-Averse Stochastic Optimization with Infinite Dimensional Decision Spaces

Optimal values and solutions of empirical approximations of stochastic optimization problems can be viewed as statistical estimators of their true values. From this perspective, it is important to understand the asymptotic behavior of these estimators as the sample size goes to infinity, which is of both theoretical and practical interest. This area of …

Efficient composite heuristics for integer bound constrained noisy optimization

This paper discusses a composite algorithm for bound constrained noisy derivative-free optimization problems with integer variables. This algorithm is an integer variant of the matrix adaptation evolution strategy. An integer derivative-free line search strategy along affine scaling matrix directions is used to generate candidate points. Each affine scaling matrix direction is a product of the …
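The integer line search ingredient can be sketched as probing rounded trial points along a direction (a generic derivative-free sketch; the step grid and acceptance rule are illustrative, not the paper's exact strategy):

```python
import numpy as np

def integer_line_search(f, x, d, step_grid=(1, 2, 4, 8)):
    """Probe rounded integer points x + round(t * d) along direction d and
    keep the best strictly improving one.  Only function values are used,
    so the search applies to noisy derivative-free problems."""
    fx, best = f(x), x
    for t in step_grid:
        cand = x + np.rint(t * d).astype(int)
        fc = f(cand)
        if not np.array_equal(cand, x) and fc < fx:
            fx, best = fc, cand
    return best

# Toy quadratic with integer minimizer at (5, 3)
f = lambda z: float(np.sum((z - np.array([5, 3])) ** 2))
x_next = integer_line_search(f, np.array([0, 0]), np.array([1.0, 0.6]))
```

In the composite algorithm, the directions d come from the affine scaling matrices adapted by the evolution strategy rather than being fixed in advance.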