A Framework of Inertial Alternating Direction Method of Multipliers for Non-Convex Non-Smooth Optimization

In this paper, we propose an algorithmic framework, dubbed inertial alternating direction method of multipliers (iADMM), for solving a class of nonconvex nonsmooth multiblock composite optimization problems with linear constraints. Our framework employs the general majorization-minimization (MM) principle to update each block of variables so as to not only unify the convergence analysis of previous … Read more
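For readers less familiar with the MM principle invoked here, the generic block update can be sketched as follows (the notation $f$, $r_i$, $u_i$ is illustrative, not taken from the paper): each block is updated by minimizing a surrogate that majorizes the smooth part of the objective and is tight at the current iterate,

\[
x_i^{k+1} \in \operatorname*{arg\,min}_{x_i}\; u_i\!\left(x_i, x^k\right) + r_i(x_i),
\qquad
u_i\!\left(x_i, x^k\right) \ge f\!\left(x_i, x_{\neq i}^k\right),
\quad
u_i\!\left(x_i^k, x^k\right) = f\!\left(x^k\right),
\]

where $f$ is the smooth coupling term and $r_i$ the nonsmooth regularizer of block $i$. Different choices of surrogate $u_i$ (proximal, Lipschitz-gradient, or Bregman) recover different known block-update schemes, which is what makes a unified convergence analysis possible.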

Cutting Plane Generation Through Sparse Principal Component Analysis

Quadratically constrained quadratic programs (QCQPs) are optimization models whose remarkable expressiveness has made them a cornerstone of methodological research on nonconvex optimization problems. However, modern methods for solving a general QCQP fail to scale, encountering computational challenges even with just a few hundred variables. Specifically, a semidefinite programming (SDP) relaxation is typically employed, which provides strong … Read more
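To fix ideas, a generic QCQP and the standard Shor SDP relaxation mentioned here look as follows (generic notation, assumed rather than quoted from the paper): the problem

\[
\min_{x \in \mathbb{R}^n}\; x^\top Q_0 x + c_0^\top x
\quad \text{s.t.} \quad
x^\top Q_j x + c_j^\top x \le b_j, \quad j = 1, \dots, m,
\]

is lifted by introducing $X = xx^\top$ and relaxing this nonconvex equation to the convex condition $X \succeq xx^\top$:

\[
\min_{x,\, X}\; \langle Q_0, X \rangle + c_0^\top x
\quad \text{s.t.} \quad
\langle Q_j, X \rangle + c_j^\top x \le b_j, \quad j = 1, \dots, m,
\quad X \succeq xx^\top.
\]

The matrix variable $X$ has $O(n^2)$ entries, which is the source of the scaling difficulty the abstract points to.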

Implicit Regularization of Sub-Gradient Method in Robust Matrix Recovery: Don’t be Afraid of Outliers

It is well known that simple, short-sighted algorithms such as gradient descent generalize well in over-parameterized learning tasks, due to their implicit regularization. However, it is unknown whether the implicit regularization of these algorithms extends to robust learning tasks, where a subset of the samples may be grossly corrupted by noise. In this work, … Read more
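A minimal sketch of the kind of experiment this abstract alludes to, assuming a standard setup (Gaussian measurement operators, a small fraction of gross outliers, an over-parameterized factorization, and an $\ell_1$ loss; all names and hyperparameters below are illustrative, not the paper's):

```python
import numpy as np

# Illustrative sketch: recover a low-rank PSD matrix M* from linear
# measurements y_i = <A_i, M*>, a fraction of which are grossly corrupted,
# by running the subgradient method on the robust l1 loss
#   f(U) = (1/m) * sum_i | <A_i, U U^T> - y_i |
# with an over-parameterized factor U (rank r larger than the true rank).
rng = np.random.default_rng(0)
n, r_true, r, m = 20, 2, 20, 800

B = rng.standard_normal((n, r_true))
M_star = B @ B.T                                 # ground-truth low-rank matrix
A = rng.standard_normal((m, n, n))               # measurement operators
y = np.einsum('mij,ij->m', A, M_star)
outliers = rng.random(m) < 0.1                   # 10% gross corruptions
y[outliers] += 50.0 * rng.standard_normal(outliers.sum())

U = 0.01 * rng.standard_normal((n, r))           # small random initialization
step = 0.002
for k in range(500):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    # Subgradient of the l1 loss: signs of residuals pulled back through U U^T.
    G = np.einsum('m,mij->ij', np.sign(resid), A) / m
    U -= step * (G + G.T) @ U
    step *= 0.995                                # geometrically decaying step

print("relative error:", np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))
```

The interest of such experiments is that the $\ell_1$ loss combined with a decaying-step subgradient method tends to ignore the corrupted measurements, despite the over-parameterization.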

A Matrix-Free Trust-Region Newton Algorithm for Convex-Constrained Optimization

We describe a matrix-free trust-region algorithm for solving convex-constrained optimization problems that uses the spectral projected gradient method to compute trial steps. To project onto the intersection of the feasible set and the trust region, we reformulate and solve the dual projection problem as a one-dimensional root-finding problem. We demonstrate our algorithm’s performance on … Read more
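The dual root-finding reformulation can be illustrated on a concrete instance (a sketch under stated assumptions, not the paper's code): take a box as the convex feasible set, so the projection onto the intersection of the box and the ball reduces to a bisection on the scalar multiplier of the trust-region constraint.

```python
import numpy as np

# Project z onto {l <= x <= u} intersect {||x - xc|| <= delta} via root
# finding on the scalar dual multiplier lam of the ball constraint.
# Assumption: the trust-region center xc lies in the box (as it does when
# xc is the current feasible iterate of a trust-region method).
def project_box(x, l, u):
    return np.clip(x, l, u)

def project_intersection(z, l, u, xc, delta, tol=1e-10):
    x = project_box(z, l, u)
    if np.linalg.norm(x - xc) <= delta:          # ball constraint inactive
        return x
    # For fixed lam >= 0, the Lagrangian min over the box is the clip of the
    # unconstrained minimizer (z + lam*xc) / (1 + lam), since the quadratic
    # is separable with equal curvature per coordinate.  The function
    # phi(lam) = ||x(lam) - xc|| - delta is nonincreasing, so we bisect.
    def phi(lam):
        x_lam = project_box((z + lam * xc) / (1.0 + lam), l, u)
        return np.linalg.norm(x_lam - xc) - delta
    lo, hi = 0.0, 1.0
    while phi(hi) > 0:                           # bracket the root
        hi *= 2.0
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0:
            lo = mid
        else:
            hi = mid
    return project_box((z + hi * xc) / (1.0 + hi), l, u)

z = np.array([3.0, -2.0, 0.5])
l, u = -np.ones(3), np.ones(3)
print(project_intersection(z, l, u, xc=np.zeros(3), delta=0.8))
```

The matrix-free character comes from the fact that only clipping and norm evaluations are needed, never a factorization.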

Stochastic dual dynamic programming and its variants – a review

We provide a tutorial-type review of stochastic dual dynamic programming (SDDP), one of the state-of-the-art solution methods for large-scale multistage stochastic programs. Since its introduction about 30 years ago for solving large-scale multistage stochastic linear programming problems in energy planning, SDDP has been applied to practical problems from several fields and has been enriched by various … Read more
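For context, the multistage problems SDDP targets admit the nested (Bellman) formulation below, written in standard notation rather than the review's own:

\[
Q_t(x_{t-1}, \xi_t) \;=\; \min_{x_t \in X_t(x_{t-1},\, \xi_t)}\;
c_t(\xi_t)^\top x_t + \mathbb{E}\big[\, Q_{t+1}(x_t, \xi_{t+1}) \,\big],
\qquad Q_{T+1} \equiv 0.
\]

SDDP approximates each expected cost-to-go function $\mathbb{E}[Q_{t+1}(\cdot, \xi_{t+1})]$ from below by a polyhedral model built from Benders-type cuts collected along sampled scenario paths, which is what allows it to scale to long horizons.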

Twenty years of continuous multiobjective optimization in the twenty-first century

This survey highlights some of the research topics that have attracted attention in the last two decades within the area of mathematical optimization with multiple objective functions. We give insights into topics where substantial progress has been made in recent years. We provide short introductions to the specific sub-fields as well as some … Read more

User manual of NewtBracket: “A Newton-Bracketing method for a simple conic optimization problem” with applications to QOPs in binary variables

We describe the Matlab package NewtBracket for solving a simple conic optimization problem that minimizes a linear objective function subject to a single linear equality constraint and a convex cone constraint. The problem is converted into the problem of finding the largest zero $y^*$ of a continuously differentiable (except at $y^*$) convex function $g : … Read more
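As a rough illustration of the Newton-bracketing idea (a generic sketch under stated assumptions; NewtBracket's actual updates exploit the conic structure and differ in detail): suppose $g(y) = 0$ for $y \le y^*$ while $g$ is convex, increasing, and differentiable for $y > y^*$. A Newton step from any $y > y^*$ lands at the zero of the tangent line, which by convexity still lies at or above $y^*$, so the upper end of a bracket decreases monotonically to $y^*$.

```python
# Generic Newton-bracketing sketch for the largest zero y* of a convex g
# under the assumptions stated above (illustrative logic only).
def newton_bracket(g, dg, lb, ub, tol=1e-10, g_tol=1e-14, max_iter=200):
    for _ in range(max_iter):
        if ub - lb <= tol * (1.0 + abs(ub)):
            break
        ub = ub - g(ub) / dg(ub)        # Newton step: new upper bound >= y*
        mid = 0.5 * (lb + ub)           # bisection probe for the lower bound
        if g(mid) <= g_tol:             # mid is numerically in the flat part,
            lb = mid                    # hence a valid lower bound on y*
    return lb, ub

# Toy instance: g(y) = max(y - 2, 0)^2 has largest zero y* = 2.
g  = lambda y: max(y - 2.0, 0.0) ** 2
dg = lambda y: 2.0 * max(y - 2.0, 0.0)
print(newton_bracket(g, dg, lb=0.0, ub=10.0))
```

The appeal of the bracketing viewpoint is that it returns certified lower and upper bounds on $y^*$, not just an approximate zero.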

Exterior-point Optimization for Nonconvex Learning

In this paper we present the nonconvex exterior-point optimization solver (NExOS), a novel first-order algorithm tailored to constrained nonconvex learning problems. We consider the problem of minimizing a convex function over nonconvex constraints, where the projection onto the constraint set is single-valued around local minima. A wide range of nonconvex learning problems have this structure, including … Read more
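Schematically, the exterior-point idea can be read as solving a sequence of penalized problems whose minimizers approach the nonconvex set $\mathcal{X}$ from outside (a plausible sketch, not the paper's exact scheme):

\[
x_\mu \in \operatorname*{arg\,min}_{x}\; f(x) + \frac{1}{2\mu}\, \operatorname{dist}^2(x, \mathcal{X}),
\qquad \mu \downarrow 0,
\]

where $\operatorname{dist}(\cdot, \mathcal{X})$ is the Euclidean distance to $\mathcal{X}$. The single-valuedness of the projection near local minima is exactly what makes $\operatorname{dist}^2(\cdot, \mathcal{X})$ differentiable there, so each subproblem is amenable to first-order methods.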

Minimization of L1 over L2 for sparse signal recovery with convergence guarantee

The ratio of the $L_1$ and $L_2$ norms, denoted by $L_1/L_2$, is attractive for promoting sparsity because of its scale invariance when used to approximate the $L_0$ norm. In this paper, we incorporate the $L_1/L_2$ formalism into an unconstrained model in order to deal with both noiseless and noisy observations. To design an efficient algorithm, we derive … Read more
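Concretely, an unconstrained model of the kind described here plausibly takes the form (the precise placement of the data-fidelity term is an assumption):

\[
\min_{x \in \mathbb{R}^n}\; \frac{\|x\|_1}{\|x\|_2} + \frac{1}{2\lambda}\, \|Ax - b\|_2^2,
\]

where $\lambda > 0$ is tuned to the noise level, and noiseless observations correspond to driving $Ax = b$ to hold exactly. Scale invariance means the ratio is unchanged under $x \mapsto cx$ for any $c \neq 0$, matching a key property of the $L_0$ norm that the $L_1$ norm alone lacks.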

Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates

In this paper, we study smooth stochastic multi-level composition optimization problems, where the objective function is a nested composition of $T$ functions. We assume access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle. For solving this class of problems, we propose two algorithms using moving-average stochastic estimates, and analyze … Read more
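In standard notation, the nested objective considered here reads

\[
\min_{x \in \mathbb{R}^d}\; F(x) := f_1\big(f_2(\cdots f_T(x) \cdots)\big),
\]

where the stochastic first-order oracle returns, at each level $i$, noisy evaluations of $f_i$ and its gradient at the queried point. "Level-independent" convergence rates mean rates whose order does not deteriorate as the nesting depth $T$ grows.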