An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems

In this paper, we propose a new method for a class of difference-of-convex (DC) optimization problems, whose objective is the sum of a smooth function and a possibly non-prox-friendly DC function. The method sequentially solves subproblems constructed from a quadratic approximation of the smooth function and a linear majorization of the concave part of the …
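
As a sketch of the kind of step such a scheme takes (our notation; the paper's exact subproblem may differ): writing the objective as $F(x) = f(x) + g_1(x) - g_2(x)$ with $f$ smooth and $g_1, g_2$ convex, one iteration reads
\[
x_{k+1} \approx \operatorname*{argmin}_{x} \; \langle \nabla f(x_k), x - x_k \rangle + \frac{1}{2t_k}\|x - x_k\|^2 + g_1(x) - \langle \xi_k, x - x_k \rangle, \qquad \xi_k \in \partial g_2(x_k),
\]
where the quadratic term models $f$ and the linear term majorizes the concave part $-g_2$; since $g_1$ need not be prox-friendly, each subproblem is itself solved only inexactly, hence the "inexact" in the title.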

Implicit Regularization of Sub-Gradient Method in Robust Matrix Recovery: Don’t be Afraid of Outliers

It is well known that simple short-sighted algorithms, such as gradient descent, generalize well in over-parameterized learning tasks due to their implicit regularization. However, it is unknown whether the implicit regularization of these algorithms can be extended to robust learning tasks, where a subset of samples may be grossly corrupted with noise. In this work, …
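
For intuition, a standard formulation of this robust recovery setting (illustrative; not necessarily the paper's exact model) applies subgradient steps to an $\ell_1$ residual over a factorized variable:
\[
\min_{U \in \mathbb{R}^{n \times r}} \; \ell(U) = \big\| \mathcal{A}(U U^{\top}) - y \big\|_1, \qquad U_{k+1} = U_k - \alpha_k G_k, \quad G_k \in \partial \ell(U_k),
\]
where $\mathcal{A}$ is a linear measurement operator and a fraction of the entries of $y$ may be grossly corrupted; the implicit-regularization question is which of the many global minimizers such iterates select when the rank $r$ is over-specified.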

On Solving Elliptic Obstacle Problems by Compact Abs-Linearization

We consider optimal control problems governed by an elliptic variational inequality of the first kind, namely the obstacle problem. The variational inequality is treated by penalization, which leads to optimization problems governed by a nonsmooth semilinear elliptic PDE. The CALi algorithm is then applied for the efficient solution of these nonsmooth optimization problems. The …
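
To fix ideas, one common penalization of the obstacle constraint $y \ge \psi$ (schematic, in our notation) replaces the variational inequality by a nonsmooth semilinear PDE of the form
\[
-\Delta y - \frac{1}{\gamma}\,\max(0,\, \psi - y) = u \ \ \text{in } \Omega, \qquad y = 0 \ \ \text{on } \partial\Omega,
\]
with penalty parameter $\gamma > 0$; the $\max$ term is exactly the kind of nonsmoothness that the abs-linearization underlying CALi is designed to handle.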

The structure of conservative gradient fields

The classical Clarke subdifferential alone is inadequate for understanding automatic differentiation in nonsmooth contexts. Instead, we can sometimes rely on enlarged generalized gradients called “conservative fields”, defined through the natural path-wise chain rule: one application is the convergence analysis of gradient-based deep learning algorithms. In the semi-algebraic case, we show that all conservative fields are …
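
For reference, the defining path-wise chain rule (standard in this literature): a set-valued map $D$ is a conservative field for a locally Lipschitz function $f$ if, for every absolutely continuous curve $x : [0,1] \to \mathbb{R}^n$,
\[
\frac{d}{dt} f(x(t)) = \langle v, \dot{x}(t) \rangle \quad \text{for all } v \in D(x(t)), \ \text{for almost every } t \in [0,1].
\]
This is the property that keeps backpropagation-style gradient computations sound even when they disagree with the Clarke subdifferential.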

Polyhedral Separation via Difference of Convex (DC) Programming

We consider polyhedral separation of sets as a possible tool in supervised classification. In particular we focus on the optimization model introduced by Astorino and Gaudioso and adopt its reformulation in Difference of Convex (DC) form. We tackle the problem by adapting the algorithm for DC programming known as DCA. We present the results of …
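
For context, the basic DCA iteration on a decomposition $F = g - h$ with $g, h$ convex (standard form; the classification-specific decomposition in the paper will differ) is
\[
y_k \in \partial h(x_k), \qquad x_{k+1} \in \operatorname*{argmin}_{x} \; g(x) - \langle y_k, x \rangle,
\]
i.e., the concave part $-h$ is replaced by its linearization at the current point and the resulting convex problem is solved.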

A Structure Exploiting Algorithm for Non-Smooth Semi-Linear Elliptic Optimal Control Problems

We investigate optimization problems with a non-smooth partial differential equation as constraint, where the non-smoothness is assumed to be caused by Nemytzkii operators generated by the functions abs, min and max. For the efficient and robust solution of such problems, we propose a new optimization method based on abs-linearization, i.e., a special handling …
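
A useful fact behind the abs-linearization approach: min and max reduce to abs through the exact identities
\[
\min(a, b) = \tfrac{1}{2}\big(a + b - |a - b|\big), \qquad \max(a, b) = \tfrac{1}{2}\big(a + b + |a - b|\big),
\]
so all the nonsmoothness generated by such Nemytzkii operators can be routed through abs alone, which abs-linearization then treats exactly in a piecewise linear local model.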

Moreau envelope of supremum functions with applications to infinite and stochastic programming

In this paper, we investigate the Moreau envelope of the supremum of a family of convex, proper, and lower semicontinuous functions. Under mild assumptions, we prove that the Moreau envelope of a supremum is the supremum of Moreau envelopes, which allows us to approximate possibly nonsmooth supremum functions by smooth functions that are also the …
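
For notation, the Moreau envelope of a proper, convex, lower semicontinuous $f$ with parameter $\lambda > 0$ is
\[
e_{\lambda} f(x) := \inf_{y} \Big\{ f(y) + \tfrac{1}{2\lambda} \|x - y\|^2 \Big\},
\]
a smooth under-approximation of $f$; schematically, the result described above states that $e_{\lambda}(\sup_{i} f_i)$ can again be written as a supremum of Moreau envelopes over a suitable family derived from the $f_i$ (the precise family and the mild assumptions are spelled out in the paper).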

A Primal-Dual Algorithm for Risk Minimization

In this paper, we develop an algorithm to efficiently solve risk-averse optimization problems posed in reflexive Banach space. Such problems arise in many practical applications, e.g., optimization problems constrained by partial differential equations with uncertain inputs. Unfortunately, for many popular risk models, including the coherent risk measures, the resulting risk-averse objective function is …
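
A concrete instance of this nonsmoothness (a standard example, not specific to the paper): the conditional value-at-risk of a random loss $X$ admits the Rockafellar–Uryasev formula
\[
\mathrm{CVaR}_{\alpha}(X) = \inf_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{1 - \alpha}\, \mathbb{E}\big[\max(X - t,\, 0)\big] \Big\},
\]
whose inner $\max$ makes the risk-averse objective nonsmooth even when the underlying loss is smooth; dual representations of coherent risk measures over sets of test densities are what primal-dual algorithms of this kind exploit.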

Exterior-point Optimization for Nonconvex Learning

In this paper we present the nonconvex exterior-point optimization solver (NExOS), a novel first-order algorithm tailored to constrained nonconvex learning problems. We consider the problem of minimizing a convex function over nonconvex constraints, where the projection onto the constraint set is single-valued around local minima. A wide range of nonconvex learning problems have this structure, including …
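
Schematically (our summary of the exterior-point idea, not the exact NExOS formulation): for $\min_x f(x)$ subject to $x \in \mathcal{X}$ with $\mathcal{X}$ nonconvex, one solves a sequence of penalized problems
\[
\min_{x} \; f(x) + \frac{1}{2\mu}\, \mathrm{dist}^2(x, \mathcal{X}), \qquad \mu \downarrow 0,
\]
approaching the feasible set from the outside; the assumption that the projection onto $\mathcal{X}$ is single-valued around local minima is what keeps each penalized subproblem amenable to first-order splitting.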

Faster Lagrangian-Based Methods in Convex Optimization

In this paper, we aim at unifying, simplifying, and improving the convergence rate analysis of Lagrangian-based methods for convex optimization problems. We first introduce the notion of a nice primal algorithmic map, which plays a central role in the unification and simplification of the analysis of all Lagrangian-based methods. Equipped with a nice primal …
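
For orientation, the prototypical setting for such methods (standard form) is linearly constrained convex optimization treated through the augmented Lagrangian
\[
\min_{x} f(x) \ \ \text{s.t.} \ \ Ax = b, \qquad L_{\rho}(x, y) = f(x) + \langle y, Ax - b \rangle + \frac{\rho}{2}\|Ax - b\|^2,
\]
with ALM, ADMM, and their proximal variants obtained from different primal updates on $L_{\rho}$; the nice primal algorithmic map introduced in the paper presumably abstracts exactly this primal step.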