An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems

In this paper, we propose a new method for a class of difference-of-convex (DC) optimization problems, whose objective is the sum of a smooth function and a possibly non-prox-friendly DC function. The method sequentially solves subproblems constructed from a quadratic approximation of the smooth function and a linear majorization of the concave part of the …
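
A subproblem of this type can be sketched as follows; the notation ($f$ the smooth term, $g - h$ the DC term with $g, h$ convex, $H_k \succ 0$, and $v_k \in \partial h(x_k)$) is ours and not necessarily the paper's:
\[
x_{k+1} \approx \operatorname*{arg\,min}_{x} \; \langle \nabla f(x_k), x - x_k \rangle + \tfrac{1}{2} (x - x_k)^\top H_k (x - x_k) + g(x) - \langle v_k, x - x_k \rangle .
\]
Here the quadratic term approximates $f$, the linear term majorizes the concave part $-h$, and "inexact" refers to solving each subproblem only approximately.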

How do exponential size solutions arise in semidefinite programming?

Semidefinite programs (SDPs) are some of the most popular and broadly applicable optimization problems to emerge in the last thirty years. A curious pathology of SDPs, illustrated by a classical example of Khachiyan, is that their solutions may need exponential space to even write down. Exponential size solutions are the main obstacle to solving a …
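
A standard rendition of Khachiyan's example (not necessarily the exact form used in the paper): the constraints $x_1 \ge 2$ and $x_{i+1} \ge x_i^2$ for $i = 1, \dots, n-1$ are SDP-representable through $2 \times 2$ positive semidefinite blocks,
\[
\begin{pmatrix} x_{i+1} & x_i \\ x_i & 1 \end{pmatrix} \succeq 0 \;\Longleftrightarrow\; x_{i+1} \ge x_i^2 ,
\]
so every feasible point satisfies $x_n \ge 2^{2^{n-1}}$ and thus needs exponentially many bits to write down, even though the SDP itself has size polynomial in $n$.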

Implicit Regularization of Sub-Gradient Method in Robust Matrix Recovery: Don’t be Afraid of Outliers

It is well-known that simple short-sighted algorithms, such as gradient descent, generalize well in over-parameterized learning tasks, due to their implicit regularization. However, it is unknown whether the implicit regularization of these algorithms can be extended to robust learning tasks, where a subset of samples may be grossly corrupted with noise. In this work, …
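
To fix ideas, the setting can be sketched as follows (our notation, not the paper's): robust matrix recovery with an $\ell_1$ loss over an over-parameterized factorization reads
\[
\min_{U \in \mathbb{R}^{d \times r}} \; \frac{1}{m} \sum_{i=1}^{m} \bigl| \langle A_i, U U^\top \rangle - y_i \bigr| ,
\]
where a fraction of the measurements $y_i$ may be grossly corrupted; the question is whether plain subgradient descent on this nonsmooth objective still recovers the low-rank ground truth without explicit regularization.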

Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods

For solving strongly convex optimization problems, we propose and study the global convergence of variants of the A-HPE and large-step A-HPE algorithms of Monteiro and Svaiter. We prove \emph{linear} and \emph{superlinear} $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ global rates for the proposed variants of the A-HPE and large-step A-HPE methods, respectively. The parameter $p\geq 2$ appears in the (high-order) …
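
To make the superlinear rate concrete: the exponent is $-k\,\frac{p-1}{p+1}$, so for $p = 2$ the rate reads $\mathcal{O}\left(k^{-k/3}\right)$ and for $p = 3$ it reads $\mathcal{O}\left(k^{-k/2}\right)$, approaching $\mathcal{O}\left(k^{-k}\right)$ as $p$ grows.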

On Solving Elliptic Obstacle Problems by Compact Abs-Linearization

We consider optimal control problems governed by an elliptic variational inequality of the first kind, namely the obstacle problem. The variational inequality is treated by penalization, which leads to optimization problems governed by a nonsmooth semilinear elliptic PDE. The CALi algorithm is then applied for the efficient solution of these nonsmooth optimization problems. The …
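
A minimal sketch of the penalization step, in our notation and under standard assumptions (the paper's precise formulation may differ): for an obstacle constraint $y \ge \psi$, the variational inequality is replaced by the penalized semilinear state equation
\[
-\Delta y + \frac{1}{\varepsilon} \min\{\, y - \psi,\; 0 \,\} = u \ \text{ in } \Omega, \qquad y = 0 \ \text{ on } \partial\Omega ,
\]
whose nonsmoothness enters only through $\min\{t, 0\} = \tfrac{1}{2}\left(t - |t|\right)$, precisely the absolute-value structure that abs-linearization is designed to exploit.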

On convexity and quasiconvexity of extremal value functions in set optimization

We study different classes of convex and quasiconvex set-valued maps defined by means of the lower-less order relation and the upper-less order relation. The aim of this paper is to formulate necessary and especially sufficient conditions for the convexity/quasiconvexity of extremal value functions.

Citation: DOI 10.23952/asvao.3.2021.3.04
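
For reference, the two order relations (in the notation standard in set optimization with an ordering cone $C$; the paper's notation may differ): for nonempty sets $A, B$,
\[
A \preceq^{l} B \;:\Longleftrightarrow\; B \subseteq A + C, \qquad A \preceq^{u} B \;:\Longleftrightarrow\; A \subseteq B - C ,
\]
the lower-less and upper-less relations, respectively; convexity and quasiconvexity of a set-valued map, and of the associated extremal value functions, are then defined relative to one of these relations.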

First-order algorithms for robust optimization problems via convex-concave saddle-point Lagrangian reformulation

Robust optimization (RO) is one of the key paradigms for solving optimization problems affected by uncertainty. Two principal approaches for RO, the robust counterpart method and the adversarial approach, potentially lead to excessively large optimization problems. For that reason, first-order approaches, based on online convex optimization, have been proposed (Ben-Tal et al. (2015), Kilinc-Karzan and Ho-Nguyen …
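
Schematically, and in our own notation: uncertainty over a set $\mathcal{U}$ turns the nominal problem into the convex-concave saddle-point problem
\[
\min_{x \in X} \; \max_{u \in \mathcal{U}} \; f(x, u) ,
\]
convex in $x$ and concave in $u$ under standard assumptions, to which generic first-order primal-dual methods can be applied directly instead of forming the full robust counterpart.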

The structure of conservative gradient fields

The classical Clarke subdifferential alone is inadequate for understanding automatic differentiation in nonsmooth contexts. Instead, we can sometimes rely on enlarged generalized gradients called “conservative fields”, defined through the natural path-wise chain rule: one application is the convergence analysis of gradient-based deep learning algorithms. In the semi-algebraic case, we show that all conservative fields are …
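
The path-wise chain rule in question can be stated as follows (a standard formulation in this literature): a set-valued map $D$ is a conservative field for $f$ when, for every absolutely continuous curve $\gamma : [0,1] \to \mathbb{R}^n$,
\[
\frac{d}{dt} f(\gamma(t)) = \langle v, \dot{\gamma}(t) \rangle \quad \text{for all } v \in D(\gamma(t)), \ \text{for almost every } t \in [0,1] .
\]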

An $O(s^r)$-Resolution ODE Framework for Understanding Discrete-Time Algorithms and Applications to the Linear Convergence of Minimax Problems

There has been a long history of using ordinary differential equations (ODEs) to understand the dynamics of discrete-time algorithms (DTAs). Surprisingly, there are still two fundamental and unanswered questions: (i) it is unclear how to obtain a \emph{suitable} ODE from a given DTA, and (ii) it is unclear what the connection is between the convergence of a …
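
A canonical instance of the DTA-to-ODE correspondence (our illustration, not the paper's construction): gradient descent
\[
x_{k+1} = x_k - s \, \nabla f(x_k)
\]
is, to leading order in the step size $s$, a discretization of the gradient flow $\dot{X}(t) = -\nabla f(X(t))$; the question the framework addresses is which ODE, at which resolution in $s$, is the \emph{suitable} one for explaining a given DTA's behavior.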

Constrained and Composite Optimization via Adaptive Sampling Methods

The motivation for this paper stems from the desire to develop an adaptive sampling method for solving constrained optimization problems in which the objective function is stochastic and the constraints are deterministic. The method proposed in this paper is a proximal gradient method that can also be applied to the composite optimization problem min f(x) …
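
A minimal sketch of one iteration, assuming the standard composite form $\min_x f(x) + h(x)$ with $h$ prox-friendly and $f$ accessed through sampled gradients (our notation, not necessarily the paper's):
\[
x_{k+1} = \operatorname{prox}_{\alpha_k h}\!\left( x_k - \alpha_k g_k \right), \qquad g_k = \frac{1}{|S_k|} \sum_{i \in S_k} \nabla f_i(x_k) ,
\]
where the sample set $S_k$ is enlarged adaptively so that the stochastic gradient estimate $g_k$ stays accurate enough relative to the progress made at $x_k$.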