The Landscape of the Proximal Point Method for Nonconvex-Nonconcave Minimax Optimization

Minimax optimization has become a central tool in modern machine learning, with applications in generative adversarial networks, robust optimization, and reinforcement learning, among others. These applications are often nonconvex-nonconcave, but the existing theory is unable to identify and deal with the fundamental difficulties posed by nonconvex-nonconcave structures. In this paper, we study the classic proximal point method … Read more
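
For orientation, the classic proximal point update for a minimax problem $\min_x \max_y f(x,y)$ can be written (our notation, with proximal parameter $\eta > 0$; the paper's exact formulation may differ) as

\[
(x_{k+1}, y_{k+1}) \;\text{solves}\; \min_{x} \max_{y} \; \Big\{ f(x,y) + \frac{1}{2\eta}\|x - x_k\|^2 - \frac{1}{2\eta}\|y - y_k\|^2 \Big\},
\]

i.e., each iteration solves a regularized saddle-point subproblem centered at the current iterate; without convexity-concavity, the behaviour of these subproblems is exactly where the difficulties arise.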

Secant acceleration of sequential residual methods for solving large-scale nonlinear systems of equations

Sequential Residual Methods try to solve nonlinear systems of equations $F(x)=0$ by iteratively updating the current approximate solution along a residual-related direction. Memory requirements are therefore minimal, which makes these methods attractive for solving large-scale nonlinear systems. However, the convergence of these algorithms may be slow in critical cases, so acceleration procedures are welcome. … Read more
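
As a concrete illustration of this family, here is a minimal Python sketch of a residual iteration with a secant-based spectral steplength in the spirit of DF-SANE; it omits the line search and safeguards that practical implementations require, and it is not the acceleration scheme proposed in the paper:

import numpy as np

def spectral_residual(F, x0, tol=1e-8, max_iter=500):
    # Residual iteration x_{k+1} = x_k - sigma_k F(x_k) with a
    # spectral steplength derived from the secant pair (s, y).
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    sigma = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        x_new = x - sigma * Fx
        Fx_new = F(x_new)
        s, y = x_new - x, Fx_new - Fx
        denom = s @ y
        # Secant condition: choose sigma so that sigma * y ~ s.
        sigma = (s @ s) / denom if abs(denom) > 1e-16 else 1.0
        x, Fx = x_new, Fx_new
    return x

Note that only a few vectors are stored, which is the source of the minimal memory footprint mentioned above.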

Twenty years of continuous multiobjective optimization in the twenty-first century

This survey highlights some of the research topics that have attracted attention over the last two decades within the area of mathematical optimization of multiple objective functions. We give insights into topics where substantial progress has been made in recent years. We give short introductions to the specific sub-fields as well as some … Read more

On complexity and convergence of high-order coordinate descent algorithms

Coordinate descent methods with high-order regularized models for box-constrained minimization are introduced. Asymptotic convergence to high-order stationary points and worst-case evaluation complexity bounds for first-order stationarity are established. The computational work required to obtain first-order $\varepsilon$-stationarity with respect to the variables of each coordinate-descent block is $O(\varepsilon^{-(p+1)/p})$, whereas the computational work for obtaining first-order $\varepsilon$-stationarity with … Read more
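
For instance, substituting $p = 1$ (first-order models) into the bound gives the familiar $O(\varepsilon^{-2})$ complexity, while $p = 2$ (second-order models) improves it to $O(\varepsilon^{-3/2})$.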

ADMM and inexact ALM: the QP case

Embedding randomization procedures in the Alternating Direction Method of Multipliers (ADMM) has recently attracted increasing interest as a remedy to the fact that the direct $n$-block generalization of ADMM is not necessarily convergent ($n \geq 3$). Even though, in practice, the introduction of such techniques could mitigate the diverging behaviour of the … Read more
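
For context, the standard two-block ADMM for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$ iterates (with penalty parameter $\rho > 0$ and scaled dual variable $u$; notation is ours, not necessarily the paper's)

\[
x^{k+1} = \arg\min_x \; f(x) + \tfrac{\rho}{2}\|Ax + Bz^k - c + u^k\|^2, \qquad
z^{k+1} = \arg\min_z \; g(z) + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c + u^k\|^2,
\]
\[
u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c;
\]

the $n$-block generalization cycles through $n$ such primal minimizations per sweep, and for $n \geq 3$ this scheme is not guaranteed to converge.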

A Distributed and Secure Algorithm for Computing Dominant SVD Based on Projection Splitting

In this paper, we propose and study a distributed and secure algorithm for computing dominant (or truncated) singular value decompositions (SVD) of large and distributed data matrices. We consider the scenario where each node privately holds a subset of columns and only exchanges “safe” information with other nodes in a collaborative effort to calculate a … Read more
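
To illustrate the column-distributed setting (and only that; this is not the projection splitting algorithm of the paper), a toy distributed power iteration can approximate the dominant left singular vector while nodes exchange only aggregated $m$-dimensional vectors rather than raw columns:

import numpy as np

def distributed_power_iteration(blocks, iters=100, seed=0):
    # blocks[i] is the column block A_i privately held by node i,
    # so A = [A_1, ..., A_n]; only the m-vectors A_i (A_i^T v)
    # are shared and summed, never the columns themselves.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(blocks[0].shape[0])
    for _ in range(iters):
        w = sum(A_i @ (A_i.T @ v) for A_i in blocks)
        v = w / np.linalg.norm(w)
    return v  # approximate dominant left singular vector of A

This works because $AA^T v = \sum_i A_i A_i^T v$, so the action of the Gram matrix decomposes across the column blocks.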

Accelerating Inexact Successive Quadratic Approximation for Regularized Optimization Through Manifold Identification

For regularized optimization that minimizes the sum of a smooth term and a regularizer that promotes structured solutions, inexact proximal-Newton-type methods, also known as successive quadratic approximation (SQA) methods, are widely used for their superlinear convergence in terms of iterations. However, unlike their counterparts in smooth optimization, they suffer from lengthy running times in solving regularized … Read more
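
Concretely, for an objective $F(x) = f(x) + \psi(x)$ with $f$ smooth and $\psi$ the regularizer, an SQA step solves, possibly inexactly, a subproblem of the standard form (notation is ours)

\[
x^{k+1} \approx \arg\min_{x} \; \nabla f(x^k)^\top (x - x^k) + \tfrac{1}{2}(x - x^k)^\top H_k (x - x^k) + \psi(x),
\]

where $H_k$ approximates $\nabla^2 f(x^k)$; solving these regularized subproblems is typically the dominant cost per iteration.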

On the Linear Convergence to Weak/Standard D-stationary Points of DCA-based Algorithms for Structured Nonsmooth DC Programming

We consider a class of structured nonsmooth difference-of-convex minimization problems. We allow nonsmoothness in both the convex and concave components of the objective function, with a finite max structure in the concave part. Our focus is on algorithms that compute a (weak or standard) d(irectional)-stationary point, as advocated in a recent work of Pang et al. … Read more
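
For reference, given a DC decomposition $f = g - h$ with $g, h$ convex, the basic DCA iteration (standard form; the paper's variants may differ) linearizes the concave part and solves

\[
\xi^k \in \partial h(x^k), \qquad x^{k+1} \in \arg\min_{x} \; \big\{ g(x) - \langle \xi^k, x \rangle \big\},
\]

so each step replaces $h$ by its affine minorant at $x^k$, producing a convex upper bound on $f$ that is minimized exactly.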

Multipliers Correction Methods for Optimization Problems over the Stiefel Manifold

We propose a class of multipliers correction methods to minimize a differentiable function over the Stiefel manifold. The proposed methods combine a function value reduction step with a proximal correction step. The former searches along an arbitrary descent direction in the Euclidean space instead of a vector in the tangent space of the Stiefel … Read more
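
For background (this is a generic feasibility-maintaining sketch, not the multipliers correction method proposed in the paper), the Stiefel manifold is $\{X \in \mathbb{R}^{n \times p} : X^\top X = I_p\}$, and a simple projected-gradient scheme keeps iterates feasible via the polar factor:

import numpy as np

def project_stiefel(Y):
    # The nearest point on the Stiefel manifold (in Frobenius norm)
    # is the polar factor U V^T from the SVD Y = U S V^T.
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def projected_gradient_stiefel(grad_f, X0, step=1e-2, iters=200):
    # Generic projected gradient descent; illustrative only.
    X = project_stiefel(np.asarray(X0, dtype=float))
    for _ in range(iters):
        X = project_stiefel(X - step * grad_f(X))
    return X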

Time-Domain Decomposition for Optimal Control Problems Governed by Semilinear Hyperbolic Systems

In this article, we extend the time-domain decomposition method described by Lagnese and Leugering (2003) to semilinear optimal control problems for hyperbolic balance laws with spatio-temporal varying coefficients. We provide the design of the iterative method applied to the global first-order optimality system, prove its convergence, and derive an a posteriori error estimate. The analysis … Read more
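
In broad terms (our summary, not the paper's exact setup), time-domain decomposition splits the control horizon $[0, T]$ into subintervals $0 = t_0 < t_1 < \cdots < t_N = T$, poses local optimality systems on each $[t_{j-1}, t_j]$, and iteratively updates transmission conditions at the interfaces $t_j$ until the concatenated local solutions satisfy the global first-order optimality system.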