Proximal Point Algorithm on the Stiefel Manifold

In this paper, we consider the problem of minimizing a continuously differentiable function on the Stiefel manifold. To solve this problem, we develop a geodesic-free proximal point algorithm, which does not require the use of the Riemannian distance. The proposed method can be regarded as an iterative fixed-point method, which repeatedly applies a proximal operator … Read more
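
As a rough illustration of the fixed-point view (a minimal sketch, not the authors' exact scheme: the objective, the penalty weight `lam`, and the crude projected-gradient subproblem solver are all placeholder assumptions), each outer step applies a proximal operator built from the ambient Euclidean distance, with feasibility restored by an SVD-based projection onto the Stiefel manifold:

```python
import numpy as np

def stiefel_project(Y):
    # Nearest point on St(n, p) in Frobenius norm: polar factor via SVD.
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def prox_step(f_grad, X, lam, inner_iters=50, step=1e-2):
    # Approximately minimize f(Y) + (lam/2)*||Y - X||_F^2 over St(n, p)
    # by projected gradient descent (a crude placeholder subproblem solver).
    Y = X.copy()
    for _ in range(inner_iters):
        G = f_grad(Y) + lam * (Y - X)
        Y = stiefel_project(Y - step * G)
    return Y

def proximal_point(f_grad, X0, lam=1.0, outer_iters=100, tol=1e-8):
    # Geodesic-free outer loop: iterate the proximal operator to a fixed point.
    X = stiefel_project(X0)
    for _ in range(outer_iters):
        X_new = prox_step(f_grad, X, lam)
        if np.linalg.norm(X_new - X) < tol:  # fixed point reached
            break
        X = X_new
    return X

# Toy example: minimize f(X) = trace(X.T @ A @ X) over St(8, 3), whose
# minimizers span the eigenvectors of A for its smallest eigenvalues.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); A = A + A.T
X = proximal_point(lambda X: (A + A.T) @ X, rng.standard_normal((8, 3)))
```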

Algorithms for Difference-of-Convex (DC) Programs Based on Difference-of-Moreau-Envelopes Smoothing

In this paper we consider minimization of a difference-of-convex (DC) function with and without linear constraints. We first study a smooth approximation of a generic DC function, termed difference-of-Moreau-envelopes (DME) smoothing, where both components of the DC function are replaced by their respective Moreau envelopes. The resulting smooth approximation is shown to be Lipschitz differentiable, … Read more
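
The gradient of the DME surrogate has a clean closed form: since $\nabla g_\mu(x) = (x - \mathrm{prox}_{\mu g}(x))/\mu$ for the Moreau envelope $g_\mu$, the smoothed DC function $f_\mu = g_\mu - h_\mu$ satisfies $\nabla f_\mu(x) = (\mathrm{prox}_{\mu h}(x) - \mathrm{prox}_{\mu g}(x))/\mu$. A minimal one-dimensional sketch (the DC pair $g(x) = |x|$, $h(x) = |x-1|$ and the step sizes are illustrative choices, not from the paper):

```python
import numpy as np

def prox_abs(x, mu):
    # prox of mu*|.|: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def dme_grad(x, mu):
    # f(x) = g(x) - h(x) with g(x) = |x|, h(x) = |x - 1|.
    # DME gradient: (prox_{mu*h}(x) - prox_{mu*g}(x)) / mu, from the
    # Moreau-envelope identity grad g_mu(x) = (x - prox_{mu*g}(x)) / mu.
    prox_g = prox_abs(x, mu)
    prox_h = 1.0 + prox_abs(x - 1.0, mu)
    return (prox_h - prox_g) / mu

x, mu, step = 0.5, 0.1, 0.05
for _ in range(200):
    x -= step * dme_grad(x, mu)
print(x)  # drifts into the flat region x <= 0 where f attains its minimum -1
```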

Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods

For solving strongly convex optimization problems, we propose and study the global convergence of variants of the A-HPE and large-step A-HPE algorithms of Monteiro and Svaiter. We prove \emph{linear} and \emph{superlinear} $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ global rates for the proposed variants of the A-HPE and large-step A-HPE methods, respectively. The parameter $p\geq 2$ appears in the (high-order) … Read more
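
To unpack that rate (our instantiation of the stated formula, not a claim from the excerpt): at the two lowest admissible orders it reads

\[
p = 2:\ \mathcal{O}\!\left(k^{-k/3}\right), \qquad
p = 3:\ \mathcal{O}\!\left(k^{-k/2}\right),
\]

so the exponent grows with $k$ itself and the bound decays faster than any geometric (linear) rate, which is what makes it superlinear.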

The Landscape of the Proximal Point Method for Nonconvex-Nonconcave Minimax Optimization

Minimax optimization has become a central tool for modern machine learning with applications in generative adversarial networks, robust optimization, reinforcement learning, etc. These applications are often nonconvex-nonconcave, but the existing theory is unable to identify and deal with the fundamental difficulties posed by nonconvex-nonconcave structures. In this paper, we study the classic proximal point method … Read more
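
For intuition about what a proximal point step does here, consider a quadratic saddle function, where each iteration is a resolvent of the saddle operator and reduces to a linear solve (a minimal sketch on a convex-concave instance, not the nonconvex-nonconcave setting the paper analyzes; all matrices are illustrative):

```python
import numpy as np

def ppm_minimax(A, B, C, x0, y0, lam=1.0, iters=200):
    # Exact proximal point step for the quadratic saddle function
    # f(x, y) = 0.5 x'Ax + x'By - 0.5 y'Cy: each iteration applies the
    # resolvent (I + lam*F)^{-1} of the saddle operator
    # F(x, y) = (Ax + By, -B'x + Cy), which here is a linear solve.
    n, m = len(x0), len(y0)
    M = np.block([[np.eye(n) + lam * A, lam * B],
                  [-lam * B.T, np.eye(m) + lam * C]])
    z = np.concatenate([x0, y0])
    for _ in range(iters):
        z = np.linalg.solve(M, z)
    return z[:n], z[n:]

# Bilinear toy problem f(x, y) = x'By (A = C = 0): plain gradient
# descent-ascent cycles here, while the proximal point iteration
# contracts to the saddle point (0, 0).
B = np.array([[1.0]])
x, y = ppm_minimax(np.zeros((1, 1)), B, np.zeros((1, 1)),
                   np.array([1.0]), np.array([1.0]))
```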

New efficient approach to finding a zero of a maximal monotone operator

In this paper, we provide a new efficient approach to finding a zero of a maximal monotone operator under very mild assumptions. Using a regularization technique and the proximal point algorithm, we construct a sequence that converges strongly to a solution with at least a linear convergence rate.
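
A minimal sketch of the regularization-plus-proximal-point pattern the abstract describes (the affine operator, the Tikhonov schedule `eps_k`, and the step `lam` are illustrative assumptions; the paper's precise scheme and rate conditions are in the article):

```python
import numpy as np

def regularized_ppa(G, b, x0, lam=1.0, eps0=1.0, decay=0.9, iters=200):
    # Find a zero of the maximal monotone affine operator T(x) = Gx + b
    # (monotone when G + G' is positive semidefinite) with a Tikhonov-
    # regularized proximal point iteration:
    #   x_{k+1} = (I + lam*(T + eps_k*I))^{-1} x_k,  eps_k -> 0.
    n = len(x0)
    x, eps = x0.copy(), eps0
    for _ in range(iters):
        x = np.linalg.solve(np.eye(n) + lam * (G + eps * np.eye(n)),
                            x - lam * b)
        eps *= decay  # vanishing regularization
    return x

# Skew-symmetric example (rotation field): plain forward steps spiral
# outward, but the regularized resolvent iteration converges to the zero.
G = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = regularized_ppa(G, np.array([1.0, -2.0]), np.array([5.0, 5.0]))
```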

A Modified Proximal Symmetric ADMM for Multi-Block Separable Convex Optimization with Linear Constraints

We consider the linearly constrained convex optimization problem whose objective function is separable w.r.t. $m$ blocks of variables. A number of methods have been proposed and studied for it. In particular, a modified strictly contractive Peaceman-Rachford splitting method (SC-PRCM) has been well studied in the literature for the special case of $m=3$. Based on the modified … Read more
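
To make the symmetric structure (a multiplier update after the first block and again after the remaining blocks) concrete, here is a minimal proximal symmetric ADMM sketch on a toy three-block quadratic; the penalty `beta`, proximal weight `tau`, and step sizes `s`, `t` are placeholders, not the certified parameter ranges of SC-PRCM or its modification:

```python
import numpy as np

def prox_sadmm(A, c, b, beta=1.0, tau=1.0, s=0.5, t=0.5, iters=500):
    # Proximal symmetric ADMM sketch for
    #   min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i A_i x_i = b.
    m = len(A)
    x = [np.zeros(Ai.shape[1]) for Ai in A]
    lam = np.zeros(len(b))

    def residual():
        return sum(A[i] @ x[i] for i in range(m)) - b

    def update_block(i):
        # Closed-form minimizer of the proximal augmented Lagrangian in x_i.
        r = residual() - A[i] @ x[i]
        H = (1.0 + tau) * np.eye(A[i].shape[1]) + beta * A[i].T @ A[i]
        x[i] = np.linalg.solve(H, c[i] + tau * x[i] + A[i].T @ (lam - beta * r))

    for _ in range(iters):
        update_block(0)
        lam = lam - s * beta * residual()   # first multiplier update
        for i in range(1, m):               # Gauss-Seidel sweep over the rest
            update_block(i)
        lam = lam - t * beta * residual()   # second multiplier update
    return x, lam

rng = np.random.default_rng(1)
A = [rng.standard_normal((4, 3)) for _ in range(3)]
c = [rng.standard_normal(3) for _ in range(3)]
b = rng.standard_normal(4)
x, lam = prox_sadmm(A, c, b)
print(np.linalg.norm(sum(A[i] @ x[i] for i in range(3)) - b))  # feasibility gap
```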

An inexact scalarized proximal algorithm with quasi-distance for convex and quasiconvex multi-objective minimization

In the paper of Rocha et al., J Optim Theory Appl (2016) 171:964–979, the authors introduced a proximal point algorithm with quasi-distances to solve unconstrained convex multi-objective minimization problems. They proved that all accumulation points are efficient solutions of the problem. In this paper we analyze an inexact proximal point algorithm to solve convex … Read more
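
A minimal sketch of one scalarized proximal step with an asymmetric quasi-distance (the fixed scalarization weights, the cost coefficients of the quasi-distance, and the Nelder-Mead inner solver, which is what makes each step inexact here, are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def quasi_distance(u, v, fwd=2.0, bwd=1.0):
    # Asymmetric quasi-distance: moving a coordinate up costs `fwd` per
    # unit, moving it down costs `bwd`, so q(u, v) != q(v, u) in general.
    d = v - u
    return fwd * np.maximum(d, 0).sum() + bwd * np.maximum(-d, 0).sum()

def scalarized_prox_step(F, z, xk, alpha=1.0):
    # One inexactly solved iteration: minimize the scalarization
    # <z, F(x)> plus a proximal quasi-distance term around x_k.
    obj = lambda x: z @ F(x) + 0.5 * alpha * quasi_distance(xk, x) ** 2
    return minimize(obj, xk, method="Nelder-Mead").x

# Bi-objective toy problem F(x) = (||x - p||^2, ||x - q||^2):
# iterates approach an efficient (Pareto) point on the segment [p, q].
p, q = np.array([0.0, 0.0]), np.array([2.0, 1.0])
F = lambda x: np.array([np.sum((x - p) ** 2), np.sum((x - q) ** 2)])
z = np.array([0.5, 0.5])  # fixed scalarization weights for illustration
x = np.array([5.0, -3.0])
for _ in range(20):
    x = scalarized_prox_step(F, z, x)
```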

A Partial PPa S-ADMM for Multi-Block Separable Convex Optimization with Linear Constraints

The symmetric alternating direction method of multipliers (S-ADMM) is a classical and effective method for solving two-block separable convex optimization problems. However, its convergence may not be guaranteed in the multi-block case without additional assumptions. In this paper, we propose a partial PPa S-ADMM (referred to as P3SADMM), which updates the Lagrange multiplier twice with suitable … Read more
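
The excerpt cuts off, but the double multiplier update it refers to fits the generic template below (our reconstruction, with unspecified proximal weights $P_i$ and step sizes $s, t$; $\mathcal{L}_\beta$ denotes the augmented Lagrangian):

\begin{align*}
x_1^{k+1} &\in \operatorname*{arg\,min}_{x_1}\ \mathcal{L}_\beta\big(x_1, x_2^k,\dots,x_m^k,\lambda^k\big) + \tfrac12\|x_1 - x_1^k\|_{P_1}^2,\\
\lambda^{k+\frac12} &= \lambda^k - s\beta\Big(A_1 x_1^{k+1} + \textstyle\sum_{i\geq 2} A_i x_i^k - b\Big),\\
x_i^{k+1} &\in \operatorname*{arg\,min}_{x_i}\ \mathcal{L}_\beta\big(x_1^{k+1},\dots,x_i,\dots,x_m^k,\lambda^{k+\frac12}\big) + \tfrac12\|x_i - x_i^k\|_{P_i}^2,\quad i=2,\dots,m,\\
\lambda^{k+1} &= \lambda^{k+\frac12} - t\beta\Big(\textstyle\sum_{i} A_i x_i^{k+1} - b\Big).
\end{align*}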

A New Preconditioning Approach for an Interior Point-Proximal Method of Multipliers for Linear and Convex Quadratic Programming

In this paper, we address the efficient numerical solution of linear and quadratic programming problems, often of large scale. With this aim, we devise an infeasible interior point method, blended with the proximal method of multipliers, which in turn results in a primal-dual regularized interior point method. Application of this method gives rise to a … Read more
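
The computational heart of such a method is a sequence of regularized KKT systems. The sketch below assembles one and solves it with MINRES under a simple block-diagonal preconditioner (a generic choice for illustration, not the preconditioner proposed in the paper; the data and the regularization weights `rho`, `delta` are placeholders):

```python
import numpy as np
from scipy.sparse import eye, bmat, diags, random as sprandom
from scipy.sparse.linalg import minres, LinearOperator

# One regularized KKT solve of the kind needed at every interior point
# iteration of a primal-dual regularized IPM:
#   [ -(Q + Theta^{-1} + rho*I)   A^T      ] [dx]   [r1]
#   [  A                          delta*I  ] [dy] = [r2]
rng = np.random.default_rng(2)
n, m = 50, 20
A = sprandom(m, n, density=0.2, random_state=2, format="csr")
Q = eye(n) * 0.1                              # convex quadratic term
theta_inv = diags(rng.uniform(0.1, 10.0, n))  # IPM scaling matrix
rho, delta = 1e-4, 1e-4                       # proximal (PMM) regularization

K = bmat([[-(Q + theta_inv + rho * eye(n)), A.T],
          [A, delta * eye(m)]], format="csr")
rhs = rng.standard_normal(n + m)

# Block-diagonal preconditioner: diagonal of the (1,1) block, and the
# Schur complement approximated by its diagonal (crude but common).
d1 = -K.diagonal()[:n]
d2 = np.full(m, delta) + A.multiply(A) @ (1.0 / d1)
M = LinearOperator((n + m, n + m),
                   matvec=lambda v: np.concatenate([v[:n] / d1, v[n:] / d2]))
sol, info = minres(K, rhs, M=M)  # info == 0 signals convergence
```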

Robust stochastic optimization with the proximal point method

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. In this work, we show that a wide class of such algorithms on strongly convex problems can be augmented with sub-exponential confidence bounds at an overhead cost that is only … Read more
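
The excerpt does not describe the mechanism, so the sketch below shows only the classical confidence-boosting device that such results refine: run the method on independent samples and return the output closest, in median distance, to the other outputs (the SGD solver, trial count, and toy objective are all assumptions; the paper's proximal-point-based procedure is more refined than this):

```python
import numpy as np

def sgd_trial(grad_sample, x0, rng, steps=500, lr=1e-2):
    # One independent run of SGD on a strongly convex objective.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad_sample(x, rng)
    return x

def boost_confidence(trials):
    # Pick the output whose median distance to the other outputs is
    # smallest: on strongly convex problems this turns an in-expectation
    # guarantee into a high-probability (sub-exponential tail) one.
    dists = np.linalg.norm(trials[:, None, :] - trials[None, :, :], axis=2)
    return trials[np.argmin(np.median(dists, axis=1))]

# Toy problem: minimize 0.5*||x - x_star||^2 from noisy gradients.
rng = np.random.default_rng(3)
x_star = np.array([1.0, -2.0])
grad = lambda x, r: (x - x_star) + r.standard_normal(2)
runs = np.stack([sgd_trial(grad, np.zeros(2), rng) for _ in range(11)])
x_hat = boost_confidence(runs)
```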