Primal-dual extrapolation methods for monotone inclusions under local Lipschitz continuity with applications to variational inequality, conic constrained saddle point, and convex conic optimization problems

In this paper we consider a class of structured monotone inclusion (MI) problems that consist of finding a zero of the sum of two monotone operators, in which one is maximal monotone and the other is locally Lipschitz continuous. In particular, we first propose a primal-dual extrapolation (PDE) method for solving a structured strongly MI problem …
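In standard notation (the operator names below are ours, not necessarily the paper's), such a structured monotone inclusion asks for a point $x$ with
$$0 \in A(x) + B(x),$$
where $A$ is maximal monotone and $B$ is monotone and locally Lipschitz continuous.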

Variational inequalities governed by strongly pseudomonotone vector fields on Hadamard manifolds

We consider variational inequalities governed by strongly pseudomonotone vector fields on Hadamard manifolds. Existence and uniqueness of the solution, as well as linear convergence, error estimates, and finite convergence of sequences generated by a modified projection method for solving such variational inequalities, are investigated. Some examples and numerical experiments are also given to illustrate our …
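For context, in standard notation (which may differ from the paper's), a variational inequality on a Hadamard manifold $M$ over a closed convex set $C \subseteq M$ asks for $x^* \in C$ such that
$$\langle F(x^*), \exp_{x^*}^{-1} y \rangle \ge 0 \quad \text{for all } y \in C,$$
where $\exp_{x^*}^{-1}$ is the inverse exponential map at $x^*$; strong pseudomonotonicity of the vector field $F$ is the assumption behind the uniqueness and linear convergence results mentioned above.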

On Solving Elliptic Obstacle Problems by Compact Abs-Linearization

We consider optimal control problems governed by an elliptic variational inequality of the first kind, namely the obstacle problem. The variational inequality is treated by penalization, which leads to optimization problems governed by a nonsmooth semilinear elliptic PDE. The CALi algorithm is then applied for the efficient solution of these nonsmooth optimization problems. The …
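As a reminder of the model problem (standard formulation, not quoted from the paper), the obstacle problem seeks $y \in K = \{v \in H_0^1(\Omega) : v \ge \psi \text{ a.e.}\}$ such that
$$a(y, v - y) \ge (f, v - y) \quad \text{for all } v \in K,$$
with obstacle $\psi$ and a coercive bilinear form $a(\cdot,\cdot)$; the penalization mentioned above replaces the constraint $v \ge \psi$ by a nonsmooth penalty term, yielding the semilinear PDE to which CALi is applied.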

Openness, Hölder metric regularity and Hölder continuity properties of semialgebraic set-valued maps

Given a semialgebraic set-valued map $F \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ with closed graph, we show that the map $F$ is Hölder metrically subregular and that the following conditions are equivalent: (i) $F$ is an open map from its domain into its range and the range of $F$ is locally closed; (ii) the map $F$ is …
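For reference, a standard (not paper-specific) definition: $F$ is Hölder metrically subregular at $(\bar x, \bar y) \in \operatorname{gph} F$ with exponent $\alpha \in (0,1]$ if there exist $c > 0$ and a neighborhood $U$ of $\bar x$ such that
$$\operatorname{dist}\big(x, F^{-1}(\bar y)\big) \le c \, \operatorname{dist}\big(\bar y, F(x)\big)^{\alpha} \quad \text{for all } x \in U.$$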

Equilibrium selection for multi-portfolio optimization

This paper studies a Nash game arising in portfolio optimization. We introduce a new general multi-portfolio model and state sufficient conditions for the monotonicity of the underlying Nash game. This property allows us to treat the problem numerically and, for the case of nonunique equilibria, to solve hierarchical problems of equilibrium selection. We also give …
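As a generic template (our notation, not necessarily the paper's), in such a Nash game each account $i$ solves
$$\min_{x_i \in X_i} \ \theta_i(x_i, x_{-i}),$$
and the game is called monotone when the concatenated pseudo-gradient $x \mapsto \big(\nabla_{x_i} \theta_i(x)\big)_{i}$ is a monotone operator; this is the property that makes the numerical treatment and the equilibrium selection mentioned above tractable.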

A search direction inspired primal-dual method for saddle point problems

The primal-dual hybrid gradient algorithm (PDHG), which is essentially the Arrow-Hurwicz method, has been widely used in image processing. However, the convergence of PDHG was established in the literature only under some restrictive conditions, and it is still missing for the case without extra constraints. In this paper, from the perspective of the variational …
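For orientation, for the saddle point problem $\min_x \max_y \, g(x) + \langle K x, y \rangle - f^*(y)$, the Arrow-Hurwicz iteration can be written (a standard template, not necessarily the exact scheme analyzed in the paper) as
$$y^{k+1} = \operatorname{prox}_{\sigma f^*}\!\big(y^k + \sigma K x^k\big), \qquad x^{k+1} = \operatorname{prox}_{\tau g}\!\big(x^k - \tau K^\top y^{k+1}\big),$$
with stepsizes $\sigma, \tau > 0$; the PDHG scheme of Chambolle and Pock additionally extrapolates the primal iterate, $\bar{x}^{k+1} = x^{k+1} + \theta\,(x^{k+1} - x^k)$, and feeds $\bar{x}^{k+1}$ into the next dual update, with $\theta = 0$ recovering Arrow-Hurwicz.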

Active-set Newton methods and partial smoothness

Diverse optimization algorithms correctly identify, in finite time, intrinsic constraints that must be active at optimality. Analogous behavior extends beyond optimization to systems involving partly smooth operators, and in particular to variational inequalities over partly smooth sets. As in classical nonlinear programming, such active-set structure underlies the design of accelerated local algorithms of Newton type. …

An extragradient method for solving variational inequalities without monotonicity

A new extragradient projection method is devised in this paper; it requires no generalized monotonicity and assumes only that the so-called dual variational inequality has a solution in order to ensure its global convergence. In particular, it applies to quasimonotone variational inequalities having a nontrivial solution.
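For reference, the classical (Korpelevich) extragradient template for a variational inequality over a closed convex set $C$ with operator $F$ reads
$$y^k = P_C\!\big(x^k - \tau F(x^k)\big), \qquad x^{k+1} = P_C\!\big(x^k - \tau F(y^k)\big),$$
where $P_C$ is the projection onto $C$ and $\tau > 0$ a stepsize; this is only the standard starting point, and its classical convergence analysis relies on (pseudo)monotonicity of $F$, which is precisely the assumption the new projection method dispenses with.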

Golden Ratio Algorithms for Variational Inequalities

The paper presents a fully explicit algorithm for monotone variational inequalities. The method uses variable stepsizes that are computed using two previous iterates as an approximation of the local Lipschitz constant without running a linesearch. Thus, each iteration of the method requires only one evaluation of a monotone operator $F$ and a proximal mapping $g$. …
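For context, the fixed-stepsize form of the Golden Ratio Algorithm (as we recall it; the adaptive-stepsize variant described above replaces the constant $\lambda$ by a locally computed $\lambda_k$) iterates
$$\bar{x}^{k} = \frac{(\varphi - 1)\, x^{k} + \bar{x}^{k-1}}{\varphi}, \qquad x^{k+1} = \operatorname{prox}_{\lambda g}\!\big(\bar{x}^{k} - \lambda F(x^{k})\big),$$
where $\varphi = \tfrac{1+\sqrt{5}}{2}$ is the golden ratio; each pass indeed costs one evaluation of $F$ and one proximal step, matching the per-iteration cost stated above.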

Convergent Prediction-Correction-based ADMM for multi-block separable convex programming

The direct extension of the classic alternating direction method of multipliers (ADMMe) to the multi-block separable convex optimization problem is not necessarily convergent, though it often performs very well in practice. In order to preserve the numerical advantages of ADMMe and obtain convergence, many modified ADMM variants were proposed by correcting the output of ADMMe or …
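As a reminder of the problem class (standard notation, not copied from the paper), the multi-block separable convex program has the form
$$\min_{x_1, \dots, x_m} \ \sum_{i=1}^{m} \theta_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^{m} A_i x_i = b,$$
with closed convex functions $\theta_i$; ADMMe updates the blocks $x_1, \dots, x_m$ sequentially by minimizing the augmented Lagrangian and then updates the multiplier, and the prediction-correction schemes discussed above append a correction step to this output in order to restore convergence.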