Conditional Extragradient Algorithms for Solving Constrained Variational Inequalities

In this paper, we generalize the classical extragradient algorithm for solving variational inequality problems by using non-null normal vectors of the feasible set. In particular, we propose conceptual algorithms with two different linesearches. We then establish convergence results for these algorithms under mild assumptions. Our study suggests that non-null normal vectors may significantly improve convergence …
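
For readers new to the method being generalized, the classical (Korpelevich) extragradient iteration can be sketched as below; `F`, `proj_C`, the step size `alpha`, and the toy affine operator are illustrative placeholders, and the paper's conditional variants (non-null normal vectors, linesearches) are not reproduced here.

```python
import numpy as np

def extragradient(F, proj_C, x0, alpha=0.1, max_iter=500, tol=1e-8):
    """Classical extragradient method for the VI: find x* in C with
    <F(x*), x - x*> >= 0 for all x in C.  The paper's conditional
    variants modify the second projection using normal vectors and
    a linesearch; this is only the baseline scheme."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - alpha * F(x))       # prediction (first projection)
        x_new = proj_C(x - alpha * F(y))   # correction (second projection)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: a monotone affine VI on the nonnegative orthant.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([-1.0, -1.0])
sol = extragradient(lambda x: A @ x + b,
                    lambda x: np.maximum(x, 0.0),
                    x0=np.ones(2))
```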

A proximal ADMM with the Broyden family for Convex Optimization Problems

The alternating direction method of multipliers (ADMM) has been well studied and effectively used in various application fields. The classical ADMM must solve two subproblems exactly at each iteration. To overcome the difficulty of computing the exact solutions of these subproblems, proximal terms can be added to the subproblems. Recently, Gu and Yamashita studied a special …
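
To make the role of the proximal terms concrete, here is a minimal proximal ADMM sketch for the lasso-type splitting min 0.5||Ax - b||^2 + mu||z||_1 s.t. x = z, with a fixed quadratic proximal weight `tau`; the paper's variable-metric terms built from the Broyden family are replaced by this simpler surrogate.

```python
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def proximal_admm_lasso(A, b, mu, rho=1.0, tau=1.0, max_iter=300):
    """Proximal ADMM sketch for min 0.5||Ax-b||^2 + mu||z||_1 s.t. x = z.
    A term (tau/2)||x - x_k||^2 is added to the x-subproblem; the paper
    instead builds this proximal term from Broyden-family updates."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # With the extra proximal term, the x-subproblem remains a linear solve.
    M = A.T @ A + (rho + tau) * np.eye(n)
    Atb = A.T @ b
    for _ in range(max_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u) + tau * x)
        z = soft_threshold(x + u, mu / rho)   # exact z-subproblem (prox of l1)
        u = u + x - z                         # scaled dual update
    return z
```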

Asymptotic results of Stochastic Decomposition for Two-stage Stochastic Quadratic Programming

This paper presents stochastic decomposition (SD) algorithms for two classes of stochastic programming problems: 1) two-stage stochastic quadratic-linear programming (SQLP), in which a quadratic program defines the objective function in the first stage and a linear program defines the value function in the second stage; 2) two-stage stochastic quadratic-quadratic programming (SQQP), which has quadratic programming …
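
For orientation, a generic two-stage SQLP of the kind described can be written as follows (the notation here is ours, not necessarily the paper's):

```latex
\min_{x \in X} \; c^\top x + \tfrac{1}{2}\, x^\top Q x + \mathbb{E}\big[\, h(x,\tilde{\omega}) \,\big],
\qquad
h(x,\omega) \;=\; \min_{y \ge 0} \; d^\top y
\quad \text{s.t.} \quad D y = \xi(\omega) - C x,
```

with Q positive semidefinite; in the SQQP case the recourse objective d^T y is replaced by a convex quadratic in y.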

The primal-dual hybrid gradient method reduces to a primal method for linearly constrained optimization problems

In this work, we show that for linearly constrained optimization problems the primal-dual hybrid gradient algorithm, analyzed by Chambolle and Pock [3], can be written as an entirely primal algorithm. This allows us to prove convergence of the iterates even in the degenerate cases when the linear system is inconsistent or when the strong duality …
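
For context, the PDHG iteration specialized to min f(x) s.t. Ax = b takes a simple explicit form, since the dual proximal step becomes affine; the quadratic objective and step sizes below are illustrative choices, not taken from the paper.

```python
import numpy as np

def pdhg_linear_constraints(prox_f, A, b, x0, tau, sigma, max_iter=1000):
    """Chambolle-Pock PDHG for min f(x) s.t. Ax = b.
    Here g is the indicator of {b}, so the dual prox reduces to the
    explicit affine step y <- y + sigma*(A x_bar - b).
    prox_f(v, tau) must return prox_{tau f}(v).
    Convergence requires tau * sigma * ||A||^2 <= 1."""
    x = np.asarray(x0, dtype=float)
    y = np.zeros(A.shape[0])
    x_bar = x.copy()
    for _ in range(max_iter):
        y = y + sigma * (A @ x_bar - b)          # dual ascent step
        x_new = prox_f(x - tau * A.T @ y, tau)   # primal prox step
        x_bar = 2 * x_new - x                    # extrapolation
        x = x_new
    return x, y

# Toy usage: min 0.5||x - c||^2 s.t. Ax = b (prox of f is closed-form).
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([2.0, 0.0])
prox_f = lambda v, t: (v + t * c) / (1 + t)
x_opt, y_opt = pdhg_linear_constraints(prox_f, A, b, np.zeros(2),
                                       tau=0.5, sigma=0.5)
```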

Bilevel optimization: theory, algorithms and applications

Bilevel optimization problems are hierarchical optimization problems where the feasible region of the so-called upper-level problem is restricted by the graph of the solution set mapping of the lower-level problem. The aim of this article is to collect a large number of references on this topic, to show the diversity of contributions, and to …

Asynchronous Sequential Inertial Iterations for Common Fixed Points Problems with an Application to Linear Systems

The common fixed points problem requires finding a point in the intersection of the fixed point sets of a finite collection of operators. Quickly solving problems of this sort is of great practical importance for engineering and scientific tasks (e.g., for computed tomography). Iterative methods for solving these problems often employ a Krasnosel’skii-Mann type iteration. We …
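
As a concrete instance of the setting above, the sketch below runs a relaxed sequential Krasnosel'skii-Mann iteration with projections onto the hyperplanes of a linear system Ax = b (with relaxation 1 this is the classical Kaczmarz sweep); the inertial terms and asynchrony studied in the paper are deliberately omitted.

```python
import numpy as np

def sequential_km_projections(A, b, x0, relaxation=1.0, sweeps=100):
    """Sequential Krasnosel'skii-Mann iteration x <- x + lam*(T_i(x) - x),
    where T_i is the orthogonal projection onto {x : a_i^T x = b_i}.
    A point in the intersection of all fixed point sets solves Ax = b.
    (Inertia and asynchrony from the paper are not modeled here.)"""
    x = np.asarray(x0, dtype=float)
    row_norms_sq = np.sum(A * A, axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            residual = A[i] @ x - b[i]
            proj = x - (residual / row_norms_sq[i]) * A[i]  # T_i(x)
            x = x + relaxation * (proj - x)                 # KM step
    return x

# Toy usage: a consistent 3x3 system.
A = np.array([[2.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]])
b = A @ np.array([1.0, 2.0, 3.0])
x = sequential_km_projections(A, b, np.zeros(3))
```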

A fundamental proof of the convergence of the alternating direction method of multipliers for weakly convex optimization

The convergence analysis of the alternating direction method of multipliers (ADMM) for convex and nonconvex composite optimization has been well established in the literature. Considering the extensive applications of weakly convex functions in signal processing and machine learning (e.g., the special issue DC-Theory, Algorithms and Applications, Mathematical Programming, 169(1):1-336, 2018), in this paper we investigate the convergence analysis of ADMM …
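
For reference, a function f is rho-weakly convex precisely when adding a strong enough quadratic restores convexity:

```latex
f \ \text{is } \rho\text{-weakly convex}
\quad \Longleftrightarrow \quad
x \mapsto f(x) + \tfrac{\rho}{2}\,\|x\|^2 \ \text{is convex}.
```

In particular, any DC function g - h with g convex and h having a rho-Lipschitz gradient is rho-weakly convex, which is what connects the DC literature cited above to this setting.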

Accelerated Bregman Proximal Gradient Methods for Relatively Smooth Convex Optimization

We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. The relative smoothness condition is much weaker than the standard assumption of uniform Lipschitz continuity of the gradients, thus significantly …
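
In the notation standard for this line of work (ours, not verbatim the paper's), f is L-smooth relative to a reference function h when

```latex
f(y) \;\le\; f(x) + \langle \nabla f(x),\, y - x \rangle + L\, D_h(y, x),
\qquad
D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle,
```

and the Bregman proximal gradient step replaces the usual quadratic model with this Bregman distance:

```latex
x_{k+1} \;=\; \arg\min_{x} \Big\{ \langle \nabla f(x_k),\, x \rangle + g(x) + L\, D_h(x, x_k) \Big\}.
```

Taking h(x) = (1/2)||x||^2 recovers the standard proximal gradient method.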

On the Linear Convergence of Difference-of-convex Algorithms for Nonsmooth DC Programming

In this paper we consider the linear convergence of algorithms for minimizing difference-of-convex functions with convex constraints. We allow nonsmoothness in both the convex and concave components of the objective function, with a finite max structure in the concave component. Our focus is on algorithms that compute (weak and standard) d(irectional)-stationary points …
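
As a small illustration of the class of algorithms discussed, the basic DCA linearizes the concave part at each iterate and solves the resulting convex subproblem; the toy objective below, 0.5||x - a||^2 - mu||x||_1, is our own example, chosen so the subproblem has a closed form.

```python
import numpy as np

def dca_toy(a, mu, x0, max_iter=100, tol=1e-10):
    """Basic DCA for f(x) = g(x) - h(x) with
    g(x) = 0.5||x - a||^2 (convex, smooth) and h(x) = mu*||x||_1 (convex).
    Step 1: pick y_k in the subdifferential of h at x_k.
    Step 2: x_{k+1} = argmin_x g(x) - <y_k, x>, closed-form here."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = mu * np.sign(x)   # a subgradient of h at x (0 chosen at kinks)
        x_new = a + y         # solves grad g(x) = y exactly
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x_star = dca_toy(a=np.array([0.3, -2.0]), mu=0.5, x0=np.zeros(2))
```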

Positive semidefinite matrix approximation with a trace constraint

We propose an efficient algorithm to solve the positive semidefinite matrix approximation problem with a trace constraint. Without constraints, it is well known that the positive semidefinite matrix approximation problem can be solved by a single eigendecomposition of a symmetric matrix. In this paper, we show that a single eigendecomposition is also sufficient even if a trace …
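
The unconstrained projection mentioned above clips negative eigenvalues after one eigendecomposition; with a trace constraint, the same eigendecomposition can be reused and the eigenvalues projected onto the simplex {lam >= 0, sum(lam) = t}. A sketch of this standard spectral route (not necessarily the paper's exact algorithm):

```python
import numpy as np

def project_simplex(v, t):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = t} (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - t
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def psd_approx_trace(C, t):
    """Nearest (Frobenius) PSD matrix to symmetric C with trace exactly t:
    one eigendecomposition, then a simplex projection of the eigenvalues."""
    eigvals, U = np.linalg.eigh(C)
    lam = project_simplex(eigvals, t)
    return (U * lam) @ U.T   # U diag(lam) U^T

C = np.array([[1.0, 2.0], [2.0, -3.0]])
X = psd_approx_trace(C, t=1.0)   # PSD and trace(X) == 1
```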