Solving ill-posed bilevel programs

This paper deals with ill-posed bilevel programs, i.e., problems admitting multiple lower-level solutions for some upper-level parameters. Many publications have been devoted to the standard optimistic case of this problem, where the difficulty is essentially moved from the objective function to the feasible set. This new problem is simpler, but there is no guarantee to … Read more
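
For orientation only (the notation below is generic and not taken from the paper), an optimistic bilevel program can be written as
\[
\min_{x \in X,\; y}\; F(x,y)
\quad\text{s.t.}\quad
y \in S(x) := \operatorname*{argmin}_{z \in Y(x)} f(x,z),
\]
where $F$ and $f$ denote the upper- and lower-level objectives. When $S(x)$ is not a singleton, the solution-set constraint $y \in S(x)$ is precisely where the difficulty reappears in the feasible set.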

Global convergence of the Heavy-ball method for convex optimization

This paper establishes global convergence and provides global bounds on the convergence rate of the Heavy-ball method for convex optimization problems. When the objective function has a Lipschitz-continuous gradient, we show that the Cesàro average of the iterates converges to the optimum at a rate of $O(1/k)$, where $k$ is the number of iterations. When … Read more
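
As a purely illustrative sketch (the quadratic test function and the step-size and momentum parameters below are chosen here for illustration and are not the ones analysed in the paper), the heavy-ball iteration and the Cesàro average of its iterates can be coded as follows.

```python
import numpy as np

# Heavy-ball sketch on a convex quadratic f(x) = 0.5*x^T A x - b^T x,
# whose gradient is Lipschitz with constant L = ||A||_2.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b

L = np.linalg.norm(A, 2)       # Lipschitz constant of the gradient
alpha, beta = 1.0 / L, 0.5     # illustrative step size and momentum

x_prev = x = np.zeros(2)
cesaro_sum = np.zeros(2)
for k in range(1, 201):
    x_next = x - alpha * grad(x) + beta * (x - x_prev)   # heavy-ball step
    x_prev, x = x, x_next
    cesaro_sum += x
cesaro_avg = cesaro_sum / 200                            # Cesàro average of the iterates

print("Cesàro average:", cesaro_avg, " exact minimiser:", np.linalg.solve(A, b))
```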

Variational principles with generalized distances and applications to behavioral sciences

This paper has a two-fold focus: proving that the quasimetric and the weak $\tau$-distance versions of the Ekeland variational principle are equivalent, in the sense that each implies the other, and presenting the need for such extensions in possible applications to the formation and breaking of workers' hiring and firing routines. … Read more
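
For reference, the classical metric-space form of the Ekeland variational principle, which the quasimetric and weak $\tau$-distance versions extend, reads as follows: if $(X,d)$ is a complete metric space, $f:X\to\mathbb{R}\cup\{+\infty\}$ is lower semicontinuous and bounded below, $\varepsilon>0$, $\lambda>0$, and $f(x_0)\le\inf_X f+\varepsilon$, then there exists $\bar x\in X$ such that
\[
f(\bar x)\le f(x_0),\qquad d(\bar x,x_0)\le\lambda,\qquad
f(x) > f(\bar x)-\frac{\varepsilon}{\lambda}\,d(x,\bar x)\quad\text{for all } x\neq\bar x .
\]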

Activity Identification and Local Linear Convergence of Douglas-Rachford/ADMM under Partial Smoothness

Proximal splitting algorithms are becoming popular for solving convex optimization problems in variational image processing. Within this class, Douglas-Rachford (DR) and ADMM are designed to minimize the sum of two proper lower semicontinuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local convergence behaviour of … Read more
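
The following toy sketch (an $\ell_1$-regularized denoising problem chosen here for illustration, not an example from the paper) shows the DR iteration when both proximity operators are available in closed form.

```python
import numpy as np

# Douglas-Rachford for min_x f(x) + g(x) with
#   f(x) = 0.5 * ||x - b||^2   (prox: averaging with b)
#   g(x) = lam * ||x||_1       (prox: soft-thresholding)
b = np.array([3.0, -0.2, 0.05, -4.0])
lam, gamma = 1.0, 1.0

prox_f = lambda z: (z + gamma * b) / (1.0 + gamma)
prox_g = lambda z: np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

z = np.zeros_like(b)
for _ in range(100):
    x = prox_f(z)                     # first prox (resolvent) step
    z = z + prox_g(2 * x - z) - x     # reflected step and update of the DR variable
print("DR solution:", prox_f(z))      # the minimiser is recovered as prox_f of the fixed point
```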

An asymptotic inclusion speed for the Douglas-Rachford splitting method in Hilbert spaces

In this paper, we consider the Douglas-Rachford splitting method for monotone inclusions in Hilbert spaces. It can be implemented as follows: from the current iterate, a forward-backward step is first used to obtain an intermediate point, from which the new iterate is then computed. Generally speaking, the sum operator involved in the Douglas-Rachford splitting takes the value of every … Read more
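
For reference, one common way of writing the Douglas-Rachford recursion for the inclusion $0 \in (A+B)(x)$ with maximally monotone $A$ and $B$ is
\[
x_k = J_{\gamma A}(z_k), \qquad z_{k+1} = z_k + J_{\gamma B}\bigl(2x_k - z_k\bigr) - x_k,
\]
where $J_{\gamma A} = (I+\gamma A)^{-1}$ denotes the resolvent; the exact normalization analysed in the paper may differ.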

A Characterization of the Lagrange-Karush-Kuhn-Tucker Property

In this note, we revisit the classical first-order necessary condition in mathematical programming in infinite dimensions. We show that the existence of Lagrange-Karush-Kuhn-Tucker multipliers is equivalent to the existence of an error bound for the constraint set, and is also equivalent to a generalized Abadie qualification condition. These results considerably extend previous ones, such as those by … Read more
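
As a schematic finite-dimensional illustration of the objects involved (the notation is chosen here and does not reproduce the setting of the note), consider $\min f(x)$ subject to $x \in C=\{x : g(x)\le 0\}$ with smooth data. The Lagrange-Karush-Kuhn-Tucker condition at a minimizer $\bar x$ asks for a multiplier $\lambda\ge 0$ with
\[
\nabla f(\bar x)+\lambda\,\nabla g(\bar x)=0,\qquad \lambda\,g(\bar x)=0,
\]
while a local error bound for the constraint set asks for a constant $c>0$ such that
\[
\operatorname{dist}(x,C)\le c\,\max\{g(x),0\}\quad\text{for all } x \text{ near } \bar x .
\]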

Fast Bundle-Level Type Methods for unconstrained and ball-constrained convex optimization

It has been shown in \cite{Lan13-1} that the accelerated prox-level (APL) method and its variant, the uniform smoothing level (USL) method, have optimal iteration complexity for solving black-box and structured convex programming problems without requiring the input of any smoothness information. However, these algorithms require the feasible set to be bounded and … Read more
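
The sketch below illustrates the flavour of a generic level-bundle step (cutting-plane model, level set, projection) on a toy ball-constrained problem; it is not the APL/USL scheme of the paper, and the test function and parameters are assumptions made here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Generic level-bundle sketch for min f(x) over the ball ||x|| <= R,
# with f a toy piecewise-linear convex function and an explicit subgradient oracle.
rng = np.random.default_rng(0)
A_pl = rng.standard_normal((5, 2))
b_pl = rng.standard_normal(5)

def f_and_subgrad(x):
    vals = A_pl @ x + b_pl
    i = int(np.argmax(vals))
    return vals[i], A_pl[i]          # f(x) = max_i (a_i.x + b_i), subgradient a_i

R, lam = 2.0, 0.5
x = np.zeros(2)
bundle = []                          # cuts (x_i, f(x_i), g_i)

for _ in range(25):
    fx, gx = f_and_subgrad(x)
    bundle.append((x.copy(), fx, gx))
    up = min(fi for _, fi, _ in bundle)               # best function value so far

    # Lower bound: minimise the cutting-plane model over the ball, using an
    # epigraph variable t so that all constraints are smooth.
    cons = [{'type': 'ineq',
             'fun': (lambda w, xi=xi, fi=fi, gi=gi: w[-1] - fi - gi @ (w[:-1] - xi))}
            for xi, fi, gi in bundle]
    cons.append({'type': 'ineq', 'fun': lambda w: R**2 - w[:-1] @ w[:-1]})
    low = minimize(lambda w: w[-1], np.append(x, fx),
                   constraints=cons, method='SLSQP').fun
    level = low + lam * (up - low)                    # target level

    # New iterate: project x onto the level set of the model inside the ball.
    cons = [{'type': 'ineq',
             'fun': (lambda z, xi=xi, fi=fi, gi=gi: level - fi - gi @ (z - xi))}
            for xi, fi, gi in bundle]
    cons.append({'type': 'ineq', 'fun': lambda z: R**2 - z @ z})
    x = minimize(lambda z: np.sum((z - x) ** 2), x,
                 constraints=cons, method='SLSQP').x

print("approximate minimiser:", x, "  f(x):", f_and_subgrad(x)[0])
```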

Coordinate descent algorithms

Coordinate descent algorithms solve optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This paper describes the fundamentals of the coordinate … Read more
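
A minimal cyclic coordinate-descent sketch on a least-squares problem (a toy example chosen here, not one from the paper) makes the "successive approximate minimization along coordinate directions" concrete.

```python
import numpy as np

# Cyclic coordinate descent for min_x 0.5 * ||A x - b||^2: each inner step
# exactly minimises the objective over a single coordinate, keeping the others fixed.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = np.zeros(5)
r = A @ x - b                              # residual, maintained incrementally
for sweep in range(50):
    for j in range(5):                     # one pass over the coordinates
        a_j = A[:, j]
        delta = -(a_j @ r) / (a_j @ a_j)   # exact 1-D minimiser along e_j
        x[j] += delta
        r += delta * a_j                   # cheap residual update
print("coordinate descent:", x)
print("direct solve:      ", np.linalg.lstsq(A, b, rcond=None)[0])
```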

Interior-point algorithms for convex optimization based on primal-dual metrics

We propose and analyse primal-dual interior-point algorithms for convex optimization problems in conic form. The families of algorithms we analyse are so-called short-step algorithms, and they match the current best iteration-complexity bounds of the primal-dual symmetric interior-point algorithms of Nesterov and Todd for symmetric cone programming problems with given self-scaled barriers. Our results apply to … Read more
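
For context (generic conic-programming notation, chosen here rather than taken from the paper), the primal-dual pair and the central path underlying such interior-point methods can be written as
\[
\min_x\ \{\langle c,x\rangle : Ax=b,\ x\in K\},\qquad
\max_{y,s}\ \{\langle b,y\rangle : A^{*}y+s=c,\ s\in K^{*}\},
\]
with the central path defined, for a barrier $F$ of $K$ and $\mu>0$, by
\[
Ax=b,\qquad A^{*}y+s=c,\qquad s=-\mu\,\nabla F(x);
\]
short-step methods follow this path by keeping the iterates in a tight neighbourhood of it while decreasing $\mu$ geometrically.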

Error Bounds and Hölder Metric Subregularity

The Hölder setting of the metric subregularity property of set-valued mappings between general metric or Banach/Asplund spaces is investigated in the framework of the theory of error bounds for extended real-valued functions of two variables. A classification scheme for the general Hölder metric subregularity criteria is presented. The criteria are formulated in terms of several … Read more
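
For reference, with notation chosen here, a set-valued mapping $F:X\rightrightarrows Y$ is Hölder metrically subregular of order $q\in(0,1]$ at $(\bar x,\bar y)\in\operatorname{gph}F$ if there exist $c>0$ and a neighbourhood $U$ of $\bar x$ such that
\[
d\bigl(x,F^{-1}(\bar y)\bigr)\le c\,\bigl[d\bigl(\bar y,F(x)\bigr)\bigr]^{q}\quad\text{for all } x\in U .
\]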