An optimal subgradient algorithm for large-scale convex optimization in simple domains

This paper shows that the optimal subgradient algorithm, OSGA, proposed in \cite{NeuO} can be used for solving structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems: (i) a convex objective with a …
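OSGA itself is specified in \cite{NeuO}. Purely as a point of reference for first-order methods on simple domains, the following is a minimal sketch of a plain projected subgradient step on a box-constrained domain, with a hypothetical l1 objective and step sizes; it is not the OSGA update.

import numpy as np

# Minimal projected subgradient sketch on a box domain (illustrative only; not OSGA).
# Hypothetical nonsmooth objective f(x) = ||Ax - b||_1; a subgradient is A^T sign(Ax - b).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
lo, hi = -np.ones(5), np.ones(5)             # the "simple domain": a box

x = np.zeros(5)
for k in range(1, 1001):
    g = A.T @ np.sign(A @ x - b)             # subgradient of the l1 objective
    x = np.clip(x - g / np.sqrt(k), lo, hi)  # subgradient step, then projection onto the box
print(np.abs(A @ x - b).sum())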

Looking for strong polynomiality in Linear Programming: Arguments, conjectures, experiments, findings, and conclusion.

Until now, it has been an open question whether the Linear Programming (LP) problem can be solved in strongly polynomial time. The simplex algorithm, with its combinatorial nature, does not even offer a polynomial bound, whereas the complexity of the polynomial-time algorithms of Khachiyan and Karmarkar is based on the number of variables n, and …

Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss

We consider distributed convex optimization problems originating from sample average approximation of stochastic optimization, or empirical risk minimization in machine learning. We assume that each machine in the distributed computing system has access to a local empirical loss function, constructed with i.i.d. data sampled from a common distribution. We propose a communication-efficient distributed algorithm to …
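As context for the setup only (this is not the paper's communication-efficient algorithm), here is a minimal sketch of an empirical risk split across machines, with hypothetical logistic losses and synthetic local data:

import numpy as np

# Sketch of the distributed empirical-risk setup (illustrative; not the paper's method).
# Each of m machines holds n i.i.d. samples and a local loss f_i(w);
# the global objective is the average of the local losses.
rng = np.random.default_rng(1)
m, n, d = 4, 100, 10
X = [rng.standard_normal((n, d)) for _ in range(m)]      # local design matrices
y = [rng.choice([-1.0, 1.0], size=n) for _ in range(m)]  # local labels

def local_loss_grad(w, Xi, yi):
    z = yi * (Xi @ w)
    loss = np.mean(np.log1p(np.exp(-z)))                 # logistic loss on this machine
    grad = Xi.T @ (-yi / (1.0 + np.exp(z))) / n
    return loss, grad

# One communication round: average the local gradients at the current iterate.
w = np.zeros(d)
grads = [local_loss_grad(w, X[i], y[i])[1] for i in range(m)]
avg_grad = np.mean(grads, axis=0)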

A remark on accelerated block coordinate descent for computing the proximity operators of a sum of convex functions

We analyze alternating descent algorithms for minimizing the sum of a quadratic function and block-separable non-smooth functions. When the quadratic interactions between the blocks are pairwise, we show that these schemes can be accelerated, leading to improved convergence rates compared with related accelerated parallel proximal descent methods. As an application we obtain very …
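To illustrate the problem class under discussion, here is a minimal sketch of a plain, non-accelerated two-block alternating proximal scheme for a quadratic with pairwise block coupling plus a hypothetical block-separable l1 term; the accelerated variants analyzed in the paper differ from this.

import numpy as np

# Illustrative two-block alternating proximal descent (plain, non-accelerated):
#   minimize 0.5*||x - a||^2 + 0.5*rho*||x - y||^2 + lam*(||x||_1 + ||y||_1)
# The quadratic couples the blocks pairwise; the non-smooth part is block separable.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

a = np.array([3.0, -2.0, 0.5])
rho, lam = 1.0, 0.3
x = np.zeros(3)
y = np.zeros(3)
for _ in range(100):
    # exact minimization in x with y fixed (quadratic + l1 reduces to soft thresholding)
    x = soft_threshold((a + rho * y) / (1.0 + rho), lam / (1.0 + rho))
    # exact minimization in y with x fixed
    y = soft_threshold(x, lam / rho)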

Solving ill-posed bilevel programs

This paper deals with ill-posed bilevel programs, i.e., problems admitting multiple lower-level solutions for some upper-level parameters. Many publications have been devoted to the standard optimistic case of this problem, where the difficulty is essentially moved from the objective function to the feasible set. This new problem is simpler, but there is no guarantee to …

Variational principles with generalized distances and applications to behavioral sciences

This paper has a two-fold focus: proving that the quasimetric and the weak $\tau$-distance versions of the Ekeland variational principle are equivalent, in the sense that each implies the other, and presenting the need for such extensions in possible applications to the formation and breaking of workers' hiring and firing routines.
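For orientation, the classical metric-space form of the Ekeland variational principle that these quasimetric and $\tau$-distance versions extend reads as follows (the generalized statements themselves are given in the paper): if $(X,d)$ is a complete metric space, $f:X\to\mathbb{R}\cup\{+\infty\}$ is proper, lower semicontinuous and bounded below, and $f(x_0)\le\inf_X f+\varepsilon$ for some $\varepsilon>0$ and $\lambda>0$, then there exists $\bar x$ with
\[
f(\bar x)\le f(x_0), \qquad d(x_0,\bar x)\le\lambda, \qquad
f(x) > f(\bar x) - \tfrac{\varepsilon}{\lambda}\, d(\bar x, x) \quad \text{for all } x\neq\bar x .
\]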

Global convergence of the Heavy-ball method for convex optimization

This paper establishes global convergence and provides global bounds on the convergence rate of the Heavy-ball method for convex optimization problems. When the objective function has a Lipschitz-continuous gradient, we show that the Cesàro average of the iterates converges to the optimum at a rate of $O(1/k)$, where $k$ is the number of iterations. When …
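For reference, here is a minimal sketch of the classical heavy-ball iteration together with the Cesàro (running) average of the iterates; the quadratic objective, step size, and momentum below are hypothetical choices, not the parameters analyzed in the paper.

import numpy as np

# Classical heavy-ball iteration on a smooth convex quadratic (illustrative parameters):
#   x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
grad = lambda x: Q @ x - b

alpha, beta = 0.1, 0.5            # hypothetical step size and momentum
x_prev = x = np.zeros(2)
cesaro = np.zeros(2)
for k in range(1, 501):
    x_next = x - alpha * grad(x) + beta * (x - x_prev)
    x_prev, x = x, x_next
    cesaro += (x - cesaro) / k    # running Cesàro average of the iterates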

Activity Identification and Local Linear Convergence of Douglas-Rachford/ADMM under Partial Smoothness

Proximal splitting algorithms are becoming popular for solving convex optimization problems in variational image processing. Within this class, Douglas-Rachford (DR) and ADMM are designed to minimize the sum of two proper lower semicontinuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local convergence behaviour of …
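As background on the setting, here is a minimal sketch of a standard Douglas-Rachford recursion for minimizing $f+g$ through the proximity operators of the two functions, instantiated on a hypothetical l1 plus least-squares pair; it illustrates the algorithm class, not the paper's local convergence analysis.

import numpy as np

# Standard Douglas-Rachford recursion for min_x f(x) + g(x), using prox operators
# (illustrative instance: f = lam*||.||_1, g = 0.5*||x - a||^2).
def prox_f(v, gamma, lam=0.2):                 # prox of lam*||.||_1 is soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

def prox_g(v, gamma, a=np.array([1.0, -0.3, 2.0])):
    return (v + gamma * a) / (1.0 + gamma)     # prox of 0.5*||x - a||^2

gamma = 1.0
y = np.zeros(3)
for _ in range(200):
    x = prox_f(y, gamma)                       # first prox step
    z = prox_g(2 * x - y, gamma)               # prox at the reflected point
    y = y + z - x                              # DR update of the auxiliary variable
# x is the primal iterate; under mild assumptions it converges to a minimizer of f + g.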

An asymptotic inclusion speed for the Douglas-Rachford splitting method in Hilbert spaces

In this paper, we consider the Douglas-Rachford splitting method for monotone inclusions in Hilbert spaces. It can be implemented as follows: from the current iterate, one first uses a forward-backward step to obtain an intermediate point, and then computes the new iterate from it. Generally speaking, the sum operator involved in the Douglas-Rachford splitting takes the value of every …
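For orientation, one standard way to write the Douglas-Rachford recursion for the monotone inclusion $0 \in (A+B)(x)$ in terms of resolvents (not necessarily the exact scheme analyzed here) is
\[
x_k = J_{\gamma B}(y_k), \qquad y_{k+1} = y_k + J_{\gamma A}\bigl(2x_k - y_k\bigr) - x_k ,
\]
where $J_{\gamma A}=(I+\gamma A)^{-1}$ and $J_{\gamma B}=(I+\gamma B)^{-1}$ denote the resolvents of the two operators and $\gamma>0$ is a step parameter.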

A Characterization of the Lagrange-Karush-Kuhn-Tucker Property

In this note, we revisit the classical first-order necessary condition in mathematical programming in infinite dimension. We show that the existence of Lagrange-Karush-Kuhn-Tucker multipliers is equivalent to the existence of an error bound for the constraint set, and also to a generalized Abadie qualification condition. These results widely extend previous ones, such as those by …
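As a reminder of the finite-dimensional prototype of the condition being revisited (the paper works in infinite dimension), the classical Karush-Kuhn-Tucker conditions for $\min\{f(x): g_i(x)\le 0,\ i=1,\dots,m\}$ assert the existence of multipliers $\lambda_i\ge 0$ such that
\[
\nabla f(x^\star) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^\star) = 0, \qquad \lambda_i\, g_i(x^\star) = 0 \quad (i=1,\dots,m),
\]
which holds at a local minimizer under a constraint qualification such as Abadie's condition.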