Full Stability in Finite-Dimensional Optimization

The paper is devoted to full stability of optimal solutions in general settings of finite-dimensional optimization with applications to particular models of constrained optimization problems including those of conic and specifically semidefinite programming. Developing a new technique of variational analysis and generalized differentiation, we derive second-order characterizations of full stability, in both Lipschitzian and Hölderian …

Full stability of locally optimal solutions in second-order cone programming

The paper presents complete characterizations of Lipschitzian full stability of locally optimal solutions to problems of second-order cone programming (SOCP) expressed entirely in terms of their initial data. These characterizations are obtained via appropriate versions of the quadratic growth and strong second-order sufficient conditions under the corresponding constraint qualifications. We also establish close relationships between …

On Equilibrium Problems Involving Strongly Pseudomonotone Bifunctions

We study equilibrium problems with strongly pseudomonotone bifunctions in real Hilbert spaces. We show the existence of a unique solution. We then propose a generalized strongly convergent projection method for equilibrium problems with strongly pseudomonotone bifunctions. The proposed method uses only one projection without requiring Lipschitz continuity. Application to variational inequalities is discussed.
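A projection method of this flavour can be sketched on a variational inequality VI(F, C), the special case of the equilibrium problem with bifunction f(x, y) = ⟨F(x), y − x⟩. The instance below is illustrative only (not from the paper): F(x) = x, which is strongly monotone and hence strongly pseudomonotone, over the box C = [1, 2]², so the solution is (1, 1). Diminishing step sizes avoid any use of a Lipschitz constant of F, echoing the abstract's point that Lipschitz continuity is not required.

```python
import numpy as np

def F(x):
    # Strongly monotone operator (illustrative choice, not from the paper)
    return x

def project_box(x, lo=1.0, hi=2.0):
    # Euclidean projection onto the box C = [lo, hi]^n
    return np.clip(x, lo, hi)

x = np.array([2.0, 2.0])
for k in range(200):
    lam = 1.0 / (k + 2)                  # diminishing steps, no Lipschitz constant
    x = project_box(x - lam * F(x))      # one projection per iteration

print(x)  # approaches the solution (1, 1)
```

Each iteration uses a single projection, matching the one-projection structure highlighted in the abstract.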

A Generalized Proximal Point Algorithm and its Convergence Rate

We propose a generalized proximal point algorithm (PPA) in the generic setting of finding a zero of a maximal monotone operator. In addition to the classical PPA, a number of benchmark operator splitting methods in the PDE and optimization literature, such as the Douglas-Rachford splitting method, the Peaceman-Rachford splitting method, the alternating direction method of multipliers, generalized …
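A minimal sketch of a relaxed proximal point iteration for finding a zero of a maximal monotone operator T, on an illustrative instance not taken from the paper: T(x) = Ax − b with A symmetric positive definite, whose resolvent J_{λT}(x) = (I + λA)⁻¹(x + λb) is explicit. Setting the relaxation factor ρ = 1 recovers the classical PPA; ρ ∈ (0, 2) gives a relaxed variant (only one of the generalizations the paper covers).

```python
import numpy as np

# Illustrative maximal monotone operator T(x) = A x - b, A symmetric PD
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 1.0])
lam, rho = 1.0, 1.5          # proximal parameter and relaxation factor

I = np.eye(2)
x = np.zeros(2)
for _ in range(100):
    y = np.linalg.solve(I + lam * A, x + lam * b)  # resolvent J_{lam T}(x)
    x = (1 - rho) * x + rho * y                    # relaxed PPA update

print(x)  # converges to the zero of T, i.e. the solution of A x = b
```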

A Parallel Bundle Framework for Asynchronous Subspace Optimisation of Nonsmooth Convex Functions

An algorithmic framework is presented for optimising general convex functions by non-synchronised parallel processes. Each process greedily picks a suitable adaptive subset of coordinates and runs a bundle method on a corresponding restricted problem, stopping whenever a descent step is encountered or the predicted decrease has been reduced sufficiently. No prior knowledge of the dependencies between …

New Analysis and Results for the Conditional Gradient Method

We present new results for the conditional gradient method (also known as the Frank-Wolfe method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial …
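The basic method under discussion can be sketched with the classic step-size rule 2/(k+2) on an illustrative problem (not one from the paper): minimize f(x) = ½‖x − c‖² over the probability simplex, whose linear minimization oracle simply returns the vertex with the smallest gradient component.

```python
import numpy as np

c = np.array([0.5, 0.3, 0.2])      # the minimizer lies inside the simplex
x = np.array([1.0, 0.0, 0.0])      # warm start at a simplex vertex

for k in range(5000):
    grad = x - c
    i = int(np.argmin(grad))       # LMO over the simplex: best vertex e_i
    s = np.zeros_like(x)
    s[i] = 1.0
    gamma = 2.0 / (k + 2)          # standard Frank-Wolfe step-size rule
    x = (1 - gamma) * x + gamma * s

print(x)  # close to c = (0.5, 0.3, 0.2)
```

With this rule the optimality gap decays at the well-known O(1/k) rate; the step-size rules analysed in the paper go beyond this standard choice.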

Randomized Block Coordinate Non-Monotone Gradient Method for a Class of Nonlinear Programming

In this paper we propose a randomized block coordinate non-monotone gradient (RBCNMG) method for minimizing the sum of a smooth (possibly nonconvex) function and a block-separable (possibly nonconvex nonsmooth) function. At each iteration, this method randomly picks a block according to any prescribed probability distribution and typically solves several associated proximal subproblems that usually have …
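A heavily simplified sketch of the randomized block coordinate idea, on the convex model problem min ½‖Ax − b‖² + λ‖x‖₁ with single-coordinate blocks. The paper's non-monotone acceptance test is omitted here (an assumption made to keep the sketch short); instead the standard fixed per-block step 1/Lᵢ with Lᵢ = (AᵀA)ᵢᵢ is used, for which the proximal subproblem is a scalar soft-thresholding. The instance data is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1

def obj(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

L = np.sum(A * A, axis=0)          # per-coordinate Lipschitz constants (A^T A)_ii
x = np.zeros(5)
start = obj(x)
for _ in range(2000):
    i = rng.integers(5)            # pick a block uniformly at random
    g = A[:, i] @ (A @ x - b)      # partial gradient for block i
    z = x[i] - g / L[i]
    # Proximal subproblem for block i: soft-thresholding
    x[i] = np.sign(z) * max(abs(z) - lam / L[i], 0.0)

print(obj(x))  # objective has decreased from obj(0)
```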

A Second-Order Method for Strongly Convex L1-Regularization Problems

In this paper a robust second-order method is developed for the solution of strongly convex l1-regularized problems. The main aim is to make the proposed method as inexpensive as possible while ensuring that even difficult problems can be solved efficiently. The proposed method is a primal-dual Newton Conjugate Gradients (pdNCG) method. Convergence properties of pdNCG are studied …
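To illustrate how a second-order method can handle an l1 term, here is a sketch on min τ‖x‖₁ + ½‖Ax − b‖², which is strongly convex when A has full column rank. As a stand-in for pdNCG's primal-dual machinery (an assumption, not the paper's algorithm), ‖x‖₁ is replaced by the pseudo-Huber smoothing ψ_μ(x) = Σᵢ(√(xᵢ² + μ²) − μ) and damped Newton steps are taken; the Newton system is solved directly here rather than by conjugate gradients.

```python
import numpy as np

tau, mu = 0.1, 0.1
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))   # full column rank, so strongly convex
b = rng.standard_normal(30)
AtA, Atb = A.T @ A, A.T @ b

def f(x):
    # Smoothed objective: pseudo-Huber l1 term plus least squares
    return tau * np.sum(np.sqrt(x * x + mu * mu) - mu) + 0.5 * np.sum((A @ x - b) ** 2)

x = np.zeros(8)
for _ in range(30):
    r = np.sqrt(x * x + mu * mu)
    g = tau * x / r + AtA @ x - Atb              # gradient of smoothed objective
    H = AtA + np.diag(tau * mu * mu / r ** 3)    # its positive definite Hessian
    d = -np.linalg.solve(H, g)                   # Newton direction
    t = 1.0
    while f(x + t * d) > f(x) + 0.25 * t * (g @ d):
        t *= 0.5                                 # backtracking line search
    x = x + t * d

r = np.sqrt(x * x + mu * mu)
g = tau * x / r + AtA @ x - Atb
print(np.linalg.norm(g))  # near zero at the smoothed minimizer
```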

A Deterministic Rescaled Perceptron Algorithm

The perceptron algorithm is a simple iterative procedure for finding a point in a convex cone $F$. At each iteration, the algorithm only involves a query of a separation oracle for $F$ and a simple update on a trial solution. The perceptron algorithm is guaranteed to find a feasible point in $F$ after $O(1/\tau_F^2)$ iterations, …
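The classical (non-rescaled) perceptron it builds on can be sketched for the feasibility problem $Ax > 0$, i.e. finding a point in the interior of the cone $F = \{x : Ax > 0\}$: the separation oracle returns a violated row $a_i$ with $a_i^T x \le 0$, and the update adds that row to $x$. The data below is illustrative, generated with a comfortable width $\tau_F$ so the $O(1/\tau_F^2)$ bound is tiny.

```python
import numpy as np

rng = np.random.default_rng(2)
d = np.array([1.0, 0.0, 0.0])
A = d + 0.2 * rng.standard_normal((50, 3))       # rows cluster around d
A /= np.linalg.norm(A, axis=1, keepdims=True)    # unit-norm rows

x = np.zeros(3)
for _ in range(10000):                 # far more than the 1/tau_F^2 bound
    violated = A[(A @ x) <= 0]         # separation oracle: violated rows
    if violated.shape[0] == 0:
        break                          # x is strictly feasible in F
    x = x + violated[0]                # perceptron update

print(np.all(A @ x > 0))  # strictly feasible point found
```

The rescaling studied in the paper replaces the $1/\tau_F^2$ dependence with a milder one; the loop above is only the classical baseline.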

Some preconditioners for systems of linear inequalities

We show that a combination of two simple preprocessing steps would generally improve the conditioning of a homogeneous system of linear inequalities. Our approach is based on a comparison among three different but related notions of conditioning for linear inequalities.
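As a generic illustration of such preprocessing (not the paper's specific steps, which are an assumption away from this sketch): normalizing the rows of a homogeneous system $Ax \le 0$ rescales each inequality without changing the solution set, and typically improves conditioning when the rows have wildly different norms.

```python
import numpy as np

# Rows with wildly different scales: same solution set, poor conditioning
A = np.array([[1e6,  2e6],
              [1e-3, -3e-3],
              [2.0,   1.0]])

# Preprocessing: scale each inequality so its row has unit norm
A_scaled = A / np.linalg.norm(A, axis=1, keepdims=True)

print(np.linalg.cond(A), np.linalg.cond(A_scaled))  # conditioning improves
```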