Approximate Level Method

In this paper we propose and analyze a variant of the level method [4], which is an algorithm for minimizing nonsmooth convex functions. The main work per iteration is spent on 1) minimizing a piecewise-linear model of the objective function and on 2) projecting onto the intersection of the feasible region and a polyhedron arising …
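To make the two per-iteration subproblems concrete, here is a minimal Python sketch of a generic level method on a toy instance. The box feasible region, the objective $f(x)=\|x\|_1$, the level parameter $\lambda$, and the use of scipy's linprog/SLSQP for the two subproblems are all illustrative assumptions, not details from the paper:

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Minimal level-method sketch for min f(x) over a box X = [lo, hi]^n.
# f and its subgradient are illustrative choices, not from the paper.
f = lambda x: np.abs(x).sum()                 # nonsmooth convex objective
subgrad = lambda x: np.sign(x)                # a subgradient of f

n, lam = 2, 0.5                               # dimension, level parameter
lo, hi = -2.0, 2.0
x = np.array([1.5, -1.2])                     # starting point
cuts = []                                     # bundle: (x_i, f(x_i), g_i)

for k in range(20):
    fx, g = f(x), subgrad(x)
    cuts.append((x.copy(), fx, g))

    # 1) Lower bound: minimize the piecewise-linear model over X via the LP
    #    min t  s.t.  t >= f_i + g_i^T (x - x_i),  x in X.
    A = [np.concatenate([g_i, [-1.0]]) for x_i, f_i, g_i in cuts]
    b = [g_i @ x_i - f_i for x_i, f_i, g_i in cuts]
    res = linprog(c=np.r_[np.zeros(n), 1.0], A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(lo, hi)] * n + [(None, None)])
    f_low = res.x[-1]
    f_up = min(f_i for _, f_i, _ in cuts)
    level = f_low + lam * (f_up - f_low)

    # 2) Project x onto {x in X : model(x) <= level} (a QP, solved via SLSQP).
    cons = [{'type': 'ineq',
             'fun': lambda y, xi=x_i, fi=f_i, gi=g_i: level - (fi + gi @ (y - xi))}
            for x_i, f_i, g_i in cuts]
    proj = minimize(lambda y: 0.5 * np.sum((y - x) ** 2), x, method='SLSQP',
                    bounds=[(lo, hi)] * n, constraints=cons)
    x = proj.x
    if f_up - f_low < 1e-6:
        break

print('best value', f_up, 'gap', f_up - f_low)
```

Step 1) supplies the lower bound $f_{\text{low}}$ used to set the level; step 2) is the projection that keeps iterates stable, which is the defining feature of level methods compared with plain cutting planes.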

Simultaneously solving seven optimization problems in relative scale

In this paper we develop and analyze an efficient algorithm that solves seven related optimization problems simultaneously, in relative scale. Each iteration of our method is very cheap, with the main work spent on matrix-vector multiplication. We prove that if a certain sequence generated by the algorithm remains bounded, then the method must terminate in $O(1/\delta)$ …
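For orientation (standard background, not taken from the truncated abstract): a point $x$ solves a minimization problem to relative accuracy $\delta$ when

$$ f(x) - f^* \le \delta\, f^*, \qquad \text{i.e.} \qquad f(x) \le (1+\delta)\, f^*, $$

where $f^* > 0$ is the optimal value; the $O(1/\delta)$ bound quoted above is an iteration count for reaching such a point.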

The Ekeland Variational Principle for Henig Proper Minimizers and Super Minimizers

In this paper we consider, for the first time, approximate Henig proper minimizers and approximate super minimizers of a set-valued map F with values in a partially ordered vector space and formulate two versions of the Ekeland variational principle for these points involving coderivatives in the senses of Ioffe, Clarke and Mordukhovich. As applications we …
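For readers new to the topic, the classical scalar Ekeland variational principle (which the paper generalizes to the set-valued, vector-ordered setting) states: if $(X,d)$ is a complete metric space, $f : X \to \mathbb{R}\cup\{+\infty\}$ is lower semicontinuous and bounded below, and $f(x_0) \le \inf_X f + \varepsilon$, then for every $\lambda > 0$ there exists $\bar{x}$ such that

$$ f(\bar{x}) \le f(x_0), \qquad d(\bar{x}, x_0) \le \lambda, \qquad f(x) > f(\bar{x}) - \tfrac{\varepsilon}{\lambda}\, d(x, \bar{x}) \quad \text{for all } x \neq \bar{x}. $$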

Behavior of BFGS with an Exact Line Search on Nonsmooth Examples

We investigate the behavior of the BFGS algorithm with an exact line search on nonsmooth functions. We show that it may fail on a simple polyhedral example, but that it apparently always succeeds on the Euclidean norm function, spiraling into the origin with a Q-linear rate of convergence; we prove this in the case of …
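The Euclidean-norm case is easy to reproduce numerically: along a descent direction $d$, the exact minimizer of $\|x + t d\|$ is $t^* = -x^\top d / \|d\|^2$, since minimizing the norm is the same as minimizing the convex quadratic $\|x + t d\|^2$. Below is a small numpy experiment (an illustrative reconstruction, not the authors' code) running BFGS with this closed-form line search on $f(x) = \|x\|_2$; the printed values decrease Q-linearly as the iterates spiral toward the origin:

```python
import numpy as np

# BFGS with an exact line search on f(x) = ||x||_2 (illustrative experiment).
# Along d, ||x + t d|| is minimized at t* = -(x @ d) / (d @ d).
f = lambda x: np.linalg.norm(x)
grad = lambda x: x / np.linalg.norm(x)      # gradient away from the origin

x = np.array([1.0, 0.7])
H = np.diag([1.0, 4.0])     # non-identity start: H = I would jump straight to 0
for k in range(30):
    g = grad(x)
    d = -H @ g
    t = -(x @ d) / (d @ d)                  # exact minimizer of ||x + t d||
    s = t * d
    x_new = x + s
    if np.linalg.norm(x_new) < 1e-12:       # reached the minimizer numerically
        break
    y = grad(x_new) - g
    rho = 1.0 / (y @ s)                     # y @ s > 0 under exact line search
    V = np.eye(2) - rho * np.outer(s, y)    # standard BFGS inverse update
    H = V @ H @ V.T + rho * np.outer(s, s)
    x = x_new
    print(k, f(x))
```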

Nonsmooth Optimization via BFGS

We investigate the BFGS algorithm with an inexact line search when applied to nonsmooth functions, not necessarily convex. We define a suitable line search and show that it generates a sequence of nested intervals containing points satisfying the Armijo and weak Wolfe conditions, assuming only absolute continuity. We also prove that the line search terminates …
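A generic bracketing line search of the kind described can be sketched as follows (the parameter values and test function are illustrative; this is not the paper's implementation). It maintains a nested interval $[a,b]$: a failed Armijo test shrinks the upper end, a failed weak Wolfe (curvature) test raises the lower end, and the trial step doubles while the bracket is unbounded and bisects otherwise:

```python
import numpy as np

def weak_wolfe_search(f, grad, x, d, c1=1e-4, c2=0.9, t=1.0, max_iter=50):
    """Bracketing/bisection search for a step satisfying Armijo + weak Wolfe.

    Maintains a nested bracket [a, b]; d must be a descent direction.
    """
    f0, g0d = f(x), grad(x) @ d                    # g0d < 0 for descent
    a, b = 0.0, np.inf
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * g0d:       # Armijo fails: too long
            b = t
        elif grad(x + t * d) @ d < c2 * g0d:       # weak Wolfe fails: too short
            a = t
        else:
            return t                               # both conditions hold
        t = 2.0 * a if np.isinf(b) else 0.5 * (a + b)   # expand or bisect
    return t

# Illustrative use on the nonsmooth f(x) = ||x||_1 from a differentiable point.
f = lambda x: np.abs(x).sum()
grad = lambda x: np.sign(x)
x = np.array([1.0, -2.0])
print('accepted step', weak_wolfe_search(f, grad, x, -grad(x)))
```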

A Randomized Cutting Plane Method with Probabilistic Geometric Convergence

We propose a randomized method for general convex optimization problems, namely the minimization of a linear function over a convex body. The idea is to generate N random points inside the body, choose the best one, and cut off the part of the body defined by the corresponding linear constraint. We first analyze the convergence properties of …
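Here is a toy Python sketch of the idea, minimizing $c^\top x$ over the unit box (the body, the naive rejection sampler, and all parameters are illustrative stand-ins for the paper's construction):

```python
import numpy as np

# Sketch: minimize c @ x over the unit box, repeatedly cutting with the
# objective hyperplane through the best sampled point. Rejection sampling
# is used only for simplicity; it becomes inefficient as cuts accumulate.
rng = np.random.default_rng(0)
n, N = 2, 10
c = np.array([1.0, 2.0])
best_val = np.inf           # cuts c @ x <= best_val nest, so the latest suffices

for it in range(4):
    accepted = np.empty((0, n))
    while len(accepted) < N:          # draw N points inside the current body
        batch = rng.uniform(-1.0, 1.0, size=(20000, n))
        accepted = np.vstack([accepted, batch[batch @ c <= best_val]])
    best_val = (accepted[:N] @ c).min()    # best point defines the new cut
    print(it, best_val)

# The optimum over the box is c @ (-1, -1) = -3.
```

Each cut removes, in expectation, roughly a $1/(N+1)$ fraction of the remaining body's measure below the current best value, which is the source of the probabilistic geometric convergence in the title.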

A proximal method for composite minimization

We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions …
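One familiar instance makes the subproblem concrete: for the smooth convex choice $h(z) = \tfrac12\|z\|^2$, the regularized linearized subproblem $\min_d h(c(x) + \nabla c(x) d) + \tfrac{\mu}{2}\|d\|^2$ has the closed-form Levenberg-Marquardt solution used in the sketch below. The map $c$ and the fixed $\mu$ are illustrative; the framework in the paper allows general convex or prox-regular $h$:

```python
import numpy as np

# Prox-linear sketch for f(x) = h(c(x)) with h(z) = 0.5 * ||z||^2, where
#   min_d 0.5*||c(x) + J(x) d||^2 + (mu/2)*||d||^2
# is solved in closed form (a Levenberg-Marquardt step). The instance c
# is an illustrative choice, not from the paper.
def c(x):                     # smooth vector function R^2 -> R^2
    return np.array([x[0]**2 - x[1], x[0] + x[1] - 1.0])

def J(x):                     # its Jacobian
    return np.array([[2*x[0], -1.0], [1.0, 1.0]])

x, mu = np.array([2.0, 2.0]), 1.0
for k in range(25):
    r, A = c(x), J(x)
    # d = -(A^T A + mu I)^{-1} A^T r  solves the prox-linear subproblem
    d = np.linalg.solve(A.T @ A + mu * np.eye(2), -A.T @ r)
    x = x + d                 # a practical method would adapt mu / test decrease
    print(k, 0.5 * np.dot(c(x), c(x)))
```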

Generalized power method for sparse principal component analysis

In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, …
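As a rough illustration of the single-unit idea (a sketch in the spirit of generalized power methods for sparse PCA; the threshold $\gamma$, the data, and the exact update are assumptions on my part, not taken from the paper), one can run a power-type iteration on the penalized objective $\phi(x) = \sum_i \max(|a_i^\top x| - \gamma, 0)^2$ over the unit sphere, where $a_i$ are the columns of the data matrix:

```python
import numpy as np

# Single-unit l1-penalized power iteration for sparse PCA (illustrative
# sketch). Each step moves x along the gradient of phi and renormalizes;
# the sparse loading vector is recovered by soft-thresholding A.T @ x.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8))      # 50 samples, 8 variables
gamma = 2.0                           # sparsity-inducing threshold

x = A[:, 0] / np.linalg.norm(A[:, 0]) # initialization in sample space
for _ in range(100):
    t = A.T @ x                       # t_i = a_i.T x
    w = np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)   # soft threshold
    x_new = A @ w                     # gradient direction of phi at x
    nrm = np.linalg.norm(x_new)
    if nrm == 0:                      # gamma too large: all loadings vanish
        break
    x = x_new / nrm

z = np.sign(A.T @ x) * np.maximum(np.abs(A.T @ x) - gamma, 0.0)
nz = np.linalg.norm(z)
z = z / nz if nz > 0 else z           # sparse loading vector
print('nonzeros:', np.count_nonzero(z), '\n', np.round(z, 3))
```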

Fixed point and Bregman iterative methods for matrix rank minimization

The linearly constrained matrix rank minimization problem arises in many fields, such as control, signal processing, and system identification. Linearly constrained nuclear norm minimization is a convex relaxation of this problem. Although it can be cast as a semidefinite programming problem, the nuclear norm minimization problem is expensive to solve when the …
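A minimal sketch of a fixed-point iteration of this flavor for matrix completion, alternating a gradient step on the data-fit term with singular-value shrinkage (the sampling model, step size, and weight $\mu$ are illustrative choices, not the paper's):

```python
import numpy as np

# Fixed-point sketch for  min  mu*||X||_* + 0.5*||P_Omega(X - M)||_F^2,
# alternating a gradient step on the data-fit term with soft-thresholding
# of the singular values. The sampled matrix and parameters are illustrative.
rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-2 target
Omega = rng.random((n, n)) < 0.5                                # observed mask

def shrink(Y, tau):
    """Singular value thresholding: the prox of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

X, tau, mu = np.zeros((n, n)), 1.0, 0.1
for k in range(300):
    G = np.where(Omega, X - M, 0.0)      # gradient of the data-fit term
    X = shrink(X - tau * G, tau * mu)    # fixed-point iteration
err = np.linalg.norm(np.where(Omega, X - M, 0.0)) / np.linalg.norm(M[Omega])
print('relative residual on observed entries:', err)
```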

Incremental-like Bundle Methods with Application to Energy Planning

An important field of application of non-smooth optimization is the decomposition of large-scale or complex problems by Lagrangian duality. In this setting, the dual problem consists of maximizing a concave non-smooth function defined as the sum of sub-functions. Evaluating each sub-function requires solving a specific optimization sub-problem, each with its own computational complexity. …
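As a toy illustration of the incremental idea only (deliberately not the authors' bundle method): one can ascend a concave dual $\theta(\lambda) = \sum_j \theta_j(\lambda)$ by stepping along the subgradient of a single sub-function per iteration, so that each iteration pays for only one sub-problem evaluation:

```python
import numpy as np

# Incremental subgradient ascent on theta(lam) = sum_j theta_j(lam), where
# each concave theta_j is a min of affine pieces (illustrative stand-in for
# a Lagrangian dual sub-function whose evaluation solves a sub-problem).
rng = np.random.default_rng(0)
m = 5                                          # number of sub-functions
pieces = [(rng.standard_normal(3), rng.standard_normal(3)) for _ in range(m)]

def theta_j_and_subgrad(j, lam):
    a, b = pieces[j]
    i = np.argmin(a * lam + b)                 # active affine piece
    return a[i] * lam + b[i], a[i]             # value, subgradient

lam = 0.0
for k in range(200):
    j = k % m                                  # cycle through sub-functions
    _, g = theta_j_and_subgrad(j, lam)
    lam += (1.0 / (k + 1)) * g                 # diminishing-step ascent

print('lam =', lam,
      ' theta =', sum(theta_j_and_subgrad(j, lam)[0] for j in range(m)))
```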