A relaxed customized proximal point algorithm for separable convex programming

The alternating direction method (ADM) is a classical method for solving a linearly constrained separable convex programming problem (the primal problem), and it is well known that ADM is essentially the application of a concrete form of the proximal point algorithm (PPA) (more precisely, the Douglas-Rachford splitting method) to the corresponding dual problem. This paper shows that an … Read more
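For orientation, the kind of separable problem ADM targets can be written as $\min f_1(x_1)+f_2(x_2)$ subject to $A_1x_1+A_2x_2=b$. The sketch below runs the standard ADM updates on a deliberately tiny instance (identity coupling matrices, a quadratic plus an $\ell_1$ term); the variable names, penalty parameter and the toy objective are illustrative only and do not reproduce the relaxed customized PPA of the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adm_toy(a, beta=1.0, iters=200):
    """Standard ADM on the toy separable problem
        min_{x,z} 0.5 * ||x - a||^2 + ||z||_1   s.t.   x - z = 0,
    i.e. both coupling matrices are the identity (illustrative only)."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)                          # scaled multiplier
    for _ in range(iters):
        x = (a + beta * (z - u)) / (1.0 + beta)   # first subproblem (quadratic, closed form)
        z = soft_threshold(x + u, 1.0 / beta)     # second subproblem (prox of the l1 term)
        u = u + x - z                             # multiplier update
    return x, z

a = np.array([3.0, -0.5, 0.2, -2.0])
x, z = adm_toy(a)        # z is close to soft_threshold(a, 1.0)
```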

Generalized Forward-Backward Splitting

This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form $F + \sum_{i=1}^n G_i$, where $F$ has a Lipschitz-continuous gradient and the $G_i$’s are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than $n = 1$ non-smooth … Read more
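The classical $n = 1$ forward-backward step alternates a gradient step on $F$ with a proximal step on the single $G_1$; the generalized algorithm of the paper handles $n > 1$ such terms. Below is only the $n = 1$ baseline on an $\ell_1$-regularized least-squares instance; the problem data are made up for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_l1(A, b, lam, step, iters=500):
    """Classical n = 1 forward-backward (proximal gradient) iteration for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    with F the smooth quadratic and G_1 = lam * ||.||_1.
    `step` should not exceed 1 / L, where L is the Lipschitz constant of grad F."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                           # forward (gradient) step on F
        x = soft_threshold(x - step * grad, step * lam)    # backward (prox) step on G_1
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = A @ np.concatenate([np.ones(3), np.zeros(47)])         # sparse ground truth
L = np.linalg.norm(A, 2) ** 2                              # Lipschitz constant of grad F
x = forward_backward_l1(A, b, lam=0.1, step=1.0 / L)
```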

On smooth relaxations of obstacle sets

We present and discuss a method to relax sets described by finitely many smooth convex inequality constraints by the level set of a single smooth convex inequality constraint. Based on error bounds and Lipschitz continuity, special attention is paid to the maximal approximation error and a guaranteed safety margin. Our results allow one to safely avoid … Read more
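As a generic illustration of this kind of aggregation (not necessarily the construction used in the paper), a set $\{x : g_i(x) \le 0,\ i = 1,\dots,m\}$ with smooth convex $g_i$ can be compared with the level set of the single smooth convex function
$$ \varphi_\mu(x) \;=\; \mu \log \sum_{i=1}^m \exp\big(g_i(x)/\mu\big), \qquad \max_i g_i(x) \;\le\; \varphi_\mu(x) \;\le\; \max_i g_i(x) + \mu \log m, $$
so the approximation error of the smooth aggregate is bounded by $\mu \log m$, a quantity that can serve as an explicit safety margin.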

Accelerated and Inexact forward-backward algorithms

We propose a convergence analysis of accelerated forward-backward splitting methods for minimizing composite functions, when the proximity operator is not available in closed form, and is thus computed up to a certain precision. We prove that the $1/k^2$ convergence rate for the function values can be achieved if the admissible errors are of a certain … Read more
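The accelerated iteration in question is of FISTA type: a proximal step at an extrapolated point followed by a momentum update. The sketch below shows the exact-prox version; in the setting of the paper, `prox_g` would only be evaluated approximately, with errors decaying fast enough to preserve the $1/k^2$ rate.

```python
import numpy as np

def accelerated_forward_backward(grad_f, prox_g, x0, step, iters=300):
    """FISTA-style accelerated forward-backward iteration.
    prox_g(v, t) should return argmin_x g(x) + ||x - v||^2 / (2 t); here it is
    assumed exact, whereas the paper allows it to be computed only up to a
    controlled error."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x = prox_g(y - step * grad_f(y), step)        # (possibly inexact) backward step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum / extrapolation
        x_prev, t = x, t_next
    return x_prev

# e.g. for the l1-regularized least-squares instance of the previous sketch:
# x = accelerated_forward_backward(lambda v: A.T @ (A @ v - b),
#                                  lambda v, t: soft_threshold(v, 0.1 * t),
#                                  np.zeros(A.shape[1]), 1.0 / L)
```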

A Complementarity Partition Theorem for Multifold Conic Systems

Consider a homogeneous multifold convex conic system $$ Ax = 0, \; x\in K_1\times \cdots \times K_r $$ and its alternative system $$ A^\top y \in K_1^*\times \cdots \times K_r^*, $$ where $K_1,\dots, K_r$ are regular closed convex cones. We show that there is a canonical partition of the index set $\{1,\dots,r\}$ determined by certain complementarity … Read more

Distributed Basis Pursuit

We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least L1-norm solution of the underdetermined linear system Ax = b and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize … Read more
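For reference, BP itself reads $\min \|x\|_1$ subject to $Ax = b$. The sketch below solves it with a plain centralized ADMM splitting; the paper's contribution is a distributed algorithm for the same model, which this sketch does not reproduce.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def basis_pursuit_admm(A, b, rho=1.0, iters=500):
    """Centralized ADMM sketch for basis pursuit:
        min ||x||_1   s.t.   A x = b,   with A of full row rank (m < n).
    Split as min ||z||_1 + indicator{A x = b} subject to x = z."""
    m, n = A.shape
    AAt = A @ A.T
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        v = z - u
        x = v - A.T @ np.linalg.solve(AAt, A @ v - b)   # project v onto {x : A x = b}
        z = soft_threshold(x + u, 1.0 / rho)            # prox of ||.||_1 / rho
        u = u + x - z                                   # scaled dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = basis_pursuit_admm(A, b)    # typically recovers a sparse x close to x_true
```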

Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design

In this work we propose solving huge-scale instances of the truss topology design problem with coordinate descent methods. We develop four efficient codes: serial and parallel implementations of randomized and greedy rules for the selection of the variable (potential bar) to be updated in the next iteration. Both serial methods enjoy an O(n/k) iteration complexity … Read more
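The two selection rules can be seen on a generic smooth problem: pick the coordinate uniformly at random, or greedily by the largest gradient magnitude. The toy quadratic below is only meant to illustrate the rules; it is not the truss topology formulation or the parallel codes of the paper.

```python
import numpy as np

def coordinate_descent(Q, c, rule="random", iters=2000, seed=0):
    """Coordinate descent for min 0.5 * x'Qx - c'x with Q positive definite,
    using either a randomized or a greedy coordinate selection rule."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = np.zeros(n)
    g = Q @ x - c                 # gradient, maintained incrementally
    for _ in range(iters):
        i = rng.integers(n) if rule == "random" else int(np.argmax(np.abs(g)))
        step = g[i] / Q[i, i]     # exact minimization along coordinate i
        x[i] -= step
        g -= step * Q[:, i]       # O(n) gradient update after the coordinate move
    return x

n = 200
M = np.random.default_rng(2).standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)       # well-conditioned positive definite test matrix
c = np.ones(n)
x_rand = coordinate_descent(Q, c, rule="random")
x_greedy = coordinate_descent(Q, c, rule="greedy")
```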

Twice differentiable characterizations of convexity notions for functions on full dimensional convex sets

We derive $C^2$-characterizations for convex, strictly convex, as well as uniformly convex functions on full dimensional convex sets. In the cases of convex and uniformly convex functions this weakens the well-known openness assumption on the convex sets. We also show that, in a certain sense, the full dimensionality assumption cannot be weakened further. In the … Read more
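For orientation, the classical characterization being refined is usually stated for open convex sets:
$$ f \in C^2(D),\ D \subseteq \mathbb{R}^n \text{ open and convex}: \qquad f \text{ convex on } D \iff \nabla^2 f(x) \succeq 0 \ \text{ for all } x \in D. $$
The paper's results replace the openness assumption on $D$ by full dimensionality for the convex and uniformly convex cases.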

A quadratically convergent Newton method for vector optimization

We propose a Newton method for solving smooth unconstrained vector optimization problems under partial orders induced by general closed convex pointed cones. The method extends the one proposed by Fliege, Grana Drummond and Svaiter for multicriteria optimization, which in turn is an extension of the classical Newton method for scalar optimization. The steplength is chosen by … Read more
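The scalar special case that the method extends is the classical Newton iteration with a line search; a minimal sketch of that baseline is given below (the actual steplength rule analyzed in the paper is not reproduced here, and the test function is made up).

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, alpha=1e-4, beta=0.5, tol=1e-8, iters=50):
    """Classical Newton method for scalar-valued minimization with Armijo
    backtracking -- the base case extended to vector optimization in the paper."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)                    # Newton direction
        t = 1.0
        while f(x + t * d) > f(x) + alpha * t * (g @ d):    # Armijo sufficient decrease
            t *= beta
        x = x + t * d
    return x

f    = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 8.0]])
x_star = newton_armijo(f, grad, hess, np.zeros(2))   # approximately [1, -2]
```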

Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning

Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, the solution often lies on a low-dimensional manifold of parameter space along which the regularizer is smooth. … Read more
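A representative iteration of this kind is $\ell_1$-regularized dual averaging: average the stochastic loss subgradients and take a closed-form step against a quadratic prox term, so that soft thresholding zeroes out coordinates and the iterates can land on the low-dimensional manifold $\{x : x_i = 0,\ i \in Z\}$. The sketch below follows the standard RDA template; the step-size constants and the streaming least-squares example are illustrative, not the paper's experiments.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rda_l1(subgrad_loss, n, lam, gamma=1.0, iters=1000):
    """l1-regularized dual averaging: x_{t+1} minimizes
        <g_bar_t, x> + lam * ||x||_1 + (gamma * sqrt(t) / t) * ||x||^2 / 2,
    which has the closed-form soft-thresholding solution used below."""
    x = np.zeros(n)
    g_bar = np.zeros(n)
    for t in range(1, iters + 1):
        g = subgrad_loss(x, t)                    # stochastic subgradient of the loss at x
        g_bar += (g - g_bar) / t                  # running average of subgradients
        beta = gamma * np.sqrt(t)
        x = -(t / beta) * soft_threshold(g_bar, lam)
    return x

# Illustrative streaming least-squares data with a sparse ground truth
rng = np.random.default_rng(3)
w_true = np.concatenate([np.ones(3), np.zeros(97)])
def subgrad_loss(x, t):
    a = rng.standard_normal(100)
    return (x @ a - a @ w_true) * a               # gradient of 0.5 * (a'x - y)^2
x_hat = rda_l1(subgrad_loss, n=100, lam=0.05, gamma=5.0)   # many exact zeros expected
```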