Inner Conditions for Error Bounds and Metric Subregularity of Multifunctions

We introduce a new class of sets, functions, and multifunctions which is shown to be large and to share several useful properties with the convex setting. Error bounds for objects attached to this class are characterized in terms of inner conditions of Abadie’s type, that is, conditions bearing on normal cones and coderivatives at … Read more

An incremental mirror descent subgradient algorithm with random sweeping and proximal step

We investigate the convergence properties of incremental mirror-descent-type subgradient algorithms for minimizing the sum of convex functions. At each step we evaluate only the subgradient of a single component function and mirror it back to the feasible domain, which makes the iterations very cheap to compute. The analysis is made for a randomized selection … Read more
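As a concrete point of reference, here is a minimal sketch of an incremental mirror-descent step on the probability simplex with the entropy mirror map (exponentiated gradient). The function names, the step size eta, the random single-component sweeping rule, and the choice of mirror map are illustrative assumptions, not the exact algorithm analyzed in the paper.

```python
import numpy as np

def incremental_mirror_descent(subgrads, x0, steps=1000, eta=0.1, seed=0):
    """Minimize (1/m) * sum_i f_i(x) over the probability simplex.

    subgrads : list of callables; subgrads[i](x) returns a subgradient of f_i at x.
    Uses the entropy mirror map, so the mirror step is a multiplicative
    (exponentiated-gradient) update followed by renormalization.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        i = rng.integers(len(subgrads))   # random sweeping: pick one component
        g = subgrads[i](x)                # subgradient of a single f_i only
        x = x * np.exp(-eta * g)          # mirror (entropy) step
        x = x / x.sum()                   # map back to the simplex
    return x

# toy usage: minimize the average of linear functions c_i . x over the simplex
cs = [np.array([1.0, 2.0, 0.5]), np.array([0.3, 1.5, 2.0])]
subgrads = [(lambda x, c=c: c) for c in cs]
x_star = incremental_mirror_descent(subgrads, np.ones(3) / 3)
```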

Convergence Analysis of Processes with Valiant Projection Operators in Hilbert Space

Convex feasibility problems require finding a point in the intersection of a finite family of convex sets. We propose to solve such problems by performing set-enlargements and applying a new kind of projection operators, called valiant projectors. A valiant projector onto a convex set implements a special relaxation strategy, proposed by Goffin in 1971, … Read more
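The abstract does not spell out the valiant projector itself; as background, the sketch below shows an ordinary relaxed projection onto an enlarged halfspace, which is the kind of building block such relaxation strategies modify. The enlargement radius eps and the relaxation parameter lam are illustrative assumptions; this is not the valiant projector construction of the paper.

```python
import numpy as np

def relaxed_projection_halfspace(x, a, b, lam=1.0, eps=0.0):
    """Relaxed projection of x onto the enlarged halfspace {y : a.y <= b + eps}.

    lam in (0, 2] is a relaxation parameter; lam = 1 gives the exact
    projection onto the enlargement.  Generic relaxed projector shown
    only to illustrate the building blocks involved.
    """
    viol = a @ x - (b + eps)
    if viol <= 0:
        return x                              # already inside the enlarged set
    return x - lam * viol / (a @ a) * a

x = np.array([2.0, 2.0])
print(relaxed_projection_halfspace(x, a=np.array([1.0, 0.0]), b=1.0, lam=1.0, eps=0.1))
```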

Dynamic Scaling and Submodel Selection in Bundle Methods for Convex Optimization

Bundle methods determine the next candidate point as the minimizer of a cutting model augmented with a proximal term. We propose a dynamic approach for choosing a quadratic proximal term based on subgradient information from past evaluations. For the special case of convex quadratic functions, conditions are studied under which this actually reproduces the Hessian. … Read more
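To make the subproblem concrete, here is a minimal sketch of a single bundle step: the candidate minimizes a cutting-plane model plus a quadratic proximal term with matrix H. Here H is a fixed diagonal scaling and cvxpy solves the subproblem; the paper's contribution, choosing H dynamically from past subgradient information, is not reproduced.

```python
import cvxpy as cp
import numpy as np

def bundle_step(points, values, subgrads, center, H):
    """One proximal bundle step.

    The cutting model max_i [ f(y_i) + g_i.(x - y_i) ] is built from past
    evaluations (points, values, subgrads); the candidate minimizes this
    model plus 0.5 * (x - center)^T H (x - center).  H is a fixed positive
    definite matrix in this sketch.
    """
    n = len(center)
    x = cp.Variable(n)
    cuts = [v + g @ (x - y) for y, v, g in zip(points, values, subgrads)]
    model = cp.maximum(*cuts) if len(cuts) > 1 else cuts[0]
    prox = 0.5 * cp.quad_form(x - center, H)
    cp.Problem(cp.Minimize(model + prox)).solve()
    return x.value

# toy usage on f(x) = ||x||_1 with two cuts
pts = [np.array([1.0, -1.0]), np.array([-0.5, 0.5])]
vals = [2.0, 1.0]
grads = [np.array([1.0, -1.0]), np.array([-1.0, 1.0])]
print(bundle_step(pts, vals, grads, center=np.zeros(2), H=np.eye(2)))
```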

Computing closest stable non-negative matrices

The problem of finding the closest stable matrix for a dynamical system has many applications. It is well studied for both continuous- and discrete-time systems, and the corresponding optimization problems are formulated for various matrix norms. As a rule, the non-convexity of these formulations does not allow finding their global solutions. In this paper, we analyze positive … Read more

Proximal-Proximal-Gradient Method

In this paper, we present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG generalizes the proximal-gradient method and ADMM and is applicable to minimization problems written as a sum of many differentiable and many non-differentiable convex functions. The non-differentiable functions can be coupled. We furthermore … Read more
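The following sketch shows one way a prox/gradient/prox splitting of this form can be organized, with a proximal step on one nonsmooth term, gradient steps on the differentiable components, and parallel proximal steps on the remaining nonsmooth components. The operator names and the specific three-step update are a schematic consistent with the abstract's description, not necessarily the exact PPG iteration of the paper.

```python
import numpy as np

def ppg_sketch(prox_r, grads_f, proxes_g, z0, alpha=0.1, steps=200):
    """Sketch of a proximal-proximal-gradient style iteration for
    minimize  r(x) + (1/n) * sum_i [ f_i(x) + g_i(x) ],
    with f_i differentiable and r, g_i accessed through their prox operators.

    prox_r(v, alpha)      : prox of alpha * r at v
    grads_f[i](x)         : gradient of f_i at x
    proxes_g[i](v, alpha) : prox of alpha * g_i at v
    z0                    : list of n starting vectors (one copy per term)
    """
    z = [np.array(zi, dtype=float) for zi in z0]
    n = len(z)
    for _ in range(steps):
        z_bar = sum(z) / n
        x_half = prox_r(z_bar, alpha)                      # prox on r
        for i in range(n):                                 # parallelizable over i
            v = 2 * x_half - z[i] - alpha * grads_f[i](x_half)
            x_i = proxes_g[i](v, alpha)                    # prox on g_i
            z[i] = z[i] + x_i - x_half
    return x_half

# toy usage: r = 0, g_i = 0, f_i(x) = 0.5*||x - a_i||^2 -> solution is the mean of the a_i
a = [np.array([1.0, 0.0]), np.array([0.0, 3.0])]
x = ppg_sketch(prox_r=lambda v, t: v,
               grads_f=[(lambda x, ai=ai: x - ai) for ai in a],
               proxes_g=[lambda v, t: v] * 2,
               z0=[np.zeros(2), np.zeros(2)])
print(x)   # approximately [0.5, 1.5]
```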

The Trimmed Lasso: Sparsity and Robustness

Nonconvex penalty methods for sparse modeling in linear regression have been a topic of fervent interest in recent years. Herein, we study a family of nonconvex penalty functions that we call the trimmed Lasso and that offers exact control over the desired level of sparsity of estimators. We analyze its structural properties and in doing … Read more
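The trimmed Lasso penalty with sparsity level k sums the p - k smallest absolute coefficients, so it vanishes exactly when the estimator has at most k nonzeros; that is what gives the exact control over sparsity. A minimal sketch of the penalty and a penalized least-squares objective follows; the variable names and the regularization weight lam are illustrative.

```python
import numpy as np

def trimmed_lasso_penalty(beta, k):
    """Sum of the smallest (p - k) absolute entries of beta.

    The penalty is zero iff beta has at most k nonzero entries.
    """
    abs_sorted = np.sort(np.abs(beta))            # ascending order
    return abs_sorted[: len(beta) - k].sum()

def objective(beta, X, y, k, lam):
    # least-squares loss plus the trimmed Lasso penalty
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * trimmed_lasso_penalty(beta, k)

beta = np.array([3.0, -0.2, 0.0, 0.1])
print(trimmed_lasso_penalty(beta, k=1))   # smallest three: 0.0 + 0.1 + 0.2 = 0.3
```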

Clustering and Multifacility Location with Constraints via Distance Function Penalty Method and DC Programming

This paper is a continuation of our effort in using mathematical optimization involving DC programming in clustering and multifacility location. We study a penalty method based on distance functions and apply it particularly to a number of problems in clustering and multifacility location in which the centers to be found must lie in some given … Read more
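For orientation, the sketch below is the standard DCA template for minimizing a difference of convex functions g(x) - h(x): linearize the concave part at the current iterate and solve the resulting convex subproblem. The callable names are assumptions, and the distance-function penalty that the paper combines with this scheme is not reproduced here.

```python
import numpy as np

def dca(argmin_g_minus_linear, subgrad_h, x0, steps=50):
    """Generic DCA iteration for minimizing g(x) - h(x) with g, h convex.

    subgrad_h(x)             : a subgradient of h at x
    argmin_g_minus_linear(y) : a minimizer of g(x) - <y, x> over x
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = subgrad_h(x)                    # linearize the concave part
        x = argmin_g_minus_linear(y)        # solve the convex subproblem
    return x

# toy usage: minimize x^2 - |x| (minimizers at +/- 0.5)
x_star = dca(argmin_g_minus_linear=lambda y: y / 2.0,   # argmin of x^2 - y*x
             subgrad_h=lambda x: np.sign(x) if x != 0 else 1.0,
             x0=np.array(2.0))
print(x_star)   # 0.5
```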

Finding a best approximation pair of points for two polyhedra

Given two disjoint convex polyhedra, we look for a best approximation pair relative to them, i.e., a pair of points, one in each polyhedron, attaining the minimum distance between the sets. Cheney and Goldstein showed that alternating projections onto the two sets, starting from an arbitrary point, generate a sequence whose two interlaced subsequences converge … Read more
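For polyhedra each projection is itself a quadratic program; the sketch below instead uses two convex sets with closed-form projections (a disk and a halfspace) just to illustrate the alternating-projections scheme whose polyhedral refinement the paper studies. The sets and starting point are arbitrary choices for the example.

```python
import numpy as np

def proj_disk(x, center, radius):
    """Euclidean projection onto the closed disk with given center and radius."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_halfspace(x, a, b):
    """Euclidean projection onto the halfspace {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol / (a @ a) * a

# alternating projections between two disjoint convex sets:
# the unit disk around the origin and the halfspace {x1 >= 2} (written as -x1 <= -2)
a, b = np.array([-1.0, 0.0]), -2.0
u = np.array([5.0, 5.0])
for _ in range(100):
    v = proj_halfspace(u, a, b)            # project onto the first set
    u = proj_disk(v, np.zeros(2), 1.0)     # then back onto the second
print(u, v)  # converges to the best approximation pair (1, 0) and (2, 0)
```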

Implementing the ADMM to Big Datasets: A Case Study of LASSO

The alternating direction method of multipliers (ADMM) has been widely used for a broad range of applications in the literature. When big datasets with high-dimensional variables are considered, the subproblems arising from the ADMM must be solved inexactly even though theoretically they may have closed-form solutions. Such a scenario immediately poses mathematical ambiguities such as how … Read more
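As a baseline, the standard ADMM splitting for LASSO has a linear-system x-update and a soft-thresholding z-update; the sketch below solves the x-update exactly, which is precisely the step that becomes impractical for big datasets and must be relaxed. The parameter names rho and lam and the toy data are illustrative; this is the textbook scheme, not the paper's inexact variant.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, steps=200):
    """Baseline ADMM for  min 0.5*||A x - b||^2 + lam*||x||_1.

    x-update: solve (A^T A + rho I) x = A^T b + rho (z - u); done here with
              np.linalg.solve each iteration for simplicity (in practice one
              caches a factorization, or solves only inexactly for big data).
    z-update: soft-thresholding with threshold lam / rho.
    u-update: dual ascent on the consensus constraint x = z.
    """
    n = A.shape[1]
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(steps):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z

# toy usage: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
beta = np.zeros(10); beta[:3] = [2.0, -1.5, 1.0]
b = A @ beta + 0.01 * rng.standard_normal(50)
print(admm_lasso(A, b, lam=1.0))
```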