Clustering and Multifacility Location with Constraints via Distance Function Penalty Method and DC Programming

This paper continues our effort to use mathematical optimization, and DC programming in particular, for clustering and multifacility location. We study a penalty method based on distance functions and apply it to a number of problems in clustering and multifacility location in which the centers to be found must lie in some given …
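
The truncated abstract does not specify the algorithm, so the following is only a minimal sketch of the generic DCA (difference-of-convex algorithm) iteration that DC programming approaches typically build on, applied to a hypothetical toy instance f = g - h. The function names and the toy problem are illustrative assumptions, not the paper's penalty method.

```python
import numpy as np

def dca(grad_h, argmin_step, x0, max_iter=100, tol=1e-8):
    """Generic DCA iteration for f = g - h with g, h convex:
    linearize h at x_k, then minimize the resulting convex majorizer."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = grad_h(x)            # y_k in the subdifferential of h at x_k
        x_new = argmin_step(y)   # x_{k+1} = argmin_x  g(x) - <y_k, x>
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Toy instance (assumed for illustration): f(x) = ||x||^2 - ||x||_1,
# so grad_h(x) = sign(x) and argmin_x ||x||^2 - <y, x> gives x = y/2.
print(dca(grad_h=np.sign, argmin_step=lambda y: y / 2.0,
          x0=np.array([0.9, -0.3])))   # converges to (0.5, -0.5)
```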

Finding a best approximation pair of points for two polyhedra

Given two disjoint convex polyhedra, we look for a best approximation pair relative to them, i.e., a pair of points, one in each polyhedron, attaining the minimum distance between the sets. Cheney and Goldstein showed that alternating projections onto the two sets, starting from an arbitrary point, generate a sequence whose two interlaced subsequences converge …
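
As a minimal sketch of the Cheney-Goldstein alternating-projection scheme the abstract refers to: the two sets below are hypothetical axis-aligned boxes, a special case of polyhedra chosen so both projections have closed form; for general polyhedra each projection would itself require solving a small QP.

```python
import numpy as np

def alternating_projections(proj_A, proj_B, x0, max_iter=500, tol=1e-10):
    """Alternating projections: the interlaced sequences a_k = P_A(b_{k-1})
    and b_k = P_B(a_k) converge to a best approximation pair (a, b)."""
    b = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        a = proj_A(b)
        b_new = proj_B(a)
        if np.linalg.norm(b_new - b) < tol:
            return a, b_new
        b = b_new
    return a, b

# Hypothetical disjoint polyhedra: A = [0,1]x[0,1], B = [2,3]x[0.5,1.5].
proj_A = lambda x: np.clip(x, [0.0, 0.0], [1.0, 1.0])
proj_B = lambda x: np.clip(x, [2.0, 0.5], [3.0, 1.5])
a, b = alternating_projections(proj_A, proj_B, x0=np.array([5.0, -2.0]))
print(a, b)  # best approximation pair: (1, 0.5) and (2, 0.5)
```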

Implementing the ADMM to Big Datasets: A Case Study of LASSO

The alternating direction method of multipliers (ADMM) has been widely used for a broad range of applications in the literature. When big datasets with high-dimensional variables are considered, subproblems arising from the ADMM must be solved inexactly even though, in theory, they may have closed-form solutions. Such a scenario immediately poses mathematical ambiguities such as how …
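
For orientation, here is the textbook ADMM splitting for LASSO with the exact, closed-form x-update that the abstract alludes to; the inexact big-data variants the paper studies would replace the cached Cholesky solve below with an approximate one. The problem sizes and parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, max_iter=200, tol=1e-8):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    # Cache a factorization of (A^T A + rho*I) for the exact x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(max_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z_new = soft_threshold(x + u, lam / rho)           # z-update
        u = u + x - z_new                                  # dual update
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(admm_lasso(A, b, lam=1.0))  # recovers a sparse estimate of x_true
```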

Randomized Similar Triangles Method: A Unifying Framework for Accelerated Randomized Optimization Methods (Coordinate Descent, Directional Search, Derivative-Free Method)

In this paper, we consider smooth convex optimization problems with simple constraints and inexactness in the oracle information, such as the value, partial, or directional derivatives of the objective function. We introduce a unifying framework which allows one to construct different types of accelerated randomized methods for such problems and to prove convergence rate theorems for them. …
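
The paper's framework covers accelerated variants; as a point of reference only, here is plain (non-accelerated) randomized coordinate descent on an assumed convex quadratic, where each step minimizes the objective exactly along one randomly chosen coordinate.

```python
import numpy as np

def random_coordinate_descent(Q, c, x0, n_iters=5000, seed=0):
    """Randomized coordinate descent for the smooth convex quadratic
    f(x) = 0.5 x^T Q x + c^T x with Q symmetric positive definite."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    for _ in range(n_iters):
        i = rng.integers(n)
        g_i = Q[i] @ x + c[i]      # i-th partial derivative
        x[i] -= g_i / Q[i, i]      # exact minimization along coordinate i
    return x

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, -1.0])
print(random_coordinate_descent(Q, c, x0=np.zeros(2)))
print(np.linalg.solve(Q, -c))      # exact solution for comparison
```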

Convergence of first-order methods via the convex conjugate

This paper gives a unified and succinct approach to the $O(1/\sqrt{k})$, $O(1/k)$, and $O(1/k^2)$ convergence rates of the subgradient, gradient, and accelerated gradient methods for unconstrained convex minimization. In the three cases the proof of convergence follows from a generic bound defined by the convex conjugate of the objective function. Citation: Working Paper, Carnegie Mellon University …
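
The paper's generic bound is not reproduced in this snippet; for reference, the convex conjugate it is built from is the standard one, together with the Fenchel-Young inequality it immediately yields:

```latex
% Standard definition of the convex conjugate (the paper's specific
% bound is omitted here):
f^*(y) = \sup_{x \in \mathbb{R}^n} \bigl\{ \langle y, x \rangle - f(x) \bigr\},
\qquad
f(x) + f^*(y) \ge \langle y, x \rangle .
```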

A Robust Multi-Batch L-BFGS Method for Machine Learning

This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations. The first occurs in distributed computing environments where some of the computational nodes devoted to the evaluation of the function and gradient are unable to return results on time. A similar challenge occurs in a multi-batch approach in which …
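
For context, the core of any L-BFGS variant is the two-loop recursion below; in the multi-batch setting of the paper, the curvature pairs (s_j, y_j) would be computed on the overlap of consecutive batches to stay consistent, a robustness detail this standard sketch omits.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H_k @ grad, where H_k
    is the implicit inverse-Hessian approximation built from the stored
    curvature pairs (s_j, y_j), ordered oldest to newest."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:  # initial scaling H_0 = (s^T y / y^T y) * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        beta = (y @ q) / (y @ s)
        q += (a - beta) * s
    return -q

# One stored pair, then a search direction for a test gradient.
s = [np.array([1.0, 0.0])]; y = [np.array([2.0, 0.0])]
print(lbfgs_direction(np.array([1.0, 1.0]), s, y))  # (-0.5, -0.5)
```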

Self-concordant inclusions: A unified framework for path-following generalized Newton-type algorithms

We study a class of monotone inclusions called “self-concordant inclusions,” which covers three fundamental convex optimization formulations as special cases. We develop a new generalized Newton-type framework to solve this inclusion. Our framework subsumes full-step, damped-step, and path-following schemes as specific instances, while allowing one to use inexact computation to form generalized Newton …
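
The inclusion setting of the paper is more general than a short snippet can show; as a loose point of reference only, here is the classical damped Newton step for a smooth self-concordant function, with a toy instance assumed for illustration.

```python
import numpy as np

def damped_newton_step(grad_f, hess_f, x):
    """One damped Newton step for a self-concordant function:
    x+ = x - d / (1 + lam), with d = H^{-1} g and Newton decrement
    lam = sqrt(g^T H^{-1} g). The paper extends such schemes from
    equations to monotone inclusions; this is only the smooth case."""
    g, H = grad_f(x), hess_f(x)
    d = np.linalg.solve(H, g)
    lam = np.sqrt(g @ d)
    return x - d / (1.0 + lam), lam

# Toy self-concordant function: f(x) = sum(x_i - log x_i), minimized at x = 1.
grad_f = lambda x: 1.0 - 1.0 / x
hess_f = lambda x: np.diag(1.0 / x**2)
x, lam = damped_newton_step(grad_f, hess_f, np.array([0.2, 3.0]))
print(x, lam)  # moves toward (1, 1); lam shrinks as iterates converge
```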

Measuring axial symmetry in convex cones

The problem of measuring the degree of central symmetry of a convex body has been treated by various authors since the early twentieth century. This work addresses the issue of measuring the degree of axial symmetry of a convex cone. Passing from central symmetry in convex bodies to axial symmetry in convex cones is not …

Dynamic Stochastic Approximation for Multi-stage Stochastic Optimization

In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic approximation (DSA) algorithm, for solving these types of stochastic optimization problems. We show that DSA can achieve an optimal ${\cal O}(1/\epsilon^4)$ rate of convergence in terms …
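
DSA itself is multi-stage; as background only, here is the single-stage projected stochastic approximation step that such methods build on, with averaging and an assumed toy objective. The stage-wise composition that defines DSA is not attempted here.

```python
import numpy as np

def stochastic_approximation(stoch_grad, proj, x0, n_iters=2000, seed=0):
    """Projected stochastic subgradient method with iterate averaging
    and an O(1/sqrt(k)) step size (single-stage building block only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    x_avg = np.zeros_like(x)
    for k in range(1, n_iters + 1):
        g = stoch_grad(x, rng)
        x = proj(x - g / np.sqrt(k))   # projected stochastic step
        x_avg += (x - x_avg) / k       # running average of iterates
    return x_avg

# Toy problem: min E[0.5*||x - xi||^2] over the box [-1,1]^2, xi ~ N(mu, I).
mu = np.array([0.3, 2.0])              # optimum: projection of mu onto box
stoch_grad = lambda x, rng: x - (mu + rng.standard_normal(2))
proj = lambda x: np.clip(x, -1.0, 1.0)
print(stochastic_approximation(stoch_grad, proj, x0=np.zeros(2)))
# expected approximately [0.3, 1.0]
```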

Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search …
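
As a minimal special case of the model-function idea: taking the convex model to be the linearization of the smooth part plus a squared Euclidean Bregman distance gives a proximal-gradient step, here combined with an Armijo-like backtracking line search along the resulting descent direction. The toy composite problem and prox map are assumptions for illustration; the paper's scheme is considerably more general.

```python
import numpy as np

def model_descent_step(f, grad_f, prox_g, x, t=1.0, sigma=1e-4):
    """One step of the model-minimization scheme in the Euclidean special
    case: a proximal-gradient step gives the model minimizer, then an
    Armijo-like backtracking line search along the descent direction."""
    x_model = prox_g(x - t * grad_f(x), t)   # approximate model minimizer
    d = x_model - x                          # descent direction
    step, fx, slope = 1.0, f(x), grad_f(x) @ d
    while f(x + step * d) > fx + sigma * step * slope and step > 1e-12:
        step *= 0.5                          # Armijo backtracking
    return x + step * d

# Toy composite problem (assumed): f(x) = 0.5*||x||^2 (smooth part),
# g = indicator of {x : x >= 1 component-wise}, so prox_g is a projection.
f = lambda x: 0.5 * x @ x
grad_f = lambda x: x
prox_g = lambda v, t: np.maximum(v, 1.0)
x = np.array([3.0, -2.0])
for _ in range(20):
    x = model_descent_step(f, grad_f, prox_g, x)
print(x)  # expected approximately [1, 1]
```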