Clustering and Multifacility Location with Constraints via Distance Function Penalty Method and DC Programming

This paper continues our effort to use mathematical optimization, in particular DC programming, in clustering and multifacility location. We study a penalty method based on distance functions and apply it to a number of problems in clustering and multifacility location in which the centers to be found must lie in some given …
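
As a rough illustration of the distance-penalty idea only (not the authors' DC/DCA formulation), the sketch below runs a k-means-style alternation in which each center is additionally pulled toward a constraint box through the squared distance function dist(x, Omega)^2; the box bounds, penalty weight mu, and data are invented placeholders.

    import numpy as np

    def penalized_kmeans(points, k, lo, hi, mu=10.0, iters=50, seed=0):
        """k-means-style sketch with a squared-distance penalty pulling each
        center toward the box [lo, hi]^d (a stand-in for a general constraint set)."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)].copy()
        for _ in range(iters):
            # assignment step: nearest center for every data point
            d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            # center step: minimize squared distances to assigned points plus
            # mu * ||x - P_box(x_old)||^2, a majorizer of mu * dist(x, box)^2
            for j in range(k):
                proj = np.clip(centers[j], lo, hi)          # projection onto the box
                pts = points[labels == j]
                centers[j] = (pts.sum(axis=0) + mu * proj) / (len(pts) + mu)
        return centers, labels

    # toy usage with synthetic 2-D data
    data = np.random.default_rng(1).normal(size=(200, 2)) * 2.0
    centers, labels = penalized_kmeans(data, k=3, lo=-1.0, hi=1.0)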

Non-smooth Non-convex Bregman Minimization: Unification and New Algorithms

We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search …
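
A minimal Euclidean sketch of this model-and-line-search pattern for a composite objective F = g + h (smooth g plus an l1 term), assuming the Bregman distance is the squared Euclidean norm so the model's proximal point reduces to soft thresholding; the problem data and step parameters are invented for illustration.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def model_line_search(A, b, lam, x0, t=0.1, gamma=1e-4, iters=100):
        """Sketch: linearize the smooth part, take a proximal point of the convex
        model, use the resulting direction with an Armijo-like backtracking search."""
        g = lambda x: 0.5 * np.sum((A @ x - b) ** 2)       # smooth part
        h = lambda x: lam * np.sum(np.abs(x))              # non-smooth part
        F = lambda x: g(x) + h(x)
        x = x0.copy()
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            y = soft_threshold(x - t * grad, t * lam)      # proximal point of the model
            d = y - x                                      # descent direction
            model_decrease = grad @ d + h(y) - h(x)        # <= 0 by construction
            eta = 1.0
            while F(x + eta * d) > F(x) + gamma * eta * model_decrease:
                eta *= 0.5                                 # Armijo-like backtracking
                if eta < 1e-12:
                    break
            x = x + eta * d
        return x

    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
    x_hat = model_line_search(A, b, lam=0.1, x0=np.zeros(10))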

Chambolle-Pock and Tseng’s methods: relationship and extension to the bilevel optimization

In the first part of the paper we focus on two problems: (a) regularized least squares and (b) nonsmooth minimization over an affine subspace. For these problems we establish a connection between the primal-dual method of Chambolle and Pock and Tseng's proximal gradient method. For problem (a) this connection allows us to derive a nonergodic $O(1/k^2)$ convergence rate …
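
For context, the sketch below is a plain proximal gradient (ISTA-style) iteration for problem (a) with an l1 regularizer; it is the textbook scheme the abstract relates to Chambolle-Pock, not the accelerated variant attaining the $O(1/k^2)$ rate, and all problem data are placeholders.

    import numpy as np

    def prox_gradient_lasso(A, b, lam, iters=500):
        """Proximal gradient method for min 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        step = 1.0 / L
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)                       # gradient of the smooth part
            z = x - step * grad                            # forward (gradient) step
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
        return x

    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
    x_hat = prox_gradient_lasso(A, b, lam=0.1)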

Local Convergence of the Heavy-ball Method and iPiano for Non-convex Optimization

A local convergence result for abstract descent methods is proved. The sequence of iterates is attracted to a local (or global) minimum, stays in its neighborhood, and converges within this neighborhood. This result allows algorithms to exploit local properties of the objective function. In particular, the abstract theory in this paper applies to the inertial …
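
A small sketch of the inertial proximal gradient (iPiano-style) update the abstract refers to, written for a composite objective with an l1 term; the inertia parameter beta and the step size (kept below the usual bound 2*(1-beta)/L) are illustrative choices, and the quadratic data are invented.

    import numpy as np

    def ipiano(A, b, lam, beta=0.5, iters=300):
        """Inertial proximal gradient sketch:
        x_{k+1} = prox_{alpha*h}( x_k - alpha*grad g(x_k) + beta*(x_k - x_{k-1}) )."""
        L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad g
        alpha = 0.9 * 2.0 * (1.0 - beta) / L       # step size below the usual bound
        x_prev = x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            z = x - alpha * grad + beta * (x - x_prev)     # gradient step + inertia
            x_prev, x = x, np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)
        return x

    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(40, 15)), rng.normal(size=40)
    x_hat = ipiano(A, b, lam=0.05)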

Inexact scalarization proximal methods for multiobjective quasiconvex minimization on Hadamard manifold

In this paper we extend, in a natural way, the scalarization proximal point method for multiobjective unconstrained minimization problems proposed by Apolinario et al. (2016) from Euclidean spaces to Hadamard manifolds, for locally Lipschitz and quasiconvex vector objective functions. Moreover, we present a convergence analysis, under some mild assumptions on the multiobjective function, for two inexact variants of …
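
As a purely Euclidean, smooth illustration of the scalarization-plus-proximal-regularization idea (the paper itself treats quasiconvex, locally Lipschitz objectives on Hadamard manifolds with inexact subproblem solves), the sketch below minimizes a fixed positive weighting of two objectives with a proximal term, solving each regularized subproblem inexactly by a few gradient steps; all objectives and parameters are made up.

    import numpy as np

    def scalarization_prox_point(grads, weights, x0, alpha=1.0,
                                 outer=50, inner=25, lr=0.05):
        """Sketch: x_{k+1} ~ argmin_x  sum_i w_i f_i(x) + (alpha/2)*||x - x_k||^2,
        with the subproblem solved inexactly by gradient descent."""
        x = np.asarray(x0, dtype=float)
        for _ in range(outer):
            xk = x.copy()
            for _ in range(inner):                      # inexact inner solve
                g = sum(w * gf(x) for w, gf in zip(weights, grads))
                g = g + alpha * (x - xk)                # proximal regularization
                x = x - lr * g
        return x

    # two toy convex objectives f1(x)=||x - a||^2, f2(x)=||x - b||^2 via their gradients
    a, b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
    grads = [lambda x: 2 * (x - a), lambda x: 2 * (x - b)]
    x_star = scalarization_prox_point(grads, weights=[0.5, 0.5], x0=np.zeros(2))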

Distributed Block-diagonal Approximation Methods for Regularized Empirical Risk Minimization

Designing distributed algorithms for empirical risk minimization (ERM) has become an active research topic in recent years because of the practical need to deal with huge volumes of data. In this paper, we propose a general framework for training an ERM model by solving its dual problem in parallel over multiple machines. Our method …
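
To make the idea concrete under strong simplifications, here is a toy stand-in for such a scheme on the dual of ridge-regression ERM: each simulated "machine" exactly maximizes the dual restricted to its own coordinate block (i.e. it uses only its block of the dual Hessian), and the block updates are averaged. This is only a schematic sketch with a squared loss and invented data, not the paper's framework.

    import numpy as np

    def distributed_dual_ridge(X, y, lam, n_machines=4, rounds=100):
        """Toy block-diagonal dual ascent for ridge ERM:
        min_w (1/2n)*sum_i (x_i^T w - y_i)^2 + (lam/2)*||w||^2.
        Each machine exactly maximizes the dual over its own coordinate block;
        the block updates are averaged before the next round."""
        n, d = X.shape
        blocks = np.array_split(np.arange(n), n_machines)
        alpha = np.zeros(n)
        w = X.T @ alpha / (lam * n)                      # primal point mapped from the dual
        for _ in range(rounds):
            delta = np.zeros(n)
            for I in blocks:                             # in practice: in parallel, one per machine
                XI = X[I]
                grad_I = (y[I] - alpha[I] - XI @ w) / n  # block gradient of the dual
                H_I = np.eye(len(I)) / n + XI @ XI.T / (lam * n * n)
                delta[I] = np.linalg.solve(H_I, grad_I)  # exact block maximizer
            alpha += delta / n_machines                  # conservative averaging keeps ascent
            w = X.T @ alpha / (lam * n)
        return w

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 10)), rng.normal(size=200)
    w_hat = distributed_dual_ridge(X, y, lam=0.1)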

Iteration-complexity of a Jacobi-type non-Euclidean ADMM for multi-block linearly constrained nonconvex programs

This paper establishes the iteration-complexity of a Jacobi-type non-Euclidean proximal alternating direction method of multipliers (ADMM) for solving multi-block linearly constrained nonconvex programs. The subproblems of this ADMM variant can be solved in parallel, and hence the method has great potential for solving large-scale multi-block linearly constrained nonconvex programs. Moreover, our analysis allows the …
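
A schematic Euclidean instance of a Jacobi-type proximal ADMM for min sum_i f_i(x_i) subject to sum_i A_i x_i = b, with simple convex quadratic blocks f_i(x_i) = 0.5*||x_i - c_i||^2 so that each parallel subproblem is a single linear solve; the deliberately large proximal weights and the penalty parameter are illustrative choices, not the non-Euclidean, nonconvex setting the paper analyzes.

    import numpy as np

    def jacobi_prox_admm(A_list, c_list, b, rho=1.0, iters=300):
        """Jacobi-type proximal ADMM sketch for
        min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i A_i x_i = b.
        All block updates use the previous iterate, so they can run in parallel."""
        K = len(A_list)
        xs = [np.zeros(A.shape[1]) for A in A_list]
        lam = np.zeros(len(b))                           # multiplier for the coupling constraint
        taus = [rho * K * np.linalg.norm(A, 2) ** 2 for A in A_list]  # conservative proximal weights
        for _ in range(iters):
            residual = sum(A @ x for A, x in zip(A_list, xs)) - b
            new_xs = []
            for A, c, x, tau in zip(A_list, c_list, xs, taus):
                # block subproblem (parallel): quadratic, solved by one linear system
                r_i = residual - A @ x                   # contribution of the other blocks
                M = (1.0 + tau) * np.eye(A.shape[1]) + rho * A.T @ A
                rhs = c + tau * x - A.T @ (rho * r_i + lam)
                new_xs.append(np.linalg.solve(M, rhs))
            xs = new_xs
            lam = lam + rho * (sum(A @ x for A, x in zip(A_list, xs)) - b)
        return xs

    rng = np.random.default_rng(0)
    A_list = [rng.normal(size=(5, 3)) for _ in range(3)]
    c_list = [rng.normal(size=3) for _ in range(3)]
    b = rng.normal(size=5)
    x_blocks = jacobi_prox_admm(A_list, c_list, b)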

Subdifferentiation and Smoothing of Nonsmooth Integral Functionals

The subdifferential calculus for the expectation of nonsmooth random integrands involves many fundamental and challenging problems in stochastic optimization. It is known that, for Clarke regular integrands, the Clarke subdifferential of the expectation equals the expectation of the Clarke subdifferentials of the integrands. In particular, this holds for convex integrands. However, little is known about the calculation of Clarke subgradients for the …
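
Stated schematically in display form (with $\xi$ the random variable and $f(\cdot,\xi)$ Clarke regular), the interchange result referred to here is

$$ \partial_x\, \mathbb{E}_{\xi}\big[f(x,\xi)\big] \;=\; \mathbb{E}_{\xi}\big[\partial_x f(x,\xi)\big], $$

where $\partial_x$ denotes the Clarke subdifferential in $x$ and the right-hand side is understood as a set-valued (selection) expectation.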

On the pointwise iteration-complexity of a dynamic regularized ADMM with over-relaxation stepsize

In this paper, we extend the improved pointwise iteration-complexity result of a dynamic regularized alternating direction method of multipliers (ADMM) to a new stepsize domain. In this complexity analysis, the stepsize parameter can even be chosen in the interval $(0,2)$ instead of the interval $(0,(1+\sqrt{5})/2)$. As usual, our analysis is established by interpreting this ADMM variant …
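
For reference, here is a sketch of the classical over-relaxed ADMM for the lasso problem, where the relaxation parameter theta is taken from $(0,2)$; this is the standard scheme, not the dynamic regularized variant whose complexity is analyzed in the paper, and the data are synthetic.

    import numpy as np

    def admm_lasso_overrelaxed(A, b, lam, rho=1.0, theta=1.8, iters=300):
        """Over-relaxed ADMM sketch for min 0.5*||Ax - b||^2 + lam*||z||_1, x = z,
        with relaxation parameter theta in (0, 2)."""
        n = A.shape[1]
        x = z = u = np.zeros(n)
        M = A.T @ A + rho * np.eye(n)                  # factor once in a serious implementation
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(M, Atb + rho * (z - u))          # x-update
            x_hat = theta * x + (1.0 - theta) * z                # over-relaxation step
            z = np.sign(x_hat + u) * np.maximum(np.abs(x_hat + u) - lam / rho, 0.0)  # z-update
            u = u + x_hat - z                                    # multiplier update
        return z

    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(60, 25)), rng.normal(size=60)
    x_hat = admm_lasso_overrelaxed(A, b, lam=0.1)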

Local Linear Convergence Analysis of Primal–Dual Splitting Methods

In this paper, we study the local linear convergence properties of a versatile class of Primal-Dual splitting methods for solving composite non-smooth convex optimization problems. Under the assumption that the non-smooth components of the problem are partly smooth relative to smooth manifolds, we present a unified local convergence analysis framework for these Primal-Dual splitting methods. …
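
A compact sketch of one such primal-dual splitting (a Chambolle-Pock-type iteration) for the model problem min_x ||Kx - b||_1 + (mu/2)*||x||^2, whose l1 part is partly smooth; the operator K, the data b, and the parameters are placeholders chosen only so that sigma*tau*||K||^2 < 1.

    import numpy as np

    def primal_dual_splitting(K, b, mu=1.0, iters=500):
        """Chambolle-Pock-type iteration for min_x ||Kx - b||_1 + (mu/2)*||x||^2."""
        m, n = K.shape
        normK = np.linalg.norm(K, 2)
        sigma = tau = 0.99 / normK                     # ensures sigma * tau * ||K||^2 < 1
        x = x_bar = np.zeros(n)
        y = np.zeros(m)
        for _ in range(iters):
            # dual step: prox of sigma * f*, with f(z) = ||z - b||_1
            y = np.clip(y + sigma * (K @ x_bar - b), -1.0, 1.0)
            # primal step: prox of tau * g, with g(x) = (mu/2)*||x||^2
            x_new = (x - tau * (K.T @ y)) / (1.0 + tau * mu)
            x_bar = 2.0 * x_new - x                    # extrapolation (theta = 1)
            x = x_new
        return x

    rng = np.random.default_rng(0)
    K, b = rng.normal(size=(30, 20)), rng.normal(size=30)
    x_star = primal_dual_splitting(K, b)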