Inexact scalarization proximal methods for multiobjective quasiconvex minimization on Hadamard manifolds

In this paper we naturally extend the scalarization proximal point method for solving unconstrained multiobjective minimization problems, proposed by Apolinario et al. (2016), from Euclidean spaces to Hadamard manifolds, for locally Lipschitz and quasiconvex vector objective functions. Moreover, we present a convergence analysis, under some mild assumptions on the multiobjective function, for two inexact variants of …
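To make the scalarization idea concrete, here is a minimal, purely illustrative sketch of one scalarized proximal point step in the Euclidean special case (a Hadamard manifold of zero curvature); the function `scalarized_prox_step`, the weights `z`, and the toy bi-objective `F` are hypothetical choices, not the paper's algorithm on general manifolds.

```python
import numpy as np
from scipy.optimize import minimize

def scalarized_prox_step(F, x_k, z, alpha):
    """One step: x_{k+1} in argmin_x  <z, F(x)> + (alpha/2)*||x - x_k||^2,
    with fixed positive weights z (a point in the simplex) and alpha > 0."""
    obj = lambda x: z @ F(x) + 0.5 * alpha * np.sum((x - x_k) ** 2)
    # derivative-free solver, since F may only be locally Lipschitz
    return minimize(obj, x_k, method="Nelder-Mead").x

# toy bi-objective: F(x) = (|x_1 - 1|, |x_2 + 1|), Pareto point (1, -1)
F = lambda x: np.array([abs(x[0] - 1.0), abs(x[1] + 1.0)])
x = np.zeros(2)
for _ in range(20):
    x = scalarized_prox_step(F, x, z=np.array([0.5, 0.5]), alpha=1.0)
print(x)  # approximately (1, -1)
```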

Distributed Block-diagonal Approximation Methods for Regularized Empirical Risk Minimization

Designing distributed algorithms for empirical risk minimization (ERM) has become an active research topic in recent years because of the practical need to handle huge volumes of data. In this paper, we propose a general framework for training an ERM model by solving its dual problem in parallel over multiple machines. Our method …
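The block-diagonal idea can be illustrated on a toy dual quadratic $D(a) = \frac{1}{2}a^\top Q a - b^\top a$: each "machine" owns a block of dual variables and minimizes a surrogate built from its own diagonal block of $Q$, scaled so the decoupled model upper-bounds the true curvature. This sketch is hypothetical; the scaling `gamma = K` and the toy data are illustrative assumptions, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 12, 3                            # n dual variables split across K machines
X = rng.standard_normal((n, 5))
Q = X @ X.T + np.eye(n)                 # curvature of a toy strongly convex dual
b = rng.standard_normal(n)
blocks = np.array_split(np.arange(n), K)
gamma = K                               # K * blockdiag(Q) majorizes Q, so steps are safe

a = np.zeros(n)
for _ in range(300):
    g = Q @ a - b                       # one round of communication
    for blk in blocks:                  # each machine solves its block independently
        a[blk] -= np.linalg.solve(gamma * Q[np.ix_(blk, blk)], g[blk])
print(np.linalg.norm(Q @ a - b))        # near zero: the dual optimum is reached
```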

Iteration-complexity of a Jacobi-type non-Euclidean ADMM for multi-block linearly constrained nonconvex programs

This paper establishes the iteration-complexity of a Jacobi-type non-Euclidean proximal alternating direction method of multipliers (ADMM) for solving multi-block linearly constrained nonconvex programs. The subproblems of this ADMM variant can be solved in parallel, and hence the method has great potential to solve large-scale multi-block linearly constrained nonconvex programs. Moreover, our analysis allows the …
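A minimal sketch of the Jacobi feature, on a convex toy problem and with a plain quadratic proximal term standing in for the paper's general non-Euclidean (Bregman) distance: every block subproblem below reads only the previous iterate, so the blocks could run on separate processors. The proximal weights `tau` follow a common heuristic for proximal Jacobi schemes and, like the data, are illustrative assumptions.

```python
import numpy as np

# toy problem: min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i A_i x_i = b
rng = np.random.default_rng(1)
m, d, N, beta = 4, 3, 3, 1.0
A = [rng.standard_normal((m, d)) for _ in range(N)]
c = [rng.standard_normal(d) for _ in range(N)]
b = rng.standard_normal(m)
tau = [2.0 * beta * (N - 1) * np.linalg.norm(A[i], 2) ** 2 for i in range(N)]

x, lam = [np.zeros(d) for _ in range(N)], np.zeros(m)
for _ in range(2000):
    r = sum(A[i] @ x[i] for i in range(N)) - b
    x = [np.linalg.solve((1 + tau[i]) * np.eye(d) + beta * A[i].T @ A[i],
                         c[i] - A[i].T @ lam
                         - beta * A[i].T @ (r - A[i] @ x[i]) + tau[i] * x[i])
         for i in range(N)]             # Jacobi: every block uses the OLD iterate
    lam += beta * (sum(A[i] @ x[i] for i in range(N)) - b)
print(np.linalg.norm(sum(A[i] @ x[i] for i in range(N)) - b))  # residual -> 0
```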

Subdifferentiation and Smoothing of Nonsmooth Integral Functionals

The subdifferential calculus for the expectation of nonsmooth random integrands involves many fundamental and challenging problems in stochastic optimization. It is known that for Clarke regular integrands, the Clarke subdifferential of the expectation equals the expectation of the Clarke subdifferentials of the integrands. In particular, this holds for convex integrands. However, little is known about the calculation of Clarke subgradients for the …
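For convex (hence Clarke regular) integrands the exchange rule can be checked by hand. Take $f(x) = \mathbb{E}|x - \xi|$ with $\xi \sim N(0,1)$: the expectation of the pointwise subgradients is $\mathbb{E}[\operatorname{sign}(x - \xi)] = 2\Phi(x) - 1$, and a Monte Carlo estimate reproduces it. The snippet below is just this worked example, not code from the paper.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
xi = rng.standard_normal(100_000)       # samples of the random variable

x = 0.7
mc_subgrad = np.mean(np.sign(x - xi))   # expectation of pointwise subgradients
exact = erf(x / sqrt(2))                # closed form: 2*Phi(x) - 1
print(mc_subgrad, exact)                # agree up to sampling error
```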

On the pointwise iteration-complexity of a dynamic regularized ADMM with over-relaxation stepsize

In this paper, we extend the improved pointwise iteration-complexity result of a dynamic regularized alternating direction method of multipliers (ADMM) to a new stepsize domain. In this complexity analysis, the stepsize parameter can even be chosen in the interval $(0,2)$ instead of the interval $(0,(1+\sqrt{5})/2)$. As usual, our analysis is established by interpreting this ADMM variant …
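The role of the stepsize can be seen in a plain over-relaxed ADMM (not the paper's dynamic regularized variant) on the toy problem $\min \frac{1}{2}\|x-c\|^2 + \mu\|z\|_1$ subject to $x = z$, where the relaxation parameter `theta` is taken from $(0,2)$; all concrete values are illustrative assumptions.

```python
import numpy as np

c, mu, rho, theta = np.array([2.0, -0.3, 0.8]), 0.5, 1.0, 1.8
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, u = np.zeros(3), np.zeros(3), np.zeros(3)
for _ in range(100):
    x = (c + rho * (z - u)) / (1.0 + rho)   # prox of the smooth term
    x_hat = theta * x + (1.0 - theta) * z   # over-relaxation, theta in (0, 2)
    z = soft(x_hat + u, mu / rho)           # prox of the l1 term
    u = u + x_hat - z                       # dual update
print(z)                                     # equals soft(c, mu) = [1.5, 0.0, 0.3]
```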

Local Linear Convergence Analysis of Primal–Dual Splitting Methods

In this paper, we study the local linear convergence properties of a versatile class of Primal–Dual splitting methods for solving composite non-smooth convex optimization problems. Under the assumption that the non-smooth components of the problem are partly smooth relative to smooth manifolds, we present a unified local convergence analysis framework for these Primal–Dual splitting methods. …
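One concrete member of this class is the Chambolle–Pock primal-dual scheme. The sketch below applies it to $\min_x \frac{1}{2}\|x - c\|^2 + \|Kx\|_1$, where the $\ell_1$ term is partly smooth relative to the manifold fixed by the sign pattern of $Kx$, the kind of structure such local analyses exploit; the data and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.standard_normal((4, 6))
c = rng.standard_normal(6)
tau = sigma = 0.99 / np.linalg.norm(K, 2)   # tau * sigma * ||K||^2 < 1

x, x_bar, y = np.zeros(6), np.zeros(6), np.zeros(4)
for _ in range(2000):
    y = np.clip(y + sigma * K @ x_bar, -1.0, 1.0)        # prox of (||.||_1)*
    x_new = (x - tau * K.T @ y + tau * c) / (1.0 + tau)  # prox of the quadratic
    x_bar = 2.0 * x_new - x                              # extrapolation, theta = 1
    x = x_new
print(0.5 * np.sum((x - c) ** 2) + np.abs(K @ x).sum())  # converged objective
```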

Partially separable convexly-constrained optimization with non-Lipschitz singularities and its complexity

An adaptive regularization algorithm using high-order models is proposed for partially separable convexly constrained nonlinear optimization problems whose objective function contains non-Lipschitzian $\ell_q$-norm regularization terms for $q\in (0,1)$. It is shown that the algorithm using a $p$-th order Taylor model for $p$ odd needs in general at most $O(\epsilon^{-(p+1)/p})$ evaluations of the objective function and …
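The accept/reject mechanism of adaptive regularization is easiest to isolate in the first-order case $p = 1$. The sketch below runs it on a smooth toy function, omitting the partially separable structure and the non-Lipschitz $\ell_q$ terms that the paper actually handles; every constant is an illustrative assumption.

```python
import numpy as np

f = lambda x: np.sum((x - 1.0) ** 2) + 0.1 * np.sum(x ** 4)
grad = lambda x: 2.0 * (x - 1.0) + 0.4 * x ** 3

x, sigma = np.zeros(3), 1.0
for _ in range(100):
    g = grad(x)
    s = -g / (2.0 * sigma)                        # minimizer of g's + sigma*||s||^2
    predicted = g @ s + sigma * s @ s             # model decrease is -predicted
    if f(x) - f(x + s) >= -0.1 * predicted:       # sufficiently successful step
        x, sigma = x + s, max(sigma / 2.0, 1e-3)  # accept, relax regularization
    else:
        sigma *= 2.0                              # reject, regularize more
print(x, np.linalg.norm(grad(x)))                 # near first-order stationarity
```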

Iteration-Complexity of a Linearized Proximal Multiblock ADMM Class for Linearly Constrained Nonconvex Optimization Problems

This paper analyzes the iteration-complexity of a class of linearized proximal multiblock alternating direction method of multipliers (ADMM) for solving linearly constrained nonconvex optimization problems. The subproblems of the linearized ADMM are obtained by partially or fully linearizing the augmented Lagrangian with respect to the corresponding minimizing block variable. The derived complexity bounds do not …
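To show what fully linearizing the augmented Lagrangian buys, here is a hypothetical two-block instance, kept convex for readability (the paper covers the nonconvex case): because the quadratic penalty is linearized, each block update reduces to a single soft-thresholding.

```python
import numpy as np

# toy problem: min mu*(||x1||_1 + ||x2||_1)  s.t.  A1 x1 + A2 x2 = b
rng = np.random.default_rng(3)
A1, A2 = rng.standard_normal((3, 5)), rng.standard_normal((3, 5))
b, mu, beta = rng.standard_normal(3), 0.1, 1.0
eta = 0.9 / (beta * max(np.linalg.norm(A1, 2), np.linalg.norm(A2, 2)) ** 2)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x1, x2, lam = np.zeros(5), np.zeros(5), np.zeros(3)
for _ in range(3000):
    r = A1 @ x1 + A2 @ x2 - b
    x1 = soft(x1 - eta * A1.T @ (lam + beta * r), eta * mu)  # linearized, block 1
    r = A1 @ x1 + A2 @ x2 - b                                # refreshed residual
    x2 = soft(x2 - eta * A2.T @ (lam + beta * r), eta * mu)  # linearized, block 2
    lam += beta * (A1 @ x1 + A2 @ x2 - b)
print(np.linalg.norm(A1 @ x1 + A2 @ x2 - b))                 # residual -> 0
```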

Linear Convergence of Proximal Incremental Aggregated Gradient Methods under Quadratic Growth Condition

Under the assumption of strong convexity, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition, a strictly weaker condition than strong convexity, …
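The PIAG iteration itself is short: keep a table of component gradients evaluated at (possibly stale) past iterates, refresh one entry per iteration, and take a proximal step along their aggregate. The sketch below runs it on a toy lasso-type problem, which satisfies quadratic growth; the stepsize and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, mu, eta = 20, 5, 0.05, 0.01
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# objective: (1/n) * sum_i 0.5*(a_i'x - b_i)^2 + mu*||x||_1
x = np.zeros(d)
table = (A @ x - b)[:, None] * A            # stored component gradients
for k in range(5000):
    i = k % n                               # cyclic refresh of one component
    table[i] = (A[i] @ x - b[i]) * A[i]
    g = table.mean(axis=0)                  # aggregate of (partly stale) gradients
    x = soft(x - eta * g, eta * mu)         # proximal step on the l1 term
print(x)
```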

Direct Search Methods on Reductive Homogeneous Spaces

Direct search methods are mainly designed for use in problems with no equality constraints. However, there are many instances where the feasible set is of measure zero in the ambient space and no mesh point lies within it. There are methods for working with feasible sets that are (Riemannian) manifolds, but not all manifolds are …
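A standard way to keep iterates on such a feasible set is to poll in the tangent space and map trial points back with a retraction. The sketch below does this on the unit sphere $S^2$ (a simple homogeneous space), which has measure zero in $\mathbb{R}^3$; it is a generic manifold direct search, not the reductive-homogeneous-space machinery developed in the paper.

```python
import numpy as np

f = lambda x: x[2]                            # minimize height; optimum (0, 0, -1)
retract = lambda x, v: (x + v) / np.linalg.norm(x + v)

x, step = np.array([1.0, 0.0, 0.0]), 0.5
for _ in range(60):
    # orthonormal basis of the tangent space {v : v'x = 0} via QR
    Q, _ = np.linalg.qr(np.column_stack([x, np.eye(3)[:, :2]]))
    polls = [s * Q[:, j] for j in (1, 2) for s in (+1.0, -1.0)]
    better = [v for v in polls if f(retract(x, step * v)) < f(x)]
    if better:
        x = retract(x, step * better[0])      # move to an improving poll point
    else:
        step *= 0.5                           # shrink the poll radius
print(x)                                       # approaches (0, 0, -1)
```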