Characterizations of explicitly quasiconvex vector functions w.r.t. polyhedral cones

The aim of this paper is to present new characterizations of explicitly cone-quasiconvex vector functions with respect to a polyhedral cone of a finite-dimensional Euclidean space. These characterizations are given in terms of classical explicit quasiconvexity of certain real-valued functions, defined by composing the vector-valued function with appropriate scalarization functions, namely the extreme directions of …

Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron

Modern machine learning focuses on highly expressive models that are able to fit or interpolate the data completely, resulting in zero training loss. For such models, we show that the stochastic gradients of common loss functions satisfy a strong growth condition. Under this condition, we prove that constant step-size stochastic gradient descent (SGD) with Nesterov …
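The interpolation setting the abstract refers to can be illustrated numerically. The following sketch (not the paper's method; all names and parameter values are illustrative assumptions) runs constant step-size SGD on an over-parameterized least-squares problem whose noiseless labels make zero training loss attainable:

```python
import numpy as np

# Illustrative sketch: constant step-size SGD on an over-parameterized,
# interpolating least-squares problem. Problem sizes and the step size
# below are assumptions chosen for the demo, not values from the paper.
rng = np.random.default_rng(0)
n, d = 20, 50                       # fewer samples than parameters -> interpolation
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true                      # noiseless labels, so zero loss is attainable

w = np.zeros(d)
step = 0.01                         # constant step size, no decay schedule
for t in range(20000):
    i = rng.integers(n)             # sample one data point uniformly
    grad = (X[i] @ w - y[i]) * X[i] # stochastic gradient of the squared loss
    w -= step * grad

mse = np.mean((X @ w - y) ** 2)     # training loss driven toward zero
print(mse)
```

Because the model interpolates, no step-size decay is needed for the training loss to vanish, which is the phenomenon the strong growth condition formalizes.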

A fundamental proof to convergence analysis of alternating direction method of multipliers for weakly convex optimization

The convergence analysis of the alternating direction method of multipliers (ADMM) for convex/nonconvex composite optimization has been well established in the literature. Considering the extensive applications of weakly convex functions in signal processing and machine learning (e.g., \textit{Special issue: DC-Theory, Algorithms and Applications, Mathematical Programming, 169(1):1-336, 2018}), in this paper we investigate the convergence analysis of ADMM …
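For reference, the generic ADMM iteration whose convergence the abstract discusses can be sketched on a standard convex instance (the paper's setting is weakly convex; the lasso problem, penalty values, and problem sizes below are illustrative assumptions):

```python
import numpy as np

# Minimal ADMM sketch on a convex lasso problem,
#   min 0.5*||A x - b||^2 + lam*||x||_1,
# showing the x-update / z-update / dual-update pattern. All constants
# here are illustrative, not taken from the paper.
rng = np.random.default_rng(1)
m, n = 30, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam, rho = 0.1, 1.0

x = z = u = np.zeros(n)
AtA, Atb = A.T @ A, A.T @ b
for _ in range(300):
    x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))  # x-update
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # prox of l1
    u = u + x - z                                                    # dual update

res = np.max(np.abs(x - z))   # primal residual, -> 0 at convergence
print(res)
```

In the weakly convex case studied by the paper, the same iteration is run but the z-subproblem's objective is only weakly convex, which is what complicates the analysis.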

Pareto efficient solutions in multi-objective optimization involving forbidden regions

In this paper, the aim is to compute Pareto efficient solutions of multi-objective optimization problems involving forbidden regions. More precisely, we assume that the vector-valued objective function is componentwise generalized-convex and acts between a real topological linear pre-image space and a finite-dimensional image space, while the feasible set is given by the whole pre-image space …

Inner Conditions for Error Bounds and Metric Subregularity of Multifunctions

We introduce a new class of sets, functions and multifunctions which is shown to be large and to enjoy some nice common properties with the convex setting. Error bounds for objects attached to this class are characterized in terms of inner conditions of Abadie’s type, that is, conditions bearing on normal cones and coderivatives at …

A Hausdorff-type distance, a directional derivative of a set-valued map and applications in set optimization

In this paper, we follow Kuroiwa’s set approach in set optimization, which proposes to compare values of a set-valued objective map $F$ with respect to various set order relations. We introduce a Hausdorff-type distance relative to an ordering cone between two sets in a Banach space and use it to define a directional derivative for $F$. …

A Note on the Forward-Douglas–Rachford Splitting for Monotone Inclusion and Convex Optimization

We shed light on the structure of the “three-operator” version of the forward-Douglas–Rachford splitting algorithm for finding a zero of a sum of maximally monotone operators $A + B + C$, where $B$ is cocoercive, involving only the computation of $B$ and of the resolvent of $A$ and of $C$, separately. We show that it …
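The splitting pattern described above (forward step on the cocoercive $B$, resolvents of $A$ and $C$ separately) can be sketched on a concrete convex instance. The operator choices below are illustrative assumptions: $A = \partial(\lambda\|\cdot\|_1)$, $C$ the normal cone of the box $[0,1]^n$, and $B$ the gradient of $\tfrac12\|x-c\|^2$, so both resolvents have closed forms:

```python
import numpy as np

# Hedged sketch of a three-operator splitting step for 0 in Ax + Bx + Cx
# with B cocoercive: B is evaluated directly (forward step), while A and C
# enter only through their resolvents (here, closed-form proximal maps).
# The specific operators and constants are illustrative assumptions.
rng = np.random.default_rng(2)
n = 5
c = rng.uniform(-1.0, 2.0, n)
lam, gamma = 0.3, 1.0            # gamma < 2 * (cocoercivity constant of B)

z = np.zeros(n)
for _ in range(1000):
    xC = np.clip(z, 0.0, 1.0)                                    # resolvent of C
    v = 2 * xC - z - gamma * (xC - c)                            # forward step on B
    xA = np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)   # resolvent of A
    z = z + xA - xC                                              # governing update

print(xC)   # approaches the minimizer clip(c - lam, 0, 1), componentwise
```

Note that only one forward evaluation of $B$ and one resolvent each of $A$ and $C$ are needed per iteration, which is the structural point the abstract emphasizes.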

On generalized-convex constrained multi-objective optimization

In this paper, we consider multi-objective optimization problems involving not necessarily convex constraints and componentwise generalized-convex (e.g., semi-strictly quasi-convex, quasi-convex, or explicitly quasi-convex) vector-valued objective functions that are acting between a real linear topological pre-image space and a finite dimensional image space. For these multi-objective optimization problems, we show that the set of (strictly, weakly) …

An Inexact Proximal Method with Proximal Distances for Quasimonotone Equilibrium Problems

In this paper we propose an inexact proximal point method to solve equilibrium problems using proximal distances and the diagonal subdifferential. Under some natural assumptions on the problem and the quasimonotonicity condition on the bifunction, we prove that the sequence generated by the method converges to a solution point of the problem. Citation: Report 01-2016, PESC-COPPE-UFRJ

Acceleration of the PDHGM on strongly convex subspaces

We propose several variants of the primal-dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates, $O(1/N^2)$ with respect to initialisation and $O(1/N)$ with respect to the dual sequence, and the residual part of the primal …
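For context, the baseline (unaccelerated) Chambolle–Pock iteration that these variants build on can be sketched on a small 1D total-variation denoising problem; the problem instance, step sizes, and operator below are illustrative assumptions, not the paper's accelerated scheme:

```python
import numpy as np

# Baseline Chambolle–Pock primal-dual sketch for
#   min_x 0.5*||x - b||^2 + lam*||D x||_1,
# with D the forward-difference operator (1D TV denoising).
# All constants here are illustrative assumptions.
rng = np.random.default_rng(3)
n = 50
b = np.repeat([0.0, 1.0], n // 2) + 0.05 * rng.standard_normal(n)  # noisy step signal
D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)                         # forward differences
lam = 0.5

L = np.linalg.norm(D, 2)          # operator norm of D
tau = sigma = 0.9 / L             # step sizes satisfying tau*sigma*L^2 < 1
theta = 1.0                       # extrapolation parameter

x = b.copy()
x_bar = x.copy()
y = np.zeros(n - 1)
for _ in range(500):
    y = np.clip(y + sigma * (D @ x_bar), -lam, lam)       # dual prox (clip to [-lam, lam])
    x_new = (x - tau * (D.T @ y) + tau * b) / (1 + tau)   # primal prox of 0.5*||. - b||^2
    x_bar = x_new + theta * (x_new - x)                   # over-relaxation step
    x = x_new

obj = lambda v: 0.5 * np.sum((v - b) ** 2) + lam * np.sum(np.abs(D @ v))
print(obj(x), obj(b))   # objective at the iterate vs. at the noisy input
```

The acceleration the abstract describes would adapt `tau`, `sigma`, and `theta` per iteration on the strongly convex subspace, rather than keeping them fixed as in this baseline.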