An Interior-Point Method for Nonlinear Optimization Problems with Locatable and Separable Nonsmoothness

Many real-world optimization models involve nonconvex, nonlinear, and nonsmooth functions, leading to very hard classes of optimization problems. In this article, a new interior-point method for the special but practically relevant class of optimization problems with locatable and separable nonsmooth aspects is presented. After motivating and formalizing the problems under … Read more
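As a rough illustration of the interior-point idea only (not the specific method of this paper, whose handling of the nonsmooth terms is not reproduced here), inequality-constrained problems are typically replaced by a sequence of log-barrier subproblems,

    \min_{x,s} \; f(x) - \mu \sum_{i=1}^{m} \ln s_i \quad \text{s.t.} \quad g_i(x) + s_i = 0, \; i = 1,\dots,m,

whose solutions trace a central path that approaches a solution of the original problem as the barrier parameter \mu \downarrow 0.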

Forward-Backward Greedy Algorithms for Atomic-Norm Regularization

In many signal processing applications, one aims to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as “atoms” allow us to define “atomic norms” that can be used to construct convex regularizers for the reconstruction problem. Efficient algorithms are available to … Read more
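For readers new to the terminology, the atomic norm induced by an atom set \mathcal{A} is the gauge of its convex hull,

    \|x\|_{\mathcal{A}} = \inf\Big\{ \sum_{a\in\mathcal{A}} c_a \; : \; x = \sum_{a\in\mathcal{A}} c_a a, \; c_a \ge 0 \Big\} = \inf\{ t > 0 : x \in t\,\mathrm{conv}(\mathcal{A}) \},

so that, for instance, taking the unit basis vectors and their negatives as atoms recovers the \ell_1 norm. A typical reconstruction problem then reads \min_x \tfrac{1}{2}\|Ax - b\|_2^2 + \tau \|x\|_{\mathcal{A}} (notation assumed here for illustration, not quoted from the abstract).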

Minimal Points, Variational Principles, and Variable Preferences in Set Optimization

The paper is devoted to variational analysis of set-valued mappings acting from quasimetric spaces into topological spaces with variable ordering structures. Besides the mathematical novelty, our motivation comes from applications to adaptive dynamical models of behavioral sciences. We develop a unified dynamical approach to variational principles in such settings based on the new minimal point … Read more
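For context, a quasimetric q on a set X keeps nonnegativity, the identity axiom, and the triangle inequality, but drops symmetry:

    q(x,y) \ge 0, \qquad q(x,y) = 0 \iff x = y, \qquad q(x,z) \le q(x,y) + q(y,z),

with q(x,y) \ne q(y,x) allowed in general; roughly speaking, in the behavioral-science interpretation such asymmetry can model costs of change that differ depending on the direction of the move.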

Convergence Rates with Inexact Nonexpansive Operators

In this paper, we present a convergence rate analysis for the inexact Krasnosel'skiĭ-Mann iteration built from nonexpansive operators. Our results include two main parts: we first establish global pointwise and ergodic iteration-complexity bounds, and then, under a metric subregularity assumption, we establish local linear convergence for the distance of the iterates to the set of … Read more
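A minimal sketch of the iteration being analyzed, assuming a finite-dimensional Euclidean setting; the names (inexact_km, relax, err) are illustrative and the paper's precise inexactness and summability conditions are not encoded here.

    import numpy as np

    def inexact_km(T, x0, relax=0.5, n_iter=100, err=None):
        # Inexact Krasnosel'skii-Mann iteration:
        #   x_{k+1} = (1 - lam) * x_k + lam * (T(x_k) + e_k)
        x = np.asarray(x0, dtype=float)
        for k in range(n_iter):
            e = err(k) if err is not None else 0.0  # error in evaluating T
            x = (1.0 - relax) * x + relax * (T(x) + e)
        return x

    # example: T is the projection onto the unit ball, a nonexpansive operator
    proj = lambda z: z / max(1.0, np.linalg.norm(z))
    x_fix = inexact_km(proj, np.array([3.0, -4.0]))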

An inertial alternating direction method of multipliers

In the context of convex optimization problems in Hilbert spaces, we induce inertial effects into the classical ADMM numerical scheme and obtain in this way so-called inertial ADMM algorithms, whose convergence properties we investigate in detail. To this aim, we make use of the inertial version of the Douglas-Rachford splitting method for monotone … Read more
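Schematically (notation assumed for illustration, not taken from the abstract), an inertial scheme first extrapolates using the two most recent iterates,

    \tilde{w}_k = w_k + \alpha_k (w_k - w_{k-1}), \qquad \alpha_k \ge 0,

and then applies the usual ADMM (equivalently, Douglas-Rachford) update from \tilde{w}_k instead of w_k; determining the admissible range of the inertial parameters \alpha_k is part of the convergence analysis.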

A note on Fejér-monotone sequences in product spaces and its applications to the dual convergence of augmented Lagrangian methods

In a recent Math. Program. paper, Eckstein and Silva proposed a new error criterion for the approximate solutions of augmented Lagrangian subproblems. Based on a saddle-point formulation of the primal and dual problems, they proved that dual sequences generated by augmented Lagrangians under this error criterion are bounded and that their limit points are dual … Read more
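Recall the property in question: a sequence (x_k) in a Hilbert space is Fejér monotone with respect to a nonempty set C if

    \|x_{k+1} - z\| \le \|x_k - z\| \quad \text{for all } z \in C \text{ and all } k,

which in particular forces the sequence to be bounded; the note concerns how this property behaves in product spaces and what it yields for the dual sequences generated by augmented Lagrangian methods.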

A Generalized Inexact Proximal Point Method for Nonsmooth Functions that Satisfies the Kurdyka-Łojasiewicz Inequality

In this paper, following the ideas presented in Attouch et al. (Math. Program. Ser. A, 137: 91-129, 2013), we present an inexact version of the proximal point method for nonsmooth functions, whose regularization is given by a generalized perturbation term. More precisely, the new perturbation term is defined as a “curved enough” function of the … Read more
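For orientation, the classical exact proximal point step is

    x_{k+1} = \arg\min_x \Big\{ f(x) + \tfrac{1}{2\lambda_k} \|x - x_k\|^2 \Big\},

and the generalized variant discussed here replaces the quadratic term by a perturbation that is only required to be “curved enough” (schematically, f(x) + \lambda_k \varphi(x, x_k)), while the subproblem may also be solved inexactly; the precise conditions on the perturbation and on the inexactness are those of the paper and are not reproduced in this sketch.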

Parallel Algorithms for Big Data Optimization

We propose a decomposition framework for the parallel optimization of the sum of a differentiable function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as … Read more
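A hedged sketch of the Jacobi versus Gauss-Seidel distinction for a block-separable model F(x) = f(x) + \sum_i g_i(x_i), using block proximal-gradient updates with g_i = \lambda\|\cdot\|_1; the function names are illustrative and this is not the paper's actual decomposition framework.

    import numpy as np

    def soft_threshold(v, t):
        # prox of t*||.||_1; plays the role of the block-separable nonsmooth term
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def block_sweep(grad_f, x, blocks, step, lam, scheme="jacobi"):
        # One sweep of block proximal-gradient updates.
        # "jacobi": all blocks use the gradient at the same point (parallelizable);
        # "gauss-seidel": each block sees the blocks already updated in this sweep.
        x_new = x.copy()
        g = grad_f(x)                      # gradient frozen for the Jacobi sweep
        for blk in blocks:
            if scheme == "gauss-seidel":
                g = grad_f(x_new)          # refresh with the latest block values
            x_new[blk] = soft_threshold(x_new[blk] - step * g[blk], step * lam)
        return x_new

    # usage on a LASSO-type objective 0.5*||A x - b||^2 + lam*||x||_1, two blocks
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
    grad_f = lambda x: A.T @ (A @ x - b)
    x = np.zeros(10)
    for _ in range(200):
        x = block_sweep(grad_f, x, [slice(0, 5), slice(5, 10)],
                        step=1.0 / np.linalg.norm(A, 2) ** 2, lam=0.1)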

Dynamic scaling in the Mesh Adaptive Direct Search algorithm for blackbox optimization

Blackbox optimization deals with situations in which the objective function and constraints are typically computed by launching a time-consuming computer simulation. The subject of this work is the Mesh Adaptive Direct Search (MADS) class of algorithms for blackbox optimization. We propose a way to dynamically scale the mesh, which is the discrete spatial structure … Read more
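For reference (standard MADS notation, assumed here rather than quoted from the paper), the mesh at iteration k is the discrete set

    M_k = \{\, x + \Delta_k^m D z \; : \; x \in S_k, \; z \in \mathbb{N}^{n_D} \,\},

where S_k is the set of points evaluated so far, D is a fixed matrix whose n_D columns form a positive spanning set, and \Delta_k^m is the mesh size parameter; dynamically scaling the mesh amounts, roughly, to letting this discretization adapt per variable so that coordinates of very different magnitudes are explored on comparable relative scales.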

Generalized Inexact Proximal Algorithms: Habit's/Routine's Formation with Resistance to Change, following Worthwhile Changes

This paper shows how, in a quasi metric space, an inexact proximal algorithm with a generalized perturbation term appears to be a nice tool for Behavioral Sciences (Psychology, Economics, Management, Game theory,…). More precisely, the new perturbation term represents an index of resistance to change, defined as a “curved enough” function of the quasi distance … Read more