Coordinate shadows of semi-definite and Euclidean distance matrices

We consider the projections of the semi-definite and Euclidean distance cones onto a subset of the matrix entries. These two sets are precisely the input data defining feasible semi-definite and Euclidean distance completion problems. We characterize when these sets are closed, and use the boundary structure of these two sets to elucidate the Krislock-Wolkowicz facial reduction algorithm. …
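For orientation, the coordinate shadow of the semi-definite cone can be written as below; the index set $\Omega$ and projection $P_{\Omega}$ are our own notation for illustration and may differ from the paper's.

```latex
% Coordinate shadow (projection onto the entries indexed by \Omega) of the
% semi-definite cone: exactly the partial matrices admitting a PSD completion.
\[
  P_{\Omega}\bigl(\mathcal{S}^{n}_{+}\bigr)
  \;=\;
  \bigl\{\, y \in \mathbb{R}^{\Omega}
    \;:\;
    \exists\, X \succeq 0 \ \text{with}\ X_{ij} = y_{ij}
    \ \text{for all}\ (i,j) \in \Omega
  \,\bigr\},
\]
% and analogously with the Euclidean distance cone in place of \mathcal{S}^{n}_{+}.
```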

Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, …
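As a concrete illustration of a subgradient projection step for sets given as sublevel sets C_i = {x : f_i(x) <= 0}, here is a minimal Python sketch; the function names, the cyclic sweep, and the example sets are our own placeholders, not the specific method analyzed in the paper.

```python
import numpy as np

def subgradient_projection_sweep(x, constraints):
    """One cyclic sweep of subgradient projections for the convex feasibility
    problem with C_i = {x : f_i(x) <= 0}.

    `constraints` is a list of pairs (f_i, subgrad_i), where f_i(x) returns the
    constraint value and subgrad_i(x) returns a subgradient of f_i at x.
    """
    for f, subgrad in constraints:
        val = f(x)
        if val > 0:                              # x violates C_i: take a relaxed step
            g = subgrad(x)
            x = x - (val / np.dot(g, g)) * g     # subgradient projection onto {f_i <= 0}
    return x

# Example: intersection of the half-space {x1 + x2 <= 1} and the unit ball.
constraints = [
    (lambda x: x[0] + x[1] - 1.0, lambda x: np.array([1.0, 1.0])),
    (lambda x: np.dot(x, x) - 1.0, lambda x: 2.0 * x),
]
x = np.array([2.0, 2.0])
for _ in range(50):
    x = subgradient_projection_sweep(x, constraints)
```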

Error Bounds and Metric Subregularity

Necessary and sufficient criteria for metric subregularity (or calmness) of set-valued mappings between general metric or Banach spaces are treated in the framework of the theory of error bounds for a special family of extended real-valued functions of two variables. A classification scheme for the general error bound and metric subregularity criteria is presented. The …
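To fix ideas, recall the standard definition the title refers to, stated here in the common metric-space form; the paper's exact setting and constants may differ.

```latex
% A set-valued mapping F : X \rightrightarrows Y is metrically subregular at
% (\bar{x},\bar{y}) \in \operatorname{gph} F if there exist \kappa \ge 0 and a
% neighbourhood U of \bar{x} such that
\[
  d\bigl(x, F^{-1}(\bar{y})\bigr)
  \;\le\;
  \kappa\, d\bigl(\bar{y}, F(x)\bigr)
  \qquad \text{for all } x \in U,
\]
% i.e. the residual x \mapsto d(\bar{y}, F(x)) is a local error bound for the
% distance to the solution set F^{-1}(\bar{y}).
```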

An Interior-Point Method for Nonlinear Optimization Problems with Locatable and Separable Nonsmoothness

Many real-world optimization models involve nonconvex and nonlinear as well as nonsmooth functions, leading to very hard classes of optimization problems. In this article, a new interior-point method is presented for the special but practically relevant class of optimization problems with locatable and separable nonsmooth aspects. After motivating and formalizing the problems under …
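As an illustration of what "locatable and separable" nonsmoothness can look like, consider the following toy model; it is our own example, not one taken from the article.

```latex
% Toy model: the objective is smooth except for one absolute-value term; the
% kink is locatable (it occurs exactly where x_2 = 1) and separable (the
% nonsmooth term involves only the single variable x_2).
\[
  \min_{x \in \mathbb{R}^{2}} \quad x_1^{2} + x_2^{2} + |x_2 - 1|
  \qquad \text{s.t.} \quad x_1 + x_2 \ge 0 .
\]
```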

An accelerated HPE-type algorithm for a class of composite convex-concave saddle-point problems

This article proposes a new algorithm for solving a class of composite convex-concave saddle-point problems. The new algorithm is a special instance of the hybrid proximal extragradient framework in which a variant of Nesterov's accelerated method is used to approximately solve the prox subproblems. One of the advantages of the new method is that it works for …
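One common instance of a composite convex-concave saddle-point problem, shown here only for orientation (the class treated in the paper may be broader), is the bilinearly coupled form below.

```latex
% Bilinearly coupled composite saddle-point problem: f and g proper closed
% convex, A a linear map; the coupling term is convex in x and concave in y.
\[
  \min_{x}\; \max_{y}\;\; f(x) + \langle A x,\, y \rangle - g(y),
\]
% with associated monotone inclusion (to which HPE-type methods are applied)
\[
  0 \;\in\; T(x, y) := \bigl( \partial f(x) + A^{*} y,\;\; \partial g(y) - A x \bigr).
\]
```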

Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping

This work investigates the properties of stochastic quasi-Fejér monotone sequences in Hilbert spaces and emphasizes their pertinence in the study of the convergence of block-coordinate fixed point methods. The iterative methods under investigation feature random sweeping rules to select the blocks of variables that are activated over the course of the iterations and allow for …
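A minimal sketch of a block-coordinate fixed-point update with random sweeping, where each block is activated independently with some probability; the operator, activation probability, relaxation parameter, and block partition below are our own illustrative choices, not the general framework of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_fixed_point_step(x, blocks, T, lam=0.5, p=0.5):
    """One relaxed block-coordinate fixed-point iteration with random sweeping:
    each block is activated independently with probability p, and only the
    activated blocks are moved toward the corresponding components of T(x).
    """
    Tx = T(x)                        # full operator evaluation (for simplicity)
    x_new = x.copy()
    for idx in blocks:
        if rng.random() < p:         # random sweeping: activate this block or not
            x_new[idx] = x[idx] + lam * (Tx[idx] - x[idx])
    return x_new

# Example: T is a linear contraction (in particular nonexpansive); the blocks
# are the two coordinate pairs of a 4-dimensional variable.
A = np.array([[0.5, 0.2, 0.0, 0.0],
              [0.2, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.5, 0.2],
              [0.0, 0.0, 0.2, 0.5]])
T = lambda x: A @ x
blocks = [slice(0, 2), slice(2, 4)]
x = rng.standard_normal(4)
for _ in range(200):
    x = block_fixed_point_step(x, blocks, T, lam=0.6, p=0.7)
```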

Forward-Backward Greedy Algorithms for Atomic-Norm Regularization

In many signal processing applications, one aims to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as “atoms” allow us to define “atomic norms” that can be used to construct convex regularizers for the reconstruction problem. Efficient algorithms are available to …
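For reference, the atomic norm induced by a set of atoms $\mathcal{A}$ is commonly defined as the gauge of its convex hull; this is the standard formulation, and the paper may impose additional assumptions on $\mathcal{A}$.

```latex
% Atomic norm: the gauge of the convex hull of the atomic set \mathcal{A}.
\[
  \|x\|_{\mathcal{A}}
  \;=\;
  \inf\bigl\{\, t > 0 : x \in t \cdot \operatorname{conv}(\mathcal{A}) \,\bigr\}
  \;=\;
  \inf\Bigl\{\, \textstyle\sum_{a} c_a : x = \sum_{a} c_a\, a,\ c_a \ge 0 \,\Bigr\}.
\]
% Examples: \mathcal{A} = \{\pm e_i\} gives the \ell_1 norm; unit-norm rank-one
% matrices give the nuclear norm.
```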

Minimal Points, Variational Principles, and Variable Preferences in Set Optimization

The paper is devoted to variational analysis of set-valued mappings acting from quasimetric spaces into topological spaces with variable ordering structures. Besides the mathematical novelty, our motivation comes from applications to adaptive dynamical models of behavioral sciences. We develop a unified dynamical approach to variational principles in such settings based on the new minimal point …
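For readers unfamiliar with the setting, a quasimetric differs from a metric essentially in that symmetry is not required; the precise axioms used in the paper may include further conditions.

```latex
% Quasimetric on a set X: a function q : X \times X \to [0,\infty) with
\[
  q(x, x) = 0
  \qquad \text{and} \qquad
  q(x, z) \;\le\; q(x, y) + q(y, z)
  \quad \text{for all } x, y, z \in X,
\]
% while symmetry q(x, y) = q(y, x) is not assumed.
```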

Convergence Rates with Inexact Nonexpansive Operators

In this paper, we present a convergence rate analysis for the inexact Krasnosel'skiĭ-Mann iteration built from nonexpansive operators. Our results include two main parts: we first establish global pointwise and ergodic iteration-complexity bounds, and then, under a metric subregularity assumption, we establish local linear convergence for the distance of the iterates to the set of …
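The iteration in question, written with an explicit error term; the names of the relaxation parameters and error sequence are generic.

```latex
% Inexact Krasnosel'ski\u{\i}-Mann iteration for a nonexpansive operator T,
% with relaxation parameters \lambda_k \in (0,1) and error terms e_k:
\[
  x_{k+1} \;=\; (1 - \lambda_k)\, x_k + \lambda_k \bigl( T x_k + e_k \bigr),
\]
% the exact scheme being recovered when e_k \equiv 0.
```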

An inertial alternating direction method of multipliers

In the context of convex optimization problems in Hilbert spaces, we induce inertial effects into the classical ADMM numerical scheme and obtain in this way so-called inertial ADMM algorithms, whose convergence properties we investigate in detail. To this end we make use of the inertial version of the Douglas-Rachford splitting method for monotone …
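Schematically, for a two-block problem min_{x,z} f(x) + g(z) subject to Ax + Bz = b, an inertial ADMM augments the classical updates with an extrapolation step; the template below is only a generic illustration, not necessarily the scheme derived in the paper.

```latex
% Generic inertial ADMM template with penalty parameter \rho > 0 and inertial
% parameter \alpha_k \ge 0 extrapolating the (z, y) iterates:
\[
  \hat{z}^{k} = z^{k} + \alpha_k (z^{k} - z^{k-1}),
  \qquad
  \hat{y}^{k} = y^{k} + \alpha_k (y^{k} - y^{k-1}),
\]
\[
  x^{k+1} \in \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\bigl\| A x + B \hat{z}^{k} - b + \hat{y}^{k}/\rho \bigr\|^{2},
\]
\[
  z^{k+1} \in \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\bigl\| A x^{k+1} + B z - b + \hat{y}^{k}/\rho \bigr\|^{2},
\]
\[
  y^{k+1} = \hat{y}^{k} + \rho \bigl( A x^{k+1} + B z^{k+1} - b \bigr),
\]
% which reduces to classical ADMM (in scaled dual form) when \alpha_k = 0.
```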