The perturbation analysis of nonconvex low-rank matrix robust recovery

In this paper, we propose a completely perturbed nonconvex Schatten $p$-minimization to address a model of completely perturbed low-rank matrix recovery. Based on the restricted isometry property, the paper generalizes the investigation to a complete perturbation model that considers not only noise but also perturbation, and gives the restricted isometry property condition that guarantees …
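As a rough sketch of the usual completely perturbed setup (the notation here is assumed, not taken from the paper): one observes $b = \mathcal{A}(X) + e$ but only has access to a perturbed operator $\hat{\mathcal{A}} = \mathcal{A} + E$, and recovers the low-rank matrix $X$ via the nonconvex Schatten $p$-minimization
\[
\min_{X} \; \|X\|_{S_p}^p = \sum_i \sigma_i(X)^p \quad \text{subject to} \quad \|\hat{\mathcal{A}}(X) - b\|_2 \le \epsilon, \qquad 0 < p \le 1,
\]
where the tolerance $\epsilon$ absorbs both the measurement noise $e$ and the operator perturbation $E$.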

An Outer-approximation Guided Optimization Approach for Constrained Neural Network Inverse Problems

This paper discusses an outer-approximation guided optimization method for constrained neural network inverse problems with rectified linear units. The constrained neural network inverse problem refers to an optimization problem that seeks the best set of input values for a given trained neural network so as to produce a predefined desired output in the presence of constraints …
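A minimal formulation of such an inverse problem, in assumed notation: given a trained ReLU network $N$ with fixed weights, a desired output $y^{\mathrm{des}}$, and a feasible input set $\mathcal{X}$, solve
\[
\min_{x \in \mathcal{X}} \; \big\| N(x) - y^{\mathrm{des}} \big\|^2 ,
\]
where the piecewise-linear structure induced by the rectified linear units is what makes outer-approximation techniques applicable.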

Convergence of Inexact Forward–Backward Algorithms Using the Forward–Backward Envelope

This paper deals with a general framework for inexact forward–backward algorithms aimed at minimizing the sum of an analytic function and a lower semicontinuous, subanalytic, convex term. The framework relies on an implementable inexactness condition for the computation of the proximal operator, and on a linesearch procedure that is possibly performed whenever a variable metric is …
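For reference, the forward–backward envelope of $F = f + g$ (with $f$ smooth and $g$ convex, lower semicontinuous) is commonly defined, for a step size $\gamma > 0$, as
\[
F_\gamma(x) = \min_{z} \Big\{ f(x) + \langle \nabla f(x), z - x \rangle + g(z) + \tfrac{1}{2\gamma} \|z - x\|^2 \Big\},
\]
a real-valued surrogate that shares minimizers with $F$ under suitable assumptions and serves as a tool for analyzing the inexact iterations.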

Zero Order Stochastic Weakly Convex Composite Optimization

In this paper we consider stochastic weakly convex composite problems, but without assuming access to a stochastic subgradient oracle. We present a derivative-free algorithm that uses a two-point approximation to compute a gradient estimate of the smoothed function. We prove convergence at a rate similar to that of state-of-the-art methods, however with …
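A standard instance of such a two-point estimator, assuming Gaussian smoothing (an assumption here; the paper may use a different scheme): with smoothing radius $\mu > 0$ and direction $u \sim \mathcal{N}(0, I_n)$,
\[
g_\mu(x; \xi, u) = \frac{F(x + \mu u; \xi) - F(x; \xi)}{\mu}\, u,
\]
which, under standard assumptions, is an unbiased estimate of $\nabla f_\mu(x)$, the gradient of the Gaussian-smoothed function $f_\mu(x) = \mathbb{E}_u\big[ f(x + \mu u) \big]$.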

A bundle method for nonsmooth DC programming with application to chance-constrained problems

This work considers nonsmooth and nonconvex optimization problems whose objective and constraint functions are defined by difference-of-convex (DC) functions. We consider an infeasible bundle method based on so-called improvement functions to compute critical points for problems of this class. Our algorithm neither employs penalization techniques nor solves subproblems with linearized constraints. The approach, which …
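One common form of the improvement function for a constrained problem $\min\{ f(x) : c(x) \le 0 \}$, given a current iterate $x_k$, is
\[
H_{x_k}(y) = \max\{\, f(y) - f(x_k),\; c(y) \,\},
\]
so that decreasing $H_{x_k}$ simultaneously targets objective decrease and feasibility; this is the standard device in improvement-function bundle methods, though the exact variant used in the paper may differ.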

Primal Space Necessary Characterizations of Transversality Properties

This paper continues the study of general nonlinear transversality properties of collections of sets and focuses on primal space necessary (in some cases also sufficient) characterizations of the properties. We formulate geometric, metric and slope characterizations, particularly in the convex setting. The Hölder case is given special attention. Quantitative relations between the nonlinear transversality …
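As one example of the kind of metric characterization involved (stated here in the linear case; the paper treats general nonlinear and Hölder variants): two sets $A$ and $B$ are subtransversal at $\bar{x} \in A \cap B$ if there exist $\kappa, \delta > 0$ such that
\[
d(x, A \cap B) \le \kappa \big( d(x, A) + d(x, B) \big) \quad \text{for all } x \in B_\delta(\bar{x}).
\]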

Proximal splitting algorithms: Relax them all!

Convex optimization problems, whose solutions live in very high dimensional spaces, have become ubiquitous. To solve them, proximal splitting algorithms are particularly well suited: they consist of simple operations that handle the terms in the objective function separately. We present several existing proximal splitting algorithms and we derive new ones, within a unified framework, which consists …
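The unifying mechanism alluded to is, plausibly, relaxation of a fixed-point iteration: if the algorithm is written as $x_{k+1} = T x_k$ for some averaged operator $T$ (for instance $T = \mathrm{prox}_{\gamma g} \circ (\mathrm{Id} - \gamma \nabla f)$ in the forward–backward case), the relaxed variant is
\[
x_{k+1} = x_k + \rho_k \,\big( T x_k - x_k \big),
\]
with relaxation parameters $\rho_k$ in a range dictated by the averagedness of $T$; this sketch uses standard notation, not necessarily the paper's.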

A Regularized Smoothing Method for Fully Parameterized Convex Problems with Applications to Convex and Nonconvex Two-Stage Stochastic Programming

We present an approach to regularize and approximate solution mappings of parametric convex optimization problems that combines interior penalty (log-barrier) solutions with Tikhonov regularization. Because the regularized mappings are single-valued and smooth under reasonable conditions, they can be used to build a computationally practical smoothing for the associated optimal value function. The value function in …
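A hedged sketch of the combination, in assumed notation: for a parametric problem $\min_x \{ f(x, p) : g_i(x, p) \le 0,\; i = 1, \dots, m \}$, the regularized interior-penalty solution mapping would take the form
\[
\bar{x}_{\mu, \varepsilon}(p) = \arg\min_{x} \Big\{ f(x, p) - \mu \sum_{i=1}^m \ln\big( -g_i(x, p) \big) + \tfrac{\varepsilon}{2} \|x\|^2 \Big\},
\]
with barrier parameter $\mu > 0$ and Tikhonov parameter $\varepsilon > 0$; single-valuedness comes from the strong convexity contributed by the Tikhonov term.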

Nearly optimal first-order methods for convex optimization under gradient norm measure: An adaptive regularization approach

In the development of first-order methods for smooth (resp., composite) convex optimization problems, the gradient (resp., gradient mapping) norm is a fundamental optimality measure for which a regularization technique of first-order methods is known to be nearly optimal. In this paper, we report an adaptive regularization approach attaining this iteration complexity without …
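In its standard form, the regularization technique referred to runs a fast gradient method on
\[
f_\sigma(x) = f(x) + \tfrac{\sigma}{2} \| x - x_0 \|^2
\]
for a suitably chosen $\sigma > 0$, which yields a small gradient norm in a near-optimal number of iterations; the adaptive aspect of the paper presumably concerns choosing $\sigma$ without prior knowledge of quantities such as the distance to a solution.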

The Fermat Rule for Set Optimization Problems with Lipschitzian Set-Valued Mappings

In this paper, we consider set optimization problems with respect to the set approach. Specifically, we deal with the lower less and the upper less set relations. First, we derive properties of convexity and Lipschitzianity of suitable scalarizing functionals, under the same assumption on the set-valued objective mapping. We then obtain upper estimates of the …
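For reference, the two set relations in question are usually defined, for nonempty sets $A, B$ and an ordering cone $C$, by
\[
A \preceq^{l} B \;\Longleftrightarrow\; B \subseteq A + C, \qquad A \preceq^{u} B \;\Longleftrightarrow\; A \subseteq B - C,
\]
the lower less and upper less relations of the set approach.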