An analysis of the superiorization method via the principle of concentration of measure

The superiorization methodology is intended to work with input data of constrained minimization problems, i.e., a target function and a set of constraints. However, it is based on a way of thinking antipodal to the one that leads to constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handling constraints, it adapts feasibility-seeking algorithms to reduce … Read more
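
A minimal Python sketch of the superiorization idea described above: interlace bounded, summable perturbations that do not increase the target function with sweeps of a feasibility-seeking operator. The names (project, phi_grad), the step-size schedule, and the toy problem are illustrative assumptions, not the algorithm analyzed in the paper.

import numpy as np

def superiorize(x0, project, phi_grad, n_iter=100, a=0.99):
    # Interlace target-reducing perturbations with feasibility-seeking steps.
    x = x0.copy()
    for k in range(n_iter):
        g = phi_grad(x)
        g_norm = np.linalg.norm(g)
        if g_norm > 0:
            # nonascending direction for the target, with summable sizes a**k
            x = x - (a ** k) * g / g_norm
        x = project(x)  # one sweep of the feasibility-seeking algorithm
    return x

# Toy usage: feasibility set = unit ball, target phi(x) = ||x - c||^2.
c = np.array([2.0, 0.0])
project = lambda x: x / max(1.0, np.linalg.norm(x))
phi_grad = lambda x: 2.0 * (x - c)
x_sup = superiorize(np.zeros(2), project, phi_grad)  # ends near (1, 0)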

On Sum of Squares Representation of Convex Forms and Generalized Cauchy-Schwarz Inequalities

A convex form of degree larger than one is always nonnegative since it vanishes together with its gradient at the origin. In 2007, Parrilo asked whether convex forms are always sums of squares. A few years later, Blekherman answered the question in the negative by showing through volume arguments that for a high enough number of … Read more
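
The nonnegativity claim in the first sentence is just the first-order convexity inequality applied at the origin; a one-line justification (standard material, not specific to the paper): a form $f$ of degree $d > 1$ satisfies $f(0) = 0$ and $\nabla f(0) = 0$, so convexity gives

\[ f(x) \;\ge\; f(0) + \langle \nabla f(0),\, x \rangle \;=\; 0 \qquad \text{for all } x. \]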

Substantiation of the Backpropagation Technique via the Hamilton-Pontryagin Formalism for Training Nonconvex Nonsmooth Neural Networks

The paper observes the similarity between the stochastic optimal control of discrete dynamical systems and the training of multilayer neural networks. It focuses on contemporary deep networks with nonconvex nonsmooth loss and activation functions. In the paper, machine learning problems are treated as nonconvex nonsmooth stochastic optimization problems. As a model of nonsmooth nonconvex dependences, … Read more
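
For orientation, the control-training analogy can be condensed into the discrete-time adjoint (costate) recursion, which in the smooth case coincides with backpropagation; the notation below is generic, not necessarily the paper's. With layer dynamics $x_{t+1} = f_t(x_t, u_t)$ (states $x_t$ = activations, controls $u_t$ = weights) and terminal loss $J(x_T)$:

\[ \lambda_T = \frac{\partial J}{\partial x_T}, \qquad \lambda_t = \Bigl(\frac{\partial f_t}{\partial x_t}\Bigr)^{\!\top} \lambda_{t+1}, \qquad \frac{\partial J}{\partial u_t} = \Bigl(\frac{\partial f_t}{\partial u_t}\Bigr)^{\!\top} \lambda_{t+1}. \]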

Probabilistic guarantees in Robust Optimization

We develop a general methodology to derive probabilistic guarantees for solutions of robust optimization problems. Our analysis applies broadly to any convex compact uncertainty set and to any constraint affected by uncertainty in a concave manner, under minimal assumptions on the underlying stochastic process. Namely, we assume that the coordinates of the noise vector are … Read more
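
Schematically, the object of study is the robust constraint together with the probability bound one wants to certify for it; the symbols here are generic placeholders, not the paper's notation:

\[ g(x, u) \le 0 \quad \forall\, u \in \mathcal{U} \qquad \Longrightarrow \qquad \mathbb{P}\bigl( g(x, \xi) > 0 \bigr) \le \varepsilon, \]

where $\mathcal{U}$ is a convex compact uncertainty set, $g(x, \cdot)$ is concave in the uncertainty, $\xi$ is the random noise vector, and $\varepsilon$ is the certified violation probability.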

Generalized Gradients in Problems of Dynamic Optimization, Optimal Control, and Machine Learning

In this work, nonconvex nonsmooth problems of dynamic optimization, optimal control in discrete time (including feedback control), and machine learning are considered from a common point of view. An analogy is observed between the tasks of controlling discrete dynamic systems and training multilayer neural networks with nonsmooth target functions and connections. Methods for calculating generalized gradients … Read more
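
A toy sketch of how the same backward (adjoint) recursion yields an element of the generalized gradient once a subgradient selection is fixed for each nonsmooth activation. The two-layer network, the ReLU selection, and all names are illustrative assumptions, not the paper's constructions.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_subgrad(z):
    # one fixed selection from the Clarke subdifferential of ReLU
    return (z > 0).astype(float)

def loss_and_generalized_grad(W1, W2, x, y):
    # Forward pass of a toy two-layer network with squared loss.
    z1 = W1 @ x
    h = relu(z1)
    r = W2 @ h - y
    loss = 0.5 * float(r @ r)
    # Backward (costate) recursion, propagating the subgradient selection.
    lam2 = r
    lam1 = (W2.T @ lam2) * relu_subgrad(z1)
    return loss, np.outer(lam1, x), np.outer(lam2, h)  # grads w.r.t. W1, W2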

An Average Curvature Accelerated Composite Gradient Method for Nonconvex Smooth Composite Optimization Problems

This paper presents an accelerated composite gradient (ACG) variant, referred to as the AC-ACG method, for solving nonconvex smooth composite minimization problems. As opposed to well-known ACG variants that are based either on a known Lipschitz gradient constant or on a sequence of maximum observed curvatures, the current one is based on a sequence of average … Read more
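
A schematic of the average-observed-curvature idea in a plain gradient method, assuming a smooth objective f with gradient grad; the actual AC-ACG is an accelerated composite scheme, so this sketch only illustrates how a running average of observed curvatures can drive the stepsize.

import numpy as np

def avg_curvature_gradient(f, grad, x0, n_iter=200):
    x = x0.copy()
    M_avg = 1.0  # initial curvature guess
    for k in range(1, n_iter + 1):
        g = grad(x)
        x_new = x - g / M_avg  # stepsize 1 / (average observed curvature)
        d = x_new - x
        denom = float(d @ d)
        if denom > 0.0:
            # curvature observed along the step just taken
            C = 2.0 * (f(x_new) - f(x) - float(g @ d)) / denom
            M_avg += (max(C, 1e-12) - M_avg) / (k + 1)  # running average
        x = x_new
    return x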

Derivative-Free Superiorization: Principle and Algorithm

The superiorization methodology is intended to work with input data of constrained minimization problems, that is, a target function and a set of constraints. However, it is based on a way of thinking antipodal to what leads to constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handling constraints, it adapts feasibility-seeking algorithms to … Read more
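
A minimal derivative-free twist on the perturbation step, assuming only zeroth-order access to the target phi: probe random unit directions and keep one along which the target does not increase. This is an illustrative sketch, not the DFS algorithm of the paper.

import numpy as np

def df_superiorize(x0, project, phi, n_iter=100, a=0.99, n_dirs=8, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(n_iter):
        step = a ** k  # summable perturbation sizes
        for _ in range(n_dirs):
            d = rng.standard_normal(x.shape)
            d /= np.linalg.norm(d)
            if phi(x + step * d) <= phi(x):  # nonascending direction found
                x = x + step * d
                break
        x = project(x)  # feasibility-seeking sweep
    return x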

Domain-Driven Solver (DDS): a MATLAB-based Software Package for Convex Optimization Problems in Domain-Driven Form

Domain-Driven Solver (DDS) is a MATLAB-based software package for convex optimization problems in Domain-Driven form [11]. The current version of DDS accepts every combination of the following function/set constraints: (1) symmetric cones (LP, SOCP, and SDP); (2) quadratic constraints; (3) direct sums of an arbitrary collection of 2-dimensional convex sets defined as the epigraphs of … Read more

On the intrinsic core of convex cones in real linear spaces

Convex cones play an important role in nonlinear analysis and optimization theory. In particular, specific normal cones and tangent cones are known to be convex cones, and it is a crucial fact that they are useful geometric objects for describing optimality conditions. As important applications (especially in the fields of optimal control with PDE constraints, … Read more
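
For reference, the intrinsic core (also called the relative algebraic interior) of a convex set $C$ in a real linear space is standardly defined as the core of $C$ relative to its affine hull; quoted here for context rather than from the paper:

\[ \operatorname{icr} C \;=\; \bigl\{\, x \in C \;:\; \forall\, y \in \operatorname{aff} C \;\; \exists\, \lambda > 0 \ \text{with}\ x + \lambda\,(y - x) \in C \,\bigr\}. \]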

A New Sequential Updating Scheme of the Lagrange Multiplier for Multi-Block Linearly Constrained Separable Convex Optimization with Relaxed Step Sizes

In various applications, such as signal/image processing, data mining, and statistical learning, multi-block linearly constrained separable convex optimization is frequently used, where the objective function is the sum of multiple individual convex functions and the major constraints are linear. A classical method for solving such optimization problems could be the alternating … Read more
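
The problem class in question can be written as the linearly constrained separable program below (standard notation, not necessarily the paper's); the Lagrange multiplier attached to the coupling constraint is the object whose updating scheme the paper revisits:

\[ \min_{x_1, \dots, x_m} \; \sum_{i=1}^{m} f_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{m} A_i x_i = b, \]

with each $f_i$ convex.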