Expected complexity analysis of stochastic direct-search

This work presents the convergence rate analysis of stochastic variants of the broad class of direct-search methods of directional type. It introduces an algorithm designed to optimize differentiable objective functions $f$ whose values can only be computed through a stochastically noisy blackbox. The proposed stochastic directional direct-search (SDDS) algorithm accepts new iterates by imposing a …
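
As a rough illustration of the kind of iteration involved, here is a minimal sketch of a directional direct-search loop with a sufficient-decrease acceptance test on noisy function estimates; the coordinate directions, the forcing term c*alpha^2, and all constants below are assumptions made for the sketch, not the SDDS specification from the paper.

```python
import numpy as np

def sdds_sketch(f_noisy, x0, alpha0=1.0, c=1e-4, gamma=2.0, theta=0.5, max_iter=100):
    """Illustrative stochastic directional direct-search loop (not the paper's SDDS).

    f_noisy(x) returns a noisy estimate of f(x); directions are the +/- coordinate
    vectors; a trial point is accepted only under a sufficient-decrease test.
    """
    x, alpha = np.asarray(x0, dtype=float), alpha0
    n = x.size
    dirs = np.vstack([np.eye(n), -np.eye(n)])      # positive spanning set of directions
    for _ in range(max_iter):
        fx = f_noisy(x)                            # re-estimated each iteration because of noise
        improved = False
        for d in dirs:
            trial = x + alpha * d
            if f_noisy(trial) < fx - c * alpha**2:  # sufficient decrease test
                x, improved = trial, True
                break
        alpha = gamma * alpha if improved else theta * alpha  # expand or shrink the step size
    return x

# toy usage: a noisy quadratic blackbox
rng = np.random.default_rng(0)
noisy_quad = lambda x: float(np.sum(x**2) + 0.01 * rng.standard_normal())
print(sdds_sketch(noisy_quad, x0=[2.0, -1.5]))
```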

Constraint Qualifications for Karush-Kuhn-Tucker Conditions in Constrained Multiobjective Optimization

The notion of a normal cone of a given set is paramount in optimization and variational analysis. In this work, we give a definition of a multiobjective normal cone which is suitable for studying optimality conditions and constraint qualifications for multiobjective optimization problems. A detailed study of the properties of the multiobjective normal cone is …
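
For context, the classical normal cone of a convex set $C \subseteq \mathbb{R}^n$ at a point $\bar{x} \in C$ is

\[
N_C(\bar{x}) = \{ v \in \mathbb{R}^n : \langle v, \, y - \bar{x} \rangle \le 0 \ \text{ for all } y \in C \},
\]

and the multiobjective normal cone studied in this work should be read as a generalization of this object adapted to vector-valued objectives; its precise definition is the paper's own contribution and is not reproduced here.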

On optimality conditions for nonlinear conic programming

Sequential optimality conditions have played a major role in proving stronger global convergence results of numerical algorithms for nonlinear programming. Several extensions have been described in conic contexts, where many open questions have arisen. In this paper, we present new sequential optimality conditions in the context of a general nonlinear conic framework, which explains and …
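
For reference (standard background rather than the new conditions of the paper), a general nonlinear conic program and its exact KKT system can be written as

\[
\min_{x \in \mathbb{R}^n} f(x) \ \ \text{s.t.} \ \ g(x) \in K,
\qquad
\nabla f(\bar{x}) + Dg(\bar{x})^{*} \lambda = 0, \ \ \lambda \in N_K\big(g(\bar{x})\big),
\]

where $K$ is a closed convex cone and $N_K$ denotes its normal cone; sequential optimality conditions relax this exact system to conditions that hold only asymptotically along sequences generated by an algorithm.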

Properties of the delayed weighted gradient method

The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits surprisingly fast and perhaps unexpected convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior …
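
For background (not the delayed weighted gradient method itself, whose steplength recurrence is given in the paper), the problem class and a generic gradient iteration are

\[
\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2} x^{\top} A x - b^{\top} x, \qquad
g_k = A x_k - b, \qquad x_{k+1} = x_k - \alpha_k g_k,
\]

with $A$ symmetric positive definite; the classical exact-linesearch (Cauchy) choice is $\alpha_k = g_k^{\top} g_k / g_k^{\top} A g_k$, and the conjugate gradient method is the usual benchmark on this problem class.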

Computing mixed strategies equilibria in presence of switching costs by the solution of nonconvex QP problems

In this paper we address a game theory problem arising in the context of network security. In traditional game theory problems, given a defender and an attacker, one searches for mixed strategies that minimize a linear payoff functional. In the problem addressed here, an additional quadratic term is added to the minimization problem. …
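
One illustrative way such a quadratic term can enter the formulation (an assumed toy form chosen for concreteness, not necessarily the authors' exact model) is

\[
\min_{x \in \Delta_n} \ c^{\top} x + x^{\top} Q x,
\qquad
\Delta_n = \Big\{ x \in \mathbb{R}^n : x \ge 0, \ \sum_{i=1}^{n} x_i = 1 \Big\},
\]

where the linear term models the expected payoff of the mixed strategy $x$ and $Q$ need not be positive semidefinite, so the resulting quadratic program is in general nonconvex.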

Near-optimal analysis of univariate moment bounds for polynomial optimization

We consider a recent hierarchy of upper approximations proposed by Lasserre (arXiv:1907.09784, 2019) for the minimization of a polynomial f over a compact set K ⊆ ℝⁿ. This hierarchy relies on the push-forward measure of the Lebesgue measure on K by the polynomial f and involves univariate sums of squares of polynomials with growing degrees 2r. …
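
Concretely, writing $\nu$ for the push-forward of the Lebesgue measure on K under f, the order-$r$ bound can be sketched (in one common normalization, not necessarily the paper's exact statement) as

\[
f^{(r)} \;=\; \min_{\sigma} \Big\{ \int t \, \sigma(t) \, d\nu(t) \;:\; \int \sigma(t) \, d\nu(t) = 1, \ \sigma \ \text{a univariate sum of squares of degree at most } 2r \Big\},
\]

which upper bounds the minimum of f over K, since the normalization makes $\sigma(f(x))\,dx$ a probability density on K.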

An Outer-approximation Guided Optimization Approach for Constrained Neural Network Inverse Problems

This paper discusses an outer-approximation guided optimization method for constrained neural network inverse problems with rectified linear units. Constrained neural network inverse problems refer to the optimization problem of finding the best set of input values of a given trained neural network that produces a predefined desired output in the presence of constraints …
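
For orientation, the sketch below sets up the underlying inverse problem and solves it with plain projected gradient descent on the input under box constraints; the toy network, its placeholder weights, and the bounds are assumptions, and this is not the outer-approximation guided method proposed in the paper.

```python
import torch

torch.manual_seed(0)

# toy "trained" ReLU network (placeholder weights stand in for a real model)
net = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)

y_target = torch.tensor([1.0, -0.5])            # predefined desired output
lower, upper = -1.0, 1.0                        # box constraints on the input

x = torch.zeros(4, requires_grad=True)          # decision variable: the network input
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    loss = torch.sum((net(x) - y_target) ** 2)  # drive the output toward the target
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(lower, upper)                  # project the input back onto the box

print(x.detach(), net(x).detach())
```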

On Standard Quadratic Programs with Exact and Inexact Doubly Nonnegative Relaxations

The problem of minimizing a (nonconvex) quadratic form over the unit simplex, referred to as a standard quadratic program, admits an exact convex conic formulation over the computationally intractable cone of completely positive matrices. Replacing the intractable cone in this formulation by the larger but tractable cone of doubly nonnegative matrices, i.e., the cone of …
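
In symbols (standard formulations, included here for concreteness), the standard quadratic program and its exact completely positive reformulation read

\[
\min_{x \in \Delta_n} x^{\top} Q x
\;=\;
\min_{X} \big\{ \langle Q, X \rangle : \langle E, X \rangle = 1, \ X \in \mathcal{CP}_n \big\},
\]

where $\Delta_n$ is the unit simplex, $E$ is the all-ones matrix, and $\mathcal{CP}_n$ is the cone of completely positive matrices; the doubly nonnegative relaxation replaces $\mathcal{CP}_n$ by the cone of matrices that are simultaneously positive semidefinite and entrywise nonnegative, giving a tractable lower bound that is exact for some instances and strictly weaker for others.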

Convergence of Inexact Forward–Backward Algorithms Using the Forward–Backward Envelope

This paper deals with a general framework for inexact forward–backward algorithms aimed at minimizing the sum of an analytic function and a lower semicontinuous, subanalytic, convex term. The framework relies on an implementable inexactness condition for the computation of the proximal operator, and on a linesearch procedure that is possibly performed whenever a variable metric is …
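
For orientation, the exact forward–backward (proximal gradient) iteration that such inexact schemes generalize is sketched below for the model problem of minimizing $f(x) + \lambda \|x\|_1$; the inexact proximal evaluations, variable metrics, and linesearch of the paper are not reproduced, and the fixed step size 1/L is an assumption of the sketch.

```python
import numpy as np

def forward_backward_l1(grad_f, x0, step, lam, max_iter=200):
    """Exact forward-backward (proximal gradient) iteration for f(x) + lam*||x||_1.

    grad_f(x) returns the gradient of the smooth term f; `step` should be at
    most 1/L for an L-Lipschitz gradient to guarantee convergence.
    """
    soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)   # prox of t*||.||_1
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = soft(x - step * grad_f(x), step * lam)                    # forward step, then backward (prox) step
    return x

# toy usage: l1-regularized least squares
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
print(forward_backward_l1(grad, np.zeros(10), 1.0 / L, lam=0.1))
```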

Zero Order Stochastic Weakly Convex Composite Optimization

In this paper we consider stochastic weakly convex composite problems without assuming access to a stochastic subgradient oracle. We present a derivative-free algorithm that uses a two-point approximation to compute a gradient estimate of the smoothed function. We prove convergence at a rate similar to that of state-of-the-art methods, however with …
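
A two-point gradient estimate of a smoothed function typically has the shape sketched below; the Gaussian sampling direction, the smoothing radius mu, and the reuse of the same random sample xi in both evaluations are assumptions of the sketch and may differ from the paper's construction.

```python
import numpy as np

def two_point_gradient_estimate(F, x, xi, mu=1e-3, rng=None):
    """Two-point zero-order estimate of the gradient of a Gaussian-smoothed function.

    F(x, xi) is a single noisy function evaluation at x with random sample xi;
    both evaluations reuse the same xi so the noise partially cancels.
    """
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)                       # Gaussian direction
    return (F(x + mu * u, xi) - F(x - mu * u, xi)) / (2.0 * mu) * u

# toy usage: noisy quadratic, xi is additive noise on the function value
F = lambda x, xi: float(np.sum(x**2)) + xi
x = np.array([1.0, -2.0])
rng = np.random.default_rng(0)
est = np.mean([two_point_gradient_estimate(F, x, rng.standard_normal() * 0.01, rng=rng)
               for _ in range(200)], axis=0)
print(est, 2 * x)   # averaged estimate vs. the true gradient 2x
```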