Properties of the delayed weighted gradient method

The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits surprisingly, and perhaps unexpectedly, fast convergence behavior, competing favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior …
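As a rough illustration of the "delayed" ingredient that gradient-type methods of this family build on, the Python sketch below runs steepest descent on a small quadratic while reusing the previous gradient's exact (Cauchy) steplength. The matrix, right-hand side, and iteration count are hypothetical, and this is a simplified stand-in rather than the exact update of [13].

```python
import numpy as np

# Hypothetical SPD quadratic: f(x) = 0.5 * x^T A x - b^T x
A = np.diag([1.0, 3.0, 10.0])
b = np.array([1.0, 1.0, 1.0])

x = np.zeros(3)
g_old = None
for _ in range(100):
    g = A @ x - b                                         # gradient of f at x
    if g_old is None:
        alpha = (g @ g) / (g @ (A @ g))                   # exact (Cauchy) steplength
    else:
        alpha = (g_old @ g_old) / (g_old @ (A @ g_old))   # one-step-delayed steplength
    g_old = g
    x = x - alpha * g
```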

Computing mixed strategies equilibria in presence of switching costs by the solution of nonconvex QP problems

In this paper we address a game theory problem arising in the context of network security. In traditional game theory problems, given a defender and an attacker, one searches for mixed strategies that minimize a linear payoff functional. In the problem addressed in this paper, an additional quadratic term is added to the minimization problem. …
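As a minimal sketch of the kind of problem described, the following finds a stationary point of a quadratically perturbed payoff over the simplex of mixed strategies using SciPy's SLSQP solver. The payoff vector c and switching-cost matrix Q are hypothetical, and a local solver like this only certifies stationarity; the paper is concerned with globally solving the nonconvex QP.

```python
import numpy as np
from scipy.optimize import minimize

c = np.array([0.3, 0.5, 0.2])              # hypothetical linear payoff coefficients
Q = np.array([[0.0, 0.1, 0.2],
              [0.1, 0.0, 0.1],
              [0.2, 0.1, 0.0]])            # hypothetical switching-cost matrix (nonconvex term)

obj = lambda x: c @ x + x @ Q @ x          # linear payoff plus quadratic switching cost
x0 = np.full(3, 1.0 / 3.0)                 # start from the uniform mixed strategy
res = minimize(obj, x0, method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
print(res.x)                               # a locally optimal mixed strategy
```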

Near-optimal analysis of univariate moment bounds for polynomial optimization

We consider a recent hierarchy of upper approximations proposed by Lasserre (arXiv:1907.097784, 2019) for the minimization of a polynomial f over a compact set K ⊆ ℝⁿ. This hierarchy relies on the push-forward measure of the Lebesgue measure on K by the polynomial f and involves univariate sums of squares of polynomials with growing degrees 2r. …
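Assuming the level-r bound reduces, via the push-forward measure, to a smallest generalized eigenvalue of a pair of univariate Hankel moment matrices (as in Lasserre's spectral reformulation), a numerical sketch might look as follows; the polynomial f, the set K = [-1, 1], and the level r below are all hypothetical.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.integrate import quad

f = lambda x: x**2 - x      # hypothetical polynomial to minimize on K = [-1, 1]
r = 4                       # hypothetical level of the hierarchy

# Moments of the push-forward of the Lebesgue measure on K by f
m = [quad(lambda x: f(x)**k, -1.0, 1.0)[0] for k in range(2 * r + 2)]

M0 = np.array([[m[i + j]     for j in range(r + 1)] for i in range(r + 1)])
M1 = np.array([[m[i + j + 1] for j in range(r + 1)] for i in range(r + 1)])

# Smallest generalized eigenvalue of (M1, M0) gives the upper bound at level r
ub = eigh(M1, M0, eigvals_only=True)[0]
print(ub)   # should upper-approximate min f on K (= -0.25 for this f)
```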

An Outer-approximation Guided Optimization Approach for Constrained Neural Network Inverse Problems

This paper discusses an outer-approximation guided optimization method for constrained neural network inverse problems with rectified linear units. A constrained neural network inverse problem refers to an optimization problem that finds the best set of input values for a given trained neural network so as to produce a predefined desired output in the presence of constraints …
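The paper's outer-approximation scheme is not reproduced here; a common baseline for the same inverse problem is projected gradient descent on the network input. The sketch below assumes a hypothetical one-hidden-layer ReLU network with random "trained" weights, a hypothetical desired output y_star, and box constraints on the input.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = 0.3 * rng.standard_normal((8, 3)), 0.1 * rng.standard_normal(8)  # hypothetical weights
W2, b2 = 0.3 * rng.standard_normal((2, 8)), 0.1 * rng.standard_normal(2)
y_star = np.array([1.0, -0.5])    # hypothetical desired output
lo, hi = -1.0, 1.0                # hypothetical box constraints on the input

x = np.zeros(3)
for _ in range(1000):
    h = W1 @ x + b1
    a = np.maximum(h, 0.0)                               # ReLU activation
    y = W2 @ a + b2
    # gradient of 0.5 * ||y - y_star||^2 w.r.t. x, by the chain rule
    g = W1.T @ ((W2.T @ (y - y_star)) * (h > 0))
    x = np.clip(x - 0.02 * g, lo, hi)                    # projected gradient step
```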

Convergence of Inexact Forward–Backward Algorithms Using the Forward–Backward Envelope

This paper deals with a general framework for inexact forward–backward algorithms aimed at minimizing the sum of an analytic function and a lower semicontinuous, subanalytic, convex term. This framework relies on an implementable inexactness condition for the computation of the proximal operator, and on a linesearch procedure that is possibly performed whenever a variable metric is …
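For orientation, the exact (non-inexact) forward–backward iteration for a smooth least-squares term plus an ℓ1 term looks as follows; here the proximal operator is available in closed form (soft-thresholding), whereas the paper's framework covers the case where it can only be computed approximately. All problem data below are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    """Exact proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # hypothetical data
b = np.array([1.0, -1.0])
lam = 0.1                                # hypothetical regularization weight
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient

x = np.zeros(2)
for _ in range(200):
    grad = A.T @ (A @ x - b)                       # forward (gradient) step ingredient
    x = soft_threshold(x - grad / L, lam / L)      # backward (proximal) step
```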

On Standard Quadratic Programs with Exact and Inexact Doubly Nonnegative Relaxations

The problem of minimizing a (nonconvex) quadratic form over the unit simplex, referred to as a standard quadratic program, admits an exact convex conic formulation over the computationally intractable cone of completely positive matrices. Replacing the intractable cone in this formulation by the larger but tractable cone of doubly nonnegative matrices, i.e., the cone of …
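As a sketch of the doubly nonnegative (DNN) relaxation for a standard quadratic program, one can ask for a matrix X that is both positive semidefinite and entrywise nonnegative, with its entries summing to one; the instance Q below is hypothetical, and CVXPY with its default SDP solver is one way to compute the resulting lower bound.

```python
import cvxpy as cp
import numpy as np

Q = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  1.0, -1.0],
              [ 0.0, -1.0,  1.0]])   # hypothetical nonconvex instance

n = Q.shape[0]
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                  [cp.sum(X) == 1,   # relaxes (e^T x)^2 = 1 on the simplex
                   X >> 0,           # positive semidefinite
                   X >= 0])          # entrywise nonnegative
prob.solve()
print(prob.value)                    # DNN lower bound on the StQP value
```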

Zero Order Stochastic Weakly Convex Composite Optimization

In this paper we consider stochastic weakly convex composite problems for which no stochastic subgradient oracle is available. We present a derivative-free algorithm that uses a two-point approximation to compute a gradient estimate of the smoothed function. We prove convergence at a rate similar to that of state-of-the-art methods, however with …
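A minimal sketch of a two-point gradient estimator, under the common Gaussian-smoothing convention: two function evaluations along a random direction yield an unbiased estimate of the gradient of the smoothed function. The objective F, smoothing radius, step size, and iteration count are hypothetical, and this is not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_grad(F, x, delta=0.1):
    """Gaussian-smoothing two-point gradient estimate of F at x."""
    u = rng.standard_normal(x.shape)
    return (F(x + delta * u) - F(x - delta * u)) / (2.0 * delta) * u

# Hypothetical weakly convex objective observed through noisy zeroth-order calls
F = lambda x: np.abs(x[0] - 1.0) + x[1] ** 2 + 0.001 * rng.standard_normal()

x = np.zeros(2)
for _ in range(2000):
    x -= 0.01 * two_point_grad(F, x)   # zeroth-order stochastic gradient step
```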

A Hybrid Gradient Method for Strictly Convex Quadratic Programming

In this paper, a reliable hybrid algorithm for solving convex quadratic minimization problems is presented. At each iteration, two points are computed: first, an auxiliary point $\dot{x}_k$ is generated by performing a gradient step equipped with an optimal steplength; then, the next iterate $x_{k+1}$ is obtained through a weighted sum of $\dot{x}_k$ with the penultimate …
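A hedged sketch of an iteration of this shape, assuming the weighted sum combines $\dot{x}_k$ with the previous iterate and using a fixed hypothetical weight (the paper derives its own combination):

```python
import numpy as np

# Hypothetical SPD quadratic: f(x) = 0.5 * x^T A x - b^T x
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
beta = 0.9                               # hypothetical fixed weight, for illustration only

x_prev = x = np.zeros(2)
for _ in range(50):
    g = A @ x - b
    alpha = (g @ g) / (g @ (A @ g))      # optimal (Cauchy) steplength
    x_dot = x - alpha * g                # auxiliary gradient step
    x_prev, x = x, beta * x_dot + (1 - beta) * x_prev   # weighted combination
```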

Sum theorems for maximal monotone operators under weak compactness conditions

This note presents a summary of our most recent results concerning the maximal monotonicity of the sum of two maximal monotone operators defined in a locally convex space, under the classical interiority qualification condition, when one of their domains or ranges has a weak relative compactness property.

Coordinate Descent Without Coordinates: Tangent Subspace Descent on Riemannian Manifolds

We extend coordinate descent to manifold domains, and provide convergence analyses for geodesically convex and non-convex smooth objective functions. Our key insight is to draw an analogy between coordinate blocks in Euclidean space and tangent subspaces of a manifold. Hence, our method is called tangent subspace descent (TSD). The core principle behind ensuring convergence of …
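A minimal sketch of the idea on the unit sphere: replace Euclidean coordinate blocks by directions in the tangent space, and descend only along the chosen tangent subspace before retracting back to the manifold. The objective, step size, and random one-dimensional "blocks" below are hypothetical simplifications of TSD.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 5.0])             # hypothetical objective f(x) = x^T A x on the sphere

def retract(x, v):
    y = x + v                            # projection retraction back onto the sphere
    return y / np.linalg.norm(y)

x = rng.standard_normal(3)
x /= np.linalg.norm(x)
for _ in range(500):
    g = 2.0 * A @ x                      # Euclidean gradient of f
    d = rng.standard_normal(3)
    d -= (d @ x) * x                     # random 1-D "block": a tangent direction at x
    d /= np.linalg.norm(d)
    x = retract(x, -0.05 * (g @ d) * d)  # descend only within the chosen tangent subspace
```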