Projected-Search Methods for Bound-Constrained Optimization

Projected-search methods for bound-constrained minimization are based on performing a line search along a continuous piecewise-linear path obtained by projecting a search direction onto the feasible region. A potential benefit of a projected-search method is that many changes to the active set can be made at the cost of computing a single search direction. As … Read more
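
The projected path for the simple bound-constrained case can be sketched as follows; this is a minimal illustration (the bounds, names, and step grid are hypothetical, not the paper's algorithm):

```python
import numpy as np

def project(x, l, u):
    """Project x onto the box {l <= x <= u} componentwise."""
    return np.minimum(np.maximum(x, l), u)

def projected_path(x, d, l, u, alphas):
    """Points P(x + alpha*d) along the continuous piecewise-linear projected path."""
    return [project(x + a * d, l, u) for a in alphas]

# As alpha grows, variables hit their bounds one by one: the active set
# can change several times along a single search direction.
x = np.array([0.5, 0.5])
d = np.array([1.0, -2.0])
l, u = np.zeros(2), np.ones(2)
path = projected_path(x, d, l, u, [0.0, 0.25, 0.5, 1.0])
```

Here the second component reaches its lower bound at alpha = 0.25 and the first reaches its upper bound at alpha = 0.5, so two active-set changes occur along one direction.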

Implicit steepest descent algorithm for optimization with orthogonality constraints

Optimization problems with orthogonality constraints appear widely in applications from science and engineering. We address these types of problems from a numerical approach. Our new framework combines steepest gradient descent using implicit information with an operator projection in order to construct a feasible sequence of points. In addition, we adopt an adaptive Barzilai–Borwein steplength … Read more
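
A minimal sketch of the two ingredients, projection onto the orthogonality constraint and a Barzilai–Borwein steplength, on a toy eigenvalue problem (this is a plain explicit projected scheme, not the paper's implicit method; all names are illustrative):

```python
import numpy as np

def proj_stiefel(Y):
    """Project Y onto {X : X^T X = I} via the polar factor from an SVD."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def bb_step(x, x_prev, g, g_prev):
    """Barzilai-Borwein steplength (BB1 variant) with a small fallback."""
    s, y = x - x_prev, g - g_prev
    sy = np.sum(s * y)
    return np.sum(s * s) / sy if sy > 0 else 1e-3

# Toy problem: min tr(X^T A X) s.t. X^T X = I, with A symmetric.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A + A.T
grad = lambda X: 2 * A @ X

X = proj_stiefel(rng.standard_normal((5, 2)))
Xp, gp = X, grad(X)
X = proj_stiefel(X - 1e-2 * gp)          # one fixed step to seed BB
for _ in range(200):
    g = grad(X)
    t = bb_step(X, Xp, g, gp)
    Xp, gp = X, g
    X = proj_stiefel(X - t * g)          # projection keeps the iterate feasible
```

Every iterate is feasible by construction, since the projection is applied after each gradient step.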

Variance Reduction of Stochastic Gradients Without Full Gradient Evaluation

A standard concept for reducing the variance of stochastic gradient approximations is based on full gradient evaluations every now and then. In this paper an approach is considered that — while approximating a local minimizer of a sum of functions — also generates approximations of the gradient and the function values without relying on full … Read more
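
The paper's construction differs, but SAGA (Defazio et al.) is a standard example of variance reduction that never recomputes a full gradient after initialization: it keeps a table of per-sample gradients and their running mean. A sketch on a toy least-squares problem (instance and step size are illustrative):

```python
import numpy as np

# f(x) = (1/n) sum_i (a_i^T x - b_i)^2, minimized with SAGA-style updates.
rng = np.random.default_rng(1)
n, d = 50, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true                            # consistent system, minimizer x_true

def grad_i(x, i):
    return 2 * (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
table = np.array([grad_i(x, i) for i in range(n)])  # stored per-sample gradients
avg = table.mean(axis=0)
step = 0.01
for _ in range(5000):
    i = rng.integers(n)
    g = grad_i(x, i)
    x -= step * (g - table[i] + avg)      # variance-reduced gradient estimate
    avg += (g - table[i]) / n             # incremental update of the mean
    table[i] = g                          # replace the stored gradient
```

The correction term `- table[i] + avg` is zero in expectation, so the update is unbiased, while its variance vanishes as the table entries converge to the gradients at the minimizer.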

Expected complexity analysis of stochastic direct-search

This work presents the convergence rate analysis of stochastic variants of the broad class of direct-search methods of directional type. It introduces an algorithm designed to optimize differentiable objective functions $f$ whose values can only be computed through a stochastically noisy blackbox. The proposed stochastic directional direct-search (SDDS) algorithm accepts new iterates by imposing a … Read more
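
A minimal directional direct-search loop on a noisy blackbox conveys the setting; this is a generic sketch with a sufficient-decrease test, not the SDDS algorithm itself (objective, noise level, and constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_f(x, sigma=1e-3):
    """Blackbox: a smooth objective observed with additive Gaussian noise."""
    return np.sum((x - 1.0) ** 2) + sigma * rng.standard_normal()

def direct_search(x, delta=1.0, iters=200, c=1e-2):
    n = len(x)
    D = np.vstack([np.eye(n), -np.eye(n)])   # positive spanning set of directions
    fx = noisy_f(x)
    for _ in range(iters):
        improved = False
        for d in D:
            trial = x + delta * d
            ft = noisy_f(trial)
            if ft < fx - c * delta ** 2:      # sufficient decrease (forcing function)
                x, fx, improved = trial, ft, True
                break
        delta = 2 * delta if improved else delta / 2
    return x

x = direct_search(np.zeros(3))
```

The forcing function `c * delta**2` guards against accepting steps whose apparent decrease is only noise, which is the key difficulty a stochastic analysis must handle.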

Iteration complexity analysis for strongly convex multi-objective optimization using a Newton path-following procedure

In this note we consider the iteration complexity of solving strongly convex multi-objective optimization problems. We discuss the precise meaning of this problem and note that it is loosely defined, but the most natural notion is to find a set of Pareto optimal points across a grid of scalarized problems. We derive that in most cases, … Read more
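
The grid-of-scalarizations notion can be illustrated on two strongly convex quadratics, where each weighted-sum scalarization is solved exactly by a single Newton step (the instance is hypothetical; the paper's procedure is a path-following scheme, not this closed-form solve):

```python
import numpy as np

# f_i(x) = 0.5 (x - c_i)^T Q_i (x - c_i), both strongly convex.
Q1, c1 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([0.0, 0.0])
Q2, c2 = np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([2.0, 2.0])

def pareto_point(w):
    """Minimizer of w*f1 + (1-w)*f2: one Newton step, since the problem is quadratic."""
    Q = w * Q1 + (1 - w) * Q2                  # Hessian of the scalarization
    rhs = w * Q1 @ c1 + (1 - w) * Q2 @ c2      # stationarity condition Q x = rhs
    return np.linalg.solve(Q, rhs)

# A grid of weights traces a discrete approximation of the Pareto set.
front = [pareto_point(w) for w in np.linspace(0.0, 1.0, 11)]
```

The endpoints of the grid recover the individual minimizers `c2` (w = 0) and `c1` (w = 1), and intermediate weights fill in Pareto optimal points between them.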

Constraint Qualifications for Karush-Kuhn-Tucker Conditions in Constrained Multiobjective Optimization

The notion of a normal cone of a given set is paramount in optimization and variational analysis. In this work, we give a definition of a multiobjective normal cone which is suitable for studying optimality conditions and constraint qualifications for multiobjective optimization problems. A detailed study of the properties of the multiobjective normal cone is … Read more

On optimality conditions for nonlinear conic programming

Sequential optimality conditions have played a major role in proving stronger global convergence results of numerical algorithms for nonlinear programming. Several extensions have been described in conic contexts, where many open questions have arisen. In this paper, we present new sequential optimality conditions in the context of a general nonlinear conic framework, which explains and … Read more

Computing mixed strategies equilibria in presence of switching costs by the solution of nonconvex QP problems

In this paper we address a game theory problem arising in the context of network security. In traditional game theory problems, given a defender and an attacker, one searches for mixed strategies which minimize a linear payoff functional. In the problem addressed in this paper an additional quadratic term is added to the minimization problem. … Read more
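
To see how a quadratic term changes the structure, consider a small hypothetical instance: the defender picks a mixed strategy x on the simplex, the worst-case linear payoff is max_j (A^T x)_j, and an indefinite quadratic x^T Q x models switching costs, making the resulting QP nonconvex (this toy search is illustration only, not the paper's solution method):

```python
import numpy as np

# Hypothetical 2x2 instance.
A = np.array([[3.0, 0.0], [0.0, 3.0]])     # linear payoff matrix
Q = np.array([[0.0, -1.0], [-1.0, 0.0]])   # indefinite switching-cost matrix

def objective(x):
    """Worst-case linear payoff plus the quadratic switching-cost term."""
    return np.max(A.T @ x) + x @ Q @ x

# Crude grid search over the one-dimensional simplex x = (t, 1 - t).
best = min(objective(np.array([t, 1 - t])) for t in np.linspace(0.0, 1.0, 101))
```

In this instance the fully mixed strategy t = 0.5 is optimal: the switching-cost term rewards mixing, while the worst-case payoff alone would already be minimized there by symmetry.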

Properties of the delayed weighted gradient method

The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits a surprising and perhaps unexpectedly fast convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior … Read more
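
For reference, the conjugate gradient baseline that such gradient-type methods are compared against, on min (1/2) x^T A x − b^T x with A symmetric positive definite (a standard textbook CG implementation, not the delayed weighted gradient method):

```python
import numpy as np

def cg(A, b, x=None, tol=1e-10, maxit=1000):
    """Conjugate gradient for min (1/2) x^T A x - b^T x, i.e. solving A x = b, A SPD."""
    x = np.zeros_like(b) if x is None else x
    r = b - A @ x            # residual = negative gradient
    p = r.copy()             # first search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # A-conjugate update of the direction
        rs = rs_new
    return x

rng = np.random.default_rng(4)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)          # well-conditioned SPD matrix
b = rng.standard_normal(8)
x = cg(A, b)
```

CG enforces orthogonality of successive residuals by construction; the orthogonality properties studied in the paper explain why the delayed weighted gradient method behaves comparably despite its simpler, gradient-only updates.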

Near-optimal analysis of univariate moment bounds for polynomial optimization

We consider a recent hierarchy of upper approximations proposed by Lasserre (arXiv:1907.097784, 2019) for the minimization of a polynomial f over a compact set $K \subseteq \mathbb{R}^n$. This hierarchy relies on using the push-forward measure of the Lebesgue measure on K by the polynomial f and involves univariate sums of squares of polynomials with growing degrees 2r. … Read more
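
The hierarchy's bounds are expectations of f against sum-of-squares densities of growing degree; at level r = 0 the density is constant, so the bound is simply the mean of f over K, which can be estimated by Monte Carlo (a sketch of this trivial first level only, on an assumed instance; higher levels require solving univariate SOS problems):

```python
import numpy as np

rng = np.random.default_rng(5)

# f(x) = x^2 on K = [-1, 1]; the true minimum is 0.
f = lambda x: x ** 2

# Level r = 0: density sigma = 1, so the bound is E[f] under the uniform
# measure on K, here E[x^2] = 1/3, which upper-bounds min f = 0.
samples = rng.uniform(-1.0, 1.0, 200000)
bound_r0 = f(samples).mean()
```

Each level r reweights the measure by a degree-2r SOS density chosen to concentrate mass near the minimizers, so the bounds decrease toward the true minimum as r grows; the paper analyzes how fast.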