A Unified Characterization of Nonlinear Scalarizing Functionals in Optimization

Over the years, several classes of scalarization techniques in optimization have been introduced and employed in deriving separation theorems, optimality conditions and algorithms. In this paper, we study the relationships between some of those classes in the sense of inclusion. We focus on three types of scalarizing functionals (by Hiriart-Urruty, Drummond and Svaiter, Gerstewitz) and …

First-order methods for the impatient: support identification in finite time with convergent Frank-Wolfe variants

In this paper, we focus on the problem of minimizing a non-convex function over the unit simplex. We analyze two well-known and widely used variants of the Frank-Wolfe algorithm and first prove global convergence of the iterates to stationary points when using either exact or Armijo line search. Then we show that the algorithms identify …
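The basic Frank-Wolfe iteration over the unit simplex can be sketched in a few lines: the linear minimization oracle simply picks the simplex vertex with the smallest gradient component. The sketch below uses a simple convex quadratic objective and exact line search for illustration only; the paper's setting (a non-convex objective and the specific variants it analyzes) is not reproduced here.

```python
# Minimal Frank-Wolfe sketch on the unit simplex (illustrative only; the
# objective here is a convex quadratic, not the paper's non-convex setting).

def frank_wolfe_simplex(grad, x0, n_iters, line_search):
    x = list(x0)
    for _ in range(n_iters):
        g = grad(x)
        i_star = min(range(len(x)), key=lambda i: g[i])  # LMO: best simplex vertex
        d = [-xi for xi in x]
        d[i_star] += 1.0                                  # direction s - x toward e_{i*}
        gamma = line_search(x, d)                         # step size in [0, 1]
        x = [xi + gamma * di for xi, di in zip(x, d)]     # convex combination: stays feasible
    return x

# Example objective: f(x) = ||x - c||^2 with c in the simplex, so the minimizer is c.
c = [0.2, 0.3, 0.5]
grad_f = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]

def exact_step(x, d):
    # Exact line search for this quadratic, clipped to [0, 1].
    num = -sum(2.0 * (xi - ci) * di for xi, ci, di in zip(x, c, d))
    den = 2.0 * sum(di * di for di in d)
    return max(0.0, min(1.0, num / den)) if den > 0 else 0.0

x = frank_wolfe_simplex(grad_f, [1.0, 0.0, 0.0], 2000, exact_step)
```

Because every iterate is a convex combination of the previous iterate and a vertex, feasibility with respect to the simplex is maintained for free, which is the main appeal of Frank-Wolfe methods over projection-based ones.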

The Standard Pessimistic Bilevel Problem

Pessimistic bilevel optimization problems, like optimistic ones, possess a structure involving three interrelated optimization problems. Moreover, their finite infima are only attained under strong conditions. We address these difficulties within a framework of moderate assumptions and a perturbation approach, which allow us to approximate such finite infima arbitrarily well by minimal values of a sequence …

An inexact strategy for the projected gradient algorithm in vector optimization problems on variable ordered spaces

Variable order structures model situations in which the comparison between two points depends on a point-to-cone map. In this paper, an inexact projected gradient method for solving smooth constrained vector optimization problems on variable ordered spaces is presented. It is shown that every accumulation point of the generated sequence satisfies the first order necessary optimality …

Quasi-Newton approaches to Interior Point Methods for quadratic problems

Interior Point Methods (IPM) rely on the Newton method for solving systems of nonlinear equations. Solving the linear systems which arise from this approach is the most computationally expensive task of an interior point iteration. If, due to the problem’s inner structure, there are special techniques for efficiently solving the linear systems, IPMs enjoy fast convergence and …
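The Newton step at the heart of an interior point iteration can be illustrated on a toy one-dimensional problem. The sketch below solves the barrier subproblem for minimizing (x - 2)^2 subject to x >= 0 by Newton's method and then shrinks the barrier parameter; it is a minimal illustration of the barrier/Newton mechanics only, not the quasi-Newton approach the paper develops.

```python
# Toy illustration of the Newton iteration inside an interior point method:
#   minimize (x - 2)^2  subject to  x >= 0,
# via the log-barrier  phi_mu(x) = (x - 2)^2 - mu * log(x).
# Each outer iteration solves phi_mu'(x) = 0 by Newton's method,
# then shrinks the barrier parameter mu.

def newton_barrier(mu, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        grad = 2.0 * (x - 2.0) - mu / x   # phi_mu'(x)
        hess = 2.0 + mu / (x * x)         # phi_mu''(x) > 0, so the step is well defined
        x = max(x - grad / hess, 0.5 * x) # crude safeguard keeping x > 0
        if abs(grad) < tol:
            break
    return x

x, mu = 1.0, 1.0
while mu > 1e-10:
    x = newton_barrier(mu, x)
    mu *= 0.2                             # shrink the barrier parameter
```

In higher dimensions the Newton step requires solving a linear system whose coefficient matrix combines the Hessian of the objective with barrier terms; as the abstract notes, that linear algebra dominates the cost of each iteration, which is what motivates quasi-Newton and structure-exploiting approaches.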

New sequential optimality conditions for mathematical problems with complementarity constraints and algorithmic consequences

In recent years, the theoretical convergence of iterative methods for solving nonlinear constrained optimization problems has been addressed using sequential optimality conditions, which are satisfied by minimizers independently of constraint qualifications (CQs). Even though there is a considerable literature devoted to sequential conditions for standard nonlinear optimization, the same is not true for Mathematical Problems …

On the Relation between MPECs and Optimization Problems in Abs-Normal Form

We show that the problem of unconstrained minimization of a function in abs-normal form is equivalent to identifying a certain stationary point of a counterpart Mathematical Program with Equilibrium Constraints (MPEC). Hence, concepts introduced for abs-normal forms turn out to be closely related to established concepts in the theory of MPECs. We give a …

Interior Point Methods and Preconditioning for PDE-Constrained Optimization Problems Involving Sparsity Terms

PDE-constrained optimization problems with control or state constraints are challenging from an analytical as well as numerical perspective. The combination of these constraints with a sparsity-promoting L1 term within the objective function requires sophisticated optimization methods. We propose the use of an Interior Point scheme applied to a smoothed reformulation of the discretized problem, and …

On the Complexity of Detecting Convexity over a Box

It has recently been shown that the problem of testing global convexity of polynomials of degree four is strongly NP-hard, answering an open question of N.Z. Shor. This result is minimal in the degree of the polynomial when global convexity is of concern. In a number of applications, however, one is interested in testing convexity …
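Since exact convexity certification is intractable for polynomials of degree four and higher, a common practical fallback is a sampled *necessary* condition: check that the Hessian is positive semidefinite at grid points of the box. The sketch below is a hypothetical illustration of that idea for bivariate quartics (using the closed-form PSD test for 2x2 matrices), not the paper's method.

```python
# Sampled necessary test for convexity over a box: check that the Hessian is
# positive semidefinite on a grid. Exact certification is NP-hard already for
# degree-4 polynomials, so passing this test is evidence, not proof.
# (Illustrative sketch, not the paper's approach.)

def psd_2x2(a, b, c):
    """PSD test for the symmetric matrix [[a, b], [b, c]]."""
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def hessian_psd_on_grid(hess, lo, hi, steps=20):
    for i in range(steps + 1):
        x = lo[0] + (hi[0] - lo[0]) * i / steps
        for j in range(steps + 1):
            y = lo[1] + (hi[1] - lo[1]) * j / steps
            a, b, c = hess(x, y)
            if not psd_2x2(a, b, c):
                return False   # a grid point certifying non-convexity
    return True

# f(x, y) = x^4 + y^4: Hessian diag(12x^2, 12y^2) is PSD everywhere.
convex_hess = lambda x, y: (12 * x * x, 0.0, 12 * y * y)
# g(x, y) = x^4 - 6 x^2 y^2 + y^4: Hessian is indefinite inside the box.
nonconvex_hess = lambda x, y: (12 * x * x - 12 * y * y, -24 * x * y,
                               12 * y * y - 12 * x * x)

box_lo, box_hi = (-1.0, -1.0), (1.0, 1.0)
```

A failed grid point gives a genuine certificate of non-convexity; a pass only rules out violations at the sampled points, which is exactly the gap the hardness result says cannot be closed efficiently in general.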

Selection of variables in parallel space decomposition for the mesh adaptive direct search algorithm

The parallel space decomposition of the Mesh Adaptive Direct Search algorithm (PSDMADS, proposed in 2008) is an asynchronous parallel method for constrained derivative-free optimization with a large number of variables. It uses a simple generic strategy to decompose a problem into smaller-dimension subproblems. The present work explores new strategies for selecting subsets of variables defining …