A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods

We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The framework is general, in the sense that it combines the possibility of explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a … Read more
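For orientation, the display below recalls the standard notion of weak convexity and the generic form of the explicit (subgradient) and implicit (proximal) steps alluded to above; the notation is ours for illustration, and the framework in the paper is more general.

```latex
% Weak convexity: f is \rho-weakly convex iff x \mapsto f(x) + \tfrac{\rho}{2}\|x\|^2 is convex.
\[
\text{explicit step: } x^{k+1} = x^k - t_k\, g^k,\quad g^k \in \partial f(x^k);
\qquad
\text{implicit step: } x^{k+1} = \operatorname*{arg\,min}_{x}\ f(x) + \tfrac{1}{2 t_k}\,\|x - x^k\|^2,
\]
% where the implicit (proximal) subproblem is strongly convex whenever t_k < 1/\rho.
```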

Robot Dance: a mathematical optimization platform for intervention against Covid-19 in a complex network

Robot Dance is a computational platform developed in response to the coronavirus outbreak to support decision making on public policies at a regional level. The tool is suitable for understanding and suggesting the levels of intervention needed to contain the spread of diseases when the mobility of inhabitants across a regional network is a concern. … Read more

On scaled stopping criteria for a safeguarded augmented Lagrangian method with theoretical guarantees

This paper discusses the use of a stopping criterion based on the scaling of the Karush-Kuhn-Tucker (KKT) conditions by the norm of the approximate Lagrange multiplier in the ALGENCAN implementation of a safeguarded augmented Lagrangian method. Such a stopping criterion is already used in several nonlinear programming solvers, but it has not yet been considered in … Read more
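The precise test used in ALGENCAN may differ in its details; the following is a common template for such a scaled criterion, written for min f(x) s.t. h(x)=0, g(x) ≤ 0 with approximate multipliers (λ, μ).

```latex
% Unscaled approximate KKT test with tolerance \varepsilon:
\[
\big\|\nabla f(x) + \nabla h(x)\lambda + \nabla g(x)\mu\big\|_\infty \le \varepsilon,\quad
\|h(x)\|_\infty \le \varepsilon,\quad
\big\|\min\{-g(x),\,\mu\}\big\|_\infty \le \varepsilon,\quad \mu \ge 0.
\]
% A scaled variant divides the dual-feasibility (and complementarity) residuals by
% 1 + \|(\lambda,\mu)\|_\infty, so the test does not become artificially demanding
% when the approximate multipliers are large.
```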

A discussion on electricity prices, or the two sides of the coin

We examine how different pricing frameworks deal with the nonconvex features typical of day-ahead energy prices when the power system is hydro-dominated, as in Brazil. For the system operator, minimum-generation requirements translate into feasibility issues that are fundamental for carrying the generated power through the network. When utilities are remunerated at a price depending … Read more

New sequential optimality conditions for mathematical problems with complementarity constraints and algorithmic consequences

In recent years, the theoretical convergence of iterative methods for solving nonlinear constrained optimization problems has been addressed using sequential optimality conditions, which are satisfied by minimizers independently of constraint qualifications (CQs). Even though there is a considerable literature devoted to sequential conditions for standard nonlinear optimization, the same is not true for Mathematical Problems … Read more
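For context, an MPCC is a nonlinear program with an additional complementarity constraint between two blocks of functions; a generic formulation is sketched below.

```latex
\[
\min_{x}\ f(x) \quad \text{s.t.}\quad h(x) = 0,\;\; g(x) \le 0,\;\;
0 \le G(x) \perp H(x) \ge 0,
\]
% where the complementarity constraint means G_i(x) >= 0, H_i(x) >= 0 and
% G_i(x) H_i(x) = 0 for every i; it is this last condition that causes the failure
% of the standard constraint qualifications mentioned in these abstracts.
```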

Accelerating block coordinate descent methods with identification strategies

This work is about active-set identification strategies aimed at accelerating block-coordinate descent methods (BCDM) applied to large-scale problems. We start by devising an identification function tailored for bound-constrained composite minimization, together with an associated version of the BCDM, called Active BCDM, which is also globally convergent. The identification function gives rise to an efficient … Read more
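To make the general idea concrete, here is a toy sketch of block-coordinate descent for a bound-constrained quadratic with a naive active-set guess (variables sitting at a bound whose gradient pushes them further out are frozen for the current sweep). This is a generic heuristic for illustration only, not the identification function or the Active BCDM proposed in the paper.

```python
# Toy sketch: block-coordinate descent with a simple active-set guess for the
# bound-constrained quadratic  min 0.5*x'Ax - b'x  s.t.  lo <= x <= hi.
import numpy as np

def active_bcd(A, b, lo, hi, block_size=2, iters=200, step=None):
    n = len(b)
    x = np.clip(np.zeros(n), lo, hi)
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2)        # safe step for an L-smooth quadratic
    for _ in range(iters):
        g = A @ x - b                            # gradient of the quadratic
        # Identification: variables predicted to stay at their bound are frozen.
        at_lo = (x <= lo + 1e-12) & (g > 0)
        at_hi = (x >= hi - 1e-12) & (g < 0)
        free = np.where(~(at_lo | at_hi))[0]
        # Cycle over blocks of the remaining (free) variables only.
        for start in range(0, len(free), block_size):
            blk = free[start:start + block_size]
            g_blk = A[blk] @ x - b[blk]
            x[blk] = np.clip(x[blk] - step * g_blk, lo[blk], hi[blk])
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M.T @ M + np.eye(5)                      # symmetric positive definite
    b = rng.standard_normal(5)
    lo, hi = -np.ones(5), np.ones(5)
    print("solution:", np.round(active_bcd(A, b, lo, hi), 4))
```

The freeze/unfreeze decision is recomputed at every outer sweep, so a wrong guess is corrected once the gradient changes sign; the point of more sophisticated identification functions is to make that guess settle on the true active set quickly.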

Convergence properties of a second order augmented Lagrangian method for mathematical programs with complementarity constraints

Mathematical Programs with Complementarity Constraints (MPCCs) are difficult optimization problems that do not satisfy the majority of the usual constraint qualifications (CQs) for standard nonlinear optimization. Despite this fact, classical methods behave well when applied to MPCCs. Recently, Izmailov, Solodov and Uskov proved that first-order augmented Lagrangian methods, under a natural adaptation of the … Read more
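As background, augmented Lagrangian methods of this family are typically built around the Powell-Hestenes-Rockafellar (PHR) function; for min f(x) s.t. h(x)=0, g(x) ≤ 0 it reads (up to constants) as below. This is the classical form, written here for reference rather than as the paper's exact setting.

```latex
\[
L_\rho(x,\lambda,\mu) \;=\; f(x)
\;+\; \frac{\rho}{2}\sum_{i}\Big(h_i(x)+\frac{\lambda_i}{\rho}\Big)^{2}
\;+\; \frac{\rho}{2}\sum_{j}\Big(\max\Big\{0,\; g_j(x)+\frac{\mu_j}{\rho}\Big\}\Big)^{2},
\]
% minimized approximately in x for fixed (\lambda,\mu,\rho); the multipliers are then
% updated and safeguarded (kept in a bounded box) and \rho is possibly increased.
```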

Strict Constraint Qualifications and Sequential Optimality Conditions for Constrained Optimization

Sequential optimality conditions for constrained optimization are necessarily satisfied by local minimizers, independently of the fulfillment of constraint qualifications. These conditions support the employment of different stopping criteria for practical optimization algorithms. On the other hand, when an appropriate strict constraint qualification associated with some sequential optimality condition holds at a point that satisfies the … Read more

A second-order sequential optimality condition associated to the convergence of optimization algorithms

Sequential optimality conditions have recently played an important role in the analysis of the global convergence of optimization algorithms towards first-order stationary points and in justifying their stopping criteria. In this paper we introduce the first sequential optimality condition that takes into account second-order information. We also present a companion constraint qualification that is less stringent … Read more
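The sequential condition itself is defined in the paper; for orientation, the pointwise second-order condition it relaxes is the classical weak one below.

```latex
\[
\nabla_x L(x^*,\lambda^*) = 0
\qquad\text{and}\qquad
d^{\top}\,\nabla^2_{xx} L(x^*,\lambda^*)\, d \;\ge\; 0
\quad\text{for all } d \text{ in the critical cone at } x^*,
\]
% where L is the Lagrangian; a sequential counterpart only requires such conditions to
% hold up to tolerances that vanish along a sequence x^k \to x^*.
```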

A cone-continuity constraint qualification and algorithmic consequences

Every local minimizer of a smooth constrained optimization problem satisfies the sequential Approximate Karush-Kuhn-Tucker (AKKT) condition. This optimality condition is used to define the stopping criteria of many practical nonlinear programming algorithms. It is natural to ask for conditions on the constraints under which AKKT implies KKT. These conditions will be called Strict Constraint Qualifications … Read more
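In its usual form, for min f(x) s.t. h(x)=0, g(x) ≤ 0, AKKT holds at a feasible point x* when there exist sequences x^k → x*, λ^k and μ^k ≥ 0 such that the conditions below are satisfied.

```latex
\[
\nabla f(x^k) + \nabla h(x^k)\lambda^k + \nabla g(x^k)\mu^k \;\longrightarrow\; 0,
\qquad
\mu_j^k = 0 \ \text{ for all } j \text{ with } g_j(x^*) < 0 \ (k \text{ large}),
\]
% i.e. the KKT system holds only asymptotically, with no multiplier attached to
% constraints inactive at the limit. A strict constraint qualification is then a
% condition on the constraints under which AKKT at x^* implies that KKT holds at x^*.
```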