Efficient Proximal Subproblem Solvers for a Nonsmooth Trust-Region Method

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1-40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function. The principal expense of this method is in computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition, a bound that ensures …
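
To fix ideas, the Cauchy point in this composite setting is typically a proximal-gradient step truncated to the trust region. The following is a minimal sketch for the l1-norm case, assuming illustrative names (prox_l1, cauchy_step) and a simple backtracking rule; it is not the paper's subproblem solver, which is designed to be far more efficient.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cauchy_step(x, grad_f, phi_weight, delta, t0=1.0):
    """Backtrack the prox-gradient parameter t until the step fits the
    trust region ||s|| <= delta. Sketch only; the paper develops
    efficient solvers that satisfy a fraction-of-Cauchy-decrease bound."""
    t = t0
    while t > 1e-12:
        x_trial = prox_l1(x - t * grad_f, t * phi_weight)
        if np.linalg.norm(x_trial - x) <= delta:
            return x_trial
        t *= 0.5
    return x
```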

A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme for solving unconstrained nonlinear optimization problems. We consider convex, smooth objectives whose gradients need not be globally Lipschitz continuous. Our method requires only a locally Lipschitz gradient, yet still attains the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By …
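
One standard way to get by with only local Lipschitz information is a backtracking stepsize based on a sufficient-decrease test, as in the sketch below. This is the classical backtracking rule, not the paper's new stepsize; all names and parameter choices here are illustrative.

```python
import numpy as np

def gradient_descent(f, grad, x0, t0=1.0, shrink=0.5, max_iter=500, tol=1e-8):
    """Gradient descent with backtracking: accept t once
    f(x - t*g) <= f(x) - (t/2)||g||^2, which holds whenever t is below
    the reciprocal of a *local* Lipschitz constant of the gradient."""
    x, t = np.asarray(x0, float), t0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        while t > 1e-14 and f(x - t * g) > f(x) - 0.5 * t * (g @ g):
            t *= shrink  # local curvature estimate too optimistic; shrink
        x = x - t * g
    return x
```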

On the fulfillment of the complementary approximate Karush-Kuhn-Tucker conditions and algorithmic applications

Focusing on smooth constrained optimization problems, and inspired by the complementary approximate Karush-Kuhn-Tucker (CAKKT) conditions, this work introduces the weighted complementary approximate Karush-Kuhn-Tucker (WCAKKT) conditions. These conditions are shown to be verified not only by safeguarded augmented Lagrangian methods, but also by inexact restoration methods, inverse and logarithmic barrier methods, and a penalized algorithm for constrained …
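
For context, the standard (unweighted) CAKKT conditions for $\min f(x)$ subject to $h(x)=0$, $g(x)\le 0$ can be sketched as below; this is the usual form from the literature, which the paper's WCAKKT variant presumably modifies through weights in the complementarity requirements.

```latex
% CAKKT at a feasible x^*: there exist x^k -> x^* and multipliers
% (\lambda^k, \mu^k) with \mu^k \ge 0 such that
\nabla f(x^k) + \nabla h(x^k)^{\top}\lambda^k + \nabla g(x^k)^{\top}\mu^k \to 0,
\qquad \lambda_i^k\, h_i(x^k) \to 0 \ \ \forall i,
\qquad \mu_j^k\, g_j(x^k) \to 0 \ \ \forall j.
```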

A Reduced Jacobian Scheme with Full Convergence for Multicriteria Optimization

In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe in [JOTA, 179 (2018) 917–943] for multicriteria optimization under linear constraints. The motivation is that, contrary to RJM, which guarantees only global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, this new variant …
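
As a reminder, Pareto KKT-stationarity in this setting can be sketched as follows, assuming objectives $f_1,\dots,f_m$ and a linearly constrained feasible set $\{x : Ax \le b\}$; this is the standard notion of no common feasible descent direction, not a definition taken from the paper.

```latex
% x^* feasible is Pareto KKT-stationary when no feasible direction
% decreases all objectives simultaneously:
\max_{1 \le i \le m} \nabla f_i(x^*)^{\top} d \;\ge\; 0
\quad \text{for all } d \text{ with } A_{I(x^*)}\, d \le 0,
% where I(x^*) indexes the constraints active at x^*.
```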

A decomposition method for lasso problems with zero-sum constraint

In this paper, we consider lasso problems with a zero-sum constraint, commonly required for the analysis of compositional data in high-dimensional spaces. A novel algorithm is proposed to solve these problems, combining a tailored active-set technique, which identifies the zero variables in the optimal solution, with a 2-coordinate descent scheme. At every iteration, the algorithm chooses …
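
The 2-coordinate structure is natural here because any move along $e_i - e_j$ preserves the zero-sum constraint $\sum_i x_i = 0$. The sketch below illustrates one such move with a naive grid line search; the coordinate-pair selection rule and the exact update in the paper are different and more sophisticated.

```python
import numpy as np

def zero_sum_lasso_step(A, b, lam, x, i, j, grid=np.linspace(-1, 1, 201)):
    """One 2-coordinate move x <- x + t*(e_i - e_j). If sum(x) == 0 on
    entry, it stays 0 for any t. Naive grid search on the lasso
    objective, purely for illustration."""
    e = np.zeros_like(x)
    e[i], e[j] = 1.0, -1.0
    def obj(t):
        z = x + t * e
        return 0.5 * np.sum((A @ z - b) ** 2) + lam * np.sum(np.abs(z))
    t_best = min(grid, key=obj)
    return x + t_best * e
```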

A novel sequential optimality condition for smooth constrained optimization and algorithmic consequences

In the smooth constrained optimization setting, this work introduces the Domain Complementary Approximate Karush-Kuhn-Tucker (DCAKKT) condition, inspired by a sequential optimality condition recently devised for nonsmooth constrained optimization problems. It is shown that the augmented Lagrangian method can generate limit points satisfying DCAKKT, and it is proved that such a condition is not related to …
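
For background, the limit points in question come from iterations of the classical augmented Lagrangian method; the sketch below shows the textbook equality-constrained version with an inner solve delegated to scipy, not the safeguarded variant analyzed in the paper, and all parameter choices are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, lam0, rho=10.0, outer=20, tol=1e-8):
    """Classical augmented Lagrangian for min f(x) s.t. h(x) = 0:
    minimize f + lam^T h + (rho/2)||h||^2 over x, then update lam.
    Sequential optimality conditions describe the limit points of
    iterations such as these."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(outer):
        L = lambda z: f(z) + lam @ h(z) + 0.5 * rho * np.sum(h(z) ** 2)
        x = minimize(L, x, method="BFGS").x
        hx = h(x)
        if np.linalg.norm(hx) <= tol:
            break
        lam = lam + rho * hx  # first-order multiplier update
    return x, lam
```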

An MISOCP-Based Decomposition Approach for the Unit Commitment Problem with AC Power Flows

Unit Commitment (UC) and Optimal Power Flow (OPF) are two fundamental problems in short-term electric power systems planning that are traditionally solved sequentially. The state of the art mostly uses a direct-current approximation of the power flow equations at the UC level, and the resulting generator commitments are sent as input to the OPF level. However, such an …

Modeling Design and Control Problems Involving Neural Network Surrogates

We consider nonlinear optimization problems that involve surrogate models represented by neural networks. We first demonstrate how to directly embed neural network evaluation into optimization models, highlight a difficulty with this approach that can prevent convergence, and then characterize stationarity of such models. We then present two alternative formulations of these problems in the specific …
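
The most direct embedding places the network evaluation inside the objective and hands the composite function to a smooth optimizer. Below is a minimal sketch with a random tanh network standing in for a trained surrogate; the architecture, weights, and sizes are illustrative assumptions, not the paper's formulations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Illustrative 2-16-1 tanh network standing in for a trained surrogate.
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def surrogate(x):
    # Smooth activations keep the embedded model differentiable, which
    # matters for the stationarity issues the paper studies.
    return (W2 @ np.tanh(W1 @ x + b1) + b2).item()

# Direct embedding: the network sits inside the objective function.
res = minimize(surrogate, x0=np.zeros(2), method="BFGS")
print(res.x, res.fun)
```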

Regularized Step Directions in Conjugate Gradient Minimization for Machine Learning

Conjugate gradient minimization methods (CGM) and their accelerated variants are widely used in machine learning applications. We focus on the use of cubic regularization to improve the CGM direction, independent of the steplength (learning rate) computation. Using Shanno’s reformulation of CGM as a memoryless BFGS method, we derive new formulas for the regularized step direction, …
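
For background, Shanno's memoryless BFGS view applies a single BFGS update to the identity matrix and sets $d_{k+1} = -H_{k+1} g_{k+1}$. The sketch below is the standard unregularized formula, with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$; the paper derives regularized analogues of directions of this type.

```latex
% Memoryless BFGS: one BFGS update of H = I, then d_{k+1} = -H_{k+1} g_{k+1}.
H_{k+1} = I - \frac{s_k y_k^{\top} + y_k s_k^{\top}}{s_k^{\top} y_k}
        + \left(1 + \frac{y_k^{\top} y_k}{s_k^{\top} y_k}\right)
          \frac{s_k s_k^{\top}}{s_k^{\top} y_k}.
```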

Scalable Parallel Nonlinear Optimization with PyNumero and Parapint

We describe PyNumero, an open-source, object-oriented programming framework in Python that supports the rapid development of performant parallel algorithms for structured nonlinear programming problems (NLPs) using the Message Passing Interface (MPI). PyNumero provides three fundamental building blocks for developing NLP algorithms: a fast interface for calculating first and second derivatives with the AMPL Solver Library (ASL), …
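
A typical PyNumero workflow wraps a Pyomo model in an NLP interface to obtain derivatives. The sketch below is written from memory of the PyomoNLP interface; the method names should be checked against the current Pyomo documentation, and running it requires the compiled PyNumero ASL extension.

```python
import pyomo.environ as pyo
from pyomo.contrib.pynumero.interfaces.pyomo_nlp import PyomoNLP

# Trivial NLP used only to exercise the derivative interface.
m = pyo.ConcreteModel()
m.x = pyo.Var([1, 2], initialize=1.0)
m.obj = pyo.Objective(expr=(m.x[1] - 1) ** 2 + (m.x[2] - 2) ** 2)
m.c = pyo.Constraint(expr=m.x[1] + m.x[2] == 3)

nlp = PyomoNLP(m)                # ASL-backed first/second derivatives
f = nlp.evaluate_objective()
g = nlp.evaluate_grad_objective()
J = nlp.evaluate_jacobian()      # sparse constraint Jacobian
H = nlp.evaluate_hessian_lag()   # Hessian of the Lagrangian
print(f, g, J.shape, H.shape)
```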