Unifying nonlinearly constrained nonconvex optimization

Derivative-based iterative methods for nonlinearly constrained nonconvex optimization usually share common algorithmic components, such as strategies for computing a descent direction and mechanisms that promote global convergence. Building on this observation, we introduce an abstract framework built around four common ingredients that describes most derivative-based iterative methods and unifies their workflows. We then present Uno, …
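The truncated abstract does not spell out the four ingredients, so the following is a minimal, hypothetical sketch of how such a unifying framework might be organized in code. The class and method names (`DirectionStrategy`, `GlobalizationMechanism`, `stationarity_error`) are illustrative assumptions, not Uno's actual API.

```python
# Hypothetical sketch of a modular solver loop in the spirit of a unifying
# framework; the ingredient names below are assumptions, NOT Uno's interfaces.
from abc import ABC, abstractmethod


class DirectionStrategy(ABC):
    @abstractmethod
    def compute_direction(self, iterate):
        """Return a descent direction at the current iterate."""


class GlobalizationMechanism(ABC):
    @abstractmethod
    def accept(self, iterate, direction):
        """Decide whether a trial step is acceptable (e.g., via line search
        or a trust region) and return the next iterate."""


def solve(iterate, direction_strategy, globalization, max_iter=100, tol=1e-8):
    """Generic iterative loop shared by most derivative-based methods:
    swapping the ingredient objects yields different concrete methods."""
    for _ in range(max_iter):
        if iterate.stationarity_error() <= tol:  # hypothetical accessor
            break
        d = direction_strategy.compute_direction(iterate)
        iterate = globalization.accept(iterate, d)
    return iterate
```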

Composite optimization models via proximal gradient method with enhanced adaptive stepsizes

We first consider convex composite optimization models (COM) in which the gradient of the differentiable term is only locally Lipschitz. We study the classical proximal gradient method (PG) with a new stepsize-selection strategy, conveniently computed via an enhanced, adaptive closed-form rule. The sequence of our new stepsizes …
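As a concrete reference point, here is a minimal proximal gradient loop for $\min_x f(x) + \lambda\|x\|_1$. The paper's closed-form stepsize is behind the truncation, so a Barzilai-Borwein-type local curvature estimate stands in for it below; the $\ell_1$ choice of the nonsmooth term is likewise an assumption.

```python
import numpy as np


def soft_threshold(x, t):
    """Prox of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def proximal_gradient(grad_f, prox_g, x0, t0=1.0, max_iter=500, tol=1e-8):
    """Proximal gradient with an adaptive stepsize.

    NOTE: the Barzilai-Borwein-type rule below is a generic stand-in,
    not the enhanced closed-form stepsize proposed in the paper.
    """
    x, g, t = x0, grad_f(x0), t0
    for _ in range(max_iter):
        x_new = prox_g(x - t * g, t)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            break
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 0:                 # curvature estimate is meaningful
            t = (s @ s) / sy       # approximates 1 / (local Lipschitz const.)
        x, g = x_new, g_new
    return x


# Example: lasso-type problem  min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(40, 20)), rng.normal(size=40), 0.1
x = proximal_gradient(lambda x: A.T @ (A @ x - b),
                      lambda z, t: soft_threshold(z, lam * t),
                      np.zeros(20))
```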

Efficient Proximal Subproblem Solvers for a Nonsmooth Trust-Region Method

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1-40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function. The principal expense of this method is computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition, a bound that ensures …
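For orientation, the fraction-of-Cauchy-decrease condition in trust-region methods is usually stated as requiring that the trial step $s_k$ achieve a fixed fraction of the model reduction obtained by a Cauchy step $s_k^{C}$:
\[
m_k(0) - m_k(s_k) \;\ge\; \kappa \bigl( m_k(0) - m_k(s_k^{C}) \bigr), \qquad \kappa \in (0,1].
\]
The precise nonsmooth model used by Baraldi and Kouri lies behind the truncation; a standard assumption in the composite setting $f + \varphi$, with $\varphi$ convex, is the model and prox-gradient Cauchy step
\[
m_k(s) = f(x_k) + \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top B_k s + \varphi(x_k + s),
\qquad
s_k^{C} = \mathrm{prox}_{t\varphi}\bigl(x_k - t \nabla f(x_k)\bigr) - x_k .
\]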

A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme for solving unconstrained nonlinear optimization problems. We are concerned with convex, smooth objectives whose gradients need not be globally Lipschitz. Our new method requires only a locally Lipschitz gradient, yet still attains the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By …
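For comparison, the classical baseline: gradient descent with constant stepsize $1/L$ on a convex $f$ with globally $L$-Lipschitz gradient satisfies the well-known bound
\[
f(x^k) - f_* \;\le\; \frac{L\,\|x^0 - x^*\|^2}{2k},
\]
so the contribution here is retaining the same $O(1/k)$ rate while requiring only local Lipschitz continuity of $\nabla f$, in which case no single global constant $L$ need exist.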

On the fulfillment of the complementary approximate Karush-Kuhn-Tucker conditions and algorithmic applications

Focusing on smooth constrained optimization problems, and inspired by the Complementary Approximate Karush-Kuhn-Tucker (CAKKT) conditions, this work introduces the Weighted Complementary Approximate Karush-Kuhn-Tucker (WCAKKT) conditions. These conditions are shown to be satisfied not only by safeguarded augmented Lagrangian methods, but also by inexact restoration methods, inverse and logarithmic barrier methods, and a penalized algorithm for constrained …
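For context, recall the unweighted CAKKT condition (due to Andreani, Martínez and Svaiter) for $\min f(x)$ subject to $h(x)=0$, $g(x)\le 0$, in its usual form: a feasible point $x^*$ satisfies CAKKT if there exist sequences $x^k \to x^*$, $\lambda^k \in \mathbb{R}^m$, and $\mu^k \in \mathbb{R}^p_{+}$ with
\[
\nabla f(x^k) + \nabla h(x^k)\lambda^k + \nabla g(x^k)\mu^k \to 0,
\qquad
\lambda_i^k\, h_i(x^k) \to 0, \quad \mu_j^k\, g_j(x^k) \to 0 .
\]
The weighted variant (WCAKKT) introduced in this work modifies this scheme; its precise definition lies behind the truncation.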

A Reduced Jacobian Scheme with Full Convergence for Multicriteria Optimization

In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe in [JOTA, 179 (2018) 917–943] for multicriteria optimization under linear constraints. The motivation is that, contrary to RJM, which has only global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, this new variant …
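For reference, a standard form of Pareto KKT-stationarity for $\min F(x) = (f_1(x),\dots,f_m(x))$ under linear constraints, written here as $Ax = b$, $x \ge 0$ (the exact constraint format used by RJM is an assumption): $x^*$ is KKT-stationary if there exist weights $\theta$ in the unit simplex and multipliers $\lambda$, $\mu$ with
\[
\sum_{i=1}^m \theta_i \nabla f_i(x^*) + A^\top \lambda - \mu = 0,
\qquad
\mu \ge 0, \quad \mu^\top x^* = 0, \quad \theta \ge 0, \quad \textstyle\sum_{i} \theta_i = 1 .
\]
"Full convergence" in the title refers to convergence of the whole sequence of iterates to such a point, rather than convergence only along subsequences.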

A decomposition method for lasso problems with zero-sum constraint

In this paper, we consider lasso problems with a zero-sum constraint, commonly required for the analysis of compositional data in high-dimensional spaces. A novel algorithm is proposed to solve these problems, combining a tailored active-set technique, to identify the zero variables in the optimal solution, with a 2-coordinate descent scheme. At every iteration, the algorithm chooses …
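A minimal sketch of one 2-coordinate descent step: moving along $e_i - e_j$ changes $x_i$ and $x_j$ by opposite amounts, so the zero-sum constraint $\sum_i x_i = 0$ is preserved automatically. The paper's pair-selection rule and exact update are behind the truncation; here the one-dimensional subproblem is solved by a naive numerical search as a stand-in.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def two_coordinate_step(A, b, lam, x, i, j):
    """One 2-coordinate move x_i += d, x_j -= d for the zero-sum lasso
    0.5*||Ax - b||^2 + lam*||x||_1  s.t.  sum(x) == 0.

    The 1-D problem is solved numerically as a stand-in for an exact
    update; the pair (i, j) is assumed given.
    """
    def q(d):
        z = x.copy()
        z[i] += d
        z[j] -= d
        return 0.5 * np.sum((A @ z - b) ** 2) + lam * np.sum(np.abs(z))

    d = minimize_scalar(q).x
    x_new = x.copy()
    x_new[i] += d
    x_new[j] -= d
    return x_new


rng = np.random.default_rng(1)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
x = np.zeros(10)                      # feasible: sums to zero
x = two_coordinate_step(A, b, lam=0.1, x=x, i=0, j=1)
assert abs(x.sum()) < 1e-10           # zero-sum constraint preserved
```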

A novel sequential optimality condition for smooth constrained optimization and algorithmic consequences

In the smooth constrained optimization setting, this work introduces the Domain Complementary Approximate Karush-Kuhn-Tucker (DCAKKT) condition, inspired by a sequential optimality condition recently devised for nonsmooth constrained optimization problems. It is shown that the augmented Lagrangian method can generate limit points satisfying DCAKKT, and it is proved that such a condition is not related to …
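The augmented Lagrangian method referred to here is, in its standard Powell-Hestenes-Rockafellar form for $\min f(x)$ subject to $h(x) = 0$, $g(x) \le 0$, the successive approximate minimization of
\[
L_\rho(x,\lambda,\mu) \;=\; f(x) \;+\; \frac{\rho}{2}\left(
\Bigl\| h(x) + \frac{\lambda}{\rho} \Bigr\|^2
+ \Bigl\| \max\Bigl(0,\; g(x) + \frac{\mu}{\rho}\Bigr) \Bigr\|^2
\right),
\]
with multiplier and penalty-parameter updates between outer iterations; the result above states that the limit points this scheme generates satisfy DCAKKT.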

An MISOCP-Based Decomposition Approach for the Unit Commitment Problem with AC Power Flows

Unit Commitment (UC) and Optimal Power Flow (OPF) are two fundamental problems in short-term electric power systems planning that are traditionally solved sequentially. The state of the art mostly uses a direct current (DC) approximation of the power flow equations at the UC level, and the resulting generator commitments are sent as input to the OPF level. However, such an …
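To make the approximation concrete: the AC active power flow on a line $(i,j)$ with conductance $g_{ij}$, susceptance $b_{ij}$, voltage magnitudes $V_i, V_j$, and angle difference $\theta_{ij} = \theta_i - \theta_j$ is
\[
P_{ij} \;=\; V_i^2\, g_{ij} \;-\; V_i V_j \bigl( g_{ij} \cos\theta_{ij} + b_{ij}\sin\theta_{ij} \bigr),
\]
whereas the DC approximation assumes flat voltages ($V_i \approx 1$), negligible losses ($g_{ij} \approx 0$), and small angles ($\sin\theta_{ij} \approx \theta_{ij}$), yielding the linear relation $P_{ij} = (\theta_i - \theta_j)/x_{ij}$ with line reactance $x_{ij}$. This linearization is what the UC level traditionally relies on.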

Modeling Design and Control Problems Involving Neural Network Surrogates

We consider nonlinear optimization problems that involve surrogate models represented by neural networks. We first demonstrate how to directly embed neural network evaluations into optimization models, highlight a difficulty with this approach that can prevent convergence, and then characterize stationarity of such models. We then present two alternative formulations of these problems in the specific …
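One way to see the convergence difficulty alluded to: directly embedding a ReLU network introduces the layer-wise constraints (a standard full-space embedding; the paper's truncated details are assumed here)
\[
z^{\ell} \;=\; \max\bigl(0,\; W^{\ell} z^{\ell-1} + b^{\ell}\bigr), \quad \ell = 1,\dots,L, \qquad z^{0} = x,
\]
whose $\max$ is not differentiable at its kinks, so derivative-based NLP solvers can stall at non-stationary points. A common remedy is to replace $\max(0,t)$ by a smooth surrogate such as the softplus $\sigma_\tau(t) = \tau \log(1 + e^{t/\tau})$, which recovers the ReLU as $\tau \downarrow 0$.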