Inexact Restoration for Minimization with Inexact Evaluation both of the Objective Function and the Constraints

In a recent paper, an Inexact Restoration method for solving continuous constrained optimization problems was analyzed from the point of view of worst-case functional complexity and convergence. In a separate line of research, the Inexact Restoration methodology was employed to handle minimization problems with inexact evaluations and simple constraints. These two methodologies are … Read more
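
The Inexact Restoration framework alternates a restoration phase, which reduces infeasibility, with an optimization phase, which decreases the objective along the linearized feasible set. Below is a minimal toy sketch of that two-phase structure, assuming exact evaluations, a Gauss-Newton restoration step, and a fixed step size in place of the merit-based acceptance test; all of these choices are illustrative, not the paper's method.

```python
import numpy as np

# Toy problem: min f(x) = x0^2 + 2*x1^2  subject to  h(x) = x0 + x1 - 1 = 0.
f  = lambda x: x[0]**2 + 2 * x[1]**2
gf = lambda x: np.array([2 * x[0], 4 * x[1]])
h  = lambda x: np.array([x[0] + x[1] - 1.0])
J  = lambda x: np.array([[1.0, 1.0]])          # Jacobian of h

x = np.array([2.0, 2.0])
for k in range(50):
    # Restoration phase: Gauss-Newton step reducing the infeasibility ||h||.
    y = x - J(x).T @ np.linalg.solve(J(x) @ J(x).T, h(x))
    # Optimization phase: objective gradient projected onto the tangent
    # space {d : J(y) d = 0} of the constraints at the restored point.
    g, Jy = gf(y), J(y)
    d = -(g - Jy.T @ np.linalg.solve(Jy @ Jy.T, Jy @ g))
    x = y + 0.1 * d   # fixed step instead of a merit-based acceptance test
print(x, h(x))        # approaches the KKT point (2/3, 1/3)
```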

Worst-Case Complexity of an SQP Method for Nonlinear Equality Constrained Stochastic Optimization

A worst-case complexity bound is proved for a sequential quadratic optimization (commonly known as SQP) algorithm designed to solve optimization problems involving a stochastic objective function and deterministic nonlinear equality constraints. Barring additional terms that arise from the adaptivity of the monotonically nonincreasing merit parameter sequence, the proved complexity bound is … Read more
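
For intuition, here is one stochastic-SQP-style iteration with a monotonically nonincreasing merit parameter, a minimal sketch assuming an $\ell_1$ merit function $\phi(x;\tau)=\tau f(x)+\|c(x)\|_1$; the update rule shown is a common variant of this idea, not necessarily the paper's exact rule.

```python
import numpy as np

def sqp_step(x, g, c_val, Jc, H, tau_prev, sigma=0.1):
    n, m = len(x), len(c_val)
    # Solve the Newton-KKT system for the step d and the multipliers lam.
    K = np.block([[H, Jc.T], [Jc, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c_val]))
    d, lam = sol[:n], sol[n:]
    # Trial merit parameter guaranteeing that d is a descent direction for
    # phi; taking the min keeps the sequence monotonically nonincreasing.
    denom = g @ d + 0.5 * max(d @ H @ d, 0.0)
    tau_trial = np.inf if denom <= 0 else (1 - sigma) * np.sum(np.abs(c_val)) / denom
    return d, lam, min(tau_prev, tau_trial)

# Toy usage: min E[f] with f(x) = ||x||^2, subject to x0 + x1 - 1 = 0.
x, tau = np.zeros(2), 1.0
for k in range(20):
    g = 2 * x + 0.01 * np.random.randn(2)      # stochastic gradient estimate
    d, lam, tau = sqp_step(x, g, np.array([x.sum() - 1.0]),
                           np.array([[1.0, 1.0]]), np.eye(2), tau)
    x = x + 0.5 * d
print(x, tau)  # x approaches (0.5, 0.5); tau never increases
```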

A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization

The adaptive regularisation algorithm using cubics (ARC) was originally proposed for unconstrained optimization, and it enjoys excellent convergence properties and worst-case complexity bounds. In this paper, we extend ARC to solve nonlinear equality constrained optimization and propose a sequential adaptive regularisation using cubics algorithm inspired by sequential quadratic programming (SQP) methods. In each iteration of our method, the … Read more
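
As background for the extension described above, the following sketch shows a basic ARC iteration for unconstrained minimization: a cubic-regularized model is minimized, the step is accepted or rejected by an actual-versus-predicted decrease ratio, and the regularization weight adapts. Handing the subproblem to a generic solver is an illustrative simplification.

```python
import numpy as np
from scipy.optimize import minimize

def arc(f, gf, Hf, x, sigma=1.0, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        g, H = gf(x), Hf(x)
        if np.linalg.norm(g) < tol:
            break
        # Cubic model m(s) = g's + 0.5 s'Hs + (sigma/3)||s||^3 (constant f(x) dropped).
        m = lambda s: g @ s + 0.5 * s @ H @ s + sigma / 3 * np.linalg.norm(s)**3
        s = minimize(m, np.zeros_like(x)).x
        rho = (f(x) - f(x + s)) / max(-m(s), 1e-16)  # actual vs. predicted decrease
        if rho > 0.1:                     # successful: accept, relax regularization
            x, sigma = x + s, max(0.5 * sigma, 1e-8)
        else:                             # unsuccessful: reject, regularize harder
            sigma *= 2.0
    return x

# Example on the Rosenbrock function.
f  = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
gf = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                         200 * (x[1] - x[0]**2)])
Hf = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                         [-400 * x[0], 200.0]])
print(arc(f, gf, Hf, np.array([-1.2, 1.0])))  # -> approx. (1, 1)
```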

A Globally Convergent Distributed Jacobi Scheme for Block-Structured Nonconvex Constrained Optimization Problems

Motivated by the increasing availability of high-performance parallel computing, we design a distributed parallel algorithm for linearly-coupled block-structured nonconvex constrained optimization problems. Our algorithm performs Jacobi-type proximal updates of the augmented Lagrangian function, requiring only local solutions of separable block nonlinear programming (NLP) problems. We provide a cheap and explicitly computable Lyapunov function that allows … Read more
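
A minimal sketch of the Jacobi-type proximal augmented-Lagrangian idea on two linearly coupled blocks appears below; the random data, generic inner solver, and conservative proximal weight theta are all assumptions, not the paper's algorithm. Given the previous iterate, the two block subproblems are independent and could run in parallel.

```python
import numpy as np
from scipy.optimize import minimize

# min f1(x1) + f2(x2)  subject to  A1 x1 + A2 x2 = b
rng = np.random.default_rng(0)
A = [rng.standard_normal((2, 3)), rng.standard_normal((2, 3))]
b = rng.standard_normal(2)
f = [lambda x: x @ x, lambda x: np.sum((x - 1.0)**2)]

rho, theta = 10.0, 100.0                  # penalty and proximal weights
x, y = [np.zeros(3), np.zeros(3)], np.zeros(2)
for k in range(200):
    r_other = [A[1] @ x[1], A[0] @ x[0]]  # coupling terms frozen at x^k
    x_new = []
    for i in range(2):
        def aug(xi, i=i):
            r = A[i] @ xi + r_other[i] - b
            return (f[i](xi) + y @ r + rho / 2 * r @ r
                    + theta / 2 * np.sum((xi - x[i])**2))
        x_new.append(minimize(aug, x[i]).x)   # local NLP solve for block i
    x = x_new
    y = y + rho * (A[0] @ x[0] + A[1] @ x[1] - b)  # dual (multiplier) update
print(np.linalg.norm(A[0] @ x[0] + A[1] @ x[1] - b))  # coupling residual
```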

Model-Based Derivative-Free Methods for Convex-Constrained Optimization

We present a model-based derivative-free method for optimization subject to general convex constraints, which we assume are unrelaxable and accessed only through a projection operator that is cheap to evaluate. We prove global convergence and a worst-case complexity of $O(\epsilon^{-2})$ iterations and objective evaluations for nonconvex functions, matching results for the unconstrained case. We introduce … Read more
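
The following minimal sketch shows the flavor of a model-based derivative-free step when the feasible set (here, the unit ball, an assumption) is accessed only through its cheap projection operator; the linear interpolation model and the simple ratio test are illustrative, not the paper's algorithm.

```python
import numpy as np

def proj(x):                                 # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def model_grad(f, x, delta):
    # Gradient of a linear interpolation model built from the n+1 points
    # x and x + delta*e_i -- derivative-free, objective evaluations only.
    fx = f(x)
    return np.array([(f(x + delta * e) - fx) / delta for e in np.eye(len(x))])

f = lambda x: (x[0] - 2.0)**2 + (x[1] - 0.5)**2  # unconstrained min is infeasible
x, delta = np.zeros(2), 0.5
for k in range(60):
    g = model_grad(f, x, delta)
    s = proj(x - delta * g / max(np.linalg.norm(g), 1e-12)) - x   # projected step
    ratio = (f(x) - f(x + s)) / max(delta * np.linalg.norm(g), 1e-12)
    if ratio > 0.1:
        x, delta = x + s, min(2 * delta, 1.0)  # successful: accept, maybe expand
    else:
        delta *= 0.5                           # unsuccessful: shrink the radius
print(x)  # approaches the constrained minimizer on the unit circle
```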

Global Convergence of Algorithms Under Constant Rank Conditions for Nonlinear Second-Order Cone Programming

In [R. Andreani, G. Haeser, L. M. Mito, H. Ramírez C., Weak notions of nondegeneracy in nonlinear semidefinite programming, arXiv:2012.14810, 2020] the classical notion of nondegeneracy (or transversality) and Robinson’s constraint qualification have been revisited in the context of nonlinear semidefinite programming exploiting the structure of the problem, namely, its eigendecomposition. This allows formulating the … Read more
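
For background (standard Jordan-algebra material, not a contribution of the paper): the second-order cone analogue of the eigendecomposition mentioned above writes any $w=(w_0,\bar w)\in\mathbb{R}\times\mathbb{R}^{n-1}$ as

$$ w=\lambda_1 u_1+\lambda_2 u_2, \qquad \lambda_i=w_0+(-1)^i\|\bar w\|, \qquad u_i=\frac{1}{2}\begin{pmatrix}1\\ (-1)^i\,\bar w/\|\bar w\|\end{pmatrix}, $$

with an arbitrary unit vector replacing $\bar w/\|\bar w\|$ when $\bar w=0$. Then $w$ lies in the second-order cone exactly when the spectral values $\lambda_1,\lambda_2$ are both nonnegative, and nondegeneracy-type conditions can be phrased through this decomposition.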

An Accelerated Inexact Dampened Augmented Lagrangian Method for Linearly-Constrained Nonconvex Composite Optimization Problems

This paper proposes and analyzes an accelerated inexact dampened augmented Lagrangian (AIDAL) method for solving linearly-constrained nonconvex composite optimization problems. Each iteration of the AIDAL method consists of: (i) inexactly solving a dampened proximal augmented Lagrangian (AL) subproblem by calling an accelerated composite gradient (ACG) subroutine; (ii) applying a dampened and under-relaxed Lagrange multiplier update; … Read more
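
A sketch of a dampened proximal AL scheme with an under-relaxed multiplier update, mirroring steps (i)-(ii) above, is given below; illustratively, a generic inner solver replaces the ACG subroutine, and the data and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Problem: min f(x)  subject to  Ax = b,  with f smooth and nonconvex.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((2, 4)), rng.standard_normal(2)
f = lambda x: 0.5 * x @ x + 0.1 * np.sum(np.cos(x))

rho, damp, alpha = 10.0, 0.9, 0.5    # penalty, dampening, under-relaxation
x, p = np.zeros(4), np.zeros(2)
for k in range(100):
    xk = x.copy()
    # (i) dampened proximal AL subproblem, solved inexactly:
    aug = lambda z: (f(z) + (damp * p) @ (A @ z - b)
                     + rho / 2 * np.sum((A @ z - b)**2)
                     + 0.5 * np.sum((z - xk)**2))
    x = minimize(aug, xk).x
    # (ii) dampened, under-relaxed Lagrange multiplier update:
    p = damp * p + alpha * rho * (A @ x - b)
print(np.linalg.norm(A @ x - b))     # feasibility residual
```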

Constrained Optimization in the Presence of Noise

The problem of interest is the minimization of a nonlinear function subject to nonlinear equality constraints using a sequential quadratic programming (SQP) method. The minimization must be performed while observing only noisy evaluations of the objective and constraint functions. In order to obtain stability, the classical SQP method is modified by relaxing the standard Armijo … Read more
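
The relaxation idea can be sketched as follows: under the assumption that each merit-function evaluation carries noise bounded by eps_f, the Armijo sufficient-decrease test is loosened by a 2*eps_f slack so that noise alone cannot reject a good step. This is a sketch of the general noise-tolerant line-search technique, not necessarily the paper's exact condition.

```python
import numpy as np

def relaxed_armijo(phi, x, d, dphi, eps_f, c1=1e-4, beta=0.5, max_back=30):
    """phi: noisy merit function; dphi: directional derivative estimate."""
    phi0, alpha = phi(x), 1.0
    for _ in range(max_back):
        # Standard Armijo condition, relaxed by 2*eps_f to absorb the noise.
        if phi(x + alpha * d) <= phi0 + c1 * alpha * dphi + 2 * eps_f:
            return alpha
        alpha *= beta
    return alpha

# Toy usage with phi(x) = ||x||^2 plus uniform noise of size eps_f.
eps_f = 1e-3
phi = lambda x: x @ x + np.random.uniform(-eps_f, eps_f)
x = np.array([1.0, 1.0])
d = -2 * x                                    # descent direction
print(relaxed_armijo(phi, x, d, dphi=d @ (2 * x), eps_f=eps_f))
```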

Dual descent ALM and ADMM

Classical primal-dual algorithms attempt to solve $\max_{\mu}\min_{x} \mathcal{L}(x,\mu)$ by alternately minimizing over the primal variable $x$ through primal descent and maximizing over the dual variable $\mu$ through dual ascent. However, when $\mathcal{L}(x,\mu)$ is highly nonconvex with complex constraints in $x$, the minimization over $x$ may not achieve global optimality, and hence the dual ascent step … Read more
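
For reference, with $\mathcal{L}(x,\mu)=f(x)+\mu^{\top}c(x)+\frac{\rho}{2}\|c(x)\|^{2}$ the augmented Lagrangian of $\min_x f(x)$ subject to $c(x)=0$ (standard background, not the paper's contribution), the classical updates being contrasted are

$$ x^{k+1}\approx\operatorname*{arg\,min}_x \mathcal{L}(x,\mu^{k}), \qquad \mu^{k+1}=\mu^{k}+\rho\,c(x^{k+1}), $$

where the multiplier step is dual ascent: it moves $\mu$ along $\nabla_{\mu}\mathcal{L}(x^{k+1},\mu^{k})=c(x^{k+1})$ with step size $\rho$.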

A Local MM Subspace Method for Solving Constrained Variational Problems in Image Recovery

This article introduces a new Penalized Majorization-Minimization Subspace algorithm (P-MMS) for solving smooth, constrained optimization problems. In short, our approach consists of embedding a subspace algorithm in an inexact exterior penalty procedure. The subspace strategy, combined with a Majorization-Minimization step-size search, takes great advantage of the smoothness of the penalized cost function, while the penalty … Read more
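
The exterior-penalty skeleton can be sketched as below; illustratively, a generic inner solver stands in for the paper's subspace method with MM step-size search, and the toy problem and the schedule mu *= 10 are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Problem: min f(x)  subject to  g(x) <= 0.
f = lambda x: (x[0] - 3.0)**2 + (x[1] - 2.0)**2
g = lambda x: x[0] + x[1] - 1.0                # single inequality constraint

x, mu = np.zeros(2), 1.0
for k in range(8):
    # Smooth penalized subproblem f(x) + (mu/2) * max(g(x), 0)^2.
    pen = lambda z: f(z) + mu / 2 * max(g(z), 0.0)**2
    x = minimize(pen, x).x    # inner solve, warm-started at the previous x
    mu *= 10.0                # drive the penalty parameter upward
print(x, g(x))  # approaches the constrained minimizer (1, 0)
```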