An augmented Lagrangian method exploiting an active-set strategy and second-order information

In this paper, we consider nonlinear optimization problems with nonlinear equality constraints and bound constraints on the variables. For the solution of such problems, many augmented Lagrangian methods have been defined in the literature. Here, we propose to modify one of these algorithms, namely ALGENCAN by Andreani et al., in such a way as to incorporate … Read more
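
For orientation only (a generic sketch with generic notation, not this paper's formulation): ALGENCAN-style augmented Lagrangian methods for equality constraints h(x) = 0 and bounds ℓ ≤ x ≤ u typically alternate between approximately solving the bound-constrained subproblem

\[
\min_{\ell \le x \le u} \; L_{\rho}(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top} h(x) \;+\; \tfrac{\rho}{2}\,\|h(x)\|^{2}
\]

and updating the multiplier estimate via \(\lambda \leftarrow \lambda + \rho\, h(x)\), increasing the penalty parameter \(\rho\) when feasibility does not improve sufficiently.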

Penetration depth between two convex polyhedra: An efficient global optimization approach

During the detailed design phase of an aerospace program, one of the most important consistency checks is to ensure that no two distinct objects occupy the same physical space. Since exact geometrical modeling is usually intractable, geometry models are discretized, which often introduces small interferences not present in the fully detailed model. In this paper, … Read more
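
For context, a standard way to state the quantity in the title (a generic definition; the paper's exact optimization model is not reproduced here): for two overlapping convex bodies A and B, the penetration depth is the norm of the smallest translation that separates them,

\[
\mathrm{PD}(A,B) \;=\; \min\bigl\{\, \|t\| \;:\; \operatorname{int}(A + t) \cap B = \emptyset \,\bigr\}.
\]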

An Infeasible Interior-point Arc-search Algorithm for Nonlinear Constrained Optimization

In this paper, we propose an infeasible arc-search interior-point algorithm for solving nonlinear programming problems. Most algorithms based on interior-point methods are categorized as line search in the sense that they compute the next iterate on a straight line determined by a search direction which approximates the central path. The proposed arc-search interior-point algorithm uses … Read more
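
To make the distinction concrete (a sketch based on the arc-search literature, not necessarily this paper's exact construction): a line-search interior-point step moves along \((x,\lambda,s) + \alpha\,(\Delta x, \Delta\lambda, \Delta s)\), whereas an arc-search method follows an ellipse that approximates the central path, e.g.

\[
\bigl(x(\alpha),\lambda(\alpha),s(\alpha)\bigr) \;=\; (x,\lambda,s) \;-\; (\dot{x},\dot{\lambda},\dot{s})\,\sin\alpha \;+\; (\ddot{x},\ddot{\lambda},\ddot{s})\,(1-\cos\alpha),
\]

where the dotted quantities are first and second derivatives of the central path at the current iterate.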

A Proximal Interior Point Algorithm with Applications to Image Processing

In this article, we introduce a new proximal interior point algorithm (PIPA). This algorithm is able to handle convex optimization problems involving various constraints where the objective function is the sum of a Lipschitz differentiable term and a possibly nonsmooth one. Each iteration of PIPA involves the minimization of a merit function evaluated for decaying … Read more
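
A plausible reading of this setup (the symbols below are generic, not taken from the paper): the problem is to minimize \(f(x) + g(x)\) subject to constraints such as \(c_i(x) \le 0\), with \(f\) Lipschitz differentiable and \(g\) possibly nonsmooth, and each iteration works with a barrier-type merit function

\[
\Phi_{\mu_k}(x) \;=\; f(x) + g(x) \;-\; \mu_k \sum_{i} \ln\bigl(-c_i(x)\bigr),
\]

where the parameter \(\mu_k\) decays to zero along the iterations.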

On the complexity of an Inexact Restoration method for constrained optimization

Recent papers indicate that some algorithms for constrained optimization may exhibit worst-case complexity bounds that are very similar to those of unconstrained optimization algorithms. A natural question is whether well-established practical algorithms, perhaps with small variations, may enjoy analogous complexity results. In the present paper we show that the answer is positive with respect … Read more
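
As background (a schematic description of Inexact Restoration with generic notation, not a statement of this paper's exact scheme): each iteration first computes a restoration point \(y^k\) that improves feasibility with respect to the constraints \(h(x)=0\), e.g.

\[
\|h(y^k)\| \;\le\; r\,\|h(x^k)\| \quad \text{for some fixed } r \in [0,1),
\]

and then an optimality step that approximately minimizes the objective (or a Lagrangian model) on a linearization of the feasible set at \(y^k\), with acceptance governed by a merit function or filter.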

A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm

Interior-point methods have been shown to be very efficient for large-scale nonlinear programming. The combination with penalty methods increases their robustness due to the regularization of the constraints caused by the penalty term. In this paper a primal-dual penalty-interior-point algorithm is proposed that is based on an augmented Lagrangian approach with an ℓ2-exact penalty function. … Read more
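
To illustrate the general shape of such methods (a generic sketch, not the paper's exact merit function): for constraints \(c(x) = 0\) and bounds \(x \ge 0\), a penalty-interior-point merit function can combine an augmented Lagrangian/ℓ2-exact penalty term with a logarithmic barrier,

\[
\varphi_{\rho,\mu}(x;\lambda) \;=\; f(x) \;+\; \lambda^{\top} c(x) \;+\; \rho\,\|c(x)\|_{2} \;-\; \mu \sum_{i} \ln x_i ,
\]

with the penalty parameter \(\rho\) and barrier parameter \(\mu\) driven by the algorithm.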

On generalized-convex constrained multi-objective optimization

In this paper, we consider multi-objective optimization problems involving not necessarily convex constraints and componentwise generalized-convex (e.g., semi-strictly quasi-convex, quasi-convex, or explicitly quasi-convex) vector-valued objective functions acting between a real linear topological pre-image space and a finite-dimensional image space. For these multi-objective optimization problems, we show that the set of (strictly, weakly) … Read more
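
For reference, the standard definitions behind the generalized-convexity notions named above: a function \(f\) on a convex set \(X\) is quasi-convex if

\[
f\bigl(\lambda x + (1-\lambda)y\bigr) \;\le\; \max\{f(x), f(y)\} \qquad \text{for all } x,y \in X,\ \lambda \in [0,1],
\]

and semi-strictly quasi-convex if this inequality is strict whenever \(f(x) \neq f(y)\) and \(\lambda \in (0,1)\).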

Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary

In this paper we consider the minimization of a continuous function that is potentially not differentiable or not twice differentiable on the boundary of the feasible region. By exploiting an interior point technique, we present first- and second-order optimality conditions for this problem that reduce to the classical ones when the derivative on the boundary is … Read more
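
For comparison, the classical first-order condition such results reduce to (standard material, not specific to this paper): when \(f\) is differentiable at a solution \(x^{*}\) of \(\min\{f(x) : x \in \Omega\}\) with \(\Omega\) polyhedral,

\[
\nabla f(x^{*})^{\top}(x - x^{*}) \;\ge\; 0 \qquad \text{for all } x \in \Omega .
\]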

On High-order Model Regularization for Constrained Optimization

In two recent papers, regularization methods based on Taylor polynomial models for minimization were proposed that rely only on Hölder conditions on the higher-order derivatives employed. Grapiglia and Nesterov considered cubic regularization with a sufficient descent condition that uses the current gradient and resembles the classical Armijo criterion. Cartis, Gould, and Toint used Taylor … Read more
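
Schematically (exponents and scaling vary across the cited papers; this is only meant to fix ideas): at an iterate \(x_k\), such methods minimize a regularized \(p\)-th order Taylor model

\[
m_k(s) \;=\; T_{p}(x_k, s) \;+\; \frac{\sigma_k}{p+\beta}\,\|s\|^{\,p+\beta},
\]

where \(T_p\) is the \(p\)-th order Taylor polynomial of the objective at \(x_k\) and \(\beta \in (0,1]\) is the Hölder exponent assumed for the \(p\)-th derivative.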

Strict Constraint Qualifications and Sequential Optimality Conditions for Constrained Optimization

Sequential optimality conditions for constrained optimization are necessarily satisfied by local minimizers, independently of the fulfillment of constraint qualifications. These conditions support the employment of different stopping criteria for practical optimization algorithms. On the other hand, when an appropriate strict constraint qualification associated with some sequential optimality condition holds at a point that satisfies the … Read more
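
As a concrete instance (one of the best-known sequential optimality conditions, stated generically; the paper treats a family of such conditions): a feasible point \(x^{*}\) of \(\min\{f(x) : g(x) \le 0,\ h(x) = 0\}\) satisfies the Approximate-KKT (AKKT) condition if there are sequences \(x^{k} \to x^{*}\), \(\lambda^{k} \ge 0\) and \(\mu^{k}\) such that

\[
\nabla f(x^{k}) + \sum_{i} \lambda^{k}_{i}\,\nabla g_{i}(x^{k}) + \sum_{j} \mu^{k}_{j}\,\nabla h_{j}(x^{k}) \;\to\; 0
\qquad \text{and} \qquad
\min\bigl\{-g_{i}(x^{k}),\, \lambda^{k}_{i}\bigr\} \;\to\; 0 \ \text{ for all } i .
\]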