Optimality conditions for nonlinear semidefinite programming via squared slack variables

In this work, we derive second-order optimality conditions for nonlinear semidefinite programming (NSDP) problems by reformulating them as ordinary nonlinear programming problems using squared slack variables. We first consider the correspondence between Karush-Kuhn-Tucker points and regularity conditions for the general NSDP and for its reformulation via slack variables. Then, we obtain a pair of “no-gap” … Read more
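The squared-slack-variable reformulation mentioned above can be sketched as follows; the symbols $f$, $G$, and $Y$ are illustrative, and the precise form used in the paper may differ:

```latex
% NSDP with a matrix inequality constraint, G mapping into the
% space S^m of m x m symmetric matrices:
\min_{x \in \mathbb{R}^n} \; f(x)
  \quad \text{s.t.} \quad G(x) \succeq 0 .

% Introducing a symmetric slack matrix Y and "squaring" it turns the
% conic constraint into an ordinary nonlinear equality constraint:
\min_{x \in \mathbb{R}^n,\; Y \in \mathcal{S}^m} \; f(x)
  \quad \text{s.t.} \quad G(x) - Y^2 = 0 .
```

Since $Y^2$ is always positive semidefinite for symmetric $Y$, feasibility in the reformulated problem implies $G(x) \succeq 0$, which is what lets standard nonlinear programming optimality theory be applied to the NSDP.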

An external penalty-type method for multicriteria optimization

We propose an extension of the classical real-valued external penalty method to the multicriteria optimization setting. Like its single-objective counterpart, it requires an external penalty function for the constraint set, as well as an exogenous divergent sequence of nonnegative real numbers, the so-called penalty parameters; differently from the scalar procedure, however, the vector-valued … Read more
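For context, the classical scalar scheme that this work extends can be sketched on a toy problem. This is not the authors' vector-valued method; the objective, constraint, and step-size choice below are illustrative assumptions:

```python
# Classical external penalty method, illustrated on
#   minimize f(x) = x^2  subject to  x >= 1,
# whose solution is x* = 1. The external penalty
#   P(x) = max(0, 1 - x)^2
# vanishes exactly on the feasible set, and the penalty
# parameters rho_k form a divergent nonnegative sequence.

def penalized_min(rho, x0=0.0, iters=200):
    """Minimize f(x) + rho * P(x) by gradient descent with a safe step."""
    lr = 0.5 / (2.0 + 2.0 * rho)   # below the inverse Lipschitz constant
    x = x0
    for _ in range(iters):
        grad_f = 2.0 * x                       # gradient of f(x) = x^2
        grad_P = -2.0 * max(0.0, 1.0 - x)      # gradient of the penalty
        x -= lr * (grad_f + rho * grad_P)
    return x

# As rho_k -> infinity, the unconstrained minimizers rho/(1 + rho)
# approach the constrained solution x* = 1.
for rho in [1.0, 10.0, 100.0, 1000.0]:
    print(f"rho = {rho:7.1f}   x = {penalized_min(rho):.6f}")
```

Each subproblem minimizer is infeasible but approaches the feasible set as the penalty parameter grows, which is the characteristic behavior of an *external* (as opposed to interior/barrier) penalty method.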

Differentiable exact penalty functions for nonlinear second-order cone programs

We propose a method to solve nonlinear second-order cone programs (SOCPs), based on a continuously differentiable exact penalty function. The construction of the penalty function is given by incorporating a multipliers estimate in the augmented Lagrangian for SOCPs. Under the nondegeneracy assumption and the strong second-order sufficient condition, we show that a generalized Newton method … Read more

Inexact projected gradient method for vector optimization

In this work, we propose an inexact projected gradient-like method for solving smooth constrained vector optimization problems. In the unconstrained case, we retrieve the steepest descent method introduced by Graña Drummond and Svaiter. In the constrained setting, the method we present extends the exact one proposed by Graña Drummond and Iusem, since it admits relative … Read more

A Gauss-Newton approach for solving constrained optimization problems using differentiable exact penalties

We propose a Gauss-Newton-type method for nonlinear constrained optimization using the exact penalty introduced recently by Andre and Silva for variational inequalities. We extend their penalty function to both equality and inequality constraints using a weak regularity assumption, and as a result, we obtain a continuously differentiable exact penalty function and a new reformulation of … Read more

On the convergence of the projected gradient method for vector optimization

In 2004, Graña Drummond and Iusem proposed an extension of the projected gradient method to constrained vector optimization problems. In that method, an Armijo-like rule, implemented with a backtracking procedure, was used to determine the step lengths. The authors showed only stationarity of all cluster points and, for another version of the algorithm (with … Read more
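To make the ingredients concrete, here is a minimal sketch of the single-objective analogue of such a scheme — projected gradient with an Armijo-like backtracking rule — on a toy problem; the objective, feasible set, and parameter values are illustrative assumptions, not the vector-valued method analyzed in the paper:

```python
# Projected gradient with Armijo backtracking, illustrated on
#   minimize f(x) = (x - 2)^2  over  C = [0, 1],
# whose solution is x* = 1 (the boundary point closest to 2).

def proj(x):
    """Euclidean projection onto C = [0, 1]."""
    return min(1.0, max(0.0, x))

def f(x):
    return (x - 2.0) ** 2

def grad(x):
    return 2.0 * (x - 2.0)

def projected_gradient(x0=0.0, beta=1.0, sigma=1e-4, iters=100):
    x = x0
    for _ in range(iters):
        # Feasible descent direction from the projection step.
        v = proj(x - beta * grad(x)) - x
        if abs(v) < 1e-12:          # x is stationary for the projected method
            break
        # Armijo-like rule with backtracking on the step length t.
        t = 1.0
        while f(x + t * v) > f(x) + sigma * t * grad(x) * v:
            t *= 0.5
        x += t * v
    return x
```

The backtracking loop halves the trial step until the Armijo sufficient-decrease test holds; convergence analyses of the kind discussed above concern what can be said about the cluster points of the sequence this iteration generates.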