New results on subgradient methods for strongly convex optimization problems with a unified analysis

We develop subgradient- and gradient-based methods for minimizing strongly convex functions under a notion of strong convexity that generalizes the standard Euclidean one. We propose a unifying framework for subgradient methods which yields two kinds of methods, namely, the Proximal Gradient Method (PGM) and the Conditional Gradient Method (CGM), and unifies several existing methods. The unifying framework provides …
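Since the abstract is truncated, a minimal sketch of the classical Euclidean special case that the paper generalizes may help orient the reader: the projected subgradient method for a $\mu$-strongly convex objective, with the standard $2/(\mu(k+1))$ step size and weighted iterate averaging. The names subgradient and project and the concrete example are illustrative assumptions, not the paper's unified scheme.

    import numpy as np

    def projected_subgradient(subgradient, project, x0, mu, iters=1000):
        """Classical Euclidean sketch: x_{k+1} = P_C(x_k - t_k g_k) with
        t_k = 2 / (mu * (k + 1)) for a mu-strongly convex objective."""
        x = np.asarray(x0, dtype=float)
        avg = np.zeros_like(x)                # weighted average of iterates
        for k in range(1, iters + 1):
            g = subgradient(x)                # any subgradient at x
            t = 2.0 / (mu * (k + 1))          # strongly convex step-size rule
            x = project(x - t * g)            # Euclidean projection onto C
            avg += 2.0 * k / (iters * (iters + 1)) * x  # weights sum to 1
        return avg

    # Example: minimize |x - 3| + 0.5 * x**2 over the box [0, 2];
    # the minimizer is x = 1.
    sol = projected_subgradient(
        subgradient=lambda x: np.sign(x - 3.0) + x,
        project=lambda x: np.clip(x, 0.0, 2.0),
        x0=np.array([0.0]), mu=1.0)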

An extension of the projected gradient method to a Banach space setting with application in structural topology optimization

For the minimization of a nonlinear cost functional under convex constraints, the relaxed projected gradient process is a well-known method. The analysis is classically performed in a Hilbert space. We generalize this method to functionals which are differentiable in a Banach space. The search direction is calculated by a quadratic approximation of the cost …
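For orientation, here is a minimal sketch of the classical Hilbert-space (here Euclidean) relaxed projected gradient step that the paper generalizes: the direction $d_k = P_C(x_k - \alpha \nabla f(x_k)) - x_k$ minimizes a quadratic model of the cost over $C$, and the iterate is relaxed by a factor $\omega \in (0,1]$. The fixed parameter values and the example are assumptions for illustration, not the Banach-space scheme of the paper.

    import numpy as np

    def relaxed_projected_gradient(grad, project, x0, alpha=0.1, omega=0.5, iters=500):
        """Classical Hilbert-space sketch: d_k = P_C(x_k - alpha*grad f(x_k)) - x_k
        minimizes the quadratic model grad^T d + ||d||^2/(2*alpha) over C."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            d = project(x - alpha * grad(x)) - x   # direction from quadratic model
            x = x + omega * d                      # relaxed step stays in convex C
            if np.linalg.norm(d) < 1e-10:          # d = 0 iff x is stationary
                break
        return x

    # Example: minimize 0.5*||x - c||^2 over the unit ball; solution is c/||c||.
    c = np.array([2.0, 0.0])
    x_star = relaxed_projected_gradient(
        grad=lambda x: x - c,
        project=lambda x: x / max(1.0, np.linalg.norm(x)),
        x0=np.zeros(2))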

Partial Relaxation of Equality-constrained Programs

This paper presents a reformulation that is a natural “by-product” of the “variable endogenization” process for equality-constrained programs. The method results in a partial relaxation of the constraints, which in turn confers some computational advantages. A fully annotated example illustrates the technique and presents some comparative numerical results.

Citation: Siwale, I.: Partial Relaxation of Equality-constrained Programs. Technical …

Optimality and complexity for constrained optimization problems with nonconvex regularization

In this paper, we consider a class of constrained optimization problems where the feasible set is a general closed convex set and the objective function has a nonsmooth, nonconvex regularizer. Such regularizers include the widely used SCAD, MCP, logistic, fraction, hard-thresholding and non-Lipschitz $L_p$ penalties as special cases. Using the theory of the generalized directional …
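Two of the regularizers named above can be written down explicitly; the sketch below evaluates the standard SCAD (Fan and Li, $a > 2$) and MCP (Zhang, $\gamma > 1$) penalties. These are the textbook definitions, not formulas taken from the truncated abstract.

    import numpy as np

    def scad(t, lam, a=3.7):
        """SCAD penalty: linear up to lam, quadratic blend up to a*lam,
        then constant at lam**2 * (a + 1) / 2."""
        t = np.abs(t)
        return np.where(
            t <= lam, lam * t,
            np.where(t <= a * lam,
                     (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                     lam**2 * (a + 1) / 2))

    def mcp(t, lam, gamma=3.0):
        """Minimax concave penalty: concave quadratic up to gamma*lam,
        then constant at gamma * lam**2 / 2."""
        t = np.abs(t)
        return np.where(t <= gamma * lam,
                        lam * t - t**2 / (2 * gamma),
                        gamma * lam**2 / 2)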

Copositivity for second-order optimality conditions in general smooth optimization problems

Second-order local optimality conditions involving copositivity of the Hessian of the Lagrangian on the reduced linearization cone have the advantage that there is only a small gap between the sufficient condition (the Hessian is strictly copositive) and the necessary condition (the Hessian is copositive). In this respect, they provide a proper generalization of convexity of the Lagrangian. We …
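In symbols, writing $L$ for the Lagrangian and $\mathcal{C}(x^*)$ for the reduced linearization cone at a candidate point $x^*$ with multipliers $\lambda^*$, the two conditions read as follows (a standard formulation, sketched here for orientation; the necessary part requires a suitable constraint qualification):

    \[
    \text{necessary:}\quad d^{\top} \nabla^2_{xx} L(x^*,\lambda^*)\, d \ \ge\ 0
      \quad \text{for all } d \in \mathcal{C}(x^*),
    \]
    \[
    \text{sufficient:}\quad d^{\top} \nabla^2_{xx} L(x^*,\lambda^*)\, d \ >\ 0
      \quad \text{for all } d \in \mathcal{C}(x^*)\setminus\{0\}.
    \]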

A cone-continuity constraint qualification and algorithmic consequences

Every local minimizer of a smooth constrained optimization problem satisfies the sequential Approximate Karush-Kuhn-Tucker (AKKT) condition. This optimality condition is used to define the stopping criteria of many practical nonlinear programming algorithms. It is natural to ask for conditions on the constraints under which AKKT implies KKT. These conditions will be called Strict Constraint Qualifications …
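For concreteness, for $\min f(x)$ subject to $g(x) \le 0$ and $h(x) = 0$, AKKT at $x^*$ requires sequences $x^k \to x^*$ and multipliers $\lambda^k \ge 0$, $\mu^k$ such that (the standard definition, sketched here)

    \[
    \nabla f(x^k) + \sum_i \lambda_i^k \nabla g_i(x^k)
      + \sum_j \mu_j^k \nabla h_j(x^k) \to 0,
    \qquad
    \min\{-g_i(x^k),\ \lambda_i^k\} \to 0 \ \text{for all } i.
    \]

KKT is the special case in which these relations hold exactly at $x^*$ itself; the question above is what assumptions on the constraints force this.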

An external penalty-type method for multicriteria

We propose an extension of the classical real-valued external penalty method to the multicriteria optimization setting. Like its single-objective counterpart, it requires an external penalty function for the constraint set, as well as an exogenous divergent sequence of nonnegative real numbers, the so-called penalty parameters; differently from the scalar procedure, however, the vector-valued …
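The single-objective counterpart mentioned above can be sketched in a few lines: an external (here quadratic) penalty $P$ vanishes on the feasible set and is positive outside it, and the penalty parameters $\rho_k$ diverge. The inner minimization oracle and the concrete example are assumptions for illustration, not the vector-valued scheme of the paper.

    import numpy as np
    from scipy.optimize import minimize

    def external_penalty(f, P, x0, rhos=(1.0, 10.0, 100.0, 1000.0)):
        """Classical scalar external penalty: minimize f + rho_k * P
        for an exogenous divergent sequence of penalty parameters rho_k."""
        x = np.asarray(x0, dtype=float)
        for rho in rhos:
            res = minimize(lambda z: f(z) + rho * P(z), x)
            x = res.x                         # warm-start the next subproblem
        return x

    # Example: minimize (x1 - 2)^2 + (x2 - 2)^2 s.t. x1 + x2 <= 1,
    # with external penalty P(x) = max(0, x1 + x2 - 1)^2.
    sol = external_penalty(
        f=lambda z: (z[0] - 2)**2 + (z[1] - 2)**2,
        P=lambda z: max(0.0, z[0] + z[1] - 1.0)**2,
        x0=np.zeros(2))
    # sol approaches the true solution (0.5, 0.5) as rho grows.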

On the Performance of SQP Methods for Nonlinear Optimization

This paper concerns some practical issues associated with the formulation of sequential quadratic programming (SQP) methods for large-scale nonlinear optimization. SQP methods find an approximate solution of a sequence of quadratic programming (QP) subproblems in which a quadratic model of the objective function is minimized subject to the linearized constraints. Extensive numerical results are given …
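For orientation, the QP subproblem mentioned above has the generic form (stated here for inequality constraints $c(x) \ge 0$ with a model Hessian $H_k$; a textbook sketch, not the specific formulation studied in the paper):

    \[
    \min_{p}\ \ \nabla f(x_k)^{\top} p + \tfrac{1}{2}\, p^{\top} H_k\, p
    \qquad \text{subject to} \qquad c(x_k) + J(x_k)\, p \ \ge\ 0,
    \]

where $J(x_k)$ denotes the constraint Jacobian at $x_k$ and the next iterate is formed from the QP solution $p_k$.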

A trust-funnel method for nonlinear optimization problems with general nonlinear constraints and its application to derivative-free optimization

A trust-funnel method is proposed for solving nonlinear optimization problems with general nonlinear constraints. It extends the one presented by Gould and Toint (Math. Prog., 122(1):155-196, 2010), originally proposed for equality-constrained optimization problems only, to problems with both equality and inequality constraints, in which simple bounds are also considered. Like the original one, our method …