A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework

We propose a new family of subgradient- and gradient-based methods that converge with optimal complexity for convex optimization problems whose feasible region is simple enough. This includes cases where the objective function is non-smooth, smooth, has composite/saddle structure, or is given by an inexact oracle model. We unify the way of constructing the subproblems which …
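For orientation, a minimal sketch of the kind of method this family builds on is given below. It is a generic projected subgradient iteration, not the unified framework of the paper, and it assumes a convex objective accessed through a subgradient oracle and a feasible set simple enough that Euclidean projections are cheap.

```python
# Minimal sketch (illustrative only): projected subgradient method for
# minimizing a convex f over a simple convex set C.
import numpy as np

def projected_subgradient(subgrad, project, x0, steps=1000, R=1.0):
    """subgrad(x) returns any subgradient of f at x (oracle assumption);
    project(x) is the Euclidean projection onto C (cheap for "simple" C)."""
    x = project(np.asarray(x0, dtype=float))
    for k in range(1, steps + 1):
        g = subgrad(x)
        step = R / (np.linalg.norm(g) * np.sqrt(k) + 1e-12)  # classical O(1/sqrt(k)) rule
        x = project(x - step * g)
    return x

# Example: minimize ||x - a||_1 over the unit box [0, 1]^3.
a = np.array([2.0, -1.0, 0.3])
sol = projected_subgradient(lambda x: np.sign(x - a),
                            lambda x: np.clip(x, 0.0, 1.0),
                            x0=np.zeros(3))
```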

A derivative-free trust-funnel method for equality-constrained nonlinear optimization

In this work, we look into new derivative-free methods for solving equality-constrained optimization problems. Of particular interest are trust-region techniques, which have been investigated for the unconstrained and bound-constrained cases. For the equality-constrained case, we introduce a derivative-free adaptation of the trust-funnel method combined with a self-correcting geometry scheme and present some encouraging …

A trust-region derivative-free algorithm for constrained optimization

We propose a trust-region algorithm for constrained optimization problems in which the derivatives of the objective function are not available. In each iteration, the objective function is approximated by a model obtained by quadratic interpolation, which is then minimized within the intersection of the feasible set with the trust region. Since the constraints are handled …
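For readers unfamiliar with this class of methods, a bare-bones sketch of such a step is given below. It illustrates the general quadratic-interpolation/trust-region idea rather than the algorithm of the paper, and takes the feasible set to be a simple box for concreteness.

```python
# Illustrative derivative-free trust-region step: fit a quadratic model to
# sampled objective values by least squares, then minimize it over the
# intersection of the trust region with the (box) feasible set.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def features(s):
    # Basis [1, s, upper-triangular quadratic terms]; the model is features(s) @ coef.
    quad = np.outer(s, s)[np.triu_indices(s.size)]
    return np.concatenate(([1.0], s, quad))

def fit_quadratic_model(center, points, fvals):
    Phi = np.array([features(p - center) for p in points])
    coef, *_ = np.linalg.lstsq(Phi, np.asarray(fvals, dtype=float), rcond=None)
    return coef

def trust_region_step(center, coef, bounds, radius):
    model = lambda s: features(s) @ coef
    ball = NonlinearConstraint(lambda s: s @ s, 0.0, radius**2)        # trust region
    box = [(lo - c, hi - c) for (lo, hi), c in zip(bounds, center)]    # shifted feasible box
    res = minimize(model, np.zeros(len(center)), method="trust-constr",
                   bounds=box, constraints=[ball])
    return center + res.x
```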

Problem Formulations for Simulation-based Design Optimization using Statistical Surrogates and Direct Search

Typical challenges of simulation-based design optimization include unavailable gradients and unreliable approximations thereof, expensive function evaluations, numerical noise, multiple local optima and the failure of the analysis to return a value to the optimizer. One possible remedy to alleviate these issues is to use surrogate models in lieu of the computational models or simulations and …

On QPCCs, QCQPs and Completely Positive Programs

This paper studies several classes of nonconvex optimization problems defined over convex cones, establishing connections between them and demonstrating that they can be equivalently formulated as convex completely positive programs. The problems studied include quadratically constrained quadratic programs (QCQPs), quadratic programs with complementarity constraints (QPCCs), and rank-constrained semidefinite programs. Our results …
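To give a flavor of the kind of equivalence involved (this is the classical completely positive representation of Burer for linearly constrained quadratic programs with binary variables, stated here only as an illustration and not necessarily the formulation used in the paper): under the assumption that the linear constraints imply \(0 \le x_j \le 1\) for all \(j \in B\), the nonconvex program
\[
\min_{x \ge 0} \; x^{\top} Q x + 2 c^{\top} x
\quad \text{s.t.} \quad a_i^{\top} x = b_i \ (i = 1,\dots,m), \quad x_j \in \{0,1\} \ (j \in B)
\]
has the same optimal value as the convex completely positive program
\[
\min_{x, X} \; \langle Q, X \rangle + 2 c^{\top} x
\quad \text{s.t.} \quad a_i^{\top} x = b_i, \quad a_i^{\top} X a_i = b_i^{2} \ (i = 1,\dots,m), \quad X_{jj} = x_j \ (j \in B), \quad
\begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \in \mathcal{C}^{*},
\]
where \(\mathcal{C}^{*}\) denotes the cone of completely positive matrices.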

Characterization of properly optimal elements with variable ordering structures

In vector optimization with a variable ordering structure, the partial ordering defined by a convex cone is replaced by a whole family of convex cones, one associated with each element of the space. Recent publications have begun to develop a comprehensive theory for these vector optimization problems; in this context, notions of proper efficiency …
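For orientation, the two central solution notions in this setting are commonly formalized as follows (notation may differ from the paper): given a set \(A \subseteq Y\) and a cone-valued map \(\mathcal{K}\) assigning a convex cone \(\mathcal{K}(y) \subseteq Y\) to each \(y \in Y\),
\[
\bar{y} \in A \text{ is minimal if } (\bar{y} - \mathcal{K}(\bar{y})) \cap A = \{\bar{y}\},
\qquad
\bar{y} \in A \text{ is nondominated if no } y \in A \setminus \{\bar{y}\} \text{ satisfies } \bar{y} \in y + \mathcal{K}(y).
\]
Proper optimality then strengthens these notions, typically by requiring minimality with respect to suitably enlarged cones.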

New active set identification for general constrained optimization and minimax problems

The purpose of this paper is to discuss the problem of identifying the active constraints for general constrained nonlinear programming and constrained minimax problems at an isolated local solution. Facchinei et al. [F. Facchinei, A. Fischer, and C. Kanzow, On the accurate identification of active constraints, SIAM J. Optim., 9 (1998), pp. 14–32] proposed an effective technique …
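Loosely speaking (the precise residuals and conditions in the paper and in Facchinei–Fischer–Kanzow differ in the details), the identification idea is: near a solution of a problem with inequality constraints \(g_i(x) \le 0\), estimate the active set from a primal–dual pair \((x, \lambda)\) by
\[
\mathcal{A}(x, \lambda) \;=\; \{\, i : -g_i(x) \le \rho(x, \lambda) \,\},
\]
where the identification function \(\rho\) is built from a KKT residual and tends to zero as \((x, \lambda)\) approaches the solution, but more slowly than the distance to it, so that \(\mathcal{A}(x, \lambda)\) coincides with the true active set once \((x, \lambda)\) is close enough.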

An Interior-Point Trust-Funnel Algorithm for Nonlinear Optimization

We present an interior-point trust-funnel algorithm for solving large-scale nonlinear optimization problems. The method is based on an approach proposed by Gould and Toint (Math Prog 122(1):155–196, 2010) that focused on solving equality-constrained problems. Our method is similar in that it achieves global convergence guarantees by combining a trust-region methodology with a funnel mechanism, …
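The funnel mechanism can be summarized roughly as follows (the actual tests and updates in the paper are more involved): with \(\theta(x) = \|c(x)\|\) measuring constraint violation, the algorithm maintains a monotonically decreasing bound \(\theta_k^{\max}\) and accepts a trial step \(s_k\) only if it stays inside the funnel,
\[
\theta(x_k + s_k) \;\le\; \theta_k^{\max},
\]
with \(\theta_{k+1}^{\max} < \theta_k^{\max}\) after iterations devoted mainly to improving feasibility, so that infeasibility is driven to zero without resorting to a penalty function or a filter.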

Iterative Reweighted Linear Least Squares for Exact Penalty Subproblems on Product Sets

We present two matrix-free methods for solving exact penalty subproblems on product sets that arise when solving large-scale optimization problems. The first approach is a novel iterative reweighting algorithm (IRWA), which iteratively minimizes quadratic models of relaxed subproblems while automatically updating a relaxation vector. The second approach is based on alternating direction augmented Lagrangian (ADAL) …
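As background on the reweighting idea (this is the classical iteratively reweighted least-squares scheme for an ℓ1-type objective, not the IRWA or ADAL algorithms of the paper), a minimal sketch:

```python
# Classical IRLS for min_x ||A x - b||_1: each iteration solves a weighted
# linear least-squares problem whose weights come from the current residuals,
# since w_i * r_i^2 with w_i = 1/|r_i| reproduces |r_i|.
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]               # plain least-squares start
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)                # relaxation: cap weights via eps
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Tiny usage example: robust fit of an overdetermined system with outliers.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(30)
b[::7] += 5.0                                               # a few gross outliers
x_l1 = irls_l1(A, b)
```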