An adaptive cubic regularisation algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

The adaptive cubic overestimation algorithm described in Cartis, Gould and Toint (2007) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, but without any Lipschitz continuity requirement on the objective’s Hessian. A worst-case complexity analysis in … Read more
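
For orientation, the step in cubic regularisation methods of this type is computed from a cubic model of the objective over the feasible set; the display below shows the standard model of the unconstrained ARC framework restricted to a convex domain, with B_k, sigma_k and the set C used purely for illustration (the paper's precise subproblem and termination rules are given there).

\[
  s_k \approx \arg\min_{x_k + s \in \mathcal{C}} \; m_k(s) := f(x_k) + \nabla f(x_k)^{T} s + \tfrac{1}{2} s^{T} B_k s + \tfrac{\sigma_k}{3}\,\|s\|^{3},
\]

where B_k is a (possibly approximate) Hessian and the regularisation weight sigma_k is adapted from iteration to iteration in place of a trust-region radius; as stated above, no Lipschitz continuity of the Hessian is required for the convergence results.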

A New Relaxation Scheme for Mathematical Programs with Equilibrium Constraints

We present a new relaxation scheme for mathematical programs with equilibrium constraints (MPEC), where the complementarity constraints are replaced by a reformulation that is exact for the complementarity conditions corresponding to sufficiently non-degenerate complementarity components and relaxes only the remaining complementarity conditions. A positive parameter determines to what extent the complementarity conditions are relaxed. The … Read more
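
As a point of reference, an MPEC contains complementarity constraints of the form below; a classical Scholtes-type relaxation, shown only for illustration and not identical to the scheme proposed here, replaces the product condition by an inequality governed by a relaxation parameter t > 0:

\[
  0 \le G_i(x) \perp H_i(x) \ge 0
  \quad\leadsto\quad
  G_i(x) \ge 0,\; H_i(x) \ge 0,\; G_i(x)\,H_i(x) \le t .
\]

The scheme of this paper differs in that it keeps the exact complementarity conditions for sufficiently non-degenerate components and relaxes only the remaining ones.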

A proximal method for composite minimization

We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions … Read more
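
To make the framework concrete: for a composite objective f(x) = h(c(x)), with h convex or prox-regular and c smooth, the subproblem described above would typically take the prox-linear form below (the notation mu_k for the regularization weight is ours, not the paper's):

\[
  d_k \in \arg\min_{d} \; h\bigl(c(x_k) + \nabla c(x_k)\,d\bigr) + \tfrac{\mu_k}{2}\,\|d\|^{2}.
\]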

Nonlinear Stepsize Control, Trust Regions and Regularizations for Unconstrained Optimization

A general class of algorithms for unconstrained optimization is introduced, which subsumes the classical trust-region algorithm and two of its newer variants, as well as the cubic and quadratic regularization methods. A unified theory of global convergence to first-order critical points is then described for this class. An extension to projection-based trust-region algorithms for nonlinear … Read more
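
The two classical members of this class can be written side by side; with m_k a local quadratic model of f at x_k, the trust-region step and the cubic-regularised step are

\[
  s_k^{TR} \in \arg\min_{\|s\| \le \Delta_k} m_k(s),
  \qquad
  s_k^{CR} \in \arg\min_{s} \; m_k(s) + \tfrac{\sigma_k}{3}\,\|s\|^{3},
\]

and the unified theory views the updates of Delta_k and sigma_k as two instances of a more general nonlinear stepsize control. (This display only illustrates the two special cases named in the abstract, not the general rule introduced in the paper.)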

A globally convergent primal-dual interior-point 3D filter method for nonlinear SDP

This paper proposes a primal-dual interior-point filter method for nonlinear semidefinite programming, which is the first multidimensional (three-dimensional) filter method for interior-point methods, and indeed for constrained optimization in general. A new definition of the filter entries is proposed that differs substantially from those used in existing filter methods. A mixed norm is used … Read more
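
The abstract does not spell out the three filter entries, but the mechanism of any multidimensional filter is the same dominance test; the short Python sketch below (entry contents and pruning rule are generic placeholders, not the paper's definitions, and practical filters add a small acceptance envelope) shows how a three-entry filter accepts or rejects a trial point.

```python
def dominates(a, b):
    """Entry a dominates entry b if a is no worse in every filter measure."""
    return all(ai <= bi for ai, bi in zip(a, b))

class ThreeEntryFilter:
    """Generic multidimensional filter: a trial triple is acceptable if it is
    not dominated by any stored entry; on acceptance, entries that the new
    triple dominates are pruned."""
    def __init__(self):
        self.entries = []

    def acceptable(self, trial):
        return not any(dominates(e, trial) for e in self.entries)

    def add(self, trial):
        self.entries = [e for e in self.entries if not dominates(trial, e)]
        self.entries.append(trial)
```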

A globally convergent primal-dual interior-point filter method for nonlinear programming: new filter optimality measures and computational results

In this paper we modify the original primal-dual interior-point filter method proposed in [18] for the solution of nonlinear programming problems. We introduce two new optimality filter entries based on the objective function, and thus better suited for the purposes of minimization, and propose conditions for using inexact Hessians. We show that the global convergence … Read more
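
For orientation, a filter entry pairs an infeasibility measure with an optimality measure; a caricature of the change described above is

\[
  \text{classical entry: } \bigl(\theta(x),\, \|\nabla_x L(x,y)\|\bigr)
  \qquad\text{vs.}\qquad
  \text{objective-based entry: } \bigl(\theta(x),\, f(x)\bigr),
\]

where theta measures constraint violation. The exact entries of [18] and the two new entries introduced here are defined in the respective papers; the display only indicates the sense in which objective-based entries are better suited to minimization.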

Primal interior point method for minimization of generalized minimax functions

In this report, we propose a primal interior-point method for large sparse generalized minimax optimization. After a short introduction, where the problem is stated, we introduce the basic equations of the Newton method applied to the KKT conditions and propose a primal interior-point method. Next we describe the basic algorithm and give more details concerning … Read more
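
One standard way to see where the interior point enters: a minimax objective can be rewritten with an auxiliary variable and a logarithmic barrier, after which Newton's method is applied to the KKT conditions of the barrier subproblem. The plain minimax case shown below is only the simplest instance of the generalized minimax functions treated in the report, and not necessarily its exact formulation:

\[
  \min_x \max_{1 \le i \le m} f_i(x)
  \;\;\Longleftrightarrow\;\;
  \min_{x,z}\; z \ \ \text{s.t. } f_i(x) \le z,
  \qquad
  \min_{x,z}\; z - \mu \sum_{i=1}^{m} \log\bigl(z - f_i(x)\bigr).
\]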

A globally convergent trust-region SQP method without a penalty function for nonlinearly constrained optimization

In this paper, we propose a new trust-region SQP method, which uses no penalty function, for solving nonlinearly constrained optimization problems. The method alternates between two phases: a feasibility restoration phase and an objective function minimization phase. The global convergence of the proposed method is established. Citation: Cooperative Research Report … Read more
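
A minimal sketch of the two-phase alternation on a toy equality-constrained problem, using plain gradient-type steps instead of trust-region SQP subproblems; it illustrates only the idea of switching between restoring feasibility and reducing the objective, not the paper's acceptance rules or switching conditions.

```python
import numpy as np

# Toy problem used only to illustrate the two-phase idea:
#   minimize f(x) = x[0]**2 + x[1]**2   subject to  c(x) = x[0] + x[1] - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
grad_f = lambda x: 2.0 * x
c = lambda x: x[0] + x[1] - 1.0
grad_c = np.array([1.0, 1.0])

def two_phase(x, tol=1e-8, steps=200):
    for _ in range(steps):
        if abs(c(x)) > tol:
            # Feasibility restoration phase: Gauss-Newton step reducing |c(x)|.
            x = x - c(x) * grad_c / grad_c.dot(grad_c)
        else:
            # Objective minimization phase: gradient step projected onto the
            # tangent space of the constraint, so feasibility is preserved.
            g = grad_f(x)
            g_tan = g - grad_c * (grad_c.dot(g) / grad_c.dot(grad_c))
            if np.linalg.norm(g_tan) <= tol:
                return x
            x = x - 0.5 * g_tan
    return x

x_star = two_phase(np.array([2.0, -3.0]))   # converges to approximately [0.5, 0.5]
```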

A primal-dual interior point method for nonlinear semidefinite programming

In this paper, we consider a primal-dual interior point method for solving nonlinear semidefinite programming problems. By combining the primal barrier penalty function and the primal-dual barrier function, a new primal-dual merit function is proposed within the framework of the line search strategy. We show the global convergence property of our method. Finally some numerical … Read more
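
For a problem with a matrix inequality X(x) ⪰ 0 and equality constraints g(x) = 0, one natural form of such a combined merit function (shown only to illustrate the construction; the paper's exact definition may differ) is

\[
  \phi_{\mu,\rho,\nu}(x, Z)
  \;=\;
  \underbrace{f(x) + \rho\,\|g(x)\| - \mu \log\det X(x)}_{\text{primal barrier penalty}}
  \;+\;
  \nu\,\underbrace{\bigl(\langle X(x), Z\rangle - \mu \log\det\bigl(X(x)Z\bigr)\bigr)}_{\text{primal-dual barrier}},
\]

whose primal-dual part is minimized when X(x)Z = mu I, i.e. along the central path followed by the line-search iterates.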

A Coordinate Gradient Descent Method for Linearly Constrained Smooth Optimization and Support Vector Machines Training

Support vector machines (SVMs) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. … Read more
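
To fix the problem structure, the SVM dual QP and the simplest block update that respects its single equality constraint are sketched below in Python. The two-variable working set and the naive random pair selection are placeholders chosen for illustration; they do not reproduce the paper's (block) coordinate gradient descent rule or its working-set selection.

```python
import numpy as np

def svm_dual_two_coord(K, y, C, iters=2000):
    """Naive two-coordinate descent on the SVM dual QP
         min_a 0.5 * a^T Q a - sum(a)   s.t.  0 <= a <= C,  y^T a = 0,
    where Q = (y y^T) * K and y has entries +1/-1.  Random pair selection and
    an exact line search illustrate the constraint structure only."""
    n = len(y)
    Q = np.outer(y, y) * K
    a = np.zeros(n)
    g = -np.ones(n)                       # gradient Q a - e at a = 0
    rng = np.random.default_rng(0)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        s = y[i] * y[j]
        # Feasible direction d = e_i - s * e_j keeps y^T a unchanged.
        dg = g[i] - s * g[j]              # directional derivative along d
        dQd = Q[i, i] + Q[j, j] - 2.0 * s * Q[i, j]
        if dQd <= 1e-12:
            continue
        t = -dg / dQd                     # exact minimizer along d
        # Clip t so that both coordinates stay in the box [0, C].
        lo, hi = -a[i], C - a[i]
        if s > 0:
            lo, hi = max(lo, a[j] - C), min(hi, a[j])
        else:
            lo, hi = max(lo, -a[j]), min(hi, C - a[j])
        t = min(max(t, lo), hi)
        if t == 0.0:
            continue
        a[i] += t
        a[j] -= s * t
        g += t * (Q[:, i] - s * Q[:, j])  # incremental gradient update
    return a
```

With a linear kernel, K = X @ X.T, this is the usual linear SVM dual with bound constraints and one linear equality constraint, exactly the structure handled by the method described above.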