Restarting nonlinear conjugate gradient methods

In unconstrained optimization, due to the nonlinearity of the objective function or to rounding errors in finite-precision arithmetic, it can happen that NaN or infinite step sizes appear in the nonlinear conjugate gradient (NCG) method, or that the step violates the sufficient descent condition (SDC). In this case the conjugate gradient (CG) direction must often … Read more
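
As a rough sketch of this restart safeguard (an illustrative rule, not necessarily the paper's), the following falls back to the steepest-descent direction whenever the computed direction is non-finite or fails a sufficient descent test; the Polak-Ribière+ update and the constant `c` are assumptions:

```python
import numpy as np

def ncg_direction(g_new, g_old, d_old, c=1e-4):
    """Polak-Ribiere+ CG direction with a restart safeguard (sketch)."""
    beta = max(0.0, g_new @ (g_new - g_old) / (g_old @ g_old))
    d = -g_new + beta * d_old
    # Restart with steepest descent if the direction is non-finite or
    # violates the sufficient descent condition g'd <= -c * ||g||^2.
    if not np.all(np.isfinite(d)) or g_new @ d > -c * (g_new @ g_new):
        d = -g_new
    return d
```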

A class of diagonal quasi-Newton penalty decomposition algorithms for sparse bound-constrained nonconvex optimization

This paper discusses an improved quasi-Newton penalty decomposition algorithm for cardinality- and bound-constrained optimization problems whose simple bounds on the variables are assumed to be finite. Until an approximate stationary point is found, the algorithm approximates the solutions of a sequence of penalty subproblems by a two-block decomposition scheme. This scheme finds an approximate solution … Read more
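
A minimal sketch of the two-block alternation, assuming a quadratic penalty on x − y, a cardinality bound s enforced on the y-block by hard thresholding, and a plain projected gradient step on the x-block (the paper uses diagonal quasi-Newton steps; all names here are illustrative):

```python
import numpy as np

def penalty_decomposition(grad_f, x0, lo, hi, s, rho=1.0, lr=0.1, iters=100):
    """Two-block scheme for min f(x) + (rho/2)||x - y||^2
    subject to ||y||_0 <= s and lo <= x <= hi (illustrative sketch)."""
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        # x-block: projected gradient step on the smooth penalty subproblem.
        g = grad_f(x) + rho * (x - y)
        x = np.clip(x - lr * g, lo, hi)
        # y-block: closed form -- keep the s largest entries of x in magnitude.
        y = np.zeros_like(x)
        keep = np.argsort(np.abs(x))[-s:]
        y[keep] = x[keep]
    return x, y
```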

A general merit function-based globally convergent framework for nonlinear optimization

In this paper, we revisit the convergence theory of the inexact restoration paradigm for nonlinear optimization. The paper first identifies the basic elements of a globally convergent method based on merit functions. Then, the inexact restoration method that employs a two-phase iteration is introduced as a special case. A specific implementation is presented that is … Read more
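
One common way to set up such a merit function and a two-phase iteration is sketched below; the convex-combination merit and the halving rule for its weight are standard textbook devices, assumed here rather than taken from the paper:

```python
import numpy as np

def merit(f, c, x, theta):
    """Convex combination of objective value and infeasibility."""
    return theta * f(x) + (1.0 - theta) * np.linalg.norm(c(x))

def ir_iteration(f, c, restore, optimize, x, theta):
    """One two-phase inexact-restoration iteration (sketch):
    `restore` should reduce ||c||, `optimize` should reduce f."""
    y = restore(x)      # phase 1: improve feasibility
    z = optimize(y)     # phase 2: improve optimality
    # Shrink the merit weight until the merit function decreases
    # (a deliberately simplified acceptance test).
    while merit(f, c, z, theta) >= merit(f, c, x, theta) and theta > 1e-8:
        theta *= 0.5
    return z, theta
```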

Global non-asymptotic super-linear convergence rates of regularized proximal quasi-Newton methods on non-smooth composite problems

In this paper, we propose two regularized proximal quasi-Newton methods with a symmetric rank-1 update of the metric (SR1 quasi-Newton) to solve non-smooth convex additive composite problems. Both algorithms avoid line-search and trust-region strategies. For each of them, we prove a super-linear convergence rate that is independent of the initialization of the … Read more
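
In symbols, a regularized proximal quasi-Newton step for the composite objective F = f + g (f smooth, g non-smooth convex) can be written as a weighted proximal subproblem; the notation below (metric B_k, regularization weight ρ_k) is assumed here rather than copied from the paper:

```latex
x_{k+1} = \operatorname*{arg\,min}_{y}\;
  \langle \nabla f(x_k),\, y - x_k \rangle
  + \tfrac{1}{2}\,\lVert y - x_k \rVert_{B_k + \rho_k I}^{2}
  + g(y),
\qquad \lVert d \rVert_{M}^{2} := d^{\top} M d .
```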

A Line Search Filter Sequential Adaptive Cubic Regularisation Algorithm for Nonlinearly Constrained Optimization

In this paper, a sequential adaptive regularization algorithm using cubics (ARC) is presented to solve nonlinear equality-constrained optimization problems. It is motivated by the idea of handling constraints in sequential quadratic programming methods. In each iteration, we decompose the new step into the sum of a normal step and a tangential step by using composite … Read more
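
A standard way to write this decomposition, for equality constraints c(x) = 0 with Jacobian J_k (notation assumed here): the normal step reduces infeasibility and the tangential step improves optimality in the null space of J_k, with the tangential subproblem regularized by a cubic term.

```latex
s_k = n_k + t_k, \qquad
n_k \approx \operatorname*{arg\,min}_{n}\ \lVert c(x_k) + J_k n \rVert, \qquad
J_k t_k = 0 .
```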

A Unified Funnel Restoration SQP Algorithm

We consider nonlinearly constrained optimization problems and discuss a generic double-loop framework consisting of four algorithmic ingredients that unifies a broad range of nonlinear optimization solvers. This framework has been implemented in the open-source solver Uno, a Swiss-army knife-like C++ optimization framework that unifies many nonlinearly constrained nonconvex optimization solvers. We illustrate the framework with … Read more
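
Purely to illustrate the double-loop shape (the four ingredients and their interfaces below are hypothetical placeholders, not Uno's actual C++ API):

```python
def double_loop_solver(x, subproblem, accept, globalize, update_params,
                       error, tol=1e-8, max_outer=100, max_inner=50):
    """Generic double-loop skeleton: the outer loop updates algorithmic
    parameters, the inner loop computes and globalizes trial steps.
    All callables here are hypothetical placeholders."""
    params = update_params(None)
    for _ in range(max_outer):
        if error(x) < tol:                      # stationarity measure
            return x
        for _ in range(max_inner):
            step = subproblem(x, params)        # e.g. an SQP trial step
            if accept(x, step, params):         # e.g. a funnel/filter test
                x = x + step
                break
            params = globalize(x, step, params) # e.g. shrink a trust region
        params = update_params(params)
    return x
```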

A Subspace Minimization Barzilai-Borwein Method for Multiobjective Optimization Problems

Nonlinear conjugate gradient methods have recently garnered significant attention within the multiobjective optimization community. These methods aim to keep their conjugate parameters consistent with their single-objective counterparts. However, it remains uncertain whether multiobjective conjugate gradient methods preserve the attractive conjugacy property of the search directions, even in the quadratic case. This loss of interpretability of … Read more
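
For reference, the two classical single-objective Barzilai-Borwein step sizes are computed from consecutive iterates and gradients as below; the paper's subspace minimization multiobjective variant builds on quantities of this kind (a standard construction, not the paper's method):

```python
import numpy as np

def bb_step_sizes(x_new, x_old, g_new, g_old):
    """Classical Barzilai-Borwein step sizes with
    s = x_k - x_{k-1} and y = g_k - g_{k-1}."""
    s, y = x_new - x_old, g_new - g_old
    alpha_bb1 = (s @ s) / (s @ y)   # "long" BB step
    alpha_bb2 = (s @ y) / (y @ y)   # "short" BB step
    return alpha_bb1, alpha_bb2
```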

Strong global convergence properties of algorithms for nonlinear symmetric cone programming

Sequential optimality conditions have played a major role in proving strong global convergence properties of numerical algorithms for many classes of optimization problems. In particular, the way complementarity is dealt with is fundamental to achieving a strong condition. Typically, one uses the inner product structure to measure complementarity, which gives a very general approach to a … Read more
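
To make the inner-product measurement concrete: for a symmetric cone with dual cone, complementarity between a feasible point x and a multiplier y is captured by a single scalar, and a sequential condition relaxes it along the iterates (standard notation, assumed here):

```latex
x \in \mathcal{K},\quad y \in \mathcal{K}^{*},\quad \langle x, y \rangle = 0
\qquad\text{relaxed to}\qquad
x_k \in \mathcal{K},\quad y_k \in \mathcal{K}^{*},\quad \langle x_k, y_k \rangle \to 0 .
```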

Full-low evaluation methods for bound and linearly constrained derivative-free optimization

Derivative-free optimization (DFO) consists in finding the best value of an objective function without relying on derivatives. To tackle such problems, one may build approximate derivatives, using for instance finite-difference estimates. One may also design algorithmic strategies that perform space exploration and seek improvement over the current point. The first type of strategy often provides … Read more
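
As a reminder of the first strategy (approximate derivatives), a forward-difference gradient estimate costs one extra function evaluation per coordinate; this is a standard construction, not specific to the paper:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate, O(h) accurate."""
    fx = f(x)
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g
```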

Global convergence of a BFGS-type algorithm for nonconvex multiobjective optimization problems

We propose a modified BFGS algorithm for multiobjective optimization problems that is globally convergent even in the absence of convexity assumptions on the objective functions. Furthermore, we establish the superlinear convergence of the method under the usual conditions. Our approach employs Wolfe step sizes and ensures that the Hessian approximations are updated and corrected at each iteration … Read more
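
For orientation, the single-objective analogue of such a corrected update is the Powell-damped BFGS formula sketched below, which modifies y so that the curvature condition sᵀy > 0 always holds; the paper's multiobjective correction may differ:

```python
import numpy as np

def damped_bfgs_update(B, s, y):
    """BFGS update of the Hessian approximation B with Powell damping:
    y is blended with B s whenever the curvature s'y is too small."""
    Bs = B @ s
    sBs = s @ Bs
    if s @ y < 0.2 * sBs:                       # weak curvature detected
        theta = 0.8 * sBs / (sBs - s @ y)
        y = theta * y + (1.0 - theta) * Bs      # corrected y: s'y = 0.2 sBs
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / (s @ y)
```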