On High-order Model Regularization for Multiobjective Optimization

A $p$-th order regularization method for finding weak stationary points of constrained multiobjective optimization problems is introduced. Under Hölder conditions on the derivatives of the objective functions, complexity results are obtained that generalize properties recently proved for scalar optimization.
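
As a rough illustration (not necessarily the paper's precise subproblem), $p$-th order regularization methods build, for each objective $f_j$, a Taylor model of order $p$ plus a $(p+1)$-th order regularization term,

\[
m_j(x,s) \;=\; f_j(x) \;+\; \sum_{i=1}^{p} \frac{1}{i!}\, D^{i} f_j(x)[s]^{i} \;+\; \frac{\sigma}{p+1}\,\|s\|^{p+1},
\]

and trial steps are obtained by approximately minimizing such models; under Hölder continuity of the $p$-th derivatives, the resulting complexity bounds depend on the Hölder exponent of those derivatives.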

On the complexity of solving feasibility problems

We consider feasibility problems defined by a set of constraints with Hölder-continuous gradients, plus additional constraints characterized by the affordability of obtaining approximate minimizers of quadratic models onto the associated feasible set. Each iteration of the method introduced in this paper involves the approximate minimization of a two-norm regularized quadratic subject to the … Read more
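
A schematic form of the kind of subproblem described above (the symbols $\Phi$, $B$, $\Omega$ and $\rho$ are our illustrative choices, not necessarily the paper's notation) is

\[
\min_{s \;:\; x+s \,\in\, \Omega} \;\; \nabla \Phi(x)^{T} s \;+\; \tfrac{1}{2}\, s^{T} B\, s \;+\; \rho\, \|s\|_{2}^{2},
\]

where $\Phi$ is an infeasibility measure associated with the constraints whose gradients are Hölder continuous, $B$ is a model Hessian, $\Omega$ is the set onto which approximate minimization is affordable, and $\rho > 0$ is the two-norm regularization parameter.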

On the complexity of an Inexact Restoration method for constrained optimization

Recent papers indicate that some algorithms for constrained optimization may exhibit worst-case complexity bounds that are very similar to those of unconstrained optimization algorithms. A natural question is whether well-established practical algorithms, perhaps with small variations, may enjoy analogous complexity results. In the present paper we show that the answer is positive with respect … Read more

Cubic Regularization Method based on Mixed Factorizations for Unconstrained Minimization

Newton’s method for unconstrained optimization, subject to proper regularization or special trust-region procedures, finds first-order stationary points with precision $\varepsilon$ employing, at most, $O(\varepsilon^{-3/2})$ functional and derivative evaluations. However, the best-known implementations may require several matrix factorizations per iteration or may rely on rather expensive matrix decompositions. In this paper, we … Read more
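
As a rough sketch of the per-iteration linear algebra at stake (a generic regularized Newton step, not the mixed-factorization scheme of the paper; the function name and defaults are ours):

```python
import numpy as np

def regularized_newton_step(grad, hess, lam0=1e-4, growth=10.0):
    """Compute s from (H + lam*I) s = -g, increasing lam until the shifted
    matrix admits a Cholesky factorization (i.e., is positive definite).
    Generic illustration only; repeated factorizations of this kind are
    exactly the per-iteration cost that cheaper schemes try to avoid.
    """
    n = grad.size
    lam = lam0
    while True:
        try:
            L = np.linalg.cholesky(hess + lam * np.eye(n))
            break
        except np.linalg.LinAlgError:
            lam *= growth  # strengthen the shift and factorize again
    # Solve (H + lam*I) s = -g using the triangular factor L (L L^T = H + lam*I).
    y = np.linalg.solve(L, -grad)
    return np.linalg.solve(L.T, y)
```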

On the use of third-order models with fourth-order regularization for unconstrained optimization

In a recent paper, it was shown that, for the smooth unconstrained optimization problem, worst-case evaluation complexity $O(\epsilon^{-(p+1)/p})$ may be obtained by means of algorithms that employ sequential approximate minimizations of p-th order Taylor models plus (p + 1)-th order regularization terms. The aforementioned result, which assumes Lipschitz continuity of the p-th partial derivatives, generalizes … Read more
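
For the case treated here ($p = 3$), the model approximately minimized at each iteration has the generic form

\[
m(x,s) \;=\; f(x) \;+\; \nabla f(x)^{T}s \;+\; \tfrac{1}{2}\, D^{2}f(x)[s]^{2} \;+\; \tfrac{1}{6}\, D^{3}f(x)[s]^{3} \;+\; \frac{\sigma}{4}\,\|s\|^{4},
\]

and the evaluation complexity $O(\epsilon^{-(p+1)/p})$ specializes to $O(\epsilon^{-4/3})$.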

On High-order Model Regularization for Constrained Optimization

In two recent papers, regularization methods based on Taylor polynomial models for minimization were proposed that rely only on Hölder conditions on the highest-order derivatives employed. Grapiglia and Nesterov considered cubic regularization with a sufficient descent condition that uses the current gradient and resembles the classical Armijo criterion. Cartis, Gould, and Toint used Taylor … Read more
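
The Hölder condition in question is of the following type (written here for a scalar objective $f$, derivative order $p$, and some exponent $\beta$; the symbols are ours):

\[
\|D^{p}f(x) - D^{p}f(y)\| \;\le\; L\,\|x-y\|^{\beta} \quad \text{for all } x, y, \qquad \beta \in (0,1],
\]

which reduces to Lipschitz continuity of the $p$-th derivative when $\beta = 1$.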

Quadratic regularization with cubic descent for unconstrained optimization

Cubic-regularization and trust-region methods with worst-case first-order complexity $O(\varepsilon^{-3/2})$ and worst-case second-order complexity $O(\varepsilon^{-3})$ have been developed in the last few years. In this paper it is proved that the same complexities are achieved by means of a quadratic regularization method with a cubic sufficient-descent condition instead of the more usual predicted-reduction-based descent. … Read more
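
Schematically, a cubic sufficient-descent condition accepts a trial step $s$ only if it yields a decrease proportional to $\|s\|^{3}$, i.e. something of the form

\[
f(x+s) \;\le\; f(x) \;-\; \alpha\,\|s\|^{3}
\]

for a fixed $\alpha > 0$ (our generic rendering; the paper's exact condition and constants may differ), rather than comparing the actual reduction with the reduction predicted by the quadratic model.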

Under-relaxed Quasi-Newton acceleration for an inverse fixed-point problem coming from Positron-Emission Tomography

Quasi-Newton acceleration is an interesting tool to improve the performance of numerical methods based on the fixed-point paradigm. In this work the quasi-Newton technique will be applied to an inverse problem that comes from Positron Emission Tomography, whose fixed-point counterpart has been introduced recently. It will be shown that the improvement caused by the quasi-Newton … Read more
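
A minimal sketch of under-relaxed quasi-Newton acceleration for a generic fixed-point map $G$ (a textbook Broyden-type scheme with names and defaults chosen for illustration, not the PET-specific algorithm of the paper):

```python
import numpy as np

def underrelaxed_broyden(G, x0, omega=0.5, tol=1e-8, max_iter=100):
    """Accelerate the fixed-point iteration x = G(x) by applying an
    under-relaxed Broyden-type quasi-Newton method to F(x) = G(x) - x = 0.
    Schematic illustration only.
    """
    x = np.asarray(x0, dtype=float)
    F = G(x) - x
    H = -np.eye(x.size)  # initial inverse-Jacobian approximation of F
    for _ in range(max_iter):
        if np.linalg.norm(F) <= tol:
            break
        s = -omega * (H @ F)      # under-relaxed quasi-Newton step
        x_new = x + s
        F_new = G(x_new) - x_new
        y = F_new - F
        Hy = H @ y
        denom = s @ Hy
        if abs(denom) > 1e-14:    # "good Broyden" inverse update
            H += np.outer(s - Hy, s @ H) / denom
        x, F = x_new, F_new
    return x
```

With $H$ initialized to $-I$, the first step reduces to the plain under-relaxed iteration $x_{k+1} = x_k + \omega\,(G(x_k) - x_k)$; the secant updates are what provide the acceleration.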

Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization

In a recent paper we introduced a trust-region method with variable norms for unconstrained minimization and proved standard asymptotic convergence results. Here we show that, with a simple modification of the sufficient descent condition and with the trust-region approach replaced by a suitable cubic regularization, the complexity of this method for finding … Read more
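
In generic terms (our sketch, not necessarily the paper's exact subproblem), such a cubic-regularization counterpart replaces the trust-region constraint with a regularization term measured in the iteration-dependent norm $\|\cdot\|_{k}$:

\[
m_k(s) \;=\; f(x_k) \;+\; \nabla f(x_k)^{T}s \;+\; \tfrac{1}{2}\,s^{T}B_k\,s \;+\; \frac{\sigma_k}{3}\,\|s\|_{k}^{3}.
\]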

Strict Constraint Qualifications and Sequential Optimality Conditions for Constrained Optimization

Sequential optimality conditions for constrained optimization are necessarily satisfied by local minimizers, independently of the fulfillment of constraint qualifications. These conditions support the employment of different stopping criteria for practical optimization algorithms. On the other hand, when an appropriate strict constraint qualification associated with some sequential optimality condition holds at a point that satisfies the … Read more
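
A well-known example of a sequential optimality condition is the Approximate-KKT (AKKT) condition: for $\min f(x)$ subject to $h(x)=0$ and $g(x)\le 0$, a feasible point $x^{*}$ satisfies AKKT when there exist sequences $x^{k}\to x^{*}$, $\lambda^{k}$, and $\mu^{k}\ge 0$ such that

\[
\nabla f(x^{k}) + \nabla h(x^{k})\lambda^{k} + \nabla g(x^{k})\mu^{k} \;\to\; 0
\qquad \text{and} \qquad
\min\{-g_i(x^{k}),\,\mu_i^{k}\} \;\to\; 0 \ \text{ for all } i.
\]

Every local minimizer satisfies AKKT regardless of constraint qualifications, which is the property stated at the beginning of the abstract above.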