Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization

In a recent paper we introduced a trust-region method with variable norms for unconstrained minimization and proved standard asymptotic convergence results. Here we show that, with a simple modification of the sufficient descent condition and by replacing the trust-region approach with a suitable cubic regularization, the complexity of this method for finding … Read more
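
For orientation, the subproblem swap at the heart of such methods can be sketched as follows; the notation ($B_k$, $\Delta_k$, $\sigma_k$ and the iteration-dependent norm $\|\cdot\|_k$) is generic and not necessarily the paper's exact setup:

```latex
% Trust-region subproblem with an iteration-dependent (variable) norm:
\[
  \min_{s}\; f(x_k) + \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s
  \quad \text{s.t.}\quad \|s\|_k \le \Delta_k ,
\]
% versus its cubic-regularization counterpart, which needs no explicit radius:
\[
  \min_{s}\; f(x_k) + \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s
  \;+\; \tfrac{\sigma_k}{3}\, \|s\|_k^{3} .
\]
```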

$L_p$-norm regularization algorithms for optimization over permutation matrices

Optimization problems over permutation matrices appear widely in facility layout, chip design, scheduling, pattern recognition, computer vision, graph matching, etc. Since this problem is NP-hard due to the combinatorial nature of permutation matrices, we relax the variable to lie in the more tractable set of doubly stochastic matrices and add an $L_p$-norm ($0 < p < 1$) regularization ... Read more
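
A minimal sketch of this kind of relaxation, assuming a QAP-style objective and a Sinkhorn-type scaling as a stand-in for whatever doubly stochastic handling the paper actually uses; the function names, the penalty weight `mu`, and the smoothing constant `eps` are illustrative:

```python
import numpy as np

def lp_regularized_objective(X, A, B, mu, p=0.5, eps=1e-8):
    """Hypothetical QAP-style cost over a doubly stochastic matrix X, plus a
    smoothed L_p 'norm' (0 < p < 1) penalty that promotes permutation-like
    (sparse) solutions.  A generic sketch, not the paper's exact model."""
    qap_cost = np.trace(A @ X @ B @ X.T)          # assumed problem objective
    lp_penalty = np.sum((np.abs(X) + eps) ** p)   # smoothed L_p regularizer
    return qap_cost + mu * lp_penalty

def sinkhorn_scaling(M, iters=50):
    """Crude Sinkhorn-style scaling toward a doubly stochastic matrix; a
    placeholder for the feasibility-handling step of an actual algorithm."""
    X = np.abs(M) + 1e-12
    for _ in range(iters):
        X /= X.sum(axis=1, keepdims=True)   # row-normalize
        X /= X.sum(axis=0, keepdims=True)   # column-normalize
    return X
```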

Nonlinear chance constrained problems: optimality conditions, regularization and solvers

We deal with chance constrained problems (CCP) with differentiable nonlinear random functions and a discrete distribution. We allow nonconvex functions both in the constraints and in the objective. We reformulate the problem as a mixed-integer nonlinear program, and relax the integer variables into continuous ones. We approach the relaxed problem as a mathematical program with complementarity … Read more
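
To fix ideas, one standard scenario-based reformulation of a chance constraint (with a sufficiently large constant $M$) looks as follows; the paper's own reformulation and subsequent relaxation may differ in form:

```latex
% Chance constraint over a discrete distribution with scenarios xi_1,...,xi_S
% of probabilities p_1,...,p_S:
\[
  \mathbb{P}\bigl(g(x,\xi) \le 0\bigr) \ge 1-\varepsilon
  \;\Longleftrightarrow\;
  g(x,\xi_s) \le M\, y_s,\quad
  \sum_{s=1}^{S} p_s\, y_s \le \varepsilon,\quad
  y_s \in \{0,1\},
\]
% which is then relaxed by allowing y_s \in [0,1].
```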

Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order $p$ (for $p\geq 1$) and to assume Lipschitz continuity of the $p$-th derivative, then an $\epsilon$-approximate first-order critical point can be computed in at most … Read more
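
The $p$-th order regularized model referred to here is typically of the following form (generic notation; the precise constants and the resulting evaluation bound are given in the paper):

```latex
% p-th order Taylor model of f around x, regularized by a (p+1)-st power of the step:
\[
  T_p(x,s) = f(x) + \sum_{j=1}^{p} \frac{1}{j!}\, \nabla^j f(x)[s]^{j},
  \qquad
  m(s) = T_p(x,s) + \frac{\sigma}{(p+1)!}\, \|s\|^{p+1} .
\]
```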

On an adaptive regularization for ill-posed nonlinear systems and its trust-region implementation

In this paper we address the stable numerical solution of nonlinear ill-posed systems by a trust-region method. We show that an appropriate choice of the trust-region radius gives rise to a procedure that has the potential to approach a solution of the unperturbed system. This regularizing property is shown theoretically and validated numerically. … Read more
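
For context, the trust-region subproblem for a nonlinear system $F(x)=0$ with Jacobian $J(x_k)$ typically reads as below; the contribution described here concerns how the radius $\Delta_k$ is chosen (a sketch in generic notation):

```latex
\[
  \min_{s}\; \tfrac{1}{2}\,\|F(x_k) + J(x_k)\, s\|^{2}
  \quad \text{s.t.}\quad \|s\| \le \Delta_k .
\]
```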

Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients

The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined … Read more
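
The Hölder condition in question, and the generic form of regularized model that such first-order algorithms build, can be written as follows; the regularization exponent $r$ is tied to $\beta$, with the details in the paper:

```latex
% Hölder continuity of the gradient with exponent beta in (0,1]:
\[
  \|\nabla f(x) - \nabla f(y)\| \le L\, \|x - y\|^{\beta},
\]
% generic regularized model built from function values and gradients only:
\[
  m(s) = f(x) + \nabla f(x)^{\top} s + \frac{\sigma}{r}\, \|s\|^{r},
  \qquad r > 1 .
\]
```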

Machine Learning and Portfolio Optimization

The portfolio optimization model has limited impact in practice because of estimation issues when applied to real data. To address this, we adapt two machine learning methods, regularization and cross-validation, for portfolio optimization. First, we introduce performance-based regularization (PBR), where the idea is to constrain the sample variances of the estimated portfolio risk and return, … Read more
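
A hedged sketch of the quantities such a performance-based regularization might constrain, computed with plain sample estimates; the formulas below are generic proxies for estimator variability, not necessarily the constraints used in the paper (`returns` is an n-by-d matrix of historical asset returns, `w` a portfolio weight vector):

```python
import numpy as np

def pbr_variability(returns, w):
    """Proxies for the sampling variability of the plug-in estimates of
    portfolio mean and variance (illustrative formulas only)."""
    n, _ = returns.shape
    sigma_hat = np.cov(returns, rowvar=False)
    port_returns = returns @ w
    # variability of the estimated portfolio mean (sample-mean estimator)
    var_of_mean_est = (w @ sigma_hat @ w) / n
    # variability of the estimated portfolio variance (asymptotic formula)
    centered = port_returns - port_returns.mean()
    fourth_moment = np.mean(centered ** 4)
    var_of_var_est = (fourth_moment - np.var(port_returns) ** 2) / n
    return var_of_mean_est, var_of_var_est
```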

Mathematical Programs with Cardinality Constraints: Reformulation by Complementarity-type Constraints and a Regularization Method

Optimization problems with cardinality constraints are very difficult mathematical programs which are typically solved by global techniques from discrete optimization. Here we introduce a mixed-integer formulation whose standard relaxation still has the same solutions (in the sense of global minima) as the underlying cardinality-constrained problem; the relation between the local minima is also discussed in … Read more
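
A reformulation of this complementarity type can be sketched as follows; the notation is generic and the precise statement is in the paper. The cardinality constraint $\|x\|_0 \le \kappa$ is coupled with an auxiliary vector $y$:

```latex
% Cardinality constraint and a complementarity-type reformulation with auxiliary y:
\[
  \|x\|_0 \le \kappa
  \quad\leadsto\quad
  e^{\top} y \ge n - \kappa,\qquad
  x_i\, y_i = 0 \;\; (i=1,\dots,n),\qquad
  y \in \{0,1\}^n ,
\]
% whose standard relaxation replaces y \in \{0,1\}^n by y \in [0,1]^n.
```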

Levenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation

The Levenberg-Marquardt algorithm is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this paper the extension of the classical Levenberg-Marquardt algorithm to the scenarios where the linearized least squares subproblems are solved inexactly and/or the gradient model is … Read more
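
As a reference point, one exact Levenberg-Marquardt step is sketched below; in the inexact setting studied here, the linear solve (and possibly the gradient/Jacobian information itself) would only be approximate. The function name and interface are illustrative:

```python
import numpy as np

def lm_step(J, F, gamma):
    """One exact Levenberg-Marquardt step for min 0.5*||F + J s||^2:
    solve the regularized normal equations (J^T J + gamma*I) s = -J^T F.
    'gamma' is the Levenberg-Marquardt regularization parameter."""
    n = J.shape[1]
    A = J.T @ J + gamma * np.eye(n)
    return np.linalg.solve(A, -J.T @ F)
```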

Algebraic rules for quadratic regularization of Newton’s method

In this work we propose a class of quasi-Newton methods to minimize a twice differentiable function with Lipschitz continuous Hessian. These methods are based on the quadratic regularization of Newton’s method, with algebraic explicit rules for computing the regularizing parameter. The convergence properties of this class of methods are analysed. We show that if the … Read more
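
A minimal sketch of a quadratically regularized Newton step with an explicit algebraic choice of the regularization parameter; the particular rule shown (lam proportional to the square root of a Lipschitz estimate times the gradient norm) is a common choice in the literature and stands in for, rather than reproduces, the paper's rules:

```python
import numpy as np

def regularized_newton_step(grad, hess, L_est):
    """Quadratically regularized Newton step: solve (H + lam*I) s = -g.
    The rule lam = sqrt(L_est * ||g||) is illustrative, a placeholder for
    the explicit algebraic rules proposed in the paper."""
    g = np.asarray(grad, dtype=float)
    H = np.asarray(hess, dtype=float)
    lam = np.sqrt(L_est * np.linalg.norm(g))
    return np.linalg.solve(H + lam * np.eye(len(g)), -g)
```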