A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization

The adaptive regularisation algorithm using cubics (ARC) was originally proposed for unconstrained optimization and enjoys excellent convergence properties and worst-case complexity guarantees. In this paper, we extend ARC to solve nonlinear equality constrained optimization and propose a sequential adaptive regularisation using cubics algorithm inspired by sequential quadratic programming (SQP) methods. In each iteration of our method, the …
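As a flavour of the unconstrained ARC mechanics the paper builds on, here is a minimal sketch of an ARC iteration that minimises the cubic-regularised model only along the steepest-descent direction (a cheap Cauchy-point step). The test function, acceptance threshold, and sigma-update factors are illustrative choices, not the paper's constrained SQP-type method.

```python
import numpy as np

def f(x):          # illustrative test objective (Rosenbrock), not from the paper
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                     200*(x[1] - x[0]**2)])

def hess(x):
    return np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                     [-400*x[0], 200.0]])

def arc_cauchy_step(g, H, sigma):
    """Minimise the cubic model f + g's + s'Hs/2 + sigma||s||^3/3 along -g."""
    gnorm = np.linalg.norm(g)
    a, b, c = sigma * gnorm**3, g @ H @ g, -gnorm**2
    alpha = (-b + np.sqrt(b*b - 4*a*c)) / (2*a)   # positive root of the model derivative
    return -alpha * g

x, sigma = np.array([-1.2, 1.0]), 1.0
for k in range(500):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-6:
        break
    s = arc_cauchy_step(g, H, sigma)
    model_decrease = -(g @ s + 0.5 * s @ H @ s + sigma/3 * np.linalg.norm(s)**3)
    rho = (f(x) - f(x + s)) / model_decrease
    if rho >= 0.1:            # successful step: accept and relax the regularisation
        x, sigma = x + s, max(1e-3, 0.5 * sigma)
    else:                     # unsuccessful step: reject and regularise more strongly
        sigma *= 2.0
print(x, f(x))
```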

A Globally Convergent Distributed Jacobi Scheme for Block-Structured Nonconvex Constrained Optimization Problems

Motivated by the increasing availability of high-performance parallel computing, we design a distributed parallel algorithm for linearly coupled block-structured nonconvex constrained optimization problems. Our algorithm performs Jacobi-type proximal updates of the augmented Lagrangian function, requiring only local solutions of separable block nonlinear programming (NLP) problems. We provide a cheap and explicitly computable Lyapunov function that allows …
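A minimal sketch of a Jacobi-type proximal update of the augmented Lagrangian on a toy convex quadratic instance with two linearly coupled blocks; each block update uses the other block's previous value, so the two solves can run in parallel. The penalty and proximal weights are illustrative assumptions, and the paper's nonconvex setting and Lyapunov analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A1, A2 = rng.standard_normal((m, n)), rng.standard_normal((m, n))
b = rng.standard_normal(m)
c1, c2 = rng.standard_normal(n), rng.standard_normal(n)
# Toy instance:  min 0.5||x1-c1||^2 + 0.5||x2-c2||^2  s.t.  A1 x1 + A2 x2 = b

rho, tau = 1.0, 10.0                     # penalty and proximal weights (illustrative)
x1, x2, y = np.zeros(n), np.zeros(n), np.zeros(m)
I = np.eye(n)
for k in range(500):
    # Jacobi step: each block minimises the proximal augmented Lagrangian
    # while holding the *other* block at its previous value.
    r1 = A2 @ x2 - b
    r2 = A1 @ x1 - b
    new_x1 = np.linalg.solve((1 + tau) * I + rho * A1.T @ A1,
                             c1 - A1.T @ y - rho * A1.T @ r1 + tau * x1)
    new_x2 = np.linalg.solve((1 + tau) * I + rho * A2.T @ A2,
                             c2 - A2.T @ y - rho * A2.T @ r2 + tau * x2)
    x1, x2 = new_x1, new_x2
    y += rho * (A1 @ x1 + A2 @ x2 - b)   # dual ascent on the coupling constraint
print("coupling residual:", np.linalg.norm(A1 @ x1 + A2 @ x2 - b))
```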

Distributionally risk-receptive and risk-averse network interdiction problems with general ambiguity set

We introduce generalizations of the stochastic network interdiction problem with distributional ambiguity. Specifically, we consider a distributionally risk-averse (or robust) network interdiction problem (DRA-NIP) and a distributionally risk-receptive network interdiction problem (DRR-NIP), in which a leader maximizes a follower's minimal expected objective value under the worst-case or the best-case, respectively, probability distribution belonging to an ambiguity set …
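To make the risk-averse versus risk-receptive distinction concrete, here is a toy sketch with a finite ambiguity set: the leader picks the interdiction maximising the follower's expected objective under the worst-case (DRA) or best-case (DRR) distribution. The decision labels and value table are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical follower objective values per (interdiction, scenario) pair.
values = {
    "interdict {0,1}": np.array([3.0, 5.0, 2.0, 4.0]),
    "interdict {0,2}": np.array([4.0, 3.0, 4.0, 3.0]),
    "interdict {1,2}": np.array([6.0, 1.0, 2.0, 5.0]),
}
# Finite ambiguity set: a few candidate probability distributions over scenarios.
ambiguity = [np.array([.25, .25, .25, .25]),
             np.array([.40, .20, .20, .20]),
             np.array([.10, .30, .30, .30])]

for label, agg in [("risk-averse (worst-case P)", min),
                   ("risk-receptive (best-case P)", max)]:
    # The leader maximises the aggregated expected follower objective.
    best = max(values, key=lambda d: agg(p @ values[d] for p in ambiguity))
    print(label, "->", best)
```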

A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods

We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The framework is general in the sense that it combines the possibility of explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a …
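A minimal sketch of one member of such a class of descent iterations: a model-based (prox-linear) step on the weakly convex function f(x) = |x^2 - 1|, written as h(c(x)) with h convex and c smooth, using the standard stationarity measure |x_new - x| / t as a stopping test. The test function and step size are illustrative assumptions.

```python
import numpy as np

def c(x):  return x**2 - 1.0      # smooth inner map
def cp(x): return 2.0 * x          # its derivative

def soft_threshold(v, lam):
    return np.sign(v) * max(abs(v) - lam, 0.0)

def prox_linear_step(x, t):
    # Minimise the convex model |c(x) + c'(x) s| + s^2 / (2 t) in closed form.
    g = cp(x)
    if g == 0.0:
        return x
    u = soft_threshold(c(x), t * g * g)
    return x + (u - c(x)) / g

x, t = 3.0, 0.1
for k in range(50):
    x_new = prox_linear_step(x, t)
    if abs(x_new - x) / t < 1e-10:   # stationarity measure driven to zero by descent
        break
    x = x_new
print(x, abs(c(x)))                  # converges to a point with x^2 = 1
```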

An adaptive regularization algorithm for unconstrained optimization with inexact function and derivatives values

An adaptive regularization algorithm for unconstrained nonconvex optimization is proposed that is capable of handling inexact objective-function and derivative values, and also of providing approximate minimizers of arbitrary order. In comparison with a similar algorithm proposed in Cartis, Gould, Toint (2022), its distinguishing feature is that it is based on controlling the relative error between …
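A sketch of the relative-error idea under a simulated inexact oracle: the absolute-accuracy request is tightened until it certifies the desired relative error. The oracle, tolerances, and halving rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def inexact_f(x, eps):
    """Oracle returning f(x) with absolute error at most eps (simulated here)."""
    true = np.sum(x**2)                       # illustrative smooth objective
    return true + rng.uniform(-eps, eps)

def evaluate_with_relative_error(x, omega, eps0=1.0):
    # Tighten the absolute-accuracy request until it certifies a relative
    # error of at most omega (assumes the value stays bounded away from zero).
    eps = eps0
    while True:
        val = inexact_f(x, eps)
        if eps <= omega * abs(val):           # |val - f(x)| <= eps <= omega |val|
            return val, eps
        eps *= 0.5                             # ask the oracle for more accuracy

val, eps = evaluate_with_relative_error(np.array([1.0, 2.0]), omega=1e-2)
print(f"accepted value {val:.4f} with absolute accuracy {eps:.1e}")
```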

Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques

A trust-region algorithm is presented for finding approximate minimizers of smooth unconstrained functions whose values and derivatives are subject to random noise. It is shown that, under suitable probabilistic assumptions, the new method finds (in expectation) an $\epsilon$-approximate minimizer of arbitrary order $q > 0$ in at most $O(\epsilon^{-(q+1)})$ inexact evaluations of the function and …
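An illustrative baseline showing how a trust-region ratio test interacts with subsampled (hence noisy) evaluations on a finite-sum least-squares problem; the Cauchy-point step and update constants are textbook choices, not the method analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 1000, 5
A = rng.standard_normal((N, n))
b = A @ np.ones(n) + 0.01 * rng.standard_normal(N)

def sample_fgh(x, batch):
    """Noisy function/gradient/Hessian estimates from a random subsample."""
    idx = rng.choice(N, size=batch, replace=False)
    r = A[idx] @ x - b[idx]
    return 0.5 * np.mean(r**2), A[idx].T @ r / batch, A[idx].T @ A[idx] / batch

x, delta, batch = np.zeros(n), 1.0, 64
for k in range(100):
    fx, g, H = sample_fgh(x, batch)
    gn = np.linalg.norm(g)
    if gn < 1e-6:
        break
    # Cauchy step: minimise the quadratic model along -g inside the region.
    gHg = g @ H @ g
    alpha = min(delta / gn, gn**2 / gHg) if gHg > 0 else delta / gn
    s = -alpha * g
    pred = -(g @ s + 0.5 * s @ H @ s)          # predicted model decrease (> 0)
    f_new = sample_fgh(x + s, batch)[0]
    rho = (fx - f_new) / pred                  # noisy achieved/predicted ratio
    if rho >= 0.1:
        x, delta = x + s, min(2 * delta, 10.0) # accept and possibly enlarge
    else:
        delta *= 0.5                           # reject and shrink the region
print("solution error:", np.linalg.norm(x - np.ones(n)))
```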

OPM, a collection of Optimization Problems in Matlab

OPM is a small collection of CUTEst unconstrained and bound-constrained nonlinear optimization problems, which can be used in Matlab for testing optimization algorithms directly (i.e., without installing additional software).

Some Strongly Polynomially Solvable Convex Quadratic Programs with Bounded Variables

This paper begins with a class of convex quadratic programs (QPs) with bounded variables solvable by the parametric principal pivoting algorithm with $\mbox{O}(n^3)$ strongly polynomial complexity, where $n$ is the number of variables of the problem. An extension of the Hessian class is also discussed. Our research is motivated by a recent reference [7] wherein the …
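For intuition about why special Hessian structure can make such QPs very cheap, here is the simplest possible case: a diagonal positive definite Hessian, for which the box-constrained QP separates by coordinate and is solvable in O(n) by clamping. This is far simpler than the Hessian class and the parametric principal pivoting method studied in the paper.

```python
import numpy as np

# For a diagonal positive definite Hessian D, the box-constrained QP
#   min 0.5 x' D x + c' x   s.t.  l <= x <= u
# separates by coordinate: clamp the unconstrained minimiser -c_i/d_i to [l_i, u_i].
def separable_box_qp(d, c, l, u):
    return np.clip(-c / d, l, u)

d = np.array([2.0, 1.0, 4.0])
c = np.array([-4.0, 3.0, 0.5])
l, u = np.zeros(3), np.ones(3)
print(separable_box_qp(d, c, l, u))   # -> [1. 0. 0.]
```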

Convergence Analysis of Block Majorize-Minimize Subspace Approaches

Majorization-Minimization (MM) is a class of efficient and effective optimization algorithms that benefit from solid theoretical foundations. MM methods have shown a great ability to efficiently tackle challenging optimization problems from signal processing, image processing, inverse problems, and machine learning. When processing large amounts of data or variables, as may happen in 3D image processing, …
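A minimal sketch of the MM principle on a scalar robust-regression objective (the classic iteratively reweighted least-squares instance), where each majorant minimisation is a weighted least-squares solve; the block subspace acceleration analysed in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50, 3
A = rng.standard_normal((m, n))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(m)

# MM for the robust objective f(x) = sum_i sqrt(1 + (a_i'x - b_i)^2):
# sqrt(1+s) is concave in s, so its tangent at s_k gives a quadratic
# majorant of f, and each MM step reduces to a weighted least-squares solve.
x = np.zeros(n)
for k in range(50):
    r = A @ x - b
    w = 1.0 / np.sqrt(1.0 + r**2)            # curvature weights of the majorant
    x_new = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print(x)
```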

Training Structured Neural Networks Through Manifold Identification and Variance Reduction

This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA incurs no computation additional to proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we …
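For context, here is a sketch of the proximal SGD with momentum baseline that RMDA matches in per-step cost, applied to an L1-regularised least-squares problem in which the proximal step promotes the desired sparse structure. The hyperparameters and data are illustrative, and this is the baseline, not RMDA itself.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 500, 20
A = rng.standard_normal((N, n))
x_true = np.zeros(n); x_true[:3] = [2.0, -1.0, 0.5]   # sparse ground truth
b = A @ x_true + 0.05 * rng.standard_normal(N)

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Proximal SGD with (heavy-ball) momentum for
#   min_x (1/2N)||Ax - b||^2 + lam ||x||_1,
# where the L1 prox (soft-thresholding) produces the sparse structure.
lam, lr, beta, batch = 0.05, 0.01, 0.9, 32
x, v = np.zeros(n), np.zeros(n)
for t in range(3000):
    idx = rng.choice(N, size=batch, replace=False)
    g = A[idx].T @ (A[idx] @ x - b[idx]) / batch      # stochastic gradient
    v = beta * v + g                                   # momentum buffer
    x = soft_threshold(x - lr * v, lr * lam)           # proximal step
print("nonzeros identified:", np.flatnonzero(x))
```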