Nonlinear matrix recovery using optimization on the Grassmann manifold

We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure, such as lying in a union of subspaces, belonging to an algebraic variety, or being grouped in clusters. The recovery problem is formulated as the rank minimization of a nonlinear feature map applied to the original matrix, which is then further approximated by …
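
A minimal numerical illustration of why the feature-map formulation helps (this is a toy demo, not the paper's algorithm): columns sampled from an algebraic variety form a matrix of full row rank, yet a polynomial lifting of the columns is rank-deficient, so low rank reappears in feature space. The helper `degree2_features` below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=200)
X = np.vstack([np.cos(t), np.sin(t)])   # 2 x 200, columns on the unit circle

def degree2_features(X):
    """Map each column (x1, x2) to the monomials (1, x1, x2, x1^2, x1*x2, x2^2)."""
    x1, x2 = X
    return np.vstack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

Phi = degree2_features(X)               # 6 x 200 lifted matrix
print(np.linalg.matrix_rank(X))         # 2: X has full row rank
print(np.linalg.matrix_rank(Phi))       # 5: the relation x1^2 + x2^2 - 1 = 0
                                        # removes one dimension in feature space
```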

Global optimization using random embeddings

We propose a random-subspace algorithmic framework, X-REGO, for the global optimization of Lipschitz-continuous objectives, and analyse its convergence using novel tools from conic integral geometry. X-REGO randomly projects, in a sequential or simultaneous manner, the high-dimensional original problem into low-dimensional subproblems that can then be solved with any global, or even local, optimization solver. We estimate …
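
A schematic sketch of the sequential variant, under simplifying assumptions (the objective `f`, the Gaussian embeddings, and the choice of reduced solver are placeholders for this demo; the framework's subproblem formulation and analysis are more general):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                                          # toy nonconvex objective in R^D
    return np.sum(x**2) + np.sum(np.sin(3 * x))

D, d, n_embeddings = 100, 5, 20
rng = np.random.default_rng(1)
x = np.zeros(D)                                    # current iterate in the full space

for k in range(n_embeddings):
    A = rng.standard_normal((D, d)) / np.sqrt(d)   # random subspace embedding
    # Reduced subproblem: minimize over the affine subspace x + A y.
    res = minimize(lambda y: f(x + A @ y), np.zeros(d), method="BFGS")
    if res.fun < f(x):                             # keep the solution if it improves
        x = x + A @ res.x

print(f(x))
```

Here a local solver (BFGS) is used for each subproblem, which is consistent with the abstract's remark that the subproblems can be handed to any global, or even local, solver.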

Hashing embeddings of optimal dimension, with applications to linear least squares

The aim of this paper is two-fold: firstly, to present subspace embedding properties for $s$-hashing sketching matrices, with $s\geq 1$, that are optimal in the projection dimension $m$ of the sketch, namely, $m=O(d)$, where $d$ is the dimension of the subspace. A diverse set of results is presented, addressing the case when the input …
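
For concreteness, a minimal sketch of an $s$-hashing matrix (each column has exactly $s$ nonzeros, equal to $\pm 1/\sqrt{s}$, in uniformly sampled rows) used in sketch-and-solve for linear least squares; the dimensions and $s$ below are arbitrary choices for this demo:

```python
import numpy as np

def s_hashing_matrix(m, n, s, rng):
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

rng = np.random.default_rng(2)
n, d, s = 10_000, 50, 3
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
m = 4 * d                                        # projection dimension m = O(d)
S = s_hashing_matrix(m, n, s, rng)

x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]   # sketch-and-solve
x_exact  = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))
```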

Scalable Subspace Methods for Derivative-Free Nonlinear Least-Squares Optimization

We introduce a general framework for large-scale model-based derivative-free optimization based on iterative minimization within random subspaces. We present a probabilistic worst-case complexity analysis for our method, in which we prove high-probability bounds on the number of iterations needed before a given level of optimality is achieved. This framework is specialized to nonlinear least-squares problems, with a …
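
A heavily simplified sketch of the random-subspace idea (illustrative only: the paper's method is model-based with a trust region, whereas this demo uses finite differences and a backtracking line search within each random subspace):

```python
import numpy as np

def f(x):                                          # toy smooth objective
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(x**4)

D, p, h = 500, 5, 1e-6                             # ambient dim, subspace dim, FD step
rng = np.random.default_rng(3)
x = np.zeros(D)

for k in range(200):
    P = rng.standard_normal((D, p)) / np.sqrt(p)   # random subspace basis
    fx = f(x)
    # Forward-difference estimate of the gradient of y -> f(x + P y) at y = 0.
    g = np.array([(f(x + h * P[:, i]) - fx) / h for i in range(p)])
    t = 1.0
    while f(x - t * (P @ g)) >= fx and t > 1e-12:  # backtracking line search
        t /= 2
    x = x - t * (P @ g)

print(f(x))
```

Each iteration costs only $p+1$ evaluations plus the line search, independently of the ambient dimension $D$, which is the source of the scalability the abstract refers to.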

Strong Evaluation Complexity of an Inexact Trust-Region Algorithm for Arbitrary-Order Unconstrained Nonconvex Optimization

A trust-region algorithm using inexact function and derivative values is introduced for solving unconstrained smooth optimization problems. This algorithm uses high-order Taylor models and allows the search for strong approximate minimizers of arbitrary order. The evaluation complexity of finding a $q$-th order approximate minimizer using this algorithm is then shown, under standard conditions, to be $\mathcal{O}\big(\min_{j\in\{1,\ldots,q\}}\epsilon_j^{-(q+1)}\big)$ …
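
For orientation, optimality in this line of work is typically measured through the largest decrease of the $j$-th order Taylor model within a ball. One formalization along these lines (paraphrased from the high-order optimality literature, with quantifiers simplified; the paper's precise definition may differ) is

```latex
\phi_{f,j}^{\delta}(x) \;=\; f(x) \;-\; \min_{\|d\|\le\delta} T_{f,j}(x,d),
\qquad
\phi_{f,j}^{\delta}(x) \;\le\; \epsilon_j\,\frac{\delta^{\,j}}{j!}
\quad\text{for all } j\in\{1,\ldots,q\},
```

where $T_{f,j}(x,d)$ is the $j$-th order Taylor expansion of $f$ at $x$ and $\delta\in(0,1]$; a point satisfying the second condition is an approximate $q$-th order minimizer with tolerances $\epsilon_1,\dots,\epsilon_q$.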

Constrained global optimization of functions with low effective dimensionality using multiple random embeddings

We consider the bound-constrained global optimization of functions with low effective dimensionality, which are constant along an (unknown) linear subspace and only vary over the effective (complement) subspace. We aim to implicitly explore the intrinsic low dimensionality of the constrained landscape using feasible random embeddings, in order to understand and improve the scalability of algorithms …
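
A sketch of one feasible random embedding for a box-constrained instance (illustrative; the names, dimensions, and the choice of constrained solver are placeholders, and the paper treats this setting in much greater generality). The reduced problem is solved over $y$ while keeping $x_0 + Ay$ inside the box:

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

D, d = 50, 3
rng = np.random.default_rng(4)
B = rng.standard_normal((2, D))                  # f varies only on a 2-dim subspace

def f(x):                                        # low effective dimensionality
    u = B @ x
    return (u[0] - 1.0) ** 2 + (u[1] + 2.0) ** 2

lb, ub = -np.ones(D), np.ones(D)                 # box constraints on x
x0 = np.zeros(D)
A = rng.standard_normal((D, d))

# Feasible reduced subproblem: min_y f(x0 + A y)  s.t.  lb <= x0 + A y <= ub.
con = LinearConstraint(A, lb - x0, ub - x0)
res = minimize(lambda y: f(x0 + A @ y), np.zeros(d),
               method="trust-constr", constraints=[con])
print(f(x0 + A @ res.x))                         # small if the embedding captures
                                                 # the effective subspace
```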

Strong Evaluation Complexity Bounds for Arbitrary-Order Optimization of Nonconvex Nonsmooth Composite Functions

We introduce the concept of strong high-order approximate minimizers for nonconvex optimization problems. These apply in both standard smooth and composite nonsmooth settings, and additionally allow convex or inexpensive constraints. An adaptive regularization algorithm is then proposed to find such approximate minimizers. Under suitable Lipschitz continuity assumptions, whenever the feasible set is convex, it is …
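
As a reminder of the algorithmic template (standard in the adaptive regularization literature, not specific to this paper), a $p$-th order adaptive regularization method computes its step by approximately minimizing the regularized Taylor model

```latex
m_k(s) \;=\; T_{f,p}(x_k, s) \;+\; \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
```

where $T_{f,p}(x_k,s)$ is the $p$-th order Taylor expansion of $f$ at $x_k$ and the regularization weight $\sigma_k$ is adjusted from iteration to iteration based on how well the model predicts the actual decrease.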

A dimensionality reduction technique for unconstrained global optimization of functions with low effective dimensionality

We investigate the unconstrained global optimization of functions with low effective dimensionality, which are constant along certain (unknown) linear subspaces. Extending the technique of random subspace embeddings in [Wang et al., Bayesian optimization in a billion dimensions via random embeddings. JAIR, 55(1): 361–387, 2016], we study a generic Random Embeddings for Global Optimization (REGO) framework …
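
A numerical illustration of the problem class and of the REGO-style reduction (all names and dimensions below are placeholders for this demo): a function of the form $f(x) = g(Bx)$ is constant along the null space of $B$, and a single random Gaussian embedding $x = Ay$ turns the $D$-dimensional problem into a $d$-dimensional one.

```python
import numpy as np
from scipy.optimize import minimize

D, d_e, d = 1000, 2, 2
rng = np.random.default_rng(5)
Bmat = rng.standard_normal((d_e, D))

def f(x):                                        # constant along the null space of Bmat
    u = Bmat @ x
    return (u[0] - 3.0) ** 2 + (u[1] ** 2 - 1.0) ** 2

# f is unchanged by moves along the (D - d_e)-dimensional null space:
v = np.linalg.svd(Bmat)[2][-1]                   # a null-space direction of Bmat
x = rng.standard_normal(D)
print(np.isclose(f(x), f(x + 10 * v)))           # True

# REGO-style reduction: solve min_y f(A y) with a random D x d Gaussian A.
A = rng.standard_normal((D, d))
res = minimize(lambda y: f(A @ y), np.zeros(d))
print(res.fun)                                   # close to the global minimum f* = 0
```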

On monotonic estimates of the norm of the minimizers of regularized quadratic functions in Krylov spaces

We show that the minimizers of regularized quadratic functions restricted to their natural Krylov spaces increase in Euclidean norm as the spaces expand. Citation: Technical Report RAL-TR-2019-005, STFC-Rutherford Appleton Laboratory, Oxfordshire, England, April 5th 2019.
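
A quick numerical check of the statement, taking the cubic regularization $q(x) = g^\top x + \tfrac{1}{2}x^\top Hx + \tfrac{\sigma}{3}\|x\|^3$ as one instance of a regularized quadratic, with $H$ positive definite for a clean convex demo (both are simplifying assumptions; the report treats the general case). We minimize $q$ restricted to Krylov spaces $\mathcal{K}_k(H,g)$ of growing dimension and print the norms of the minimizers:

```python
import numpy as np
from scipy.optimize import minimize

n, sigma = 30, 1.0
rng = np.random.default_rng(6)
M = rng.standard_normal((n, n))
H = M @ M.T / n                                  # positive definite instance
g = rng.standard_normal(n)

Q = np.zeros((n, 0))                             # orthonormal Krylov basis
v = g / np.linalg.norm(g)
for k in range(1, 9):
    Q = np.column_stack([Q, v])
    # Restricted to K_k(H, g): x = Q y with Q orthonormal, so ||x|| = ||y||.
    Hk, gk = Q.T @ H @ Q, Q.T @ g
    q = lambda y: gk @ y + 0.5 * y @ Hk @ y + (sigma / 3) * np.linalg.norm(y) ** 3
    y = minimize(q, np.zeros(k)).x
    print(k, np.linalg.norm(y))                  # norms are nondecreasing in k
    w = H @ v
    w -= Q @ (Q.T @ w)                           # Gram-Schmidt against the basis
    v = w / np.linalg.norm(w)
```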

Escaping local minima with derivative-free methods: a numerical investigation

We apply a state-of-the-art local derivative-free solver, Py-BOBYQA, to global optimization problems, and propose an algorithmic improvement that is beneficial in this context. Our numerical findings are illustrated on a commonly used test set of global optimization problems and associated noisy variants, and on hyperparameter tuning for a machine learning test set. As Py-BOBYQA is a …
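
Py-BOBYQA is open source; a minimal usage sketch of its global-optimization mode follows (the objective and bounds are placeholders chosen for this demo; `seek_global_minimum` is the solver's documented switch for the restarts mechanism aimed at this setting):

```python
import numpy as np
import pybobyqa

def objfun(x):                                   # Rastrigin, a standard multimodal test
    return np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))

x0 = np.array([3.0, -2.0])
lower, upper = -5.12 * np.ones(2), 5.12 * np.ones(2)
soln = pybobyqa.solve(objfun, x0, bounds=(lower, upper),
                      seek_global_minimum=True)  # enables the restarts mechanism
print(soln.x, soln.f)
```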