Nonexpansive Markov Operators and Random Function Iterations for Stochastic Fixed Point Problems

We study the convergence of random function iterations for finding an invariant measure of the corresponding Markov operator. We call the problem of finding such an invariant measure the stochastic fixed point problem. This generalizes earlier work studying the stochastic feasibility problem, namely, to find points that are, with probability 1, fixed points of … Read more
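
As a hedged sketch of the setting (notation assumed here, since the abstract is truncated): given an i.i.d. sequence $(\xi_k)$ and a family of maps $T_\xi$ on a metric space, the random function iteration and the associated Markov operator acting on probability measures are

\[
X_{k+1} = T_{\xi_k}(X_k), \qquad (\mathcal{P}\mu)(A) = \int \mathbb{P}\big(T_\xi(x) \in A\big)\, \mu(dx),
\]

and the stochastic fixed point problem is to find an invariant measure $\pi$ with $\mathcal{P}\pi = \pi$. The stochastic feasibility problem is then the special case of finding points $x$ with $\mathbb{P}(T_\xi(x) = x) = 1$.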

A Radial Basis Function Method for Noisy Global Optimisation

We present a novel response surface method for global optimisation of an expensive and noisy (black-box) objective function, where error bounds on the deviation of the observed noisy function values from their true counterparts are available. The method is based on the well-established RBF method by Gutmann (2001a,c) for minimising an expensive and deterministic objective … Read more
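
The sketch below is not the method of the paper; it only illustrates the generic noisy-surrogate idea with SciPy. The smoothing level standing in for the available error bounds is an assumption, as are all names in the snippet.

```python
# A minimal sketch (not the paper's algorithm): fit a smoothed RBF surrogate to
# noisy evaluations of a black-box objective and minimize the surrogate.
# All names (noisy_objective, bounds, n_init) are illustrative assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_objective(x, sigma=0.05):          # hypothetical noisy black box
    return np.sum((x - 0.3) ** 2) + sigma * rng.standard_normal()

dim, n_init = 2, 20
bounds = [(0.0, 1.0)] * dim
X = rng.uniform(0.0, 1.0, size=(n_init, dim))       # initial design
y = np.array([noisy_objective(x) for x in X])

# Smoothing > 0 lets the surrogate deviate from the noisy data, in the spirit
# of matching the observations only up to the known error bounds.
surrogate = RBFInterpolator(X, y, kernel="cubic", smoothing=1e-2)

# Minimize the surrogate from the best sampled point to propose the next evaluation.
x0 = X[np.argmin(y)]
res = minimize(lambda x: surrogate(x[None, :])[0], x0, bounds=bounds, method="L-BFGS-B")
print("next candidate:", res.x)
```

A full RBF method would alternate such surrogate minimizations with new noisy evaluations and refitting; the snippet shows a single proposal step only.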

On Exact and Inexact RLT and SDP-RLT Relaxations of Quadratic Programs with Box Constraints

Quadratic programs with box constraints involve minimizing a possibly nonconvex quadratic function subject to lower and upper bounds on each variable. This is a well-known NP-hard problem that frequently arises in various applications. We focus on two convex relaxations, namely the RLT (Reformulation-Linearization Technique) relaxation and the SDP-RLT relaxation obtained by adding semidefinite constraints to … Read more
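
For illustration only (the box $[0,1]^n$ is an assumption), both relaxations lift the products via $X \approx xx^T$:

\[
\begin{aligned}
\text{(RLT)}\quad & \min_{x,\,X}\ \langle Q, X\rangle + c^T x \quad \text{s.t.}\ \ 0 \le x \le 1,\ \ X_{ij} \ge 0,\ \ X_{ij} \ge x_i + x_j - 1,\ \ X_{ij} \le x_i,\ \ X_{ij} \le x_j,\\
\text{(SDP-RLT)}\quad & \text{the same problem with the additional constraint } \begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \succeq 0,
\end{aligned}
\]

where the RLT inequalities come from multiplying the bound constraints pairwise and then linearizing.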

A Slightly Lifted Convex Relaxation for Nonconvex Quadratic Programming with Ball Constraints

Globally optimizing a nonconvex quadratic over the intersection of $m$ balls in $\mathbb{R}^n$ is known to be polynomial-time solvable for fixed $m$. Moreover, when $m=1$, the standard semidefinite relaxation is exact. When $m=2$, it has been shown recently that an exact relaxation can be constructed using a disjunctive semidefinite formulation based essentially on two … Read more
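
As a point of reference (notation assumed), for $\min\, x^T Q x + 2 q^T x$ subject to $\|x - c_i\| \le r_i$, $i = 1, \dots, m$, the standard semidefinite relaxation lifts $X \approx xx^T$:

\[
\min_{x,\,X}\ \langle Q, X\rangle + 2 q^T x \quad \text{s.t.}\quad \operatorname{tr}(X) - 2\, c_i^T x + \|c_i\|^2 \le r_i^2,\ \ i = 1,\dots,m, \qquad \begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \succeq 0,
\]

which is exact for $m = 1$ (the trust-region subproblem) but can be inexact for $m \ge 2$, motivating the disjunctive construction mentioned above.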

Optimality-Based Discretization Methods for the Global Optimization of Nonconvex Semi-Infinite Programs

We use sensitivity analysis to design optimality-based discretization (cutting-plane) methods for the global optimization of nonconvex semi-infinite programs (SIPs). We begin by formulating the optimal discretization of SIPs as a max-min problem and propose variants that are more computationally tractable. We then use parametric sensitivity theory to design an efficient method for solving these max-min … Read more
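
In a hedged formulation with assumed notation, the SIP and the optimal discretization with a budget of $N$ points can be written as

\[
\min_{x \in X} f(x)\ \ \text{s.t.}\ \ g(x, y) \le 0\ \ \forall\, y \in Y,
\qquad
\max_{y_1, \dots, y_N \in Y}\ \min_{x \in X} \big\{\, f(x) : g(x, y_j) \le 0,\ j = 1, \dots, N \,\big\},
\]

where the inner minimization is the discretized relaxation, so the outer maximization selects the finite discretization that yields the tightest lower bound.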

On the Optimization Landscape of Burer-Monteiro Factorization: When do Global Solutions Correspond to Ground Truth?

In low-rank matrix recovery, the goal is to recover a low-rank matrix, given a limited number of linear and possibly noisy measurements. Low-rank matrix recovery is typically solved via a nonconvex method called Burer-Monteiro factorization (BM). If the rank of the ground truth is known, BM is free of sub-optimal local solutions, and its true solutions … Read more
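
For the symmetric positive semidefinite case (shown only for illustration; the asymmetric case uses $M = UV^T$), Burer-Monteiro factorization replaces the rank-constrained matrix variable by an explicit factor:

\[
\min_{U \in \mathbb{R}^{n \times r}}\ \big\| \mathcal{A}(U U^T) - b \big\|_2^2,
\]

where $\mathcal{A}$ collects the linear measurements and $b$ the possibly noisy observations; the landscape question is whether the global minimizers of this nonconvex problem correspond to the ground-truth matrix.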

Force-Controlled Pose Optimization and Trajectory Planning for Chained Stewart Platforms

We study optimization methods applied to minimizing forces for poses and movements of chained Stewart platforms (SPs) that we call an “Assembler” Robot. These chained SPs are parallel mechanisms that are stronger, stiffer, and more precise, on average, than their serial counterparts at the cost of a smaller range of motion. Linking these units in … Read more

Strengthening SONC Relaxations with Constraints Derived from Variable Bounds

Nonnegativity certificates can be used to obtain tight dual bounds for polynomial optimization problems. Hierarchies of certificate-based relaxations ensure convergence to the global optimum, but higher levels of such hierarchies can become very computationally expensive, and the well-known sums of squares hierarchies scale poorly with the degree of the polynomials. This has motivated research into … Read more
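
The dual-bounding mechanism behind this, stated in a generic hedged form: for $\min_{x \in K} f(x)$, any $\lambda$ for which $f - \lambda$ admits a nonnegativity certificate on $K$ is a valid lower bound, so a certificate class $\mathcal{C}$ (such as sums of nonnegative circuit polynomials) yields the relaxation

\[
\sup \big\{\, \lambda \in \mathbb{R} : f - \lambda\ \text{has a certificate of nonnegativity on } K \text{ within } \mathcal{C} \,\big\}\ \le\ \min_{x \in K} f(x).
\]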

Inexact reduced gradient methods in nonconvex optimization

This paper proposes and develops new linesearch methods with inexact gradient information for finding stationary points of nonconvex continuously differentiable functions on finite-dimensional spaces. Some abstract convergence results for a broad class of linesearch methods are established. A general scheme for inexact reduced gradient (IRG) methods is proposed, where the errors in the gradient approximation … Read more
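
The snippet below is only a minimal sketch of the generic idea of a linesearch driven by an inexact gradient; the error model, test function, and parameter values are assumptions, not the IRG scheme of the paper.

```python
# A minimal sketch, not the paper's IRG scheme: backtracking linesearch along the
# negative of an inexact gradient g ≈ ∇f(x) with a bounded perturbation of size eps.
# The test function and error model are illustrative assumptions.
import numpy as np

def inexact_gradient(grad, x, eps, rng):
    e = rng.standard_normal(x.shape)
    return grad(x) + eps * e / np.linalg.norm(e)   # exact gradient plus a bounded error

def irg_step(f, grad, x, eps, rng, t0=1.0, beta=0.5, sigma=1e-4, max_backtracks=50):
    g = inexact_gradient(grad, x, eps, rng)
    d = -g                                         # descent direction from the inexact gradient
    t = t0
    for _ in range(max_backtracks):
        # Armijo-type sufficient decrease test using the inexact gradient norm
        if f(x + t * d) <= f(x) - sigma * t * np.dot(g, g):
            break
        t *= beta
    return x + t * d

# Illustrative run on a smooth nonconvex function
f = lambda x: np.sum(x ** 2) + np.cos(3.0 * x[0])
grad = lambda x: 2.0 * x + np.array([-3.0 * np.sin(3.0 * x[0]), 0.0])
rng = np.random.default_rng(1)
x = np.array([2.0, -1.5])
for k in range(100):
    x = irg_step(f, grad, x, eps=1e-2, rng=rng)
print("approximate stationary point:", x, "f =", f(x))
```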

Strong Partitioning and a Machine Learning Approximation for Accelerating the Global Optimization of Nonconvex QCQPs

We learn optimal instance-specific heuristics for the global minimization of nonconvex quadratically-constrained quadratic programs (QCQPs). Specifically, we consider partitioning-based convex mixed-integer programming relaxations for nonconvex QCQPs and propose the novel problem of strong partitioning to optimally partition variable domains without sacrificing global optimality. Since solving this max-min strong partitioning problem exactly can be very challenging, … Read more
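
One way to read the max-min problem mentioned above, with all notation assumed: if $\mathcal{P}_N$ denotes the admissible partitions of the variable domains into at most $N$ pieces and $\mathrm{LB}(P)$ the optimal value of the partitioning-based convex MIP relaxation induced by $P \in \mathcal{P}_N$, then strong partitioning seeks

\[
\max_{P \in \mathcal{P}_N}\ \mathrm{LB}(P),
\]

i.e., the partition whose relaxation gives the strongest lower bound on the nonconvex QCQP.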