Attraction of Newton method to critical Lagrange multipliers: fully quadratic case

All previously known results on the attraction of Newton-type iterations for optimality systems to critical Lagrange multipliers were a posteriori in nature: they showed that, in the case of convergence, the dual limit is in a sense unlikely to be noncritical. This paper presents the first a priori result in this direction, showing that critical …

Hardness and Approximation Results for $L_p$-Ball Constrained Homogeneous Polynomial Optimization Problems

In this paper, we establish hardness and approximation results for various $L_p$-ball constrained homogeneous polynomial optimization problems, where $p \in [2,\infty]$. Specifically, we prove that for any given $d \ge 3$ and $p \in [2,\infty]$, both the problem of optimizing a degree-$d$ homogeneous polynomial over the $L_p$-ball and the problem of optimizing a degree-$d$ multilinear …

A Reliable Affine Relaxation Method for Global Optimization

An automatic method for constructing linear relaxations of constrained global optimization problems is proposed. The construction is based on affine and interval arithmetic and uses operator overloading. The resulting linear programs have exactly the same number of variables and of inequality constraints as the given problems. Each equality constraint is replaced by two inequalities. This …
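As a toy illustration of the operator-overloading idea behind such a construction (a sketch invented for this listing, not the paper's implementation; the `AffineForm` class and its interface are hypothetical), an affine-arithmetic value can be stored as a center plus noise terms, with ordinary Python operators propagating the affine form:

```python
class AffineForm:
    """Minimal affine-arithmetic value x = c + sum_i coeffs[i] * eps_i,
    where each noise symbol eps_i ranges over [-1, 1].  Only the
    operations needed for the sketch (+, scalar *) are overloaded;
    this only illustrates the operator-overloading idea."""

    def __init__(self, c, coeffs=None):
        self.c = c
        self.coeffs = dict(coeffs or {})

    def __add__(self, other):
        if isinstance(other, AffineForm):
            keys = set(self.coeffs) | set(other.coeffs)
            return AffineForm(
                self.c + other.c,
                {k: self.coeffs.get(k, 0.0) + other.coeffs.get(k, 0.0) for k in keys},
            )
        return AffineForm(self.c + other, self.coeffs)  # constant shift

    def __rmul__(self, s):  # scalar * affine form
        return AffineForm(s * self.c, {k: s * v for k, v in self.coeffs.items()})

    def interval(self):
        """Enclosing interval: center +/- sum of |partial deviations|."""
        r = sum(abs(v) for v in self.coeffs.values())
        return (self.c - r, self.c + r)

# x in [1, 3] is represented as 2 + 1 * eps_0
x = AffineForm(2.0, {0: 1.0})
y = 3 * x + 1            # affine operations are exact in this arithmetic
print(y.interval())      # (4.0, 10.0)
```

Affine operations like the one above are exact; the nonlinear operations (products, transcendentals), which the paper's relaxation must linearize, are where the interval-style enclosure error enters.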

Reducing the Number of Function Evaluations in Mesh Adaptive Direct Search Algorithms

The mesh adaptive direct search (MADS) class of algorithms is designed for nonsmooth optimization, where the objective function and constraints are typically computed by launching a time-consuming computer simulation. Each iteration of a MADS algorithm attempts to improve the current best-known solution by launching the simulation at a finite number of trial points. Common implementations …
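Two generic evaluation-saving devices common in such implementations can be sketched as follows: a cache that skips already-tried points, and opportunistic polling that stops at the first improving trial point. All names here are invented for the sketch, and none of the MADS mesh/frame machinery is reproduced:

```python
def cached_eval(f, x, cache):
    """Launch the expensive evaluation only if x has not been tried before."""
    if x not in cache:
        cache[x] = f(x)
    return cache[x]

def opportunistic_poll(f, x, step, cache):
    """One poll step of a plain coordinate direct search (illustration only,
    not the MADS poll set): try +/- step along each coordinate and return
    at the FIRST improving point instead of evaluating all trial points."""
    fx = cached_eval(f, x, cache)
    for i in range(len(x)):
        for s in (step, -step):
            t = tuple(x[j] + (s if j == i else 0.0) for j in range(len(x)))
            ft = cached_eval(f, t, cache)
            if ft < fx:          # opportunistic: accept first improvement
                return t, ft
    return x, fx                 # poll failed; a MADS method would refine the mesh

calls = {"n": 0}
def f(x):                        # stand-in for a costly simulation
    calls["n"] += 1
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

cache = {}
x, fx = opportunistic_poll(f, (0.0, 0.0), 0.5, cache)
print(x, fx, calls["n"])         # improvement found after only 2 evaluations
```

Here the poll returns after two simulation calls rather than the five a complete coordinate poll would need, which is the kind of saving the paper quantifies for MADS.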

Global convergence of trust-region algorithms for constrained minimization without derivatives

In this work we propose a trust-region algorithm for minimizing a function over a closed convex domain. We assume that the objective function is differentiable but that no derivatives are available. The algorithm has a very simple structure and allows a great deal of freedom in the choice of the models. Under reasonable …

Low-rank matrix completion via preconditioned optimization on the Grassmann manifold

We address the numerical problem of recovering large matrices of low rank when most of the entries are unknown. We exploit the geometry of the low-rank constraint to recast the problem as an unconstrained optimization problem on a single Grassmann manifold. We then apply second-order Riemannian trust-region methods (RTRMC 2) and Riemannian conjugate gradient methods …

How much patience do you have? A worst-case perspective on smooth nonconvex optimization

The paper presents a survey of recent results in the field of worst-case complexity of algorithms for nonlinear (and possibly nonconvex) smooth optimization. Both the constrained and unconstrained cases are considered.

Approximation of Matrix Rank Function and Its Application to Matrix Rank Minimization

The matrix rank minimization problem arises in many fields such as control, signal processing, and system identification. However, the problem is NP-hard in general and computationally intractable to solve directly in practice. In this paper, we provide a new family of approximation functions for the matrix rank function, and the corresponding approximation …
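For illustration only (the paper's own approximation functions are not reproduced here), one standard smooth surrogate for the rank, evaluated on the singular values $\sigma_i$ of the matrix, is $\sum_i \sigma_i^2/(\sigma_i^2+\varepsilon)$, which tends to the true rank as $\varepsilon \to 0$:

```python
def approx_rank(sigmas, eps):
    """Smooth rank surrogate sum_i s_i^2 / (s_i^2 + eps) evaluated on the
    singular values s_i; each term is ~1 for s_i >> sqrt(eps) and ~0 for
    s_i = 0, so the sum approaches rank(X) as eps -> 0.  (Illustrative
    standard choice; the paper's approximation family may differ.)"""
    return sum(s * s / (s * s + eps) for s in sigmas)

# singular values of a hypothetical rank-2 matrix
sigmas = [3.0, 0.5, 0.0, 0.0]
for eps in (1.0, 1e-2, 1e-6):
    print(round(approx_rank(sigmas, eps), 4))
```

Shrinking `eps` sharpens the approximation toward the exact rank (2 here) at the cost of making the surrogate less smooth, which is the usual trade-off such approximation schemes negotiate.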

A Block Coordinate Descent Method for Regularized Multi-Convex Optimization with Applications to Nonnegative Tensor Factorization and Completion

This paper considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. We review some of its interesting examples and propose a generalized block coordinate descent method. (Using proximal updates, we further allow non-convexity over some blocks.) Under certain conditions, we show that …
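A minimal sketch of the alternating idea on a two-block multi-convex toy problem (invented for this illustration; the paper's method additionally covers proximal updates and general block structure):

```python
def bcd(x0, y0, iters=50):
    """Block coordinate descent on the multi-convex toy function
    f(x, y) = (x*y - 1)^2 + 0.1*x^2 + 0.1*y^2:
    nonconvex jointly, but convex in x for fixed y and vice versa,
    so each block update is an exact closed-form minimization
    obtained by setting the block gradient to zero."""
    x, y = x0, y0
    for _ in range(iters):
        x = y / (y * y + 0.1)   # argmin over x with y fixed
        y = x / (x * x + 0.1)   # argmin over y with x fixed
    return x, y

x, y = bcd(1.0, 1.0)
print(round(x, 4), round(y, 4))   # converges to x = y = sqrt(0.9)
```

Each block subproblem is convex, so every update decreases the (jointly nonconvex) objective; the iterates settle at the stationary point $x = y = \sqrt{0.9}$, the kind of limit-point behavior the paper's convergence analysis addresses in far greater generality.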

Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization

We propose a first-order interior point algorithm for a class of non-Lipschitz and nonconvex minimization problems with box constraints, which arise from applications in variable selection and regularized optimization. The objective functions of these problems are typically continuously differentiable only at interior points of the feasible set. Our algorithm is easy to implement and the …