A Subsampling Line-Search Method with Second-Order Results

In many contemporary optimization problems such as those arising in machine learning, it can be computationally challenging or even infeasible to evaluate an entire function or its derivatives. This motivates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees obtained through classical globalization techniques in optimization such as a trust …
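As a rough illustration of the setting (not the authors' method), the sketch below forms function and gradient estimates from a random subsample of data and feeds them to a plain backtracking Armijo line search; the least-squares loss, batch size, and safeguard threshold are all assumptions made for this example. With noisy estimates the usual monotone decrease guarantee no longer holds, which is precisely the difficulty such papers address.

```python
import numpy as np

def subsampled_estimates(w, A, b, batch, rng):
    """Estimate f(w) = (1/2n)||Aw - b||^2 and its gradient
    from a random subsample of rows (hypothetical loss)."""
    idx = rng.choice(A.shape[0], size=batch, replace=False)
    r = A[idx] @ w - b[idx]
    return 0.5 * np.mean(r ** 2), A[idx].T @ r / batch

def stochastic_armijo_step(w, A, b, batch, rng, alpha=1.0, c=1e-4, tau=0.5):
    """One backtracking Armijo step on subsampled estimates."""
    f, g = subsampled_estimates(w, A, b, batch, rng)
    d = -g                                     # steepest-descent direction
    while alpha > 1e-10:
        f_new, _ = subsampled_estimates(w + alpha * d, A, b, batch, rng)
        if f_new <= f + c * alpha * (g @ d):   # Armijo test on noisy values
            return w + alpha * d
        alpha *= tau                           # shrink the step and retry
    return w                                   # step collapsed under noise

rng = np.random.default_rng(0)
A, b = rng.standard_normal((1000, 20)), rng.standard_normal(1000)
w = np.zeros(20)
for _ in range(50):
    w = stochastic_armijo_step(w, A, b, batch=64, rng=rng)
```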

Non-monotone Inexact Restoration Method for nonlinear programming

This paper deals with a new variant of the Inexact Restoration Method of Fischer and Friedlander (COAP, 46, pp. 333-346, 2010). We propose an algorithm that replaces the monotone line-search performed in the tangent phase (with regard to the penalty function) by a non-monotone one. Convergence to feasible points satisfying the approximate gradient projection …
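The classical template for this kind of relaxation is the non-monotone line search of Grippo, Lampariello, and Lucidi, in which sufficient decrease is measured against the maximum of the last few function values rather than the current one. The sketch below is that generic scheme applied to plain steepest descent, not the paper's tangent-phase rule; the merit function, memory length, and stopping threshold are placeholders.

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, memory=10, c=1e-4, tau=0.5, iters=200):
    """Grippo-Lampariello-Lucidi style non-monotone Armijo search:
    accept a step once f(x + alpha * d) sufficiently undercuts the
    maximum of the last `memory` function values."""
    hist = [f(x)]
    for _ in range(iters):
        g = grad(x)
        d = -g                                  # steepest-descent direction
        f_ref = max(hist[-memory:])             # non-monotone reference value
        alpha = 1.0
        while f(x + alpha * d) > f_ref + c * alpha * (g @ d):
            alpha *= tau
            if alpha < 1e-12:
                return x                        # step collapsed; stop
        x = x + alpha * d
        hist.append(f(x))
    return x

# Quick check on the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(nonmonotone_armijo(f, grad, np.array([-1.2, 1.0])))
```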

Dynamic Optimization with Convergence Guarantees

We present a novel direct transcription method to solve optimization problems subject to nonlinear differential and inequality constraints. In order to provide numerical convergence guarantees, it is sufficient for the functions that define the problem to satisfy boundedness and Lipschitz conditions. Our assumptions are the most general to date; we do not require uniqueness, differentiability …
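In its simplest form, direct transcription discretizes the dynamics and imposes them as equality constraints of a finite-dimensional nonlinear program. The sketch below applies an explicit-Euler transcription to a double integrator with box-bounded control and minimizes control effort; the horizon, dynamics, and cost are illustrative choices, not the paper's formulation, which is considerably more general.

```python
import numpy as np
from scipy.optimize import minimize

N, h = 20, 0.1            # grid intervals and step size
nx, nu = 2, 1             # state (position, velocity) and control dimensions

def unpack(z):
    x = z[:(N + 1) * nx].reshape(N + 1, nx)
    u = z[(N + 1) * nx:].reshape(N, nu)
    return x, u

def defects(z):
    """Explicit Euler: x[k+1] - x[k] - h * f(x[k], u[k]) = 0."""
    x, u = unpack(z)
    f = np.column_stack([x[:-1, 1], u[:, 0]])   # double-integrator dynamics
    return (x[1:] - x[:-1] - h * f).ravel()

def cost(z):
    _, u = unpack(z)
    return h * np.sum(u ** 2)                   # control effort

cons = [{'type': 'eq', 'fun': defects},
        {'type': 'eq', 'fun': lambda z: unpack(z)[0][0] - np.array([0.0, 0.0])},
        {'type': 'eq', 'fun': lambda z: unpack(z)[0][-1] - np.array([1.0, 0.0])}]
bounds = [(None, None)] * ((N + 1) * nx) + [(-2.0, 2.0)] * (N * nu)
res = minimize(cost, np.zeros((N + 1) * nx + N * nu),
               method='SLSQP', constraints=cons, bounds=bounds)
print(res.success, res.fun)
```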

Non-convex min-max fractional quadratic problems under quadratic constraints: copositive relaxations

In this paper we address a min-max problem whose objectives are ratios of (not necessarily convex) quadratic functions over linear functions, on a feasible set described by linear and (not necessarily convex) quadratic functions. We propose a conic reformulation on the cone of completely positive matrices. By relaxation, a doubly nonnegative conic formulation is used to provide lower …
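For readers unfamiliar with the cones involved, the standard inclusion below (a textbook fact, not the paper's specific reformulation) is what makes the relaxation work:

```latex
\[
\mathcal{CP}_n
  = \operatorname{conv}\bigl\{\, x x^{\top} : x \in \mathbb{R}^{n}_{+} \,\bigr\}
\;\subseteq\;
\mathcal{DNN}_n
  = \bigl\{\, X \in \mathbb{S}^{n} : X \succeq 0,\; X \ge 0 \,\bigr\}.
\]
```

Replacing the intractable completely positive cone by the larger doubly nonnegative cone enlarges the feasible set of the conic program, so for a minimization the relaxed optimal value can only decrease, yielding the lower bounds the abstract refers to.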

Subset selection in sparse matrices

In subset selection we search for the best linear predictor that involves a small subset of variables. From a computational complexity viewpoint, subset selection is NP-hard and few classes are known to be solvable in polynomial time. Using mainly tools from discrete geometry, we show that some sparsity conditions on the original data matrix allow …
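To make the combinatorial nature of the problem concrete, here is a brute-force sketch (an illustration of why the problem is hard, not the paper's polynomial-time approach): it enumerates all k-subsets of columns and keeps the one with the smallest least-squares residual.

```python
import itertools
import numpy as np

def best_subset(A, b, k):
    """Exhaustive subset selection: best least-squares fit of b using
    exactly k columns of A.  Cost grows as C(n, k) -- exponential."""
    best = (np.inf, None, None)
    for cols in itertools.combinations(range(A.shape[1]), k):
        coef, res, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        rss = res[0] if res.size else np.sum((A[:, cols] @ coef - b) ** 2)
        if rss < best[0]:
            best = (rss, cols, coef)
    return best

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = A[:, [2, 7]] @ np.array([1.5, -2.0]) + 0.01 * rng.standard_normal(50)
rss, cols, coef = best_subset(A, b, k=2)
print(cols, coef)   # expected to recover columns (2, 7)
```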

Low-M-Rank Tensor Completion and Robust Tensor PCA

In this paper, we propose a new approach to solve low-rank tensor completion and robust tensor PCA. Our approach is based on novel notions of (even-order) tensor rank, called the M-rank, the symmetric M-rank, and the strongly symmetric M-rank. We discuss the connections between these new tensor ranks and the CP-rank and …

Global Solutions of Nonconvex Standard Quadratic Programs via Mixed Integer Linear Programming Reformulations

A standard quadratic program is an optimization problem that consists of minimizing a (nonconvex) quadratic form over the unit simplex. We focus on reformulating a standard quadratic program as a mixed integer linear programming problem. We propose two alternative mixed integer linear programming formulations. Our first formulation is based on casting a standard quadratic program …
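The problem class, together with one well-known route (a generic construction, not necessarily either of the authors' formulations) from a standard quadratic program to a mixed integer linear program via the KKT conditions, can be sketched as follows; the big-M constant M is an assumed valid bound on the multipliers.

```latex
\[
\min_{x \in \mathbb{R}^n}\; x^{\top} Q x
\quad \text{s.t.} \quad e^{\top} x = 1,\;\; x \ge 0.
\]
```

At a KKT point, \(2Qx = \lambda e + \mu\) with \(\mu \ge 0\) and \(\mu_i x_i = 0\); multiplying by \(x^{\top}\) and using \(e^{\top}x = 1\) and \(\mu^{\top}x = 0\) gives \(x^{\top} Q x = \lambda/2\). Linearizing complementarity with binaries \(y \in \{0,1\}^n\) (note \(x \le e\) on the simplex, so 1 is a valid big-M for \(x\)):

```latex
\[
\min_{x,\,\mu,\,\lambda,\,y}\; \tfrac{1}{2}\lambda
\quad \text{s.t.} \quad
2Qx - \lambda e - \mu = 0,\;\;
e^{\top} x = 1,\;\;
0 \le x \le y,\;\;
0 \le \mu \le M(e - y),\;\;
y \in \{0,1\}^n.
\]
```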

Decision Diagram Decomposition for Quadratically Constrained Binary Optimization

In recent years the use of decision diagrams within the context of discrete optimization has proliferated. This paper continues this expansion by proposing the use of decision diagrams for modeling and solving binary optimization problems with quadratic constraints. The model uses multiple decision diagrams to decompose a quadratic matrix so that each …
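Since the abstract is truncated, the following one-line identity is only a generic reading of the decomposition idea: split the quadratic matrix into a sum and let each decision diagram track one term.

```latex
\[
x^{\top} Q x \;=\; \sum_{k=1}^{m} x^{\top} Q_k\, x,
\qquad Q = \sum_{k=1}^{m} Q_k,\quad x \in \{0,1\}^n,
\]
```

with each summand handled by its own (smaller) diagram in the decomposition.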

Parallelizable Algorithms for Optimization Problems with Orthogonality Constraints

Constructing a parallel approach for solving optimization problems with orthogonality constraints is usually regarded as extremely difficult, owing to the poor scalability of the orthonormalization procedure. The demand for such approaches is nonetheless pressing in application areas such as materials computation. In this paper, we propose a proximal linearized augmented Lagrangian algorithm (PLAM) …
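The problem class and a generic augmented Lagrangian for it are standard (the proximal linearized update itself is the paper's contribution and is not reproduced here):

```latex
\[
\min_{X \in \mathbb{R}^{n \times p}} f(X)
\quad \text{s.t.} \quad X^{\top} X = I_p,
\qquad
\mathcal{L}_{\beta}(X, \Lambda)
  = f(X) - \bigl\langle \Lambda,\, X^{\top} X - I_p \bigr\rangle
  + \frac{\beta}{2}\,\bigl\| X^{\top} X - I_p \bigr\|_{\mathrm{F}}^{2}.
\]
```

Minimizing \(\mathcal{L}_{\beta}\) in \(X\) involves no orthonormalization step, which is what makes parallelization feasible.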

Second-order Guarantees of Distributed Gradient Algorithms

We consider distributed smooth nonconvex unconstrained optimization over networks, modeled as a connected graph. We examine the behavior of distributed gradient-based algorithms near strict saddle points. Specifically, we establish that (i) the renowned Distributed Gradient Descent (DGD) algorithm likely converges to a neighborhood of a Second-order Stationary (SoS) solution; and (ii) the more recent class …
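For context, the canonical DGD update at agent \(i\) on a graph with mixing weights \(w_{ij}\) (the standard form in the literature; the step size \(\alpha\) and weights are generic) reads:

```latex
\[
x_i^{k+1} \;=\; \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}\, x_j^{k}
\;-\; \alpha\, \nabla f_i\!\bigl(x_i^{k}\bigr),
\]
```

where \(f_i\) is agent \(i\)'s local smooth loss and \(\mathcal{N}_i\) its neighborhood; the paper's question is how such iterates behave near strict saddle points of the aggregate objective.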