Subset selection in sparse matrices

In subset selection we search for the best linear predictor that involves a small subset of variables. From a computational complexity viewpoint, subset selection is NP-hard, and few classes of instances are known to be solvable in polynomial time. Using mainly tools from discrete geometry, we show that some sparsity conditions on the original data matrix allow …
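As a point of reference (a standard formulation, not taken verbatim from the paper), best subset selection with response y, data matrix X, and sparsity level k can be written as

\[
\min_{\beta \in \mathbb{R}^{p}} \; \| y - X\beta \|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k,
\]

where \(\|\beta\|_0\) counts the nonzero entries of \(\beta\); this combinatorial constraint is what makes the problem NP-hard in general.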

Low-M-Rank Tensor Completion and Robust Tensor PCA

In this paper, we propose a new approach to solving low-rank tensor completion and robust tensor PCA. Our approach is based on novel notions of (even-order) tensor rank, to be called the M-rank, the symmetric M-rank, and the strongly symmetric M-rank. We discuss the connections between these new tensor ranks and the CP-rank and …
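For context, a generic low-rank tensor completion problem (written here with a generic rank function; the paper replaces it with the proposed M-rank notions) reads

\[
\min_{\mathcal{X}} \; \operatorname{rank}(\mathcal{X}) \quad \text{subject to} \quad \mathcal{X}_{\Omega} = \mathcal{T}_{\Omega},
\]

where \(\mathcal{T}\) is the partially observed tensor and \(\Omega\) indexes the observed entries.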

Global Solutions of Nonconvex Standard Quadratic Programs via Mixed Integer Linear Programming Reformulations

A standard quadratic program is an optimization problem that consists of minimizing a (nonconvex) quadratic form over the unit simplex. We focus on reformulating a standard quadratic program as a mixed integer linear programming problem. We propose two alternative mixed integer linear programming formulations. Our first formulation is based on casting a standard quadratic program …
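Concretely, with a symmetric (possibly indefinite) matrix \(Q\) and the all-ones vector \(e\), a standard quadratic program is

\[
\min_{x \in \mathbb{R}^{n}} \; x^{\top} Q x \quad \text{subject to} \quad e^{\top} x = 1, \quad x \ge 0.
\]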

Decision Diagram Decomposition for Quadratically Constrained Binary Optimization

In recent years the use of decision diagrams within the context of discrete optimization has proliferated. This paper continues this expansion by proposing the use of decision diagrams for modeling and solving binary optimization problems with quadratic constraints. The model uses multiple decision diagrams to decompose a quadratic matrix so that each …
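For orientation (a generic form, not necessarily the exact model treated in the paper), a binary optimization problem with quadratic constraints can be written as

\[
\min_{x \in \{0,1\}^{n}} \; c^{\top} x \quad \text{subject to} \quad x^{\top} Q_j x + a_j^{\top} x \le b_j, \quad j = 1, \dots, m,
\]

and the decomposition idea described above amounts to splitting each quadratic matrix \(Q_j\) across several decision diagrams of manageable size.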

Parallelizable Algorithms for Optimization Problems with Orthogonality Constraints

Constructing a parallel approach for solving optimization problems with orthogonality constraints is usually regarded as extremely difficult, due to the low scalability of the orthonormalization procedure. However, the demand for such approaches is particularly strong in application areas such as materials computation. In this paper, we propose a proximal linearized augmented Lagrangian algorithm (PLAM) …
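The generic problem class has the form

\[
\min_{X \in \mathbb{R}^{n \times p}} \; f(X) \quad \text{subject to} \quad X^{\top} X = I_p,
\]

and an augmented-Lagrangian treatment of the orthogonality constraint adds a penalty term such as \(\tfrac{\beta}{2}\,\| X^{\top} X - I_p \|_F^2\) to the objective (a standard construction; the precise proximal linearized update used by PLAM is given in the paper).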

Second-order Guarantees of Distributed Gradient Algorithms

We consider distributed smooth nonconvex unconstrained optimization over networks, modeled as a connected graph. We examine the behavior of distributed gradient-based algorithms near strict saddle points. Specifically, we establish that (i) the renowned Distributed Gradient Descent (DGD) algorithm likely converges to a neighborhood of a Second-order Stationary (SoS) solution; and (ii) the more recent class …
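The basic DGD iteration combines a consensus (averaging) step over the network with a local gradient step. Below is a minimal NumPy sketch of that standard iteration, assuming illustrative scalar quadratic local costs, a hand-picked doubly stochastic weight matrix on a ring of four agents, and a fixed stepsize; none of these choices comes from the paper, whose saddle-point analysis this toy example does not reproduce.

import numpy as np

# Minimal sketch of Distributed Gradient Descent (DGD) on a ring of 4 agents.
# Each agent i holds the illustrative local cost f_i(x) = 0.5 * (x - b_i)^2
# and mixes with its neighbors through a doubly stochastic weight matrix W.
n_agents, alpha, n_iters = 4, 0.1, 500
b = np.array([1.0, 2.0, 3.0, 4.0])        # local data (hypothetical)
W = np.array([[0.50, 0.25, 0.00, 0.25],    # symmetric, rows and columns sum to 1
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(n_agents)                     # one scalar estimate per agent
for _ in range(n_iters):
    grad = x - b                           # gradient of each local cost
    x = W @ x - alpha * grad               # consensus step minus local gradient step

print(x)  # with a constant stepsize, the estimates settle near the average 2.5

With a constant stepsize, DGD is only guaranteed to reach a neighborhood of the network-wide minimizer, which is consistent with the "neighborhood of a SoS solution" statement in the abstract.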

Steplength selection in gradient projection methods for box-constrained quadratic programs

The role of steplength selection strategies in gradient methods has been widely investigated in recent decades. Starting from the work of Barzilai and Borwein (1988), many efficient steplength rules have been designed, which have contributed to making gradient approaches an effective tool for the large-scale optimization problems arising in important real-world applications. Most …
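The Barzilai–Borwein rules referenced here are, in their classical form,

\[
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}}, \qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
\]

with \(s_{k-1} = x_k - x_{k-1}\) and \(y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})\); the steplength rules studied in the paper build on, but need not coincide with, these formulas.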

An inertial extrapolation method for convex simple bilevel optimization

We consider a scalar objective minimization problem over the solution set of another optimization problem. This problem is known as the simple bilevel optimization problem and has drawn significant attention in recent years. Our inner problem consists of minimizing the sum of smooth and nonsmooth functions, while the outer one is the minimization …
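In symbols (illustrative notation, not necessarily the paper's), the problem reads

\[
\min_{x} \; \omega(x) \quad \text{subject to} \quad x \in \arg\min_{z} \; \{ f(z) + g(z) \},
\]

where \(f\) is smooth, \(g\) is nonsmooth, and \(\omega\) is the outer objective.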

On the complexity of an Inexact Restoration method for constrained optimization

Recent papers indicate that some algorithms for constrained optimization may exhibit worst-case complexity bounds that are very similar to those of unconstrained optimization algorithms. A natural question is whether well-established practical algorithms, perhaps with small variations, may enjoy analogous complexity results. In the present paper we show that the answer is positive with respect …
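For reference (an illustration of the kind of bound meant here, not a result from the paper), a typical worst-case complexity statement for smooth unconstrained minimization is

\[
\min_{0 \le k \le K} \|\nabla f(x_k)\| \le \epsilon \quad \text{within} \quad K = O(\epsilon^{-2}) \ \text{iterations},
\]

and the question addressed in the paper is whether a practical constrained-optimization algorithm such as Inexact Restoration admits analogous bounds.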

An Inexact First-order Method for Constrained Nonlinear Optimization

The primary focus of this paper is on designing inexact first-order methods for solving large-scale constrained nonlinear optimization problems. By controlling the inexactness of the subproblem solution, we can significantly reduce the computational cost needed for each iteration. A penalty parameter updating strategy during the subproblem solve enables the algorithm to automatically detect infeasibility. Global …
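A common way to obtain such behavior (a standard construction, not necessarily the one used in the paper) is to build subproblems from an exact penalty function, e.g.

\[
\min_{d} \; \nabla f(x_k)^{\top} d + \rho \, \| c(x_k) + \nabla c(x_k)\, d \|_1,
\]

where increasing the penalty parameter \(\rho\) whenever the linearized constraint violation cannot be sufficiently reduced provides a mechanism for declaring the problem (locally) infeasible.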