Learning Dynamical Systems with Side Information

We present a mathematical and computational framework for the problem of learning a dynamical system from noisy observations of a few trajectories and subject to side information. Side information is any knowledge we might have about the dynamical system we would like to learn besides trajectory data. It is typically inferred from domain-specific knowledge or … Read more
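
To give a flavor of the problem, here is a minimal sketch of one way side information can enter a learning problem: fitting linear dynamics x' = Ax from one noisy trajectory, with side information encoded as sign constraints on the entries of A (a cooperative/monotone system). The linear model class, the constraint choice, and all names below are illustrative assumptions, not the framework of the paper.

```python
# Minimal sketch: fit x' = A x from noisy trajectory data, with side
# information encoded as sign constraints on A (nonnegative off-diagonals).
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n, T, dt = 3, 200, 0.01

A_true = np.array([[-1.0, 0.5, 0.0],
                   [0.2, -1.5, 0.3],
                   [0.0, 0.4, -0.8]])

# Simulate one noisy trajectory with forward Euler.
X = np.empty((T, n))
X[0] = rng.standard_normal(n)
for t in range(T - 1):
    X[t + 1] = X[t] + dt * (A_true @ X[t]) + 0.001 * rng.standard_normal(n)

Xdot = np.gradient(X, dt, axis=0)        # finite-difference derivatives

# Fit each row of A by bound-constrained least squares: the side
# information says the off-diagonal entries are >= 0.
A_hat = np.zeros((n, n))
for i in range(n):
    lb, ub = np.zeros(n), np.full(n, np.inf)
    lb[i], ub[i] = -np.inf, np.inf       # diagonal entry unconstrained
    A_hat[i] = lsq_linear(X, Xdot[:, i], bounds=(lb, ub)).x

print(np.round(A_hat, 2))
```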

On the Complexity of Finding a Local Minimizer of a Quadratic Function over a Polytope

We show that unless P=NP, there cannot be a polynomial-time algorithm that finds a point within Euclidean distance $c^n$ (for any constant $c \ge 0$) of a local minimizer of an $n$-variate quadratic function over a polytope. This result (even with $c=0$) answers a question of Pardalos and Vavasis that appeared in 1992 on a … Read more
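
The phenomenon behind the hardness can be seen in a toy instance (an illustration only, not from the paper): for an indefinite quadratic over a polytope, a stationary (KKT) point need not be a local minimizer, so certifying local minimality requires more than checking first-order conditions.

```python
# Illustration: the origin is a KKT point of an indefinite quadratic over
# the box [-1, 1]^2, yet it is not a local minimizer.
import numpy as np

Q = np.diag([1.0, -1.0])           # indefinite Hessian
f = lambda x: 0.5 * x @ Q @ x      # f(x) = (x1^2 - x2^2) / 2

x0 = np.zeros(2)                   # gradient Q x vanishes here: a KKT point
step = 1e-3 * np.array([0.0, 1.0])
print(f(x0), f(x0 + step))         # f decreases along x2: not a local min
```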

Complexity Aspects of Local Minima and Related Notions

We consider the notions of (i) critical points, (ii) second-order points, (iii) local minima, and (iv) strict local minima for multivariate polynomials. For each type of point, and as a function of the degree of the polynomial, we study the complexity of deciding (1) if a given point is of that type, and (2) if … Read more
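
As a concrete (and purely illustrative) instance of check (1), the sketch below verifies first- and second-order conditions of a polynomial at a given point symbolically; for degree 2 such checks settle local minimality, and the paper studies how the complexity of these decision problems grows with the degree.

```python
# Hedged sketch: verify criticality and a second-order condition at a point.
import sympy as sp

x, y = sp.symbols('x y')
p = x**2 + x*y + y**2 + y**4       # example polynomial
point = {x: 0, y: 0}

grad = [sp.diff(p, v) for v in (x, y)]
H = sp.hessian(p, (x, y))

is_critical = all(g.subs(point) == 0 for g in grad)
H0 = H.subs(point)
is_second_order = all(ev >= 0 for ev in H0.eigenvals())  # Hessian PSD?

print(is_critical, is_second_order)  # -> True True
```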

Iteration-complexity of a proximal augmented Lagrangian method for solving nonconvex composite optimization problems with nonlinear convex constraints

This paper proposes and analyzes a proximal augmented Lagrangian (NL-IAPIAL) method for solving smooth nonconvex composite optimization problems with nonlinear K-convex constraints, i.e., the constraints are convex with respect to the order given by a closed convex cone K. Each NL-IAPIAL iteration consists of inexactly solving a proximal augmented Lagrangian subproblem by an accelerated composite … Read more
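
For context, here is a minimal sketch of one augmented Lagrangian outer loop for the simplest instance of K-convexity, K = R^m_+ (ordinary convex inequalities g(x) <= 0). The inner solve below uses a generic scipy routine in place of the accelerated composite gradient method analyzed in the paper; the problem data are made up.

```python
# Minimal augmented Lagrangian sketch for g(x) <= 0 (i.e., K = R^m_+).
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2     # smooth objective
g = lambda x: np.array([x[0] + x[1] - 1.0])     # constraint g(x) <= 0

lam, c = np.zeros(1), 10.0                      # multiplier, penalty
x = np.zeros(2)
for _ in range(20):
    # Augmented Lagrangian for inequality constraints.
    AL = lambda x: f(x) + np.sum(np.maximum(0.0, lam + c * g(x))**2
                                 - lam**2) / (2 * c)
    x = minimize(AL, x).x                       # inexact inner solve
    lam = np.maximum(0.0, lam + c * g(x))       # multiplier update

print(np.round(x, 3), np.round(lam, 3))         # -> x ~ [1, 0], lam ~ [2]
```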

Globally convergent Newton-type methods for multiobjective optimization

We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed to solve convex problems, whereas the second combines second-order information of the objective functions with ingredients of the steepest descent method. One of the key points of our approaches is to impose … Read more
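
To make the idea of a multiobjective Newton step concrete, here is a hedged sketch in the spirit of Fliege, Graña Drummond and Svaiter (2009): the direction minimizes, over d, the worst-case quadratic model max_j grad f_j(x)^T d + (1/2) d^T Hess f_j(x) d, reformulated with an epigraph variable t. The paper's two methods add further ingredients for the nonconvex case; the objectives and the SLSQP subproblem solver below are illustrative choices.

```python
# Hedged sketch of one multiobjective Newton step on two quadratics.
import numpy as np
from scipy.optimize import minimize

grads = [lambda x: 2 * (x - np.array([1.0, 0.0])),
         lambda x: 2 * (x - np.array([0.0, 1.0]))]
hess = [2 * np.eye(2), 2 * np.eye(2)]

def newton_direction(x):
    q = [lambda d, j=j: grads[j](x) @ d + 0.5 * d @ hess[j] @ d
         for j in range(2)]
    # Variables z = (d, t); minimize t subject to q_j(d) <= t.
    obj = lambda z: z[2]
    cons = [{'type': 'ineq', 'fun': lambda z, qj=qj: z[2] - qj(z[:2])}
            for qj in q]
    z = minimize(obj, np.zeros(3), constraints=cons, method='SLSQP').x
    return z[:2]

x = np.array([2.0, 2.0])
for _ in range(5):
    x = x + newton_direction(x)   # unit step; a line search is used in practice
print(np.round(x, 3))             # -> [0.5, 0.5], a Pareto-optimal point
```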

Iteration-complexity of an inner accelerated inexact proximal augmented Lagrangian method based on the classical Lagrangian function and a full Lagrange multiplier update

This paper establishes the iteration-complexity of an inner accelerated inexact proximal augmented Lagrangian (IAPIAL) method for solving linearly constrained smooth nonconvex composite optimization problems which is based on the classical Lagrangian function and, most importantly, performs a full Lagrange multiplier update, i.e., no shrinking factor is applied to it. More specifically, each IAPIAL iteration consists … Read more
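
The "full" update the abstract refers to is λ ← λ + c(Ax − b), as opposed to updates damped by a shrinking factor θ < 1. A minimal sketch for a made-up linearly constrained problem is below; the inner accelerated solver of IAPIAL is replaced by a generic scipy call.

```python
# Minimal sketch of an augmented Lagrangian loop with a full multiplier update.
import numpy as np
from scipy.optimize import minimize

A, b = np.array([[1.0, 1.0]]), np.array([1.0])   # constraint A x = b
f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2

lam, c = np.zeros(1), 10.0
x = np.zeros(2)
for _ in range(20):
    AL = lambda x: (f(x) + lam @ (A @ x - b)
                    + 0.5 * c * np.sum((A @ x - b)**2))
    x = minimize(AL, x).x
    lam = lam + c * (A @ x - b)   # full update: no shrinking factor theta < 1
print(np.round(x, 3))             # -> [1, 0]
```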

Computational advances in polynomial optimization: RAPOSa, a freely available global solver

In this paper we introduce RAPOSa, a global optimization solver specifically designed for (continuous) polynomial programming problems with box-constrained variables. Written entirely in C++, RAPOSa is based on the Reformulation-Linearization Technique developed by Sherali and Tuncbilek (1992) and subsequently improved in Sherali et al. (2012a), Sherali et al. (2012b) and Dalkiran and Sherali (2013). We … Read more
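
To give a flavor of the Reformulation-Linearization Technique that RAPOSa builds on, the sketch below works out the degree-2 bound-factor products for a single bilinear term, where a lifted variable w stands in for x*y (this case coincides with the McCormick envelope). The bounds and names are illustrative only.

```python
# RLT bound-factor products for w = x*y with x in [lx, ux], y in [ly, uy]:
# multiplying nonnegative factors like (x - lx)(y - ly) >= 0 and replacing
# x*y by w yields linear constraints on w.
lx, ux = 0.0, 2.0      # bounds on x
ly, uy = 1.0, 3.0      # bounds on y

def rlt_bounds(x, y):
    lower = max(lx * y + ly * x - lx * ly,   # from (x - lx)(y - ly) >= 0
                ux * y + uy * x - ux * uy)   # from (x - ux)(y - uy) >= 0
    upper = min(ux * y + ly * x - ux * ly,   # from (x - ux)(y - ly) <= 0
                lx * y + uy * x - lx * uy)   # from (x - lx)(y - uy) <= 0
    return lower, upper

print(rlt_bounds(1.0, 2.0))   # -> (1.0, 3.0), a valid bracket on x*y = 2.0
```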

An Orthogonalization-free Parallelizable Framework for All-electron Calculations in Density Functional Theory

All-electron calculations play an important role in density functional theory, in which improving computational efficiency is one of the most pressing and challenging tasks. In the model formulations, both the nonlinear eigenvalue problem and the total energy minimization problem seek orthogonal solutions. Most existing algorithms for solving these two models invoke an orthogonalization process, either explicitly or implicitly … Read more
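
One well-known orthogonalization-free idea (shown here only as a hedged illustration, not necessarily the paper's algorithm) is to replace the constraint X^T X = I by a quadratic penalty and run plain gradient descent, so no QR or Cholesky orthogonalization is ever applied; the columns of X then align with the lowest eigenspace of the matrix up to a small, known scaling. The matrix, penalty weight, and step size below are arbitrary stand-ins.

```python
# Hedged sketch: penalty-based, orthogonalization-free eigenspace computation.
import numpy as np

rng = np.random.default_rng(1)
n, k, mu, lr = 50, 4, 20.0, 0.01

M = rng.standard_normal((n, n))
A = (M + M.T) / 2                     # symmetric "Hamiltonian" stand-in

X = 0.1 * rng.standard_normal((n, k))
for _ in range(2000):
    # Gradient of tr(X^T A X) + (mu/4) * ||X^T X - I||_F^2.
    grad = 2 * A @ X + mu * X @ (X.T @ X - np.eye(k))
    X -= lr * grad

# Compare the spanned subspace against the k lowest eigenvectors of A.
w, V = np.linalg.eigh(A)
Q, _ = np.linalg.qr(X)                # orthonormalize only for the check
print(np.linalg.svd(Q.T @ V[:, :k], compute_uv=False))  # ~ all ones
```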

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer. Problems that integrate such regularizers arise in modern machine learning applications, often for the purpose of obtaining models that are easier to interpret and that have higher predictive accuracy. We present a new … Read more
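
For context, the building block of proximal methods for this problem class is the proximal operator of the group regularizer r(x) = λ Σ_g ||x_g||_2 (group lasso), i.e., block soft-thresholding; the paper's contribution is a subspace acceleration on top of such steps. The function name and group layout below are hypothetical.

```python
# Block soft-thresholding: the prox of the group lasso regularizer.
import numpy as np

def group_prox(x, groups, lam):
    """Shrinks each group toward zero; zeroes a group out entirely
    when its Euclidean norm is below lam."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.2])
print(group_prox(x, [[0, 1], [2, 3]], lam=1.0))
# -> group [3, 4] shrunk to [2.4, 3.2]; small group [0.1, -0.2] set to 0
```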

On the best achievable quality of limit points of augmented Lagrangian schemes

The optimization literature abounds with papers dealing with improvements on the global convergence of augmented Lagrangian schemes. Usually, the results are based on weak constraint qualifications or, more recently, on sequential optimality conditions obtained via penalization techniques. In this paper we propose a somewhat different approach, in the sense that the algorithm itself is … Read more