Completely Positive Reformulations for Polynomial Optimization

Polynomial optimization encompasses a very rich class of problems in which both the objective and constraints can be written in terms of polynomials in the decision variables. There is a well-established body of research on quadratic polynomial optimization problems based on reformulations of the original problem as a conic program over the cone of … Read more
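
As a concrete illustration of the flavor of reformulation referred to above (a standard fact for the simplest case, the standard quadratic program, and not necessarily the most general setting treated in the paper), the problem
\[
\min_{x \in \Delta} x^{\mathsf T} Q x, \qquad \Delta = \{ x \in \mathbb{R}^n : x \ge 0,\ e^{\mathsf T} x = 1 \},
\]
is equivalent to the linear conic program
\[
\min_{X} \ \langle Q, X \rangle \quad \text{s.t.} \quad \langle e e^{\mathsf T}, X \rangle = 1, \quad X \in \mathcal{CP}_n,
\]
where $\mathcal{CP}_n = \{ \sum_k z_k z_k^{\mathsf T} : z_k \ge 0 \}$ is the cone of completely positive $n \times n$ matrices; the hardness of the original problem is shifted entirely into the cone membership constraint.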

Mathematical Programming: Turing completeness and applications to software analysis

Mathematical Programming is Turing complete, and can be used as a general-purpose declarative language. We present a new constructive proof of this fact, and showcase its usefulness by discussing an application to finding the hardest input of any given program running on a Minsky Register Machine. We also discuss an application of Mathematical Programming to … Read more
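
To make the machine model concrete, here is a minimal Minsky Register Machine interpreter in Python (an illustration of the computational model only, not of the Mathematical Programming encoding constructed in the paper; the instruction names are ours):

    # Minimal Minsky Register Machine interpreter (illustrative sketch).
    # A program is a list of instructions:
    #   ("INC", r, j)       increment register r, then jump to instruction j
    #   ("DECJZ", r, j, k)  if register r > 0, decrement it and jump to j,
    #                       otherwise jump to k
    #   ("HALT",)           stop
    def run(program, registers, max_steps=10_000):
        pc, steps = 0, 0
        while program[pc][0] != "HALT" and steps < max_steps:
            op = program[pc]
            if op[0] == "INC":
                registers[op[1]] += 1
                pc = op[2]
            else:  # "DECJZ"
                if registers[op[1]] > 0:
                    registers[op[1]] -= 1
                    pc = op[2]
                else:
                    pc = op[3]
            steps += 1
        return registers, steps

    # Example: add register 1 to register 0 (emptying register 1).
    prog = [("DECJZ", 1, 1, 2), ("INC", 0, 0), ("HALT",)]
    print(run(prog, {0: 2, 1: 3}))   # ({0: 5, 1: 0}, 7)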

Convex Quadratic Relaxations for Mixed-Integer Nonlinear Programs in Power Systems

This paper presents a set of new convex quadratic relaxations for nonlinear and mixed-integer nonlinear programs arising in power systems. The considered models are motivated by hybrid discrete/continuous applications where existing approximations do not provide optimality guarantees. The new relaxations offer computational efficiency along with minimal optimality gaps, providing an interesting alternative to state-of-the-art semi-definite … Read more
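
For background on the kind of convex quadratic relaxation involved (a generic building block, not the specific relaxations proposed in the paper): a bilinear term $z = xy$ with $x \in [x^L, x^U]$ and $y \in [y^L, y^U]$ admits the McCormick relaxation
\[
z \ge x^L y + y^L x - x^L y^L, \quad
z \ge x^U y + y^U x - x^U y^U, \quad
z \le x^U y + y^L x - x^U y^L, \quad
z \le x^L y + y^U x - x^L y^U,
\]
while a square term $z = x^2$ can be relaxed by the convex inequality $z \ge x^2$ together with the secant cut $z \le (x^L + x^U) x - x^L x^U$.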

Rounding on the standard simplex: regular grids for global optimization

Given a point on the standard simplex, we calculate a closest point on the regular grid with respect to any norm in a large class, including all $\ell^p$-norms for $p\ge 1$. We show that the minimal $\ell^p$-distance to the regular grid on the standard simplex can exceed one, even for very fine … Read more
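
For intuition, the regular grid of resolution $m$ on the standard simplex $\Delta_n$ is $\Delta(n,m) = \{ x \in \Delta_n : m x \in \mathbb{Z}^n \}$. A simple rounding heuristic (our illustration of the problem, not necessarily the rounding studied in the paper) floors the scaled coordinates and hands the leftover grid units to the largest fractional parts:

    import numpy as np

    def round_to_grid(x, m):
        """Round a point x of the standard simplex to the grid
        {y >= 0 : m*y integer, sum(y) = 1} (heuristic, not an exact nearest point)."""
        scaled = m * np.asarray(x, dtype=float)
        y = np.floor(scaled)
        leftover = int(round(m - y.sum()))        # grid units still to distribute
        order = np.argsort(scaled - y)[::-1]      # indices by decreasing fractional part
        y[order[:leftover]] += 1
        return y / m

    print(round_to_grid([0.24, 0.33, 0.43], 5))   # [0.2 0.4 0.4]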

Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization

We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. … Read more
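
For orientation, the following is a bare-bones sketch of plain (non-accelerated, non-proximal) SDCA for ridge regression, i.e. squared loss with $\ell_2$ regularization; the closed-form coordinate update is standard, but this only illustrates the basic dual coordinate ascent scheme, not the accelerated inner-outer procedure analyzed in the paper:

    import numpy as np

    def sdca_ridge(X, y, lam, epochs=20, seed=0):
        """Plain SDCA for min_w (1/n) sum_i 0.5*(x_i^T w - y_i)^2 + (lam/2)*||w||^2."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        alpha = np.zeros(n)        # dual variables, one per example
        w = np.zeros(d)            # primal iterate, kept equal to X^T alpha / (lam*n)
        for _ in range(epochs):
            for i in rng.permutation(n):
                # exact maximization of the dual objective over coordinate i
                delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
                alpha[i] += delta
                w += delta * X[i] / (lam * n)
        return w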

Global Optimization of Generalized Semi-Infinite Programs via Restriction of the Right Hand Side

The algorithm proposed in [Mitsos Optimization 2011] for the global optimization of semi-infinite programs is extended to the global optimization of generalized semi-infinite programs (GSIP). No convexity or concavity assumptions are made. The algorithm employs convergent lower and upper bounds which are based on regular (in general nonconvex) nonlinear programs (NLP) solved by a (black-box) … Read more
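
Schematically (our paraphrase of the restriction idea, written for the ordinary SIP case), the semi-infinite constraint $g(x,y) \le 0$ for all $y \in Y$ is handled through two finite nonlinear programs built from a finite set $Y^k \subset Y$ and a restriction parameter $\varepsilon^k > 0$:
\[
\text{(LBD)}\ \min_{x \in X} f(x) \ \text{ s.t. }\ g(x,y) \le 0 \ \ \forall y \in Y^k,
\qquad
\text{(UBD)}\ \min_{x \in X} f(x) \ \text{ s.t. }\ g(x,y) \le -\varepsilon^k \ \ \forall y \in Y^k,
\]
where candidate points are checked for feasibility by globally solving the lower-level problem $\max_{y \in Y} g(x,y)$, the set $Y^k$ is populated with the maximizers found, and $\varepsilon^k$ is driven to zero.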

Rational sums of hermitian squares of free noncommutative polynomials

In this paper we consider polynomials in noncommuting variables that admit decompositions into sums of hermitian squares and commutators. We recall algorithms for finding decompositions of this type that are based on semidefinite programming. The main part of the article investigates how to find such a decomposition with rational coefficients if the original polynomial has rational coefficients. … Read more
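
The semidefinite programming connection rests on the usual Gram-matrix characterization: roughly, for a symmetric noncommutative polynomial $f$ and a vector of words $W$,
\[
f = \sum_i g_i^* g_i \quad \Longleftrightarrow \quad f = W^* G\, W \ \text{ for some } G \succeq 0,
\]
and a factorization $G = C^* C$ recovers the squares $g_i$ from the rows of $C W$ (allowing commutators changes only the linear constraints identifying coefficients, up to cyclic equivalence). Producing a certificate with rational entries then amounts to exhibiting a rational feasible $G \succeq 0$, which is the issue the paper addresses.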

A New Framework for Combining Global and Local Methods in Black Box Optimization

We propose a new framework for the optimization of computationally expensive black box problems, where neither closed-form expressions nor derivatives of the objective functions are available. The proposed framework consists of two procedures. The first constructs a global metamodel to approximate the underlying black box function and explores an unvisited area to search for a … Read more
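
A much simplified version of such a two-phase loop, with a radial-basis-function surrogate for global exploration and a derivative-free local refinement, might look as follows (our sketch of the general idea only, not the framework proposed in the paper; it also ignores the cost of the extra local evaluations):

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def global_local_loop(f, bounds, n_init=10, n_iter=15, seed=0):
        """Toy loop: RBF surrogate + random candidate search (global phase),
        then Nelder-Mead around the incumbent (local phase)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        X = rng.uniform(lo, hi, size=(n_init, len(lo)))     # initial design
        y = np.array([f(x) for x in X])
        for _ in range(n_iter):
            # small smoothing guards against near-duplicate sample points
            surrogate = RBFInterpolator(X, y, smoothing=1e-9)
            cand = rng.uniform(lo, hi, size=(2000, len(lo)))
            x_new = cand[np.argmin(surrogate(cand))]        # explore via the surrogate
            X, y = np.vstack([X, x_new]), np.append(y, f(x_new))
            res = minimize(f, X[np.argmin(y)], method="Nelder-Mead",
                           bounds=list(zip(lo, hi)), options={"maxfev": 50})
            X, y = np.vstack([X, res.x]), np.append(y, res.fun)
        return X[np.argmin(y)], y.min()

    # Example usage on a cheap test function:
    print(global_local_loop(lambda x: np.sum((x - 0.3) ** 2), [(-1, 1), (-1, 1)]))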

RSP-Based Analysis for Sparsest and Least $\ell_1$-Norm Solutions to Underdetermined Linear Systems

Recently, worst-case analysis, probabilistic analysis and empirical justification have been employed to address the fundamental question: When does $\ell_1$-minimization find the sparsest solution to an underdetermined linear system? In this paper, a deterministic analysis, rooted in the classic linear programming theory, is carried out to further address this question. We first identify a necessary … Read more
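
The question contrasts the combinatorial problem with its convex surrogate,
\[
\min_x \ \|x\|_0 \ \text{ s.t. }\ Ax = b
\qquad \text{versus} \qquad
\min_x \ \|x\|_1 \ \text{ s.t. }\ Ax = b,
\]
with $A \in \mathbb{R}^{m \times n}$, $m < n$; roughly speaking, the RSP in the title refers to a range space property of $A^{\mathsf T}$ under which solving the $\ell_1$ problem yields a sparsest solution.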

An Inexact Proximal Method for Quasiconvex Minimization

In this paper we propose an inexact proximal point method to solve constrained minimization problems with locally Lipschitz quasiconvex objective functions. Assuming that the function is also bounded from below and lower semicontinuous, and using proximal distances, we show that the sequence generated by the method converges to a stationary point of the problem. … Read more
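
Schematically, a proximal point scheme with a proximal distance $d(\cdot,\cdot)$ generates iterates
\[
x^{k+1} \in \operatorname*{arg\,min}_{x \in C} \left\{ f(x) + \lambda_k\, d(x, x^k) \right\},
\]
and an inexact variant requires the subproblem to be solved only approximately, with errors controlled along the iterations (the precise inexactness criterion and the notion of stationarity are those specified in the paper).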