Methods for Convex and General Quadratic Programming

Computational methods are considered for finding a point that satisfies the second-order necessary conditions for a general (possibly nonconvex) quadratic program (QP). The first part of the paper considers the formulation and analysis of an active-set method for a generic QP with both equality and inequality constraints. The method uses a search direction that is …
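The core subproblem in a generic active-set QP method is an equality-constrained QP over the current working set. Since the abstract is truncated before the search direction is specified, here is only a minimal sketch of that standard building block; the name eqp_step and the dense KKT solve are assumptions, not the paper's implementation.

```python
import numpy as np

def eqp_step(H, g, A_active):
    """Solve the equality-constrained subproblem
        min_p  g^T p + 0.5 p^T H p   s.t.  A_active p = 0
    via its KKT system; returns the step p and multipliers y."""
    n, m = H.shape[0], A_active.shape[0]
    K = np.block([[H, A_active.T],
                  [A_active, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
    return sol[:n], sol[n:]
```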

Lower bounds for the Chvátal-Gomory rank in the 0/1 cube

Although the Chvátal-Gomory operator is well studied, important questions about its rank when restricted to polytopes contained in the n-dimensional 0/1 cube remain unanswered. In particular, the question of the maximal rank of the Chvátal-Gomory procedure for this class of polytopes is still open. So far, the best-known upper bound is O(n^2 …
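For background on the operator in question: a Chvátal-Gomory cut rounds down the right-hand side of a valid inequality with integer coefficients, and the rank counts how many rounds of such cuts are needed to reach the integer hull. A toy sketch (the helper cg_cut is hypothetical):

```python
import numpy as np

def cg_cut(a, b):
    """Given a valid inequality a^T x <= b with integer coefficients a,
    return the Chvátal-Gomory cut a^T x <= floor(b)."""
    assert np.allclose(a, np.round(a))
    return a, np.floor(b)

# For P = {x in [0,1]^2 : x1 + x2 <= 3/2}, a single round suffices:
a, b = cg_cut(np.array([1.0, 1.0]), 1.5)
print(a, b)  # x1 + x2 <= 1, which defines the integer hull here
```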

Consistency of robust optimization

In recent years the robust counterpart approach, introduced and popularized by Ben-Tal, Nemirovski, and El Ghaoui, has gained increasing interest among both academics and practitioners. However, to the best of our knowledge, only a few results on the relationship between the original problem instance and its robust counterpart have been established. This exposition …
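As a reminder of the construction the abstract refers to: for a single linear constraint under ellipsoidal uncertainty, the robust counterpart has a well-known closed form. A minimal sketch (the function name robust_feasible is an assumption):

```python
import numpy as np

def robust_feasible(x, a_bar, P, b):
    """Robust counterpart of  a^T x <= b  with a ranging over the
    ellipsoid {a_bar + P u : ||u||_2 <= 1}: the worst case reduces to
    a_bar^T x + ||P^T x||_2 <= b."""
    return a_bar @ x + np.linalg.norm(P.T @ x) <= b
```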

An Introduction to a Class of Matrix Cone Programming

In this paper, we define a class of linear conic programming problems (which we call matrix cone programming, or MCP) involving the epigraphs of five commonly used matrix norms and the well-studied symmetric cone. MCP has recently found many important applications, for example, in nuclear norm relaxations of affine rank minimization problems. In order to …
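One of the applications named above, nuclear norm relaxation of affine rank minimization, rests on a simple computational primitive: the proximal map of the nuclear norm, i.e. singular-value soft-thresholding. A sketch of that primitive only (not the MCP method itself; nuclear_prox is a hypothetical name):

```python
import numpy as np

def nuclear_prox(X, t):
    """Proximal map of t * ||.||_* via singular-value soft-thresholding,
    a basic building block in nuclear-norm relaxations."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```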

Templates for Convex Cone Problems with Applications to Sparse Signal Recovery

This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A …
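The fourth step of the template calls for an optimal first-order method; a standard choice is Nesterov's accelerated gradient scheme, sketched below for a smooth convex objective with an L-Lipschitz gradient. This generic sketch is an assumption for illustration, not the paper's specific solver.

```python
import numpy as np

def accel_grad(grad, L, x0, iters=200):
    """Nesterov-accelerated gradient descent: O(1/k^2) convergence for
    smooth convex objectives, versus O(1/k) for plain gradient descent."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L          # gradient step from the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```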

Burer’s Key Assumption for Semidefinite and Doubly Nonnegative Relaxations

Burer has shown that completely positive relaxations of nonconvex quadratic programs with nonnegative and binary variables are exact when the binary variables satisfy a so-called key assumption. Here we show that introducing binary variables to obtain an equivalent problem that satisfies the key assumption will not improve the semidefinite relaxation, and only marginally improve the …

Uniform bound on the 1-norm of the inverse of lower triangular Toeplitz matrices

A uniform bound on the 1-norm of the inverse is given for lower triangular Toeplitz matrices with nonnegative, monotonically decreasing entries whose limit is zero. The new bound is the sharpest under the given constraints. This result is then employed to resolve a long-standing open problem posed by Brunner concerning the convergence of the one-point …
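The quantity being bounded can be checked numerically for any particular entry sequence; a small sketch follows (inv_one_norm is a hypothetical helper, and the example sequence 1/(k+1) satisfies the stated hypotheses):

```python
import numpy as np
from scipy.linalg import toeplitz

def inv_one_norm(c):
    """Induced 1-norm of T^{-1}, where T is the lower triangular
    Toeplitz matrix whose first column is c."""
    r = np.zeros(len(c))
    r[0] = c[0]
    T = toeplitz(c, r)
    return np.linalg.norm(np.linalg.inv(T), 1)

# Entries 1, 1/2, 1/3, ...: nonnegative, monotonically decreasing to zero.
print(inv_one_norm(1.0 / np.arange(1.0, 51.0)))
```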

The Split Variational Inequality Problem

We propose a new variational problem which we call the Split Variational Inequality Problem (SVIP). It entails finding a solution of one Variational Inequality Problem (VIP), the image of which under a given bounded linear transformation is a solution of another VIP. We construct iterative algorithms that solve such problems, under reasonable conditions, in Hilbert …
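The SVIP contains the split feasibility problem (find x in C with Ax in Q) as a special case, for which Byrne's CQ algorithm is the classical iterative method. A minimal sketch of that special case, assuming projections onto C and Q are computable:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    """CQ iteration x <- P_C(x - gamma A^T (Ax - P_Q(Ax))); a step size
    gamma in (0, 2/||A||^2) guarantees convergence in the convex case."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.copy()
    for _ in range(iters):
        y = A @ x
        x = proj_C(x - gamma * (A.T @ (y - proj_Q(y))))
    return x
```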

An accelerated inexact proximal point algorithm for convex minimization

The proximal point algorithm (PPA) is classical and popular in the optimization community. In practice, inexact PPAs, which solve the proximal subproblems approximately subject to certain inexactness criteria, are what is truly implementable. In this paper, we first propose an inexact PPA with a new inexactness criterion for solving convex minimization, and show that the …
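For context, the exact PPA iterates x_{k+1} = prox_{λf}(x_k); inexact variants replace the prox with an approximate minimizer satisfying an accuracy criterion. A sketch of the exact scheme (the soft-thresholding example is an assumption for illustration):

```python
import numpy as np

def ppa(prox, x0, lam=1.0, iters=50):
    """Exact proximal point iteration x_{k+1} = prox_{lam f}(x_k)."""
    x = x0
    for _ in range(iters):
        x = prox(x, lam)
    return x

# Example: f(x) = |x|, whose prox is soft-thresholding; iterates reach 0.
soft = lambda x, lam: np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
print(ppa(soft, np.array([5.0]), lam=1.0, iters=10))
```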

Safe Feature Elimination in Sparse Supervised Learning

We investigate fast methods that make it possible to quickly eliminate variables (features) in supervised learning problems involving a convex loss function and an l1-norm penalty, leading to a potentially substantial reduction in the number of variables prior to running the supervised learning algorithm. The methods are not heuristic: they only eliminate features that are guaranteed …
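A representative instance of such a guaranteed test is the basic SAFE rule for the lasso from this line of work: feature j can be discarded whenever |x_j^T y| < λ − ||x_j||·||y||·(λ_max − λ)/λ_max, with λ_max = max_j |x_j^T y|. A sketch under that assumption (safe_screen is a hypothetical name, and this is only the simplest of the tests):

```python
import numpy as np

def safe_screen(X, y, lam):
    """Basic SAFE test for the lasso: returns a boolean mask of features
    that cannot be certified inactive (i.e., the ones to keep)."""
    scores = np.abs(X.T @ y)
    lam_max = scores.max()
    slack = np.linalg.norm(X, axis=0) * np.linalg.norm(y)
    return scores >= lam - slack * (lam_max - lam) / lam_max
```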