Strong formulations for quadratic optimization with M-matrices and semi-continuous variables

We study quadratic optimization with semi-continuous variables and an M-matrix, i.e., a positive semidefinite matrix with non-positive off-diagonal entries. This structure arises in image segmentation and portfolio optimization, and as a substructure of general quadratic optimization problems. We prove, under mild assumptions, that the minimization problem is solvable in polynomial time by showing its equivalence to a submodular … Read more
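As a rough illustration of the problem class (the precise model is behind the truncation, and the symbols below are generic rather than the paper's notation), quadratic minimization with semi-continuous variables and an M-matrix can be written as

```latex
% Schematic formulation; \ell, u, and the indicator variables z are illustrative.
% Q is a symmetric M-matrix: positive semidefinite with Q_{ij} \le 0 for i \ne j.
\min_{x,\, z} \;\; x^\top Q x + c^\top x
\quad \text{s.t.} \quad \ell_i z_i \le x_i \le u_i z_i, \quad z_i \in \{0,1\}, \quad i = 1,\dots,n,
```

so that each x_i is either zero or lies in the interval [\ell_i, u_i].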

Efficient Convex Optimization for Linear MPC

Model predictive control (MPC) formulations with linear dynamics and quadratic objectives can be solved efficiently by using a primal-dual interior-point framework, with complexity proportional to the length of the horizon. An alternative, which is better able to exploit the similarity of the problems that are solved at each decision point of linear MPC, is to … Read more
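For context, a generic linear-quadratic MPC problem over a horizon of length N (the symbols are illustrative, not the paper's notation) is

```latex
\min_{x_{0:N},\, u_{0:N-1}} \;\;
\sum_{k=0}^{N-1} \bigl( x_k^\top Q x_k + u_k^\top R u_k \bigr) + x_N^\top P x_N
\quad \text{s.t.} \quad x_{k+1} = A x_k + B u_k, \quad x_0 = \bar{x};
```

the stage-wise coupling gives the KKT system a block-banded structure, which is what lets interior-point methods achieve per-iteration cost growing only linearly with N.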

Simplex QP-based methods for minimizing a conic quadratic objective over polyhedra

We consider minimizing a conic quadratic objective over a polyhedron. Such problems arise in parametric value-at-risk minimization, portfolio optimization, and robust optimization with ellipsoidal objective uncertainty; and they can be solved by polynomial-time interior-point algorithms for conic quadratic optimization. However, interior-point algorithms are not well suited to branch-and-bound algorithms for the discrete counterparts of … Read more
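As one concrete instance of such a problem (written with generic symbols assumed here for illustration):

```latex
\min_{x} \;\; c^\top x + \Omega \,\bigl\lVert F x + g \bigr\rVert_2
\quad \text{s.t.} \quad A x \le b,
```

where, for example, F can be a factor of a covariance matrix and \Omega a risk-aversion parameter in mean-standard-deviation (value-at-risk style) portfolio models.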

On Glowinski’s Open Question of Alternating Direction Method of Multipliers

The alternating direction method of multipliers (ADMM) was proposed by Glowinski and Marrocco in 1975, and it has been widely used in a broad spectrum of areas, especially in some sparsity-driven application domains. In 1982, Fortin and Glowinski suggested enlarging the range of the step size for updating the dual variable in ADMM from … Read more
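For reference, ADMM in its generic form for \min_{x,y} f(x) + g(y) subject to Ax + By = b reads as follows, with the Fortin–Glowinski step-size factor \gamma appearing in the dual update (the notation is generic; the precise setting of the open question is behind the truncation):

```latex
x^{k+1} = \arg\min_x \; f(x) + \tfrac{\beta}{2}\,\bigl\lVert A x + B y^{k} - b + \lambda^{k}/\beta \bigr\rVert_2^2, \\
y^{k+1} = \arg\min_y \; g(y) + \tfrac{\beta}{2}\,\bigl\lVert A x^{k+1} + B y - b + \lambda^{k}/\beta \bigr\rVert_2^2, \\
\lambda^{k+1} = \lambda^{k} + \gamma \beta \,\bigl( A x^{k+1} + B y^{k+1} - b \bigr),
\qquad \gamma \in \Bigl( 0, \tfrac{1 + \sqrt{5}}{2} \Bigr),
```

where (0, (1+\sqrt{5})/2) is the enlarged step-size range suggested by Fortin and Glowinski.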

Random projections for trust region subproblems

The trust region method is an algorithm traditionally used in the field of derivative-free optimization. The method works by iteratively constructing surrogate models (often linear or quadratic functions) that approximate the true objective function inside some neighborhood of the current iterate. The neighborhood is called a “trust region” in the sense that the model is … Read more
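A minimal sketch of the loop described above, using an exact quadratic model and a Cauchy-point step for the subproblem rather than the derivative-free surrogates the abstract refers to; all names and parameter values are illustrative:

```python
import numpy as np

def cauchy_point(g, H, delta):
    """Approximate minimizer of the model m(p) = g.p + 0.5 p.H.p over ||p|| <= delta,
    taken along the steepest-descent direction."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    gHg = g @ H @ g
    # Unconstrained minimizer along -g, capped at the trust-region boundary.
    tau = 1.0 if gHg <= 0.0 else min(1.0, gnorm**3 / (delta * gHg))
    return -tau * (delta / gnorm) * g

def trust_region(f, grad, hess, x0, delta0=1.0, max_iter=100, tol=1e-8):
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        p = cauchy_point(g, H, delta)
        predicted = -(g @ p + 0.5 * p @ H @ p)   # decrease predicted by the model
        actual = f(x) - f(x + p)                  # decrease actually achieved
        rho = actual / predicted if predicted > 0 else 0.0
        if rho > 0.1:          # model agreed well enough: accept the step
            x = x + p
        if rho < 0.25:         # poor agreement: shrink the trust region
            delta *= 0.5
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta *= 2.0       # good agreement on the boundary: expand
    return x

# Example: the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                           [-400*x[0], 200.0]])
print(trust_region(f, grad, hess, np.array([-1.2, 1.0])))
```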

Robust Quadratic Programming with Mixed-Integer Uncertainty

We study robust convex quadratic programs where the uncertain problem parameters can contain both continuous and integer components. Under the natural boundedness assumption on the uncertainty set, we show that the generic problems are amenable to exact copositive programming reformulations of polynomial size. These convex optimization problems are NP-hard but admit a conservative semidefinite programming … Read more
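Schematically, and with the dependence on the uncertain parameters assumed here purely for illustration, the problems considered look like

```latex
\min_{x \in \mathcal{X}} \;\; \max_{u \in \mathcal{U}} \;\; x^\top Q(u)\, x + q(u)^\top x,
\qquad \mathcal{U} \subseteq \mathbb{R}^{p} \times \mathbb{Z}^{q} \;\; \text{bounded},
```

where the inner maximization over the bounded mixed-integer set \mathcal{U} is the part the exact copositive reformulation addresses.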

Extending the Scope of Robust Quadratic Optimization

We derive computationally tractable formulations of the robust counterparts of convex quadratic and conic quadratic constraints that are concave in matrix-valued uncertain parameters. We do this for a broad range of uncertainty sets. In particular, we show how to reformulate the support functions of uncertainty sets represented in terms of matrix norms and cones. Our … Read more
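In the simplest case of a constraint that is linear in a matrix-valued uncertain parameter A (the general concave case treated in the paper is behind the truncation), the support-function reformulation reads

```latex
\delta^*\!\bigl( Y \mid \mathcal{U} \bigr) := \sup_{A \in \mathcal{U}} \langle A, Y \rangle,
\qquad
\Bigl[ \, \langle A, W(x) \rangle \le t \;\; \forall A \in \mathcal{U} \, \Bigr]
\;\Longleftrightarrow\;
\delta^*\!\bigl( W(x) \mid \mathcal{U} \bigr) \le t,
```

so the robust constraint collapses to a single inequality whenever the support function of \mathcal{U} admits a closed form.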

A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables

We propose a gradient-based method for quadratic programming problems with a single linear constraint and bounds on the variables. Inspired by the GPCG algorithm for bound-constrained convex quadratic programming [J.J. Moré and G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases until convergence: an identification phase, which performs gradient projection iterations … Read more
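A rough sketch of a GPCG-style two-phase alternation, shown here for the purely bound-constrained case of the cited Moré–Toraldo setting (the single linear constraint of the paper additionally requires projecting with one scalar multiplier, which is omitted); all names and parameter values are illustrative:

```python
import numpy as np

def gpcg_like(Q, c, lo, hi, x0, outer=50, gp_steps=5, tol=1e-8):
    """Sketch of a two-phase scheme for min 0.5 x'Qx + c'x, lo <= x <= hi.
    Phase 1 (identification): projected-gradient steps to guess the active bounds.
    Phase 2 (minimization): solve the reduced QP on the currently free variables."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(outer):
        # --- identification phase: a few projected-gradient iterations
        for _ in range(gp_steps):
            g = Q @ x + c
            alpha = (g @ g) / max(g @ Q @ g, 1e-16)   # exact line search for a quadratic
            x = np.clip(x - alpha * g, lo, hi)
        # --- minimization phase: solve the QP restricted to the free variables
        g = Q @ x + c
        active = (np.isclose(x, lo) & (g > 0)) | (np.isclose(x, hi) & (g < 0))
        free = ~active
        if free.any():
            xf = np.linalg.solve(Q[np.ix_(free, free)],
                                 -(c[free] + Q[np.ix_(free, active)] @ x[active]))
            x[free] = np.clip(xf, lo[free], hi[free])   # safeguard against overshoot
        # --- stopping test on the projected gradient
        g = Q @ x + c
        if np.linalg.norm(x - np.clip(x - g, lo, hi)) < tol:
            break
    return x

# Tiny example with a random positive definite matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A @ A.T + np.eye(5)
c = rng.standard_normal(5)
lo, hi = -np.ones(5), np.ones(5)
print(gpcg_like(Q, c, lo, hi, x0=np.zeros(5)))
```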

An Infeasible Active Set Method with Combinatorial Line Search for Convex Quadratic Problems with Bound Constraints

The minimization of a convex quadratic function under bound constraints is a fundamental building block for more complicated optimization problems. The active-set method introduced by [M. Bergounioux, K. Ito, and K. Kunisch. Primal-Dual Strategy for Constrained Optimal Control Problems. SIAM Journal on Control and Optimization, 37:1176–1194, 1999.] and [M. Bergounioux, M. Haddou, M. Hintermüller, and … Read more
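For reference, the building block referred to above is the bound-constrained convex QP together with its KKT conditions, which primal-dual active-set strategies work with directly (generic notation, not the cited papers' own):

```latex
\min_{x} \;\; \tfrac{1}{2}\, x^\top Q x + c^\top x
\quad \text{s.t.} \quad \ell \le x \le u,
\qquad
Q x + c - \mu + \lambda = 0, \quad
\mu, \lambda \ge 0, \quad
\mu^\top (x - \ell) = 0, \quad \lambda^\top (u - x) = 0 .
```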

Convex Variational Formulations for Learning Problems

In this article, we introduce new techniques to solve the nonlinear regression problem and the nonlinear classification problem. Our benchmarks suggest that our method for regression is significantly more effective than classical methods and that our method for classification is competitive. Our list of classical methods includes least squares, random forests, decision trees, boosted … Read more