Deriving the convex hull of a polynomial partitioning set through lifting and projection

Relaxations of the bilinear term, $x_1x_2=x_3$, play a central role in constructing relaxations of factorable functions. This is because they can be used directly to relax products of functions with known relaxations. In this paper, we provide a compact, closed-form description of the convex hull of this and other more general bivariate monomial terms (which … Read more
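For context, the classical McCormick relaxation of the bilinear term over a box $[x_1^L,x_1^U]\times[x_2^L,x_2^U]$ is a useful point of comparison (the notation here is the standard one from the factorable-relaxation literature, not taken from the truncated abstract):

\[
\begin{aligned}
x_3 &\ge x_1^L x_2 + x_2^L x_1 - x_1^L x_2^L, &\qquad x_3 &\ge x_1^U x_2 + x_2^U x_1 - x_1^U x_2^U,\\
x_3 &\le x_1^U x_2 + x_2^L x_1 - x_1^U x_2^L, &\qquad x_3 &\le x_1^L x_2 + x_2^U x_1 - x_1^L x_2^U.
\end{aligned}
\]

These four inequalities describe the convex hull of $\{(x_1,x_2,x_3) : x_3 = x_1 x_2,\ x_i^L \le x_i \le x_i^U\}$; the closed-form hulls referred to above concern more general partitioning sets and bivariate monomial terms.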

A refined error analysis for fixed-degree polynomial optimization over the simplex

We consider fixed-degree polynomial optimization over the simplex. This problem is well known to be NP-hard, since it contains the maximum stable set problem in combinatorial optimization as a special case. In this paper, we consider a known upper bound obtained by taking the minimum value on a regular grid, and a known lower bound based … Read more
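As a hedged illustration of that grid upper bound (for a minimization problem, any feasible value bounds the optimum from above), here is a minimal sketch; the function names and the grid notation $\Delta(n,r) = \{x \in \Delta_n : rx \in \mathbb{Z}^n\}$ are assumptions borrowed from the standard literature, not from the truncated abstract.

```python
def grid_points(n, r):
    """Yield the points of the regular grid Delta(n, r): n nonnegative multiples of 1/r
    summing to 1, represented by their integer numerators (which sum to r)."""
    if n == 1:
        yield (r,)
        return
    for k in range(r + 1):
        for rest in grid_points(n - 1, r - k):
            yield (k,) + rest

def grid_upper_bound(f, n, r):
    """Upper bound on min_{x in simplex} f(x): the smallest value of f over Delta(n, r)."""
    return min(f(tuple(k / r for k in p)) for p in grid_points(n, r))

# Example: f attains its simplex minimum 1/3 at (1/3, 1/3, 1/3), which is not a grid point
# for r = 4, so the grid value is a strict upper bound.
f = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2
print(grid_upper_bound(f, n=3, r=4))  # 0.375 >= 1/3
```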

A Revisit to Quadratic Programming with One Inequality Quadratic Constraint via Matrix Pencil

Quadratic programming with one inequality quadratic constraint (QP1QC) is a very special case of quadratically constrained quadratic programming (QCQP) and has attracted much attention since the early 1990s. It is now understood that, under the primal Slater condition, (QP1QC) has a tight SDP relaxation (PSDP). The optimal solution to (QP1QC), if it exists, can be obtained by … Read more
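In one standard formulation (the symbols below are an illustrative assumption; the truncated abstract does not fix the notation), (QP1QC) and its SDP relaxation (PSDP) read

\[
\text{(QP1QC)}\quad \min_{x\in\mathbb{R}^n}\; x^\top A x + 2a^\top x
\quad\text{s.t.}\quad x^\top B x + 2b^\top x + c \le 0,
\]
\[
\text{(PSDP)}\quad \min_{x,\,X}\; \langle A, X\rangle + 2a^\top x
\quad\text{s.t.}\quad \langle B, X\rangle + 2b^\top x + c \le 0,\qquad
\begin{pmatrix} 1 & x^\top\\ x & X \end{pmatrix} \succeq 0,
\]

and the tightness statement above says that, under the primal Slater condition, the optimal values of the two problems coincide.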

Trust Region Subproblem with a Fixed Number of Additional Linear Inequality Constraints has Polynomial Complexity

The trust region subproblem with a fixed number $m$ of additional linear inequality constraints, denoted by $(T_m)$, has drawn much attention recently. The question as to whether $(T_m)$ is in Class P or Class NP remains open. So far, the only affirmative general result is that $(T_1)$ has an exact SOCP/SDP reformulation and thus … Read more
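For reference, $(T_m)$ is typically written in the following form (the particular scaling is an assumption; conventions differ across the literature):

\[
(T_m)\quad \min_{x\in\mathbb{R}^n}\; x^\top Q x + 2 q^\top x
\quad\text{s.t.}\quad \|x\| \le 1,\qquad a_i^\top x \le b_i,\; i = 1,\dots,m,
\]

where $m$ is fixed independently of the dimension $n$; $(T_0)$ is the classical trust region subproblem, and $(T_1)$ is the single-cut case with the exact SOCP/SDP reformulation mentioned above.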

Copositive relaxation beats Lagrangian dual bounds in quadratically and linearly constrained QPs

We study non-convex quadratic minimization problems under (possibly non-convex) quadratic and linear constraints, and characterize both Lagrangian and Semi-Lagrangian dual bounds in terms of conic optimization. While the Lagrangian dual is equivalent to the SDP relaxation (which has been known for quite a while, although the presented form, which explicitly incorporates the linear constraints, seems to be … Read more
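To make the first equivalence concrete, write the quadratic constraints as $x^\top Q_i x + 2q_i^\top x + c_i \le 0$, $i=1,\dots,m$ (an assumed generic notation), and let $Q(\lambda) = Q_0 + \sum_i \lambda_i Q_i$, $q(\lambda) = q_0 + \sum_i \lambda_i q_i$, $c(\lambda) = \sum_i \lambda_i c_i$. Since a quadratic function is nonnegative on all of $\mathbb{R}^n$ exactly when its homogenized matrix is positive semidefinite, the Lagrangian dual bound can be written as the SDP

\[
\sup_{\lambda\ge 0,\;\mu}\; \mu
\quad\text{s.t.}\quad
\begin{pmatrix} c(\lambda) - \mu & q(\lambda)^\top\\ q(\lambda) & Q(\lambda) \end{pmatrix} \succeq 0,
\]

which is conically dual to the Shor SDP relaxation. The copositive bound of the title is at least as strong as this Lagrangian/SDP bound.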

Narrowing the difficulty gap for the Celis-Dennis-Tapia problem

We study the Celis-Dennis-Tapia (CDT) problem: minimize a non-convex quadratic function over the intersection of two ellipsoids. In contrast to the well-studied trust region problem, where the feasible set is just one ellipsoid, the CDT problem is not yet fully understood. Our main objective in this paper is to narrow the difficulty gap that … Read more
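In one common normalization (assumed here for illustration), the CDT problem reads

\[
\min_{x\in\mathbb{R}^n}\; \tfrac12\, x^\top Q x + q^\top x
\quad\text{s.t.}\quad \|x\| \le 1,\qquad \|Ax + b\| \le 1,
\]

i.e., a possibly indefinite quadratic objective minimized over the intersection of a ball and a general ellipsoid; dropping the second constraint recovers the trust region problem referred to above.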

GLODS: Global and Local Optimization using Direct Search

Locating and identifying points as global minimizers is, in general, a hard and time-consuming task. Difficulties increase when the derivatives of the functions defining the problem are not available for use. In this work, we propose a new class of methods suited for global derivative-free constrained optimization. Using direct search of directional type, the algorithm … Read more
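For readers unfamiliar with directional direct search, the sketch below shows a single poll step built on the coordinate directions $\pm e_i$. It is a generic illustration under assumed choices (simple decrease, expansion factor 2, contraction factor 1/2), not the GLODS algorithm itself.

```python
import numpy as np

def poll_step(f, x, alpha, gamma=2.0, beta=0.5):
    """One poll step of a basic directional direct search using the positive
    spanning set {+e_i, -e_i}; no derivatives of f are evaluated."""
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])
    fx = f(x)
    for d in directions:
        trial = x + alpha * d
        if f(trial) < fx:                 # simple decrease: accept and expand the step
            return trial, gamma * alpha
    return x, beta * alpha                # unsuccessful poll: contract the step size

# Usage: derivative-free minimization of a smooth test function.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2
x, alpha = np.zeros(2), 1.0
while alpha > 1e-6:
    x, alpha = poll_step(f, x, alpha)
print(x)  # approximately (1.0, -0.5)
```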

Completely Positive Reformulations for Polynomial Optimization

Polynomial optimization encompasses a very rich class of problems in which both the objective and constraints can be written in terms of polynomials in the decision variables. There is a well-established body of research on quadratic polynomial optimization problems based on reformulations of the original problem as a conic program over the cone of … Read more
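One of the best-known instances of such a reformulation (a classical fact recalled here for illustration, not taken from the truncated abstract) is the standard quadratic program over the simplex $\Delta_n$, which is equivalent to a linear program over the completely positive cone $\mathcal{CP}_n$:

\[
\min_{x\in\Delta_n} x^\top Q x
\;=\;
\min\;\bigl\{\,\langle Q, X\rangle \;:\; \langle E, X\rangle = 1,\; X \in \mathcal{CP}_n \,\bigr\},
\]

where $E$ is the all-ones matrix; all of the difficulty is moved into the cone constraint $X \in \mathcal{CP}_n$.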

Rounding on the standard simplex: regular grids for global optimization

Given a point on the standard simplex, we calculate a proximal point on the regular grid which is closest with respect to any norm in a large class, including all $\ell^p$-norms for $p\ge 1$. We show that the minimal $\ell^p$-distance to the regular grid on the standard simplex can exceed one, even for very fine … Read more
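As a hedged sketch, one natural way to land on the regular grid with denominator $r$ is largest-remainder rounding, shown below; whether such a scheme actually attains the minimal distance for a given norm is precisely the kind of question studied in the paper and is not claimed here.

```python
import numpy as np

def round_to_grid(x, r):
    """Round a point x on the standard simplex to the regular grid
    {y >= 0 : r*y integer, sum(y) = 1} via largest-remainder rounding."""
    scaled = r * np.asarray(x, dtype=float)
    floor = np.floor(scaled)
    deficit = int(round(r - floor.sum()))      # grid units still to be distributed
    order = np.argsort(-(scaled - floor))      # largest fractional parts first
    floor[order[:deficit]] += 1
    return floor / r

x = np.array([0.12, 0.53, 0.35])   # a point on the standard simplex
print(round_to_grid(x, r=7))       # a nearby grid point with denominator 7
```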

Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization

In this paper we analyze several new methods for solving nonconvex optimization problems with the objective function formed as a sum of two terms: one is nonconvex and smooth, and the other is convex but simple, with known structure. Further, we consider both cases: unconstrained and linearly constrained nonconvex problems. For optimization problems of … Read more
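The sketch below shows one random coordinate proximal-gradient step for the composite model $g(x) + h(x)$ with $g$ smooth (possibly nonconvex) and $h$ convex, simple, and separable (here $h(x) = \lambda\|x\|_1$). It is a generic illustration under these assumptions, not the specific methods analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| for a scalar."""
    return np.sign(v) * max(abs(v) - t, 0.0)

def rcd_prox_step(x, grad_g, L, lam, rng):
    """One random coordinate proximal-gradient step for g(x) + lam*||x||_1.
    L[i] is a Lipschitz constant of the i-th partial derivative of g."""
    i = rng.integers(len(x))
    x = x.copy()
    x[i] = soft_threshold(x[i] - grad_g(x)[i] / L[i], lam / L[i])
    return x

# Usage on a small smooth + l1 example: g(x) = 0.5*||Ax - b||^2 - 0.1*||x||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((10, 5)), rng.standard_normal(10)
grad_g = lambda x: A.T @ (A @ x - b) - 0.2 * x
L = np.maximum((A ** 2).sum(axis=0) - 0.2, 1e-3)   # coordinate-wise Lipschitz constants
x = np.zeros(5)
for _ in range(2000):
    x = rcd_prox_step(x, grad_g, L, lam=0.1, rng=rng)
print(x)
```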