The rank-constrained optimization problem (RCOP) minimizes a linear objective function over a prespecified closed rank-constrained domain set and $m$ generic two-sided linear matrix inequalities. Motivated by the Dantzig-Wolfe (DW) decomposition, a popular approach for solving many nonconvex optimization problems, we investigate the strength of the DW relaxation (DWR) of RCOP, which admits the same formulation as RCOP except that the domain set is replaced by its closed convex hull. Notably, our goal is to characterize conditions under which the DWR matches RCOP for any $m$ two-sided linear matrix inequalities. From the primal perspective, we develop the first known simultaneously necessary and sufficient conditions that achieve: (i) extreme point exactness, meaning every extreme point of the DWR feasible set is also an extreme point of the RCOP feasible set; (ii) convex hull exactness, meaning the DWR feasible set is identical to the closed convex hull of the RCOP feasible set; and (iii) objective exactness, meaning the optimal values of the DWR and RCOP coincide. The proposed conditions unify, refine, and extend existing exactness results for quadratically constrained quadratic programs (QCQPs) and fair unsupervised learning.
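To make the two problems concrete, a generic formulation can be sketched as follows; the symbols ($C$, $A_i$, $l_i$, $u_i$, the rank bound, and the set $\mathcal{D}$) are illustrative notation assumed here, not fixed by the abstract:

```latex
% RCOP: linear objective over a closed rank-constrained domain set D,
% intersected with m two-sided linear matrix inequalities.
\begin{aligned}
\text{(RCOP)}\quad v^{*} = \min_{X}\ & \langle C, X \rangle \\
\text{s.t.}\ & l_i \le \langle A_i, X \rangle \le u_i, \quad i = 1, \dots, m, \\
& X \in \mathcal{D},
\end{aligned}
% DWR: identical, except the domain set is replaced by its closed convex hull:
\begin{aligned}
\text{(DWR)}\quad v^{\mathrm{DWR}} = \min_{X}\ & \langle C, X \rangle \\
\text{s.t.}\ & l_i \le \langle A_i, X \rangle \le u_i, \quad i = 1, \dots, m, \\
& X \in \operatorname{cl\,conv}(\mathcal{D}).
\end{aligned}
```

Since $\mathcal{D} \subseteq \operatorname{cl\,conv}(\mathcal{D})$, the relaxation satisfies $v^{\mathrm{DWR}} \le v^{*}$; objective exactness in sense (iii) is the case $v^{\mathrm{DWR}} = v^{*}$.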
Li, Y., Xie, W. (2022) On the Exactness of Dantzig-Wolfe Relaxation for Rank Constrained Optimization Problems. Available at Optimization Online.