The general formulation for finding the L1-norm best-fit subspace for a point set in $m$ dimensions is a nonlinear, nonconvex, nonsmooth optimization problem. In this paper we present a procedure to estimate the L1-norm best-fit one-dimensional subspace (a line through the origin) for data in $\Re^m$, based on an optimization criterion involving linear programming but computable using simple ratios and sortings. The procedure has distinct advantages: it requires no initialization, is deterministic and replicable, and is scalable. The estimated line is sharp in that it satisfies a well-defined optimization criterion, and can be tight, meaning that in some instances it is a globally optimal solution. We show how the method can be extended to a procedure for an L1-norm principal component analysis by iteratively approximating higher-order best-fit subspaces. In a comprehensive computational study involving synthetic and real data, the procedure is shown to be more robust to outlier observations than competing approaches.
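The abstract does not spell out the procedure, but one way a ratio-and-sorting estimator for an L1 best-fit line through the origin can work is the following sketch: anchor the fit on each coordinate $k$ in turn, and for every other coordinate $j$ choose the slope $v_j/v_k$ as the weighted median (computed by sorting) of the ratios $x_{ij}/x_{ik}$ with weights $|x_{ik}|$, which minimizes $\sum_i |x_{ij} - (v_j/v_k)\,x_{ik}|$. The function names (`weighted_median`, `l1_fit_line`) and the coordinate-anchored projection are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def weighted_median(values, weights):
    # Sort the values, accumulate the weights, and return the first
    # value at which the cumulative weight reaches half the total.
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cw = np.cumsum(w)
    return v[np.searchsorted(cw, 0.5 * cw[-1])]

def l1_fit_line(X):
    """Illustrative L1 line fit through the origin for the rows of X.

    For each anchor coordinate k, set v[k] = 1 and choose every other
    component v[j] as the weighted median of the ratios x_ij / x_ik
    (weights |x_ik|); keep the anchor with the smallest total L1 residual.
    """
    n, m = X.shape
    best_err, best_v = np.inf, None
    for k in range(m):                      # anchor coordinate
        xk = X[:, k]
        mask = xk != 0                      # ratios undefined where x_ik = 0
        v = np.zeros(m)
        v[k] = 1.0
        for j in range(m):
            if j == k:
                continue
            v[j] = weighted_median(X[mask, j] / xk[mask], np.abs(xk[mask]))
        # L1 residual when each point is matched exactly on coordinate k
        err = np.abs(X - np.outer(xk, v)).sum()
        if err < best_err:
            best_err, best_v = err, v
    return best_v / np.linalg.norm(best_v), best_err
```

Because every step is a sort or a ratio, the sketch is deterministic and needs no starting point, mirroring the properties claimed in the abstract; on points lying exactly on a line through the origin it recovers that line with zero residual.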
Virginia Commonwealth University, June, 2017