Posted on 2012-08-01. Authored by Chinmay Hegde, Aswin C. Sankaranarayanan, Richard G. Baraniuk.
<p>We propose a new method for linear dimensionality reduction of manifold-modeled data. Given a training set X of Q points belonging to a manifold M ⊂ ℝ<sup>N</sup>, we construct a linear operator P : ℝ<sup>N</sup> → ℝ<sup>M</sup> that approximately preserves the norms of all Q(Q−1)/2 pairwise difference vectors (or secants) of X. We design the matrix P via a trace-norm minimization that can be efficiently solved as a semidefinite program (SDP). When X comprises a sufficiently dense sampling of M, we prove that the optimal matrix P preserves all secants over M. We numerically demonstrate the considerable gains of our SDP-based approach over existing linear dimensionality reduction methods, such as principal components analysis (PCA) and random projections.</p>