Posted on 2012-08-01. Authored by Chinmay Hegde, Aswin C. Sankaranarayanan, Richard G. Baraniuk.
We propose a new method for linear dimensionality reduction of manifold-modeled data. Given a training set X of Q points belonging to a manifold M ⊂ ℝ^N, we construct a linear operator P : ℝ^N → ℝ^M that approximately preserves the norms of all Q(Q−1)/2 pairwise difference vectors (or secants) of X. We design the matrix P via a trace-norm minimization that can be efficiently solved as a semidefinite program (SDP). When X comprises a sufficiently dense sampling of M, we prove that the optimal matrix P preserves the norms of all secants of M. We numerically demonstrate considerable gains of our SDP-based approach over existing linear dimensionality reduction methods, such as principal component analysis (PCA) and random projections.