I am trying to reduce the dimensionality of 2000-D data down to 2-D, so I used the sample code here: http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_vs_lda.html But instead of a 2-D output, I get a 1-D output from LDA. Does anyone have a suggestion for the cause? Here is my code:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
The shapes of X, y, and X_r2 are as follows, respectively:
(33139, 2000) (33139,) (33139, 1)
As you can see, X_r2 is 1-D, whereas I expect it to be 2-D because I set n_components=2.
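For context, LDA's transform is capped at min(n_classes - 1, n_features) components, so a 1-D output would be expected if y contains only two classes (which the (33139, 1) shape suggests). A minimal sketch with synthetic data, assuming a two-class y:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.RandomState(0)
X = rng.rand(100, 50)          # stand-in for the 2000-D data
y2 = rng.randint(0, 2, 100)    # two classes -> at most 1 component

# With the default n_components (the min(n_classes - 1, n_features) cap),
# a two-class y always yields a 1-D projection.
lda2 = LinearDiscriminantAnalysis()
X_r2 = lda2.fit(X, y2).transform(X)
print(X_r2.shape)              # (100, 1)

# With three classes, n_components=2 is allowed and the output is 2-D.
y3 = rng.randint(0, 3, 100)
lda3 = LinearDiscriminantAnalysis(n_components=2)
X_r3 = lda3.fit(X, y3).transform(X)
print(X_r3.shape)              # (100, 2)
```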