PCA of opposite sign than expected with Apache Spark

Submitted on 2020-01-25 08:54:08

Question


I am training myself with exercises and examples from a classical statistics book, and I adapt what I learn to Apache Spark to make sure I can reproduce it.

In one chapter, the author calculates a PCA from two variables: a principal component derived from the HiCi and SCi values of a ranking of universities in the U.S.A.

The example starts with these values, for 100 individuals:

| University | X1 (HiCi) | X2 (SCi) |
|------------|-----------|----------|
| Harvard    | 100       | 100      |
| Stanford   | 86.1      | 70.3     |
| Berkeley   | 67.9      | 69.2     |
| Cambridge  | 54.0      | 65.4     |
| M.I.T.     | 65.9      | 61.7     |
| ...        | ...       | ...      |

Then the author standardizes these values:

| University | X1* (HiCi) | X2* (SCi) |
|------------|------------|-----------|
| Harvard    | 3.70       | 3.19      |
| Stanford   | 2.74       | 0.81      |
| Berkeley   | 1.48       | 0.73      |
| Cambridge  | 0.51       | 0.42      |
| M.I.T.     | 1.34       | 0.13      |
| ...        | ...        | ...       |

and performs a PCA calculation to obtain the first principal component values that follow:

| University | X1* (HiCi) | X2* (SCi) | Principal component |
|------------|------------|-----------|---------------------|
| Harvard    | 3.70       | 3.19      | 4.87                |
| Stanford   | 2.74       | 0.81      | 2.51                |
| Berkeley   | 1.48       | 0.73      | 1.56                |
| Cambridge  | 0.51       | 0.42      | 0.66                |
| M.I.T.     | 1.34       | 0.13      | 1.04                |
| ...        | ...        | ...       | ...                 |
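As a side check: for a PCA of exactly two standardized, positively correlated variables, the first principal component always has loadings (1/√2, 1/√2), so the book's column can be recomputed by hand. A minimal sketch (the class name is hypothetical, the values come from the tables above):

```java
public class PcaLoadingCheck {
    public static void main(String[] args) {
        // Standardized values (X1*, X2*) and the book's first principal component.
        double[][] rows = {
            {3.70, 3.19, 4.87},
            {2.74, 0.81, 2.51},
            {1.48, 0.73, 1.56},
            {0.51, 0.42, 0.66},
            {1.34, 0.13, 1.04},
        };
        double w = 1.0 / Math.sqrt(2.0); // loading of each of the two variables
        for (double[] r : rows) {
            double pc = w * (r[0] + r[1]);
            // Recomputed value agrees with the book's column to two decimals.
            System.out.printf("%.2f (book: %.2f)%n", pc, r[2]);
        }
    }
}
```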

To reproduce this with Apache Spark 2.4.4, I wrote this test:

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.ml.feature.PCA;
import org.apache.spark.ml.feature.StandardScaler;
import org.apache.spark.ml.linalg.Vector;
import org.apache.spark.ml.linalg.VectorUDT;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

/**
 * Finding a principal component.
 */
@Test
@DisplayName("PCA: the ranking of the American universities")
public void composantePrincipale() {
   // The ranking (HiCi and SCi variables of the American universities).
   Dataset<Row> dsHiCiSCi = denseRowWithVectors(
      d(100, 100), d(86.1, 70.3), d(67.9, 69.2), d(54.0, 65.4), d(65.9, 61.7),
      d(58.4, 50.3), d(56.5, 69.6), d(59.3, 46.5), d(50.8, 54.1), d(46.3, 65.4),
      d(57.9, 63.2), d(54.5, 65.1), d(57.4, 75.9), d(59.3, 64.6), d(59.9, 70.8),
      d(52.4, 74.1), d(52.9, 67.2), d(54.0, 59.8), d(41.3, 67.9), d(41.9, 80.9), 
      d(60.7, 77.1), d(35.1, 68.6), d(40.6, 62.2), d(39.2, 77.6), d(38.5, 63.2),

      d(44.5, 57.6), d(35.5, 38.4), d(39.2, 53.4), d(46.9, 57.0), d(41.3, 53.9),
      d(27.7, 23.2), d(46.9, 62.0), d(48.6, 67.0), d(39.9, 45.7), d(42.6, 42.7),
      d(31.4, 63.1), d(40.6, 53.3), d(46.9, 54.8), d(23.4, 54.2), d(30.6, 38.0),
      d(31.4, 51.0), d(27.7, 56.6), d(45.1, 58.0), d(46.9, 64.2), d(35.5, 48.9),
      d(25.7, 51.7), d(39.9, 44.8), d(24.6, 56.9), d(39.9, 65.6), d(37.1, 52.7));

   // Display the universities' values.
   dsHiCiSCi.show(100, false);

   // Center and scale (standardize) the values.
   StandardScaler scaler = new StandardScaler().setInputCol("features").setOutputCol("scaledFeatures").setWithStd(true).setWithMean(true);
   Dataset<Row> transformedDF =  scaler.fit(dsHiCiSCi).transform(dsHiCiSCi);
   Dataset<Row> centreReduit = transformedDF.select("scaledFeatures");      

   // From our two variables, we want to extract a single principal component.
   PCA acp = new PCA().setInputCol("scaledFeatures").setK(1);
   Dataset<Row> resultat = acp.fit(centreReduit).transform(centreReduit);

   resultat.show(100, false);
}

/**
 * Create a Dataset of Rows of dense vectors built from tuples of values.
 * @param listeUppletsValeurs list of n-tuples.
 * @return dense Dataset.
 */
protected Dataset<Row> denseRowWithVectors(double[]... listeUppletsValeurs) {
   List<Row> data = new ArrayList<>();

   for(double[] upplet : listeUppletsValeurs) {
      Vector vecteur = Vectors.dense(upplet);
      data.add(RowFactory.create(vecteur));
   }

    StructType schema = new StructType(new StructField[] {
      new StructField("features", new VectorUDT(), false, Metadata.empty()),
    });

    return this.session.createDataFrame(data, schema);
}

/** double... => double[] **/
protected double[] d(double... valeurs) {
   return valeurs;
}

The values are loaded correctly:

+-------------+
|features     |
+-------------+
|[100.0,100.0]|
|[86.1,70.3]  |
|[67.9,69.2]  |
|[54.0,65.4]  |
|[65.9,61.7]  |
...

Standardization works, but the PCA result is not the one expected: it has the opposite sign.

+-------------------------------------------+------------------------+
|scaledFeatures                             |pca_a632d8c48010__output|
+-------------------------------------------+------------------------+
|[3.6421736835169782,3.156450874585141]     |[-4.807353527775403]    |
|[2.693904143528853,0.8064410737434008]     |[-2.4751178396271087]   |
|[1.4522850336163453,0.7194036737122256]    |[-1.5356158115782794]   |
|[0.5040154936282204,0.4187290190590739]    |[-0.6524789022238617]   |
|[1.3158433731863994,0.12596685531784674]   |[-1.0195137897594777]   |
...

Why do I get this behavior?
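For context: an eigenvector is only determined up to sign. If R·v = λ·v then R·(−v) = λ·(−v) as well, so a PCA implementation may legitimately return either orientation of the principal axis. A minimal sketch illustrating this for a 2×2 correlation matrix (the value r = 0.8 and the class name are assumptions for illustration, not values from the question):

```java
public class EigenvectorSign {
    // Multiply a 2x2 matrix by a 2-vector.
    static double[] mul(double[][] m, double[] v) {
        return new double[] {
            m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]
        };
    }

    public static void main(String[] args) {
        double r = 0.8; // hypothetical correlation between the two standardized variables
        double[][] corr = { {1.0, r}, {r, 1.0} };
        double lambda = 1.0 + r; // largest eigenvalue of a 2x2 correlation matrix
        double w = 1.0 / Math.sqrt(2.0);
        double[] v = { w, w };
        double[] minusV = { -w, -w };
        // Both v and -v satisfy corr * x = lambda * x, so either is a valid first axis.
        System.out.println(mul(corr, v)[0] + " == " + lambda * v[0]);
        System.out.println(mul(corr, minusV)[0] + " == " + lambda * minusV[0]);
    }
}
```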

Source: https://stackoverflow.com/questions/59207448/pca-of-opposite-sign-than-expected-with-apache-spark
