Linear Discriminant Analysis inverse transform

忘了有多久 2020-12-21 12:10

I am trying to use Linear Discriminant Analysis from the scikit-learn library in order to perform dimensionality reduction on my data, which has more than 200 features. But I could not find an inverse_transform function in the LDA class.

2 Answers
  •  心在旅途
    2020-12-21 12:37

    Inverting the LDA does not necessarily make sense, because the transform discards a lot of information.

    For comparison, consider PCA. There we get a coefficient matrix that is used to transform the data, and we can do dimensionality reduction by stripping rows from this matrix. To get the inverse transform, we first invert the full matrix and then remove the columns corresponding to the removed rows.
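    As a concrete aside (a minimal sketch, not part of the original answer): scikit-learn's PCA exposes exactly this kind of inverse via its inverse_transform method, so you can watch the round trip in action.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X = load_iris().data
    pca = PCA(n_components=2).fit(X)

    Z = pca.transform(X)               # dimensionality reduction: 4 -> 2
    X_back = pca.inverse_transform(Z)  # reconstruction from 2 components

    print(np.mean((X - X_back) ** 2))  # small but nonzero reconstruction error

    The error is nonzero because two of the four principal components were discarded.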

    The LDA does not give us a full matrix. We only get a reduced matrix that cannot be directly inverted. It is possible to take the pseudo-inverse, but this recovers much less information than if we had the full matrix at our disposal.

    Consider a simple example:

    import numpy as np

    C = np.ones((3, 3)) + np.eye(3)  # full transform matrix
    U = C[:2, :]                     # dimensionality reduction matrix
    V1 = np.linalg.inv(C)[:, :2]     # PCA-style reconstruction matrix
    print(V1)
    #array([[ 0.75, -0.25],
    #       [-0.25,  0.75],
    #       [-0.25, -0.25]])

    V2 = np.linalg.pinv(U)           # LDA-style reconstruction matrix
    print(V2)
    #array([[ 0.63636364, -0.36363636],
    #       [-0.36363636,  0.63636364],
    #       [ 0.09090909,  0.09090909]])
    

    If we have the full matrix, we get a different inverse transform (V1) than if we simply pseudo-invert the reduced transform (V2). That is because in the second case we have lost all information about the discarded components.
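    To make the difference concrete, here is a small continuation of the example (not part of the original answer): project a sample point with U and reconstruct it with both matrices.

    import numpy as np

    C = np.ones((3, 3)) + np.eye(3)
    U = C[:2, :]
    V1 = np.linalg.inv(C)[:, :2]
    V2 = np.linalg.pinv(U)

    x = np.array([1.0, 2.0, 3.0])  # original sample
    z = U @ x                      # reduced representation (2 values)

    print(V1 @ z)  # [ 3.25  4.25 -3.75]
    print(V2 @ z)  # [1.54545455 2.54545455 1.36363636]

    Neither reconstruction recovers x = [1, 2, 3]; both live in a two-dimensional subspace, and they disagree because the pseudo-inverse knows nothing about the discarded row of C.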

    You have been warned. If you still want to do the inverse LDA transform, here is a function:

    import matplotlib.pyplot as plt
    import numpy as np

    from sklearn import datasets
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.utils import check_array
    from sklearn.utils.validation import check_is_fitted


    def inverse_transform(lda, x):
        if lda.solver == 'lsqr':
            raise NotImplementedError("(inverse) transform not implemented for 'lsqr' "
                                      "solver (use 'svd' or 'eigen').")
        check_is_fitted(lda, ['xbar_', 'scalings_'], all_or_any=any)

        # pseudo-invert the reduced transform matrix (see discussion above)
        inv = np.linalg.pinv(lda.scalings_)

        x = check_array(x)
        if lda.solver == 'svd':
            # the 'svd' solver centers the data on xbar_ before projecting,
            # so the mean has to be added back after the inverse projection
            x_back = np.dot(x, inv) + lda.xbar_
        elif lda.solver == 'eigen':
            x_back = np.dot(x, inv)

        return x_back


    iris = datasets.load_iris()

    X = iris.data
    y = iris.target

    lda = LinearDiscriminantAnalysis()
    Z = lda.fit(X, y).transform(X)  # reduce 4 features to 2 discriminants

    Xr = inverse_transform(lda, Z)  # map back into the original feature space

    # plot first two dimensions of original and reconstructed data
    plt.plot(X[:, 0], X[:, 1], '.', label='original')
    plt.plot(Xr[:, 0], Xr[:, 1], '.', label='reconstructed')
    plt.legend()
    plt.show()

    As you can see, the result of the inverse transform does not have much in common with the original data (although it is possible to guess the direction of the projection). A considerable part of the variation is gone for good.
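    To quantify that claim (a rough check, not part of the original answer; it reuses X and Xr from the script above):

    print(np.mean((X - Xr) ** 2))  # mean squared reconstruction error
    print(X.var(axis=0))           # per-feature variance of the original data
    print(Xr.var(axis=0))          # per-feature variance after the round trip

    Since Z has only two columns, Xr is confined to a two-dimensional affine subspace of the four-dimensional feature space, so any variation orthogonal to that subspace cannot be recovered.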
