I'm using scikit-learn to perform PCA on this dataset. The scikit-learn documentation states that
"Due to implementation subtleties of the Singular Value Decomposition (SVD), which is used in this implementation, running fit twice on the same matrix can lead to principal components with signs flipped (change in direction)."
You're doing nothing wrong.
What the documentation is warning you about is that repeated calls to fit may yield principal components with flipped signs - it says nothing about how the results relate to another PCA implementation.
A flipped sign on the components doesn't make the result wrong: the result is correct as long as it satisfies the definition (each component is chosen so that it captures the maximum amount of remaining variance in the data). A component and its negation span the same direction in space, so both satisfy that definition equally well. As it stands, the projection you got is simply mirrored - it still satisfies the definition and is therefore correct.
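You can verify this directly: flipping the sign of a projected component mirrors the scatter plot but leaves the variance captured along each axis unchanged. A minimal sketch (the data here is just random, correlated filler):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Random data with correlated columns, purely for illustration.
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 3))

pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)

# Flip the sign of the first component: the projection is mirrored ...
Z_flipped = Z.copy()
Z_flipped[:, 0] *= -1

# ... but the variance captured along each axis is identical.
print(np.allclose(Z.var(axis=0), Z_flipped.var(axis=0)))  # True
```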
If, beyond correctness, you're worried about consistency between implementations, you can simply multiply the components by -1 where necessary.
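One way to do that systematically is to fix a sign convention and apply it after fitting. The helper below (hypothetical, not part of scikit-learn's API) makes the largest-magnitude loading in each component positive; any fixed rule works, as long as you apply the same one to both implementations you're comparing:

```python
import numpy as np
from sklearn.decomposition import PCA

def enforce_sign_convention(pca):
    # Hypothetical helper: make the largest-magnitude entry of each
    # component positive. Flipping a row of components_ also flips the
    # corresponding column of transform(X), so results stay consistent.
    max_idx = np.argmax(np.abs(pca.components_), axis=1)
    rows = np.arange(pca.components_.shape[0])
    signs = np.sign(pca.components_[rows, max_idx])
    pca.components_ *= signs[:, np.newaxis]
    return pca

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 4))  # random filler data
pca = enforce_sign_convention(PCA(n_components=2).fit(X))

# Every component's largest-magnitude entry is now positive.
print(np.all(np.max(pca.components_, axis=1) > 0))  # True
```

Note that scikit-learn already applies its own deterministic sign convention internally, so the flip is only needed when matching the output of a different PCA implementation.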