What do you mean by getting surprised by PCAs? Say you have some data, you compute the principal components (eigenvectors of the covariance matrix) and the corresponding eigenvalues. Were you surprised that a few principal components were enough to explain a large percentage of the variance of the data? Or were you surprised about what those vectors were?
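The computation described above can be sketched in a few lines of NumPy. This is a hypothetical toy dataset (not from the discussion) where the third variable is almost a linear combination of the first two, so two components should explain nearly all the variance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 500 samples, 3 variables; the third is nearly a linear
# combination of the first two, so ~2 components should suffice.
X = rng.normal(size=(500, 2))
X = np.column_stack([X, X @ [0.5, -1.0] + 0.01 * rng.normal(size=500)])

Xc = X - X.mean(axis=0)                  # center the data
cov = np.cov(Xc, rowvar=False)           # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]        # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()      # fraction of variance per component
print(explained)
```

The columns of `eigvecs` are the principal components; `explained` is the "percentage of variance" being asked about, and in this construction the first two entries account for almost all of it.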
I think this is not really PCA or even dimensionality reduction specific. It’s simply the idea of latent variables. You could gain the same intuition from studying probabilistic graphical models, for example generative models.
Surprised by either: finding a causal structure that was very unexpected. I agree the intuition could be built from other sources.
PCA doesn’t tell you much about causality, though. It just gives you a “natural” coordinate system in which the variables are linearly uncorrelated.
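A minimal sketch of that "uncorrelated coordinate system" point, on made-up correlated data: projecting the centered data onto the eigenvectors yields scores whose sample covariance is (numerically) diagonal, i.e. no linear correlation remains.

```python
import numpy as np

rng = np.random.default_rng(1)
# Strongly correlated 2-D Gaussian data (illustrative only).
X = rng.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=1000)

Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores = Xc @ eigvecs                # data expressed in the PCA basis

C = np.cov(scores, rowvar=False)
print(np.round(C, 6))                # off-diagonal entries are ~0
```

Nothing here distinguishes "A causes B" from "B causes A" or a common cause; the rotation only removes linear correlation.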
Right, one needs to use additional information to determine causality.