Have you considered trying to teach factor analysis as a fuzzy model (very useful when used loosely, not just rigorously)? It seems strongly related to this and imports some nice additional connotations about hypothesis search, which I think is a common blind spot.
I’m not familiar with factor analysis, so I have to say no, I haven’t considered this. Can you recommend me a good place to start looking to get a flavor of what you mean?
The Big Five personality traits are likely the factor analysis most people have heard of. Worth reading the blurb here: https://en.wikipedia.org/wiki/Factor_analysis
Many, many models can be thought of as folk factor analyses, whereby people try to reduce a complex output variable to a human-readable model of a few dominant input variables. Why care?
Additive linear models tie or outperform expert judgment in the forecasting literature: http://repository.upenn.edu/cgi/viewcontent.cgi?article=1178&context=marketing_papers
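A minimal sketch of what that literature means by an additive linear model, in case it helps: standardize a handful of cues and just add them with equal weights (the scenario, names, and numbers below are made up for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical scenario: score applicants with a unit-weighted additive model
# instead of holistic judgment. Each column is one cue (GPA, test score, ...).
rng = np.random.default_rng(0)
cues = rng.normal(size=(100, 4))                       # 100 applicants, 4 cues
outcome = cues.sum(axis=1) + rng.normal(scale=2.0, size=100)

# "Improper" linear model: standardize each cue and add them with equal weights.
z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
score = z.sum(axis=1)

# Correlation with the outcome is the usual figure of merit in this literature.
print(f"unit-weighted model vs outcome: r = {np.corrcoef(score, outcome)[0, 1]:.2f}")
```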
Teaching factor analysis is basically an excuse to load some additional intuitions to make Fermi estimates (linear model generation for approximate answers) more flexible in representing a broader variety of problems. Good sources on Fermi estimates (e.g. the first part of The Art of Insight in Science and Engineering) often explain some of the concepts used in factor analysis in layman's terms. So, for example, instead of sensitivity analysis they'll just talk about how to be scope sensitive as you go so that you drop non-dominant terms.
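To make the "drop non-dominant terms" move concrete, here's a toy Fermi estimate written as a sum of terms with a crude sensitivity check (the numbers are illustrative guesses of mine, not sourced from the book):

```python
# Toy Fermi estimate: daily household electricity use as a sum of rough
# per-appliance terms, in kWh/day (all numbers are illustrative guesses).
terms = {
    "heating/cooling": 12.0,
    "water heater":     6.0,
    "fridge":           1.5,
    "lighting":         1.0,
    "phone charging":   0.01,
}

total = sum(terms.values())
print(f"total: {total:.1f} kWh/day")

# Crude sensitivity check: what fraction of the answer does each term carry?
# Anything contributing ~1% or less can be dropped at Fermi precision --
# that's the "stay scope sensitive, drop non-dominant terms" move.
for name, value in sorted(terms.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16}: {value / total:6.1%}")
```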
It’s also handy for people to know that many ‘tricky’ problems are a bit more tractable if you think of them as having more degrees of freedom than the human brain is good at working with, and that this indicates what sorts of tricks you might want to employ, e.g. finding the upstream constraint or some other method to reduce the search space first; a good example is the E-M (energy-maneuverability) theory of John Boyd fame.
It also helps in clarifying problems, since it forces you to confront your choice of proxy measure for your output variable. Clarifying this generally raises awareness of possible failures (future Goodhart's law problems, selection effects, etc.).
Basically I think it is a fairly powerful unifying model for a lot of stuff. It seems like it might be closer to the metal, so to speak, in that it is something a Bayesian net can implement.
Credit to Jonah Sinick for pointing out that learning this and a few other high-level statistics concepts would cause a bunch of other models to simplify greatly.
http://lesswrong.com/lw/m6e/how_my_social_skills_went_from_horrible_to/ is Jonah Sinick’s post describing the benefits he got, through the prism of dimensionality reduction.
I suspect IQ (the g factor) is the most well-known application of factor analysis.
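A toy illustration of that, assuming scikit-learn is available (the data is simulated, not real test scores): several test scores that all share one latent ability plus independent noise, and a one-factor model recovering that shared "g-like" factor:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate 500 people: one latent ability drives five noisy test scores.
rng = np.random.default_rng(0)
g = rng.normal(size=(500, 1))                       # latent ability
loadings = np.array([[0.9, 0.8, 0.7, 0.6, 0.5]])    # how much each test taps g
scores = g @ loadings + rng.normal(scale=0.5, size=(500, 5))

# Fit a one-factor model and extract factor scores for each person.
fa = FactorAnalysis(n_components=1)
g_hat = fa.fit_transform(scores)

print("estimated loadings:", fa.components_.round(2))
print("corr(true g, estimated factor):",
      round(abs(np.corrcoef(g[:, 0], g_hat[:, 0])[0, 1]), 2))
```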
Factor analysis is also a specific linear technique that’s basically matrix rotation. On a higher, more conceptual level, I find talking about dimensionality reduction more useful than talking specifically about factor analysis.
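To show the "matrix rotation" point concretely, here's a small sketch assuming scikit-learn 0.24+ (which exposes a varimax rotation option on FactorAnalysis); the rotation doesn't change the fit, it only reorients the loading matrix toward simple structure:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated data with two latent factors: the first three variables load on
# one factor, the last three on the other.
rng = np.random.default_rng(1)
latent = rng.normal(size=(1000, 2))
true_loadings = np.array([[0.8, 0.7, 0.6, 0.0, 0.0, 0.0],
                          [0.0, 0.0, 0.0, 0.8, 0.7, 0.6]])
X = latent @ true_loadings + rng.normal(scale=0.3, size=(1000, 6))

# The unrotated loadings are only determined up to rotation; varimax rotates
# them so each variable loads mainly on one factor, making them readable.
unrotated = FactorAnalysis(n_components=2).fit(X)
rotated = FactorAnalysis(n_components=2, rotation="varimax").fit(X)

print("unrotated loadings:\n", unrotated.components_.round(2))
print("varimax loadings:\n", rotated.components_.round(2))
```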