The thing I’m trying to do is calibrate my model of the distribution of human intelligence. The actual distribution is centered far lower than my immediate environment makes it appear. Here’s another post I wrote that should provide some context on what I mean when I write about “human intelligence”. The basic idea is that things like “can fix a carburetor” and “understands genetics” are correlated, not anti-correlated.