I wonder how much might be explained by “error-like” environmental factors that persist for intermediate stretches of time—working in a cognitively demanding workplace, for instance, or a living situation amenable to regular sleep and exercise. The archetypal error term, after all, is (beyond the finite sample size of the questions) a product of environmental effects that last a day or so—a head cold, eating a healthy or unhealthy or no breakfast, receiving a compliment or insult earlier in the day, and so on. (We’d expect to see this same regression to the mean on the left side of the distribution, although not necessarily at the same magnitude if test-retest covariance itself varies with score.)
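A minimal sketch of that mechanism, with made-up variance components (0.4σ for the persistent environmental factor, 0.3σ for day-level noise): when the environmental term is shared between two sittings, high scorers regress less on retest than when it is redrawn.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

ability = rng.normal(0.0, 1.0, n)   # stable component, in sigmas
env     = rng.normal(0.0, 0.4, n)   # medium-persistence factor (job, sleep); sd assumed

def sitting(env_draw, day_sd=0.3):
    # one test sitting: stable ability + persistent environment + day-level noise
    return ability + env_draw + rng.normal(0.0, day_sd, n)

t1       = sitting(env)
t2_soon  = sitting(env)                      # retest weeks later: environment unchanged
t2_later = sitting(rng.normal(0.0, 0.4, n))  # retest years later: environment redrawn

top = t1 > 2.0  # select high scorers on the first sitting
print(f"first test, top group:       {t1[top].mean():.2f}")
print(f"retest, shared environment:  {t2_soon[top].mean():.2f}")
print(f"retest, redrawn environment: {t2_later[top].mean():.2f}")
```

The gap between the two retest means is exactly the share of “error” variance that persists across the retest window, which is why the timescale of these factors matters.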
Or it could indicate something about the “true distribution” of “absolute intelligence”; the scale we have, after all, is normal by definition (scores are constructed by forcing observed ranks onto a normal curve), such that we have no reason to believe that the differences between 100 and 115 and between 115 and 130 correspond to the same difference in absolute ability (however one might operationalize that). If we presuppose that everyone experiences the same absolute amount of cognitive decline (plus an error term), then a greater fall in sigmas among the highest testers might imply diminishing returns to numerical IQ: sigmas at the top would have to be packed more tightly in absolute terms. (The same effect might or might not appear on the left side of the distribution.)
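To make that concrete under loudly toy assumptions: pick any concave mapping from IQ sigmas to “absolute ability” (here 10 − e^(−z), chosen purely for convenience) and subtract the same absolute decline from everyone; the sigma loss then grows with the starting score.

```python
import numpy as np

# Toy concave mapping from IQ-scale sigmas z to "absolute ability" A(z).
# Concavity = diminishing absolute returns to each extra sigma; the specific
# functional form is an illustrative assumption, not an empirical claim.
def A(z):
    return 10.0 - np.exp(-z)

def A_inv(a):
    return -np.log(10.0 - a)

decline = 0.05  # identical absolute loss for everyone (hypothetical units)

for z in [0.0, 1.0, 2.0, 3.0]:  # testers at the mean, +1σ, +2σ, +3σ
    z_after = A_inv(A(z) - decline)
    print(f"starts at {z:.0f}σ, falls {z - z_after:.2f}σ")
```

Under a convex mapping the pattern inverts, so the direction of the observed effect is informative about the curvature of the ability-to-IQ relationship, which is the inference sketched above.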