You can scale any distribution to have a mean of 100 and a standard deviation of 15, so I’d say it’s a type error to ask whether the population actually matches that.
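Concretely, the scaling in question is just an affine transformation of the raw scores. A minimal sketch in Python (the function name, the 100/15 defaults, and the exponential sample are my own illustration, not anything from the thread):

```python
import numpy as np

def rescale(raw_scores, target_mean=100.0, target_sd=15.0):
    """Affinely rescale a sample to a chosen mean and standard deviation.

    Works for a sample from any distribution with nonzero variance;
    it changes location and scale only, not the shape.
    """
    raw = np.asarray(raw_scores, dtype=float)
    z = (raw - raw.mean()) / raw.std()   # standardize: mean 0, SD 1
    return target_mean + target_sd * z   # shift and stretch to the target

rng = np.random.default_rng(0)
scores = rng.exponential(size=1_000)     # deliberately skewed, non-normal
scaled = rescale(scores)
print(scaled.mean(), scaled.std())       # ~100.0 and ~15.0, still skewed
```

The output has the requested mean and standard deviation no matter what distribution the raw scores came from, which is the sense in which the question is a type error.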
The population isn’t necessarily normal, and the scaling is not necessarily done (accurately). That was my point.
Of course, distributions don’t have to be normal to have a standard deviation.
Sure, but it’s a relatively meaningless description of a distribution without the caveat that it’s normal.
To the extent that your statement is true, it is only true by convention.
In full generality, a standard deviation is a less useful description of a distribution without the caveat that it belongs to a specific family. Knowing the mean and standard deviation tells us the location and the scale of a distribution. Normal distributions are an example of a family that is parametrized by location and scale, but this is not an exclusive club in the least. Uniform distributions, for instance, would also work.
However, statisticians model things as normal distributions by default, and they are often right to do so. This means that there’s a bunch of handy tables for tail probabilities and such for normal distributions, and these rely on knowing mean and standard deviation.
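A quick illustration of both points, using scipy.stats (my choice of tool, not something from the thread): a normal and a uniform distribution can share the same mean and standard deviation, and the normal “table lookup” needs nothing beyond those two numbers.

```python
import math
from scipy import stats

# Two location-scale families pinned to the same location and scale:
# mean 100, standard deviation 15.
normal = stats.norm(loc=100, scale=15)
width = 2 * 15 * math.sqrt(3)              # a uniform with SD 15 is this wide
uniform = stats.uniform(loc=100 - width / 2, scale=width)

print(normal.mean(), normal.std())         # 100.0 15.0
print(uniform.mean(), uniform.std())       # 100.0 15.0

# The "handy tables" amount to the normal tail function, which takes
# only the mean and standard deviation as inputs:
print(normal.sf(130))    # P(X >= 130) is about 0.023 (two SDs above the mean)
print(uniform.sf(130))   # 0.0 -- 130 lies outside this uniform's support
```

The two distributions agree on location and scale but disagree completely about the tails, which is exactly the sense in which the normality caveat matters.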
On the other hand, uniform distributions (for example) are not a very good description of most actual data. But if they were, then everyone would know that mean and standard deviation uniquely pick out a uniform distribution. And one might then object that scaling IQ to have a mean of 100 and a standard deviation of 15 is relatively meaningless: what if it turns out that IQ is not in fact the uniform distribution on the range [74, 126]?
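For anyone checking the arithmetic on those endpoints:

```latex
X \sim \mathrm{Unif}(a, b):\qquad
\mathbb{E}[X] = \frac{a+b}{2}, \quad
\operatorname{sd}(X) = \frac{b-a}{2\sqrt{3}},
\qquad\text{so}\quad
a = 100 - 15\sqrt{3} \approx 74, \quad
b = 100 + 15\sqrt{3} \approx 126 .
```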
But of course whatever the distribution of IQ is, it makes sense to fix the mean and standard deviation to our favorite numbers (I would have preferred 0 and 1 to 100 and 15, but I am in the minority), because the location and scale of the distribution are just artifacts of the way we quantify IQ. You could consider using another measure of scale instead of standard deviation, but what would you prefer? Interquartile range?
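(For what it’s worth, converting between the 100/15 convention and the 0/1 convention is a one-line change of units, and the standard deviation and the interquartile range respond to a change of units in exactly the same way, so that choice is also largely conventional:)

```latex
z = \frac{\mathrm{IQ} - 100}{15},
\qquad
\operatorname{sd}(a + bX) = \lvert b\rvert\,\operatorname{sd}(X),
\qquad
\operatorname{IQR}(a + bX) = \lvert b\rvert\,\operatorname{IQR}(X).
```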
And finally, even if you don’t know anything about your distribution, standard deviation can in fact be put to good use, by means of Chebyshev’s inequality.
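Spelled out: for any distribution with finite mean μ and standard deviation σ,

```latex
\Pr\bigl(\lvert X - \mu \rvert \ge k\sigma\bigr) \;\le\; \frac{1}{k^{2}}
\qquad \text{for every } k > 0 .
```

On the 100/15 scale, taking k = 3 says that at most 1/9, about 11 percent, of any population can score outside [55, 145], with no normality assumption anywhere.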