So in fact, even under Knightian uncertainty, we can still make estimates!
Nothing can stop you making subjective estimates: plenty of things can stop them being objectively meaningful.
And that is exactly what people who put hard-number estimates on the likelihood of AI doom are doing.
What’s “hard” about their numbers? They give an exact figure without an error bar, but that is a superficial appearance... they haven’t actually performed a calculation, and they don’t actually know anything to within ±1%.
I think there are ideas about “objectivity” and “meaningfulness” implicit in your definition that I don’t agree with.
For example, let’s say I’m a regional manager for Starbucks. I go and inspect all the stores, and then, based on my subjective assessment of how well-organized they seem to be, I give them all a number scoring them on “organization.” Those estimates seem to me to be “meaningful,” in the sense of being a shorthand way of representing qualitative observational information, and yet I would also not say they are “objective,” in the sense that “anybody in their right mind would have come to the same conclusion.”
These point estimates seem useful on their own, and if the scorer wanted to go further, they could add error bars. We could even add another scorer, normalize their scores, and then compare them and do all sorts of statistics.
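To make the “normalize their scores and compare them” step concrete, here is a minimal sketch in Python. The store names, scores, and the choice of z-score normalization are all invented for illustration; the point is only that two scorers with different personal scales can still be put on a common footing and compared.

```python
from statistics import mean, stdev

# Hypothetical raw "organization" scores from two scorers for five stores.
# Scorer A uses most of a 1-10 scale; scorer B clusters low. Both are
# subjective, but each is internally consistent.
scorer_a = {"Store 1": 7, "Store 2": 4, "Store 3": 9, "Store 4": 5, "Store 5": 6}
scorer_b = {"Store 1": 3, "Store 2": 1, "Store 3": 5, "Store 4": 2, "Store 5": 2}

def normalize(scores):
    """Convert raw scores to z-scores so two scorers' scales are comparable."""
    values = list(scores.values())
    mu, sigma = mean(values), stdev(values)
    return {store: (v - mu) / sigma for store, v in scores.items()}

norm_a = normalize(scorer_a)
norm_b = normalize(scorer_b)

# Once normalized, we can compare the scorers store by store, average their
# scores into a pooled estimate, or check how well their rankings agree.
for store in scorer_a:
    print(store, round(norm_a[store], 2), round(norm_b[store], 2))
```

Even though neither scorer’s raw numbers are “objective,” the normalized comparison carries real information: if both scorers independently rank Store 3 highest, that agreement is evidence the scores are tracking something.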
On the other hand, I could have several scorers all rank the same Starbucks, then gather them in a room and have them tell me their subjective impressions. It’s the same raw data, but now I’m getting the information in the form of a narrative instead of a number.
In all these cases, I claim that we are getting meaningful estimates out of the process, whether represented in the form of a number or in the form of a narrative, and that these estimates of “how organized the regional Starbucks are” are not “Knightianly uncertain” but are just normal estimates.
Semantically, you can have “meaningful” information that only means your own subjective impression, and “estimates” that estimate exactly the same thing, and so on.
That’s not addressing the actual point. The point is not to exploit the vagueness of the English language. You wouldn’t accept Monopoly money as payment even though it says “money” in the name.
You are kind of implying that it’s unfair of Knightians to reject subjective estimates because those estimates have greater-than-zero value... but why shouldn’t they be entitled to set the threshold somewhere above eta?
Here’s a quick argument: there’s eight billion people, they’ve all got opinions, and I have not got the time to listen to them all.
https://www.johndcook.com/blog/2018/10/26/excessive-precision/
That seems like a reasonable complaint to me! “You can’t use numbers to make estimates because this uncertainty is Knightian” is not.
Is it unreasonable to require estimates to be meaningful?
Define “meaningful” in a way that’s unambiguous and clear to a stranger like me, and I’ll be happy to give you my opinion/argument!
The numbers that go into the final estimate are themselves objective, and not pulled out of the air, or anything else beginning with “a”.
I’m not sure what you mean.