My beef here is when people supply you with a number and do tell you how they arrived at it, and the Knightian says “this is Knightian uncertainty we are dealing with here, so we have to just ignore your argument and estimate, say ‘we just don’t know,’ and leave it at that.” Sounds like a straw man, but it isn’t.
Surely that would depend on how they arrived at it? If it’s based on objective data, that’s not Knightian uncertainty, and if it’s based on subjective guesses, then the Knightian can reject that.
What I can say is that there appears to be room to interpret all the examples and definitions of “Knightian uncertainty” in two ways:
It’s where we move from a well-defined probabilistic model (e.g. “what’s the chance of obtaining a 7 as the sum of two fair die rolls?”, a fully specified calculation, sketched after this list) to having to make judgment calls about how to model the world in order to make forecasts (i.e. “forecasting”).
It’s where we move from what I’m calling “forecasting” to having to make decisions without any information at all.
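To make the contrast concrete: the dice case in the first interpretation is fully specified, so anyone can grind out the answer mechanically. A minimal sketch (in Python, just to enumerate the sample space; my illustration, not anything from the original exchange):

```python
from itertools import product

# Two fair dice: 36 equally likely outcomes. Count those summing to 7.
outcomes = list(product(range(1, 7), repeat=2))
sevens = sum(1 for a, b in outcomes if a + b == 7)
print(f"P(sum = 7) = {sevens}/{len(outcomes)} = {sevens / len(outcomes):.4f}")
# -> P(sum = 7) = 6/36 = 0.1667
```

No judgment calls are involved here; any disagreement about the answer is simply a mistake. The second interpretation is precisely the regime where that stops being true.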
Knightian-1 is not exotic, and the examples of Knightian uncertainty I have encountered (things like forecasting the economic state of airline companies 30 years out, or the Ellsberg paradox) seem to be examples of this kind. Knightians can argue with these models, but they can’t reject the activity of forecasting as a valid form of endeavor.
Knightian-2 is more exotic, but I have yet to find a clear real-world example of it. It’s a theoretical case where it might be proper to reject the applicability of forecasting, but a Knightian would have to make a case that a particular situation is of this kind. I can’t even find or invent a hypothetical situation that matches it, and I am unconvinced this is a meaningful real-world concept.
Do you count guesses and intuitions as data?
No. Guesses and intuitions are ways of interpreting or synthesizing data. Data is a way of measuring the world. However, there are subjective/intuitive types of qualitative data. If I am a regional manager for Starbucks, go into one of the shops I’m managing, and come away with the qualitative judgment that “it looks like a shitshow,” there is observational data that that judgment is based on, even if I haven’t written it down or quantified it.
Not exclusively: they can be pretty random.
What we were discussing was the opposite: a hard number based on nothing.
A hard number based on literally nothing is not data and is not an interpretation. But that’s not an interesting or realistic case—it doesn’t even fit the idea of “ass numbers,” a person’s best intuitive guess. At least in that case, we can hope that there’s some unconscious aggregation of memory, models of how the world works, and so on coming together to inform the number. It’s a valid estimate, although not particularly trustworthy in most cases. It’s not fundamentally different from the much more legible predictions of people like superforecasters.
And having said all that, it is not unreasonable to set the bar somewhere higher.
I’m encouraging crisp distinctions between having a high standard of evidence, an explicit demonstration of a specific limit to our ability to forecast, and an unsubstantiated declaration that an entire broad topic is entirely beyond forecasting.
In the case of AGI, this would mean distinguishing between:
“I’d need a stronger argument and evidence for predicting AGI doom to update my credence any further.”
“Even an AGI can’t predict more than n pinball bounces out into the future given atomic-resolution data from only one moment in time.”
“Nobody can predict what will happen with AGI, it’s a case of Knightian uncertainty and simply incalculable! There are just too many possibilities!”
The first two cases are fine; the third, I think, is an invalid form of argument.
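For the second statement, the pinball claim is an instance of a well-understood phenomenon: in chaotic systems, measurement error grows exponentially, so even a perfect model runs out of predictive power after a calculable horizon. A minimal toy sketch (using the logistic map as a stand-in for pinball dynamics; the system and numbers are my choice of illustration):

```python
# Chaotic logistic map at r = 4: a 1e-12 disagreement in the initial state
# roughly doubles each step, so forecasts decorrelate after ~40 steps.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-12  # two nearly identical initial conditions
for step in range(1, 101):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.5:  # the two "forecasts" now disagree completely
        print(f"predictions diverge after {step} steps")
        break
```

That is the shape of a legitimate impossibility argument: a specific system, a specific horizon, and a demonstration. The third statement offers none of these.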