Is the pinball problem really Knightian Uncertainty? I think we can form a model of the problem that tells us what we can know about the path of the ball, and what we can’t. We can calculate how our uncertainty grows with each bounce. I thought Knightian Uncertainty was more related to questions like “what if there is a multiball bonus if you hit the bumpers in the right order?”
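Roughly (this is a generic sketch of how that calculation goes, not the original post's actual numbers): each bounce off a convex bumper multiplies a small angular error by some geometry-dependent factor $k > 1$, so after $n$ bounces
$$\delta\theta_n \approx k^{\,n}\,\delta\theta_0,$$
and prediction breaks down once $k^{\,n}\,\delta\theta_0$ becomes comparable to the angular size of a bumper. That is quantifiable growth of uncertainty, not a blank.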
Let me say a little more about the “is this Knightian uncertainty” question.
Here are some statements about Knightian uncertainty from the Wikipedia page:
In economics, Knightian uncertainty is a lack of any quantifiable knowledge about some possible occurrence, as opposed to the presence of quantifiable risk (e.g., that in statistical noise or a parameter’s confidence interval). The concept acknowledges some fundamental degree of ignorance, a limit to knowledge, and an essential unpredictability of future events...
However, the concept is largely informal and there is no single best formal system of probability and belief to represent Knightian uncertainty...
Taleb asserts that Knightian risk does not exist in the real world, and instead finds gradations of computable risk.
Qualitatively, we can say that there is no widely accepted formal definition of Knightian uncertainty, and it’s disputed whether it is actually a meaningful concept at all.
The Ellsberg paradox is taken to illustrate Knightian uncertainty—a barrel either holds 2⁄3 yellow and 1⁄3 black balls, or 2⁄3 black and 1⁄3 yellow balls, but you don’t know which.
Personally, I just don’t see a paradox here. You start with probability uniformly distributed, and in this case, you have no other evidence to update with, so you assign an equal 50% chance to the possibility that the barrel is majority-black or majority-yellow. If I had some psychological insight into what the barrel-filler would do, then I could update on that information.
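Spelled out, the calculation is short; the uniform prior over the two fillings is the only assumption doing any work here:
$$P(\text{majority black}) = P(\text{majority yellow}) = \tfrac{1}{2}, \qquad P(\text{next ball is black}) = \tfrac{1}{2}\cdot\tfrac{2}{3} + \tfrac{1}{2}\cdot\tfrac{1}{3} = \tfrac{1}{2}.$$
Any real evidence about the barrel-filler just moves that prior off 50/50.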
In another MIT description of Knightian uncertainty, they offer another example:
An airline might forecast that the risk of an accident involving one of its planes is exactly one per 20 million takeoffs. But the economic outlook for airlines 30 years from now involves so many unknown factors as to be incalculable.
First of all, this doesn’t seem entirely incalculable (assuming we can come up with a definition of ‘economic outlook’). If we want to know, say, the airline miles per year, we can pick a range from 0 to an arbitrarily high number X and say “I’m at least 99% sure it’s between 0 and X.” And maybe we are 70% confident that the economy will grow between now and then, and so we can say we’re even more confident that it’s between [current airline miles per year, X]. And so once again, while our error bars are wide, there’s nothing literally incalculable here.
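As a sketch of that bounding argument, write $M$ for airline miles per year 30 years from now and $M_0$ for the current figure (the 99% and 70% are made-up inputs, and I’m granting that economic growth implies at least the current level of flying):
$$P(0 \le M \le X) \ge 0.99, \qquad P(M \ge M_0) \ge P(\text{economy grows}) \approx 0.70.$$
Wide intervals, but intervals all the same.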
The same article also acknowledges the controversy with a reverse spin, in which almost everything is Knightian and non-Knightian risk applies only where risks are precisely calculable:
Some economists have argued that this distinction is overblown. In the real business world, this objection goes, all events are so complex that forecasting is always a matter of grappling with “true uncertainty,” not risk; past data used to forecast risk may not reflect current conditions, anyway. In this view, “risk” would be best applied to a highly controlled environment, like a pure game of chance in a casino, and “uncertainty” would apply to nearly everything else.
And if we go back to Knight himself,
Knight distinguished between three different types of probability, which he termed: “a priori probability;” “statistical probability” and “estimates”. The first type “is on the same logical plane as the propositions of mathematics;” the canonical example is the odds of rolling any number on a die. “Statistical probability” depends upon the “empirical evaluation of the frequency of association between predicates” and on “the empirical classification of instances”. When “there is no valid basis of any kind for classifying instances”, only “estimates” can be made.
So in fact, even under Knightian uncertainty, we can still make estimates! We don’t have to throw up our hands and say “I just don’t know, we’re in a separate magisterium because this uncertainty is Knightian!” We are just saying “I can’t deduce the probabilities from mathematical argument, I don’t have a precise definition of the probability distribution, and so I must estimate what the outcomes might be and how likely they are.”
And that is exactly what people who put hard-number estimates on the likelihood of AI doom are doing. When Scott Alexander says “33% risk of AI doom” or Eliezer puts it at 90%, they are making estimates, which is exactly the response Knight himself prescribes when neither a priori nor statistical probability is available.
When others say “no, you can’t put any sort of hard probability on it, don’t even make an estimate!” they are not displaying Knightian uncertainty, they’re just rejecting the debate topic entirely.
Overall, as I delve into this, the examples of uncertainty purported to be Knightian just seem to be the sort of thing superforecasters have to estimate. Everything on Metaculus is an exercise in dealing with Knightian uncertainty. Every score on Metaculus results from forecasters establishing base rates, updating based on inside view considerations and the passage of time, and then turning that into a hard number estimate which gets aggregated. Nothing incalculable or mysterious there.
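For what it’s worth, that whole base-rate-plus-update-plus-aggregate workflow fits in a few lines. The numbers and the simple log-odds pooling rule below are made up for illustration; this is not Metaculus’s actual aggregation method:

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

def forecast(base_rate, evidence_shift):
    """Start from a base rate, then shift it in log-odds space by however
    much the inside-view considerations seem to warrant."""
    return sigmoid(logit(base_rate) + evidence_shift)

# Three hypothetical forecasters share a 20% base rate but make different
# judgment calls about the strength of the inside-view evidence.
individual = [forecast(0.20, shift) for shift in (-0.5, 0.3, 1.1)]

# Pool by averaging in log-odds space (one common choice of aggregator).
pooled = sigmoid(sum(logit(p) for p in individual) / len(individual))

print([round(p, 3) for p in individual], round(pooled, 3))
```

Nothing in that pipeline cares whether the uncertainty gets labeled “Knightian.”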
It’s possible to get meaningful results by System 2 processes like explicit calculation (Knight’s a priori probability), and also by System 1 processes. But System 1 needs feedback to be accurate; that is what makes the difference between educated guesswork and mere guesswork, and feedback isn’t always available.
So in fact, even under Knightian uncertainty, we can still make estimates!
Nothing can stop you making subjective estimates: plenty of things can stop them being objectively meaningful.
And that is exactly what people who put hard-number estimates on the likelihood of AI doom are doing
What’s hard about their numbers? They are giving an exact figure, without an error bar, but that is a superficial appearance. They haven’t actually performed a calculation, and they don’t actually know anything to within +/-1%.
https://www.johndcook.com/blog/2018/10/26/excessive-precision/
That seems like a reasonable complaint to me! “You can’t use numbers to make estimates because this uncertainty is Knightian” is not.
Is it unreasonable to require estimates to be meaningful?
Define “meaningful” in a way that’s unambiguous and clear to a stranger like me, and I’ll be happy to give you my opinion/argument!
The numbers that go into the final estimate are themselves objective, and not pulled out of the air, or anything else beginning with “a”.
I think there are ideas about “objectivity” and “meaningfulness” implicit in your definition that I don’t agree with.
For example, let’s say I’m a regional manager for Starbucks. I go and inspect all the stores, and then, based on my subjective assessment of how well-organized they seem to be, I give them all a number scoring them on “organization.” Those estimates seem to me to be “meaningful,” in the sense of being a shorthand way of representing qualitative observational information, and yet I would also not say they are “objective,” in the sense that “anybody in their right mind would have come to the same conclusion.”
These point estimates seem useful on their own, and if the scorer wanted to go further, they could add error bars. We could even add another scorer, normalize their scores, and then compare them and do all sorts of statistics.
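A toy version of “add another scorer, normalize their scores, and compare” (the store scores here are invented):

```python
from statistics import mean, pstdev, correlation  # correlation needs Python 3.10+

# Invented "organization" scores for five stores from two scorers.
scorer_a = [7, 5, 9, 4, 6]
scorer_b = [8, 6, 9, 3, 7]

def zscores(xs):
    """Normalize so each scorer's ratings have mean 0 and standard deviation 1,
    which puts a harsh grader and a lenient grader on the same scale."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

a, b = zscores(scorer_a), zscores(scorer_b)
print(correlation(a, b))  # how strongly the two subjective rankings agree
```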
On the other hand, I could have several scorers all rank the same Starbucks, then gather them in a room and have them tell me their subjective impressions. It’s the same raw data, but now I’m getting the information in the form of a narrative instead of a number.
In all these cases, I claim that we are getting meaningful estimates out of the process, whether represented in the form of a number or in the form of a narrative, and that these estimates of “how organized the regional Starbucks are” are not “Knightianly uncertain” but are just normal estimates.
Semantically, you can have “meaningful” information that only means your own subjective impression, and “estimates” that estimate exactly the same thing, and so on.
That’s not addressing the actual point. The point is not to exploit the vagueness of the English language. You wouldn’t accept Monopoly money as payment even though it says “money” in the name.
You are kind of implying that it’s unfair of Knightians to reject subjective estimates because they have greater than zero value...but why shouldn’t they be entitled to set the threshold somewhere above eta?
Here’s a quick argument: there are eight billion people, they’ve all got opinions, and I have not got the time to listen to them all.
I’m not sure what you mean.
Knightian uncertainty is not a well-defined concept, which is part of my problem with it. If you give a hard-number probability, I at least know what you mean.
If someone gives you a number without telling you how they arrived at it, it doesn’t mean anything.
My beef here is when people supply you with a number and do tell you how they arrived at it, and the Knightian says “this is Knightian uncertainty we are dealing with here, so we have to just ignore your argument and estimate, say ‘we just don’t know,’ and leave it at that.” Sounds like a straw man, but it isn’t.
Surely that would depend on how they arrived at it? If it’s based on objective data, that’s not Knightian uncertainty, and if it’s based on subjective guesses, then Knightians can reject that.
If it’s based on objective data, that’s not Knightian uncertainty, and if it’s based on subjective guesses, the Knightian can reject that.
What I can say is that there appears to be room to interpret all the examples and definitions of “Knightian uncertainty” in two ways:
It’s where we move from a well-defined probabilistic model (e.g. “what’s the chance of obtaining a 7 as the sum of rolling two fair dice”) to having to make judgment calls about how to model the world in order to make forecasts (i.e. “forecasting”).
It’s where we move from what I’m calling “forecasting” to having to make decisions without any information at all.
Knightian-1 is not exotic, and the examples of Knightian uncertainty I have encountered (things like forecasting the economic state of airline companies 30 years out, or the Ellsberg paradox) seem to be examples of this kind. Knightians can argue with these models, but they can’t reject the activity of forecasting as a valid form of endeavor.
Knightian-2 is more exotic, but I have yet to find a clear real-world example of it. It’s a theoretical case where it might be proper to reject the applicability of forecasting, but a Knightian would have to make a case that a particular situation is of this kind. I can’t even find or invent a hypothetical situation that matches it, and I am unconvinced this is a meaningful real-world concept.
Do you count guesses and intuitions as data?
No. Guesses and intuitions are ways of interpreting or synthesizing data. Data is a way of measuring the world. However, there are subjective/intuitive types of qualitative data. If I am a regional manager for Starbucks, go into one of the shops I’m managing, and come away with the qualitative judgment that “it looks like a shitshow,” there is observational data that that judgment is based on, even if I haven’t written it down or quantified it.
Not exclusively: they can be pretty random.
What we were discussing was the opposite...a hard number based on nothing.
A hard number based on literally nothing is not data and is not an interpretation. But that’s not an interesting or realistic case—it doesn’t even fit the idea of “ass numbers,” a person’s best intuitive guess. At least in that case, we can hope that there’s some unconscious aggregation of memory, models of how the world works, and so on coming together to inform the number. It’s a valid estimate, although not particularly trustworthy in most cases. It’s not fundamentally different from the much more legible predictions of people like superforecasters.
And having said all that, it is not unreasonable to set the bar somewhere higher.
I’m encouraging crisp distinctions between having a high standard of evidence, an explicit demonstration of a specific limit to our ability to forecast, and an unsubstantiated declaration that an entire broad topic is entirely beyond forecasting.
In the case of AGI, this would mean distinguishing between:
“I’d need a stronger argument and evidence for predicting AGI doom to update my credence any further.”
“Even an AGI can’t predict more than n pinball bounces out into the future given atomic-resolution data from only one moment in time.”
“Nobody can predict what will happen with AGI, it’s a case of Knightian uncertainty and simply incalculable! There are just too many possibilities!”
The first two cases are fine; the third, I think, is an invalid form of argument.