Is Humbali right that generic uncertainty about maybe being wrong, without extra premises, should increase the entropy of one’s probability distribution over AGI arrival times, thereby moving its median further out in time?
I’ll give the homework a shot.
Entropy measures the uncertainty inherent in your probability distribution, so generic uncertainty does imply an increase in the entropy of one’s probability distribution (whatever the eventual result is, it provides you more information than it would have if you had been more certain beforehand). However, I do not think it follows that the median is therefore further in the future. If increasing one’s generic uncertainty about the difficulty of creating AGI rules out knowing that an AGI requires more compute than Google can currently throw at the problem, then it equally rules out knowing that an AGI can’t be created using affordable 2021 consumer hardware, etc. High-entropy probability distributions cannot rule out researchers having the final stroke of insight in 20 minutes, or the NSA having had an airgapped AGI in their basement since 2017. Generic uncertainty means relying more heavily on your priors; it’s not clear to me that this moves the estimate towards longer timelines.
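To make the "relying on your priors" point concrete, here is a minimal sketch (grid, dates, and weights all made up for illustration) that models generic uncertainty as mixing one’s distribution with a flat, maximum-entropy prior over the same range of years. For a confidently late distribution, the median moves earlier, not later:

```python
import numpy as np

# Hedged sketch: model "generic uncertainty" as mixing your distribution
# with a flat (maximum-entropy) prior over the same grid of years.
years = np.arange(2021, 2121)                  # candidate arrival years
flat = np.full(years.size, 1 / years.size)     # max-entropy prior

def median_year(p):
    """First year at which the CDF reaches 0.5."""
    return years[np.searchsorted(np.cumsum(p), 0.5)]

# A confident, late distribution: nearly all mass near 2090.
late = np.exp(-0.5 * ((years - 2090) / 3.0) ** 2)
late /= late.sum()

for w in (0.0, 0.5, 0.9):                      # weight given to the prior
    mixed = (1 - w) * late + w * flat          # entropy rises as w grows
    print(w, median_year(mixed))               # median moves earlier here
```

Had the confident distribution been early instead, the same mixing would have pushed the median later; the direction depends entirely on where the prior’s mass sits relative to yours.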
I was thinking something similar, but I missed the point about the prior. To get intuition, I considered placing something like 99% probability on one day in 2030. Generic uncertainty then spreads out this distribution both ways, leaving the median exactly what it was before: each bit of probability mass is equally likely to move left or right when you apply generic uncertainty. Although this seems like it should be slightly wrong, since the tiny bit of probability mass on it being achieved right now can’t move back in time, and so can only shift right.
In other words, I think this is right for this particular case, but an incorrect argument when significant probability mass is on it happening very soon, or when a very large amount of correcting is done.
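A quick numerical sketch of that picture (grid, dates, and step rule all hypothetical): start with ~99% of the mass on 2030, spread it symmetrically, and reflect any mass that would land before the present. The median stays put at first and only drifts right once enough mass piles up against the barrier:

```python
import numpy as np

# Hedged sketch: symmetric spreading with a reflecting barrier at "now".
years = np.arange(2021, 2101)
p = np.full(years.size, 0.01 / (years.size - 1))   # 1% spread thinly
p[years == 2030] = 0.99                            # 99% on one year
p /= p.sum()

def median_year(p):
    return years[np.searchsorted(np.cumsum(p), 0.5)]

def spread(p, passes):
    """Each pass: half the mass stays put, a quarter moves each way;
    mass that would move before the present is reflected back."""
    for _ in range(passes):
        q = 0.5 * p
        q[1:] += 0.25 * p[:-1]    # mass moving right
        q[:-1] += 0.25 * p[1:]    # mass moving left
        q[0] += 0.25 * p[0]       # barrier at "now": can't go back in time
        q[-1] += 0.25 * p[-1]     # (far edge of the finite grid)
        p = q
    return p

for n in (0, 50, 500):
    print(n, median_year(spread(p, n)))
# The median stays near 2030 at first, then drifts right as mass
# accumulates against the left barrier.
```

This also bears out the caveat above: the closer the initial mass sits to the present, or the more correcting you apply, the sooner the rightward drift kicks in.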
It’s worth noting that gradient flow towards maximum entropy (with the gradient taken with respect to the Wasserstein metric, and the entropy with respect to Lebesgue measure) is exactly the heat equation, which justifies your picture of probability mass diffusing outward. It’s also exactly right that if you put a barrier at the left end of the possibility space (i.e. rule out AGI arriving earlier than the present moment), then this natural direction of increasing entropy eventually settles into all the probability mass spreading to the right forever, so the median also moves to the right forever.
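For reference, the result being invoked is the Jordan–Kinderlehrer–Otto theorem; one way to state the special case relevant here:

```latex
% JKO (1998), special case: the heat equation is the gradient flow of
% H(rho) = \int rho log rho dx (negative differential entropy relative
% to Lebesgue measure) under the 2-Wasserstein metric.
\[
  \partial_t \rho \;=\; \Delta \rho
  \qquad \Longleftrightarrow \qquad
  \rho \ \text{is the $W_2$-gradient flow of} \
  H(\rho) = \int \rho \log \rho \, dx .
\]
% A barrier at the present is a no-flux (Neumann) boundary condition,
% which conserves total probability while mass piles up at "now":
\[
  \partial_x \rho(t, x_{\text{now}}) = 0 .
\]
```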
This isn’t the only way of increasing entropy, though, just a very natural one. Even if I have to keep the median fixed at 2050, by holding fixed the 0.5 probability mass to the left of 2050, I can still increase entropy forever by spreading only the probability mass to the right of 2050 out towards infinity.
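A tiny sketch of that construction (bin counts and masses hypothetical): pin slightly more than half the mass on 2021–2050, so the median is numerically unambiguous, then spread the remaining mass over an ever-wider right tail. The entropy grows without bound while the median never moves:

```python
import numpy as np

# Hedged sketch: entropy can grow forever with the median pinned at 2050.
def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

for width in (50, 500, 5000):
    # 0.51 mass fixed on 2021-2050 (slightly over half, so the median
    # stays unambiguous); 0.49 mass spread ever more thinly to the right.
    fixed = np.full(30, 0.51 / 30)
    tail = np.full(width, 0.49 / width)
    p = np.concatenate([fixed, tail])
    years = 2021 + np.arange(p.size)
    median = years[np.searchsorted(np.cumsum(p), 0.5)]
    print(width, median, round(entropy(p), 2))
# The median stays 2050 while the entropy keeps increasing with width.
```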