Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one’s probability distribution over AGI, thereby moving out its median further away in time?
My answer to this is twofold.

First, no update whatsoever should take place, because a probability distribution already expresses uncertainty, and there is no mechanism by which the uncertainty has increased. (Adele Lopez independently, and earlier, came up with the same answer.)
Second, if there were an update (say, EY learned that "one of the steps used in my model was wrong"), this should indeed change the distribution. However, it should change it toward the prior distribution. It is entirely unclear what that prior distribution is, but there is no rule whatsoever that says "more entropy = more prior-y." A uniform distribution over the next 10,000 years shows why: it has extremely high entropy, yet it makes the ludicrously confident prediction that AGI almost certainly will not arrive within the next 50 years (it assigns that window only 0.5% probability).
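To make the uniform-distribution point concrete, here is a minimal sketch. The year-granularity buckets and the 50-year window are my choices for illustration; the text only specifies "uniform over the next 10,000 years."

```python
import math

# Uniform distribution over "AGI arrives in year t" for the next 10,000 years,
# one bucket per year.
n_years = 10_000
p_per_year = 1 / n_years

# Entropy in bits: the maximum possible for 10,000 outcomes.
entropy_bits = -sum(p_per_year * math.log2(p_per_year) for _ in range(n_years))

# Yet the same distribution assigns only 0.5% to AGI within the next 50 years,
# i.e., 99.5% confidence that it will NOT happen in that window.
p_next_50 = 50 * p_per_year

print(f"entropy: {entropy_bits:.1f} bits")          # -> entropy: 13.3 bits
print(f"P(AGI within 50 years): {p_next_50:.3f}")   # -> 0.005
```

So maximizing entropy is not the same as withholding judgment: the high-entropy distribution still bets heavily against near-term AGI.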
See also Information Charts (second chapter). Being underconfident, or losing confidence, does not have to shift your probability toward the 50% mark; it shifts it toward the prior, from wherever it was before, and the prior can be literally any probability. If it were universally held that AGI happens in 5 years, then this could be the prior, and updating downward on EY's gears-level model would move the probability toward shorter timelines.
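One simple way to formalize "updating toward the prior" is a linear mixture between the gears-level model and the prior; this is my illustrative choice, not something the original specifies, and the numbers below are hypothetical.

```python
# Model "losing confidence in a gears-level model" as shrinking the weight w
# placed on that model, with the remaining mass 1 - w going to the prior.
def credence(w: float, model: float, prior: float) -> float:
    """Blend the gears-level model's credence with the prior, weighted by w."""
    return w * model + (1 - w) * prior

model_says = 0.2   # hypothetical: the gears model gives 20% to "AGI within 5 years"
prior_says = 0.9   # hypothetical: the universally held prior gives 90%

for w in (0.8, 0.5, 0.2):
    print(f"w={w}: P(AGI within 5 years) = {credence(w, model_says, prior_says):.2f}")
# w=0.8 -> 0.34, w=0.5 -> 0.55, w=0.2 -> 0.76:
# less confidence in the model moves the credence toward 0.9, not toward 0.5.
```

Under these assumptions, losing confidence in the model pushes the estimate toward quicker timelines, exactly as the paragraph above claims.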