For indeed in a case like this, one first backs up and asks oneself “Is Humbali right or not?” and not “How can I prove Humbali wrong?”
Gonna write up some of my thoughts here without reading on, and post them (also without reading on).
I don’t get why Humbali’s objection has not already been ‘priced in’. Eliezer has a bunch of models and info, and his gut puts the timeline at before 2050. I think “what if you’re mistaken about everything” is an argument Eliezer has already considered, so it’s already priced into the prediction. You’re not allowed to just repeatedly use that argument until such a time as a person is maximally uncertain. (Nor are you allowed to keep using it until the person starts to agree with the position of the person in the room with more prestige.)
I also think this bit is blatantly over-the-top (to the extent of being a bit heavy-handed on Eliezer’s part):
“Humbali: Okay, so you’re more confident about your AGI beliefs, and OpenPhil is less confident. Therefore, to the extent that you might be wrong, the world is going to look more like OpenPhil’s forecasts of how the future will probably look, like world GDP doubling over four years before the first time it doubles over one year, and so on.”
“Maximum-entropy” and “OpenPhil’s forecasts” are not the same distribution. It is not clear to me that “OpenPhil’s forecasts” look closer to maximum-entropy than EY’s. I imagine OpenPhil’s forecasts put a lot less probability on shorter timelines (e.g. ~5 years).
(And potentially less on much longer timelines? Not sure here, but I do think that “smoothly follows the graph” tends to predict fewer strange things, both things that would knock human civilization back 100 years and things that would bring us AGI in 5 years.)
In fact, as I understand it from the essay above, “OpenPhil’s forecasts” involve taking maximum-entropy and then updating heavily on a number that isn’t relevant. I don’t know that I expect this to be more accurate than EY’s gut (conditioned on a bunch of observations), so updating ‘toward it’ given generic uncertainty is wrong.
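To make that distinction concrete, here is a toy sketch (all numbers invented for illustration, not anyone’s actual forecast): a “smoothly follows the graph” style distribution concentrated around mid-range dates has noticeably lower entropy than the uniform, maximum-entropy distribution over the same range, so “become less confident” and “move toward OpenPhil’s forecast” are not the same instruction.

```python
import numpy as np

# Toy, made-up distributions over "decades until AGI"
# (buckets: 0-10y, 10-20y, ..., 90-100y). Illustrative numbers only.

# A "smoothly follows the graph" style forecast: almost no mass on <10y,
# peaked around the 30-50 year buckets.
forecast = np.array([0.01, 0.05, 0.15, 0.25, 0.22, 0.14, 0.08, 0.05, 0.03, 0.02])

# The maximum-entropy distribution over the same buckets is just uniform.
max_ent = np.full(10, 0.1)

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability buckets."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

print(f"forecast entropy: {entropy(forecast):.2f} bits")  # ~2.84 bits
print(f"uniform entropy:  {entropy(max_ent):.2f} bits")   # log2(10) ~ 3.32 bits
```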
I don’t have a detailed knowledge of probability theory, and if you gave me a bunch of university exam questions using maximum-entropy distributions I’d quite likely fail to answer them correctly. It’s definitely on the table that Humbali knows it better than me, so I have tried to not have my arguments depend on any technical details. Something about my points might be wrong nonetheless because Humbali understands something I don’t.
Hmm, alas, stopped reading too soon.

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one’s probability distribution over AGI, thereby moving out its median further away in time?
I’ll add a quick answer: my gut says technically true, but that mostly I should just look at the arguments, because they provide more weight than the prior. Strong evidence is common. It seems plausible to me that the prior over ‘number of years away’ should make me predict it’s more like 10 trillion years away or something, but getting to observe humans and the industrial revolution has already moved me to “likely in the next one thousand years”, such that remembering this prior isn’t very informative any more.
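To put rough numbers on the “strong evidence swamps the prior” intuition, here is a minimal Bayesian sketch; the order-of-magnitude prior and the 200x likelihood ratio are assumptions I’ve invented for illustration, not anything taken from the post.

```python
import numpy as np

# Toy Bayesian sketch; all numbers here are invented for illustration.
# Prior: spread "years until AGI" evenly across orders of magnitude,
# from 10^0 up to 10^13 years, an ignorant prior whose median sits
# absurdly far in the future.
orders = np.arange(14)                         # exponents: 10^0 ... 10^13 years
prior = np.full(len(orders), 1 / len(orders))

# Likelihood: assume (as a stand-in for "we got to observe humans and the
# industrial revolution") that the evidence is 200x more likely if AGI
# arrives within 10^3 years than if it takes longer.
likelihood = np.where(orders <= 3, 200.0, 1.0)

posterior = prior * likelihood
posterior /= posterior.sum()

within_1000y = orders <= 3
print(f"prior     P(within 1000 years): {prior[within_1000y].sum():.3f}")      # ~0.286
print(f"posterior P(within 1000 years): {posterior[within_1000y].sum():.3f}")  # ~0.988
```

Under those made-up numbers the posterior is dominated by the evidence, which is the sense in which remembering the ignorant prior stops being informative.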
My answer: technically true but practically irrelevant.