Also note that, IIRC, he only assigns about 10% to the em scenario happening in general? At least as of the writing of the book. I get the impression he just thinks about it a lot because it is the scenario that he, a human economist, can think about.
I have not read the book, but my memory is that in a blog post he said the probability is “at least” 10%. I think he holds a much higher number, but doesn’t want to speak about it and just wants to insist that even his hostile reader should accept at least 10%. In particular, if people respond “no, it won’t happen, it’s only 10%,” that’s not a rebuttal at all. But maybe I’m confusing that with other numbers, e.g., here where he says that it’s worth talking about even if it is only 1%.
Here he reports old numbers and new:
Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.
I now estimate an unconditional 80% chance of it being a useful guide,
I think that means he previously put 15% on ems in general and 5% on his em scenario (i.e., you were right).
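A rough back-of-the-envelope for how I read those two figures (my own reconstruction, not a breakdown he states): if the unconditional number is roughly the conditional number times the probability that his key assumptions (ems happening at all) hold, then

$$P(\text{ems}) \approx \frac{P(\text{useful})}{P(\text{useful} \mid \text{ems})} \approx \frac{0.05}{0.30} \approx 0.17,$$

which is in the right ballpark for “15% on ems in general”.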
80% on the specific scenario leaves little room for AI, let alone for AI destroying all value. So maybe he now puts that at <1%. But maybe he has just removed non-em, non-AI scenarios. In particular, you have to put a lot of weight on completely unanticipated scenarios; perhaps that weight has gone from 80% to 10%.
I’d expect his “useful guide” claim to be compatible with worlds that are entirely AGIs? He seems to think they’ll be subject to the same sorts of dynamics as humans, coordination problems and all that. I’m not convinced, but he seems quite confident.
(Personally, I think some coordination problems and legibility issues will always persist, but they’d be relatively unimportant, and focusing on them won’t tell us much about the overall shape of AGI societies.)