This paper seems (to me) to do a very good job of laying out its assumptions and the reasoning based on them. For that, I applaud it, and I’m very glad it exists. I specifically disagree with several of the key assumptions, like the idea that “human level” is a “wide ‘middle’ of the range of AI capabilities.” I very much appreciate that it shows that, even granting no decisive strategic advantage, there is strong reason to believe the usual mechanisms that reduce and limit war among humans mostly won’t apply.
At some point I’d like to see similar reasoning applied to the full decision tree of assumptions. I think most of the assumptions are unnecessary, in the sense that every plausible option at a given node leads to some form of “Humans most likely lose badly.” I don’t think most people would read such a thing, or properly understand it, but you could show them the big graph at the end that says something like, “Out of 8192 paths AGI development could take, all but 3 lead to extinction or near-extinction, and the only one of those paths where we have control over which outcome we get is to not build AGI.”