I want to focus on your second question: “Human coordination ability seems within an order of magnitude of what’s needed for AI safety. Why the coincidence? (Why isn’t it much higher or lower?)”
Bottom line up front: Humanity has faced a few potentially existential crises in the past: world wars, nuclear standoffs, and the threat of biological warfare. The fact that we survived those, plus selection bias, seems like a sufficient explanation of why we are near the threshold for our current crises.
I think this is a straightforward argument. At the same time, I’m not going to get deep into the anthropic reasoning, which is critical here but which I’m not clear enough on to discuss carefully. (Side note: Stuart Armstrong recently mentioned to me that there are reasons, which I’m not yet familiar with, for why anthropic shadows aren’t large; the model below assumes as much.)
If we assume that large-scale risks are distributed in some manner, such as draws from Bostrom’s urn of technologies (see the Vulnerable World Hypothesis paper), we should expect that the attributes of the problems, including the coordination needed to withstand or avoid them, are distributed with some mean and variance. Whatever that mean and variance are, we expect that there should be more “easy” risks (near or below the mean) than “hard” ones. Unless the tail is very, very fat, this means that we are likely to see several moderate risks before we see more extreme ones. For a toy model, let’s assume risks show up at random yearly and follow a standard normal distribution in terms of the capability needed to overcome them. If we had capability in the low single digits, we would already have been wiped out with high probability. Given that we’ve come worryingly close, however, it seems clear that we aren’t in the high double digits either.
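Here is a minimal sketch of that toy model, assuming one risk per year, a required capability drawn from a standard normal, and a fixed capability level c; the 100-year horizon and the 0.5 “close call” margin are illustrative choices of mine, not part of the original argument:

```python
# Toy model sketch: one risk per year, each requiring a coordination
# "capability" drawn from a standard normal; we survive a year iff our
# fixed capability c exceeds that year's draw.
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_survive(c: float, years: int = 100) -> float:
    """Chance of surviving `years` independent yearly risks with capability c."""
    return phi(c) ** years

def p_close_call(c: float, margin: float = 0.5) -> float:
    """Chance a given year's risk lands just below our capability, in (c - margin, c]."""
    return phi(c) - phi(c - margin)

for c in (1.0, 2.0, 3.0, 5.0):
    print(f"c={c}: P(survive 100 years) = {p_survive(c):.3g}, "
          f"P(close call in a year) = {p_close_call(c):.3g}")
```

Under these assumptions, low single-digit capability makes survival over a century unlikely, while capability far above the mean makes survival near-certain but close calls vanishingly rare; having survived yet experienced close calls points to a band in between, which is the bracketing argument above.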
Given all of that, and the selection bias of asking the question when faced with larger risks, I think it’s a posteriori likely that most salient risks we face are close to our level of ability to overcome them.
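To illustrate the selection-bias point, a rough Monte Carlo sketch of the same toy model: conditional on a history surviving, how far below its capability did the worst risk it faced fall? The trial count and the gap statistic are illustrative choices, and this ignores any anthropic-shadow correction, per the caveat above:

```python
# Selection-bias sketch: among simulated histories that survive, measure the
# gap between capability c and the largest risk ever encountered.
import random

def worst_risk_gap(c: float, years: int = 100, trials: int = 20_000):
    """Return (survival frequency, mean gap between c and the worst risk faced,
    averaged over surviving histories)."""
    gaps = []
    for _ in range(trials):
        worst = max(random.gauss(0.0, 1.0) for _ in range(years))
        if worst <= c:  # this history survived every year
            gaps.append(c - worst)
    p_survived = len(gaps) / trials
    mean_gap = sum(gaps) / len(gaps) if gaps else float("nan")
    return p_survived, mean_gap

for c in (2.0, 3.0, 4.0):
    p, gap = worst_risk_gap(c)
    print(f"c={c}: P(survive) ~ {p:.3f}, mean gap to worst risk faced ~ {gap:.2f}")
```

Unless capability sits far above the mean risk, the surviving histories’ worst risks cluster just below their capability, which is the sense in which the salient risks a surviving civilization remembers look close to what it can handle.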
Would the calculation be moved by this being the last crisis we will face?
No, that’s implicit in the model: either *some* crisis requiring higher capacity than we have will overwhelm us and we’ll all die (and it doesn’t matter which one), or the variance is relatively small so no such event ever occurs, and/or our capacity to manage risks grows quickly enough that we stay ahead of the upper tail.
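As a rough illustration of that last branch, the same toy model with capability growing linearly over time; the starting level and growth rates below are arbitrary placeholders, not anything from the original argument:

```python
# Variant sketch: capability starts at c0 and rises by `growth` per year, so
# later (and rarer) tail risks are met with more coordination ability.
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_survive_growing(c0: float, growth: float, years: int = 100) -> float:
    """Survival probability over `years` yearly risks with linearly growing capability."""
    p = 1.0
    for t in range(years):
        p *= phi(c0 + growth * t)
    return p

for growth in (0.0, 0.02, 0.05):
    print(f"c0=2, growth={growth}/yr: P(survive 100 years) = "
          f"{p_survive_growing(2.0, growth):.3f}")
```

Even modest yearly growth in capability pushes the survival probability up substantially relative to the static case, which is the sense in which growing capacity lets us avoid the upper tail.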