Consider these 5 states:
1) FAI
2) UFAI
3) Tech progress fails. No one is doing tech research.
4) We coordinate to avoid UFAI, and don’t know how to make FAI.
5) No coordination to avoid UFAI, no one has made one yet. (State we are currently in)
In the first 3 scenarios, humanity won’t be wiped out by some other tech. If we can coordinate around AI (state 4), I would suspect that we would also manage to coordinate around other black balls. (AI tech seems unusually hard to coordinate around: we don’t know where the dangerous regions are, tech near the dangerous regions is likely to be very profitable, and it is an object entirely of information, so it is easily copied and hidden.) In state 5, it is possible for some other black ball to wipe out humanity.
So conditional on some black ball tech other than UFAI wiping out humanity, the most likely scenario is that it arrived sooner than UFAI could. I would be surprised if humanity stayed in state 5 for the next 100 years. (I would be most worried about grey goo here.)
The other thread of possibility is that humanity coordinates to stop UFAI from being developed, and then gets wiped out by something else. This requires an impressive amount of coordination. It also requires that FAI isn’t developed (or is stopped by the coordination to avoid UFAI). Given this happens, I would expect that humans had gotten better at coordinating, that people who cared about X-risk were in positions of power, and that standards and precedents had been set. Anything that wipes out a humanity that well coordinated would have to be really hard to coordinate around.
This sounds roughly right to me. There is the FAI/UFAI threshold of technological development, and after humanity passes that threshold, it’s unlikely that coordination will be a key bottleneck in humanity’s future. Many who think multi-polar worlds are more likely, and that AGI systems may not cooperate well, would disagree with this take, but I think the view is roughly correct.
The main thing I’m pointing at in my post is 5) and the 3)-to-5) transition. It seems quite possible to me that SAI will be out of reach for a while due to hardware development slowing, and that the application of other technologies could threaten humanity in the meantime.