Personally, I have gradually moved to seeing this as lowering my p(doom). I think humanity’s best chance is to politically coordinate to globally enforce strict AI regulation. I think the most likely route to this becoming politically feasible is through empirical demonstrations of the danger of AI. I think AI is more likely to be legibly, empirically dangerous to political decision-makers if it is used in the military. Thus, I think military AI is, counter-intuitively, lowering p(doom). A big accident in which military AI killed thousands of innocent people the military had not intended to kill could, grimly, do a great deal to lower p(doom).
This is a sad thing to think, obviously. I’m hopeful we can come up with harmless demonstrations of the dangers involved, so that political action will be taken without anyone needing to be killed.
In scenarios where AI becomes powerful enough to present an extinction risk to humanity, I don’t expect the level of robotic weaponry it controls to matter much. It will have many, many opportunities to hurt humanity that look nothing like armed robots and greatly exceed their power.