Something that concerns me is that transformative AI will likely be a powerful destabilizing force, one that places countries currently behind in AI development (e.g. Russia and China) in a difficult position. Their governments can see that peacefully adhering to the status quo may lead to rapid disempowerment, and that coercive action to interfere with that disempowerment is comparatively available to them. It is pretty clearly easier and cheaper to destroy chip fabs than to build them, and easier to kill tech employees with potent engineering skills than to train new ones.
I agree that conditions of war make safe transitions to AGI harder and make people more likely to accept higher risk. I don't see what to do about the fact that the development of AI power is itself creating pressures toward war. This seems bad, but I don't know what I can do to make the situation better.