I disagree that the point about the scientists not realizing nuclear weapons would likely become an existential risk changes what I see as the correct choice. Suppose I were a scientist in the USA with the choice either to help the US government build nuclear weapons, and thus set the world up for a tense, potentially existential détente between the US and its enemies (Nazis and/or communists and/or others), or… not help. It still seems clearly correct to me to help, since I think a dangerous détente is a better option than only the Nazis or only Stalin having nuclear weapons.
In the current context, I do think there is important strategic game-theory overlap with those times, since AI (whether AGI or not) seems likely to disrupt the long-standing nuclear détente in the next few years. I expect that whichever government controls the strongest AI five years from now, if not sooner, will also be nearly immune to long-range missile attacks, conventional military threats, and bioweapons, while being able to deploy those things (or a wide range of other coercive technologies) at will against other nations.
Point of clarification: I didn’t mean that there should be a rule against helping one’s country race to develop nukes. The argument I’m making is that humans should have a rule against helping one’s country race to develop nukes that one expects, by default, to (say) ignite the atmosphere and kill everyone, and for which there is no known countermeasure.
Ah, yes. Well that certainly makes sense! Thanks for the clarification.