“reality rarely calls for that kind of nigh-paradoxical action”
In my experience, reality frequently presents scenarios where the best way to improve my ability to defend myself also improves my ability to harm others, should I choose to. So the idea doesn’t seem that implausible to me.
Indeed, militaries are pretty much built on this principle, and are fairly common.
But, sure… there are certainly alternatives.
I am familiar with the libertarian argument that society is safer when everyone has more destructive power. The analogous position would be that if everyone pursued (Friendly) AGI vigorously, existential risk would be reduced. That might well be reasonable, but as far as I can tell, that’s NOT what is being advocated.
Rather, we are all asked to avoid AGI research ourselves (and instead go into software development, make money, and donate? How much safer is general software development for a corporation than careful AGI research?) and to sponsor SIAI/EY’s (Friendly) AGI research, while SIAI/EY remains fairly closed-mouthed about it.
It just seems to me that it would take a terribly delicate balance of probabilities for this to be the safest course forward.