I am familiar with the libertarian argument that if everyone has more destructive power, society is safer. The analogous position would be that if everyone pursues (Friendly) AGI vigorously, existential risk would be reduced. That might well be reasonable, but as far as I can tell, that's NOT what is advocated.
Rather, we are all asked to avoid AGI research (and instead go into software development, make money, and donate? How much safer is general software development for a corporation than careful AGI research?) and to sponsor SIAI/EY's (Friendly) AGI research, while SIAI/EY remains fairly close-mouthed about it.
It just seems to me like it would take a terribly delicate balance of probabilities to make this the safest course forward.