Could the Maxipok rule have catastrophic consequences? (I argue yes.)
Here I argue that following the Maxipok rule could have truly catastrophic consequences.
Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of “latent agential risks.”
And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the “threat of universal unilateralism” and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.
I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)