Most of your arguments hinge on it being difficult to develop superintelligence. But superintelligence is not a prerequisite for AGI destroying all of humanity. This is easily shown by the fact that humans already have the capability to destroy all of humanity (nukes and bioterrorism are only two ways).
You may argue that if the AGI is only human-level, we can thwart it. But that doesn't seem obvious to me, primarily because of how easily AGI can self-replicate. Imagine a billion human-level intelligences suddenly popping up on the internet with the intent to destroy humanity. They're not 100% certain to succeed, but it seems pretty likely to me that they would.
This is a fair point, and a rather uncontroversial one: increasing capabilities in any area lowers the threshold for a corresponding calamity. But this seems like a rather general argument, no? In this case it would run something like "imagine everyone having a pocket nuke or a virus synthesizer".