“What this thread is doing, is refuting a particular bad argument, quoted above, standard among e/accs, about why it’ll be totally safe to build superintelligence:”
It’s a tremendous rhetorical trick to point out, accurately, that disproving some piece of a particular argument for why AI will kill us all doesn’t mean that AI will be totally safe, then spend your time taking down arguments for safety without acknowledging that the same thing holds in reverse.
Consider any argument for why it will be safe as gesturing toward a universe of both plausible arguments and unknown reasons why things could turn out well: a superintelligence is almost by definition an alien intelligence, with motivations and modes of behavior impossible for us to predict, in either direction.
If every argument for why AI will kill us all were somehow refuted to everyone’s satisfaction, that wouldn’t mean we’re safe; we could easily die for an inscrutable reason. If none of them are refuted, that doesn’t mean any of them will come true either, and if no argument for safety convinces everyone, that doesn’t mean we are necessarily in danger.