And then, when humans are worth more to the advance of this AI’s radically changed goals as mere atoms than for all the things we can do, it simply kills us all.
I agree with this, though again I think the “changed” can be omitted.
Secondly, I also think it’s possible that, rather than the unaligned superintelligence killing us all in the same second as EY often says, it may kill us off gradually, the way humans kill off other species (i.e. we know we are doing it, but it doesn’t look like a war).
Noting that this part doesn’t seem necessary to me. The agent may be misaligned before the capability gain.
Re: my last point, see Ben Weinstein-Raun’s vision here: https://twitter.com/benwr/status/1646685868940460032