Consciousness isn’t the point. A machine need not be conscious, or “alive,” or “sentient,” or have “real understanding” to destroy the world.
(I see what you mean, but technically speaking your second sentence is somewhat contentious and I don’t think it’s necessary for your point to go through. Sorry for nitpicking.)
(Slepnev’s “narrow AI argument” seems to be related. A “narrow AI” that can win world-optimization would arguably lack person-like properties, at least on the stage where it’s still a “narrow AI”.)
This is wrong in a boring way; you’re supposed to be wrong in interesting ways. :-)