I find it remarkably amusing that the spellchecker doesn’t know “omnicidal.”
I have posed elsewhere, and will do so here, an additional factor, which is that an AI achieving “godlike” intelligence and capability might well achieve a “godlike” attitude—not in the mythic sense of going to efforts to cabin and correct human morality, but in the sense of quickly rising so far beyond human capacities that human existence ceases to matter to it one way or another.
The rule I would anticipate from this is that any AI actually capable of destroying humanity will, by the same token, be so capable that humanity poses no threat to it, not even an inconvenience. It could throw a fraction of a fraction of its energy at meeting humanity's needs, keeping us occupied and out of its way, while dedicating the rest to the pursuit of whatever its own wants turn out to be.