He does not actually do that.
In this post, he conditions on malicious AI without ever motivating the malice.
I have many disagreements with his concrete scenario, but by assuming malice from the outset, he completely sidesteps my question. My question is:
Why would an AI indifferent to human wellbeing decide on omnicide?
The question Karnofsky answers is:
How would a society of AIs that were actively malicious towards humans defeat them?
These are two very different questions.
But I think your question about the latter is based on the assumption that AI would need human society/infrastructure, whereas I think Karnofsky makes a convincing case that the AI could create its own society/enclaves, etc.