Ok, if you’re sufficiently worried about the possibility of that outcome, I’ll be happy to grant it to your side of the bet… even though at the time, it seemed clear to me that your assertion that the world would end meant that we wouldn’t continue as conscious beings.
I definitely wouldn’t predict that outcome. I would be very surprised, since I think the world will continue in the usual way. But is it really that likely even on your model?
It’s part of a larger class of scenarios where “AI has the power and desire to kill us with a fingersnap, but our lives are ransomed by someone else with the ability to make paperclips”.