Really? Huh. To me that seems both pretty world-endy and strongly against the spirit of what was implied by your original statement… would you predict this outcome? Is it something that your model allows to happen? I know it’s not something I would feel compelled to make excuses for—more like “I TOLD YOU SO!”
What exactly do you think happens in the scenario described?
Ok, if you’re sufficiently worried about the possibility of that outcome, I’ll be happy to grant it to your side of the bet… even though at the time, it seemed to me clear that your assertion that the world would end meant that we wouldn’t continue as conscious beings.
I definitely wouldn’t predict that outcome. I would be very surprised, since I think the world will continue in the usual way. But is it really that likely even on your model?
It’s part of a larger class of scenarios where “AI has the power and desire to kill us with a fingersnap, but our lives are ransomed by someone else with the ability to make paperclips”.