Why doesn’t the AI decide to colonise the universe, for example?
It could decide to do that. The question is just whether space colonization is performed in the service of human preferences or non-human preferences. If humans control 0.00001% of the universe, and we’re only kept alive because a small minority of AIs pay some resources to preserve us, as if we were an endangered species, then I’d consider that “human disempowerment”.
Sure, although you could rephrase “disempowerment” as “current status quo”, which I imagine most people would be quite happy with.
The delta between [disempowerment/status quo] and [extinction] appears vast (essentially infinite). The conclusion that Scenario 6 is “somewhat likely” and would be “very bad” doesn’t seem to consider that delta.
I agree with you here to some extent. I’m much less worried about disempowerment than extinction. But the way we get disempowered could also be really bad. Like, I’d rather humanity not be like a pet in a zoo.