I don’t understand the logic jump from point 5 to point 6, or at least the probability of that jump. Why doesn’t the AI decide to colonise the universe for example?
If an AI can ensure its survival with sufficient resources (for example, ‘living’ where humans aren’t, such as the asteroid belt), then the likelihood of the 5 ➡ 6 transition seems low.
I’m not clear how you’re estimating the likelihood of that transition, and what other state transitions might be available.
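As a minimal sketch of what I mean, here is a toy decomposition of the 5 ➡ 6 step into competing next states. All of the state names and numbers below are hypothetical placeholders of mine, not estimates from the original post; the point is only that the argument needs an explicit P(5 ➡ 6) relative to the alternatives.

```python
# Toy sketch (all numbers hypothetical): framing the 5 -> 6 question as a
# choice over competing next states rather than a single inevitable transition.
# "coexist_off_world" stands in for the asteroid-belt scenario mentioned above.

candidate_transitions = {
    "6: disempower_humans": 0.2,        # placeholder guess, not an estimate from the post
    "coexist_off_world": 0.5,           # AI secures resources where humans aren't
    "colonize_space_for_humans": 0.3,   # expansion in the service of human preferences
}

# Probabilities over next states should sum to 1.
assert abs(sum(candidate_transitions.values()) - 1.0) < 1e-9

# The question is how the 5 -> 6 probability is being estimated
# relative to these alternatives, not just whether 6 is possible.
for state, p in sorted(candidate_transitions.items(), key=lambda kv: -kv[1]):
    print(f"P(5 -> {state}) = {p:.2f}")
```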
Why doesn’t the AI decide to colonise the universe for example?
It could decide to do that. The question is just whether space colonization is performed in the service of human preferences or non-human preferences. If humans control 0.00001% of the universe, and we’re only kept alive because a small minority of AIs pay some resources to preserve us, as if we were an endangered species, then I’d consider that “human disempowerment”.
Sure, although you could rephrase “disempowerment” as “the current status quo”, which I imagine most people would be quite happy with.
The delta between [disempowerment/status quo] and [extinction] appears vast (essentially infinite). The conclusion that Scenario 6 is “somewhat likely” and would be “very bad” doesn’t seem to consider that delta.
I agree with you here to some extent. I’m much less worried about disempowerment than extinction. But the way we get disempowered could also be really bad: I’d rather humanity not end up like a pet in a zoo.
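To make the “delta” point concrete, here is a toy expected-value sketch. The utility numbers are entirely hypothetical and chosen only to illustrate the two claims above: the gap between disempowerment and extinction can be large, while some forms of disempowerment (the pet-in-a-zoo case) sit much closer to the bottom than the status-quo-like case.

```python
# Toy sketch (all utilities hypothetical, on a 0-1 scale) of the delta
# between disempowerment outcomes and extinction discussed above.

utilities = {
    "human-directed future": 1.0,
    "disempowerment (status-quo-like)": 0.8,   # roughly today's condition
    "disempowerment (pet-in-a-zoo)": 0.1,      # the reply's worry about *how* it happens
    "extinction": 0.0,
}

# The delta relative to extinction differs a lot between the two
# disempowerment variants, which is why lumping them together obscures the argument.
for outcome, u in utilities.items():
    if outcome != "extinction":
        print(f"delta({outcome} vs extinction) = {u - utilities['extinction']:.1f}")
```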