A big difference is that, assuming we're talking about futures in which AI doesn't lead to catastrophic outcomes, no one will be forcibly mandated to do anything.
Why do you believe this? It seems to me that in the unlikely event that the AI doesn't exterminate humanity, it's much more likely to be aligned with the expressed values of whoever has their hands on the controls at the moment of no return than with an overriding commitment to universal individual choice.