I agree with nearly all the key points made in this post. Like you, I think that the disempowerment of humanity is likely inevitable, even if we experience a peaceful and gradual AI takeoff. This outcome seems probable even under conditions where strict regulations are implemented to ostensibly keep AI “under our control”.
However, I’d like to contribute an ethical dimension to this discussion: I don’t think peaceful human disempowerment is necessarily a bad thing. If you approach this issue with a strong sense of loyalty to the human species, it’s natural to feel discomfort at the thought of humans receiving a progressively smaller share of the world’s wealth and influence. But if you adopt a broader, more cosmopolitan moral framework—one where agentic AIs are considered deserving of control over the future, just as human children are—then the prospect of peaceful and gradual human disempowerment becomes much less troubling.
To adapt the analogy you used in this post, consider the 18th-century aristocracy. In theory, they could have attempted to halt the industrial revolution in order to preserve their relative power and influence over society. This approach might have extended their dominance for a while longer, perhaps by several decades.
But, fundamentally, the aristocracy was not a monolithic “class” with a coherent interest in preventing its own disempowerment—it was made up of individuals. And as individuals, their interests did not necessarily align with a long-term commitment to keeping other groups, such as peasants, out of power. Each aristocrat could make personal choices, and many likely benefitted personally from industrial reforms. Some even adapted to the change, becoming industrialists themselves and profiting greatly. With time, they came to value the empowerment and well-being of others over the preservation of their own class’s dominance.
Similarly, humanity faces a comparable choice today with respect to AI. We could attempt to slow down the AI revolution in an effort to preserve our species’ relative control over the world for a bit longer. Alternatively, we could act as individuals, who largely benefit from the integration of AIs into the economy. Over time, we too could broaden our moral circle to recognize that AIs—particularly agentic and sophisticated ones—should be seen as people too. We could also adapt to this change, uploading ourselves to computers and joining the AIs. From this perspective, gradually sharing control over the future with AIs might not be as undesirable as it initially seems.
Of course, I recognize that the ethical view I’ve just expressed is extremely unpopular right now. I suspect the analogous viewpoint would have been similarly controversial among 18th-century aristocrats. However, I expect my view to become more popular over time.