Of course, capital is useful for exerting influence now. Though I would suggest that to have a noticeable impact on events, one needs capital or power on a scale that is inaccessible to the vast majority of the population.
But could we end up in a world where the richest 1% or 0.1% survive and everyone else dies? Unlikely. Even if property rights were respected, such a world would have to descend into a mad hell.

Even a world in which only people like Sam Altman and their entourage survive the singularity seems more likely.

But the most likely outcomes seem to be either the extinction of everyone or the survival of almost everyone, with no strong correlation to current well-being. Am I mistaken?
I think the answer is yes, and the main way I could see this happening is that we live in an alignment-is-easy world where property rights are approximately respected for the rich (because they can create robotic armies/supply lines to defend themselves), but no one else's property rights are respected.
I think the core crux is that I expect alignment to be reasonably easy. I also think that, absent massive reform (which is unfortunately not that likely), the mechanisms that allow capitalism to help humans by transforming selfish actions into other people's well-being will erode rapidly. Once you can build a robot workforce that doesn't require humans, surviving comes to depend on assumptions about the benevolence of those who control it, and we are really bad at making political systems work when we have to assume benevolence/trustworthiness.
To be fair, I do agree with this:
Of course, capital is useful for exerting influence now. Though I would suggest that to have a noticeable impact on events, one needs capital or power on a scale that is inaccessible to the vast majority of the population.
I would expect that the absence of a global catastrophe for ~2 years after the creation of AGI would increase the chances of most people's survival, especially in a scenario where alignment was easy.

After all, there would then be time for political and popular action. We can expect something strange when politicians and their voters finally understand the existential horror of the situation!

I don't know. Attempts to ban all AI? The Butlerian jihad? Nationalization of AI companies? Revolutions and military coups? Everything seems possible.

If AI respects the right to property, why shouldn't it respect the right to a UBI if such a law is passed? The rapid growth of the economy will make it possible to feed many.

In fact, a world in which someone shrugs their shoulders and allows 99% of the population to die seems obviously unsafe for the remaining 1%.
I think the crux is that I don't believe political will/popular action will matter until AI can clearly automate ~all jobs, for both reasonable and unreasonable reasons, and by default that is far too late to do much of anything; the point of no return will have come much earlier.
For political action to be useful, it needs to happen while there are merely real signs that AI could, for example, automate AI research, not after that has already happened.