In the last year it has really hit me at a personal level what graphs like these mean. I’m imagining driving down to Mountain View, a town once filled with people who had “made it”, and seeing a ghost town. No more jobs, no more prestige, no more promise of a stable life. As the returns to capital grow exponentially and the returns to labor decline to zero, the gap between the haves and the have-nots will only grow.
If someone can actually get superintelligence to do what they want, then perhaps universal basic income can at the very least prevent actual starvation and maybe even provide a life of abundance.
But I can’t help feeling that such a situation is fundamentally unstable. If the government’s desires become disconnected from those of the people at any point, by what mechanism can balance be restored?
In the past, the government was fundamentally reliant on its citizens for one simple reason: citizens produced taxable revenue.
That will no longer be the case. Every country will become a petro state on steroids.
I’m imagining driving down to Mountain View, a town once filled with people who had “made it”, and seeing a ghost town
I’m guessing that people who “made it” have a bunch of capital that they can use to purchase AI labor under the scenario you outline (i.e., someone gets superintelligence to do what they want).
But I can’t help feeling that such a situation is fundamentally unstable. If the government’s desires become disconnected from those of the people at any point, by what mechanism can balance be restored?
I’m not sure I’m getting the worry here. Is it that the government (or whoever directs superintelligences) is going to kill the rest for the same reasons we worry about misaligned superintelligences, or that they’re going to enrich themselves while the rest starves (but otherwise not consuming all useful resources)? If it’s the second scenario you’re worrying about, that seems unlikely to me, because even as a few parties hit the jackpot, the rest can still deploy whatever capital they have left. Even if they had no capital to purchase AI labor, they would still organize amongst themselves to produce the useful things they need, and they would form a separate market until they also get to superintelligence, which in that world should happen pretty quickly.
I’m guessing that people who “made it” have a bunch of capital that they can use to purchase AI labor under the scenario you outline (i.e., someone gets superintelligence to do what they want).
If the superintelligence is willing to deprive people of goods and services because they lack capital, then why would it be empathetic towards those who have capital? The superintelligence would be both a monopsony and a monopoly, and could charge any amount for someone to exist for even an arbitrarily short amount of time. That’s assuming it even respects property law when it is aligned with its creators.
Is it that the government (or whoever directs superintelligences) is going to kill the rest for the same reasons we worry about misaligned superintelligences
“Kill” is such a dirty word. Just not grant them the means to sustain themselves.
or that they’re going to enrich themselves while the rest starves (but otherwise not consuming all useful resources)? If it’s the second scenario you’re worrying about, that seems unlikely to me, because even as a few parties hit the jackpot, the rest can still deploy whatever capital they have left. Even if they had no capital to purchase AI labor, they would still organize amongst themselves to produce the useful things they need, and they would form a separate market until they also get to superintelligence, which in that world should happen pretty quickly.
Why would capital owners with a superintelligence ever let those without capital build their own superintelligence? That sounds like a recipe for AI war—are the poors really going to program their superintelligence with anything other than the fundamental rejection of the concept of capital ownership in a post-scarcity society?
The government is also reliant on its citizens not violently protesting, which is what would happen if things got to the point you describe.
The idealist in me hopes that eventually those with massive gains in productivity/wealth from automating everything would want to start doing things for the good of humanity™, right? …Hopefully that point comes long before large-scale starvation.
Have we solved world hunger by giving 1% of GDP to the global poor? Also, note it’s not obvious that ASI can be aligned.
Isn’t it a distribution problem? World hunger has almost disappeared, however. (The issue is that hungrier nations have more kids, so progress is a bit hidden.)
Wikipedia: in 2023 there were 733 million people suffering from hunger. That’s 9% of the global population. Most of these people simply don’t have the money to buy food. That’s a ‘distribution problem’ for money, in the sense that we don’t give it to them. Also, world hunger is actually rising again.
Some more data: https://www.linkedin.com/posts/ottobarten_about-700-million-people-in-the-world-cannot-activity-7266965529762873344-rvqK
We could easily solve this if we wanted to, but apparently we don’t. That’s one data point for why I fear intent-aligned superintelligence.
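A quick sanity check on that 9% figure, assuming a 2023 world population of roughly 8.1 billion (a number not stated in the thread):

$$\frac{733 \times 10^{6}}{8.1 \times 10^{9}} \approx 0.09 \approx 9\%$$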
A lot of them are trapped in corrupt systems that are very costly to change, and where ethics concerns block change. We have the money to feed them, but it would take far more money to turn a bunch of African countries into stable democracies. Overthrowing dictatorships might also raise ethics concerns about colonialism.
The easiest solution would just be lots of immigration, but host populations reject that because of our evolutionary peculiarities.
I agree that changing systems is difficult. But providing basic means isn’t, really. I personally think we should feed starving people even if they live in a dictatorship.
The point is the money or food just won’t get to them. How do you send food to a region in a civil war between 2 dictators?