I see your concern. I don’t think that people who are currently disadvantaged will remain behind in the expansion toward infinity. If the rich who create AGI control the future and choose to be dicks about it, all of the rest of us are screwed, not just the currently-poor.
If those who create AGI choose to use it for the common good, it will very quickly elevate the poor to equality with the current top .01%, in terms of educational opportunities. And they will be effectively wealthier than the top .01%.
That’s why I see working on AGI alignment (and the societal alignment sub-problem: trying to make sure the people who control aligned AGI/ASI aren’t total dicks, or so foolish about it that we all die anyway) as by far the most likely thing we can do to make the world better for the disadvantaged.
Because we are not remotely on top of this shit, so there’s a very good chance we all get oblivion instead of paradise-on-earth. And all of us have finite time and energy to spend.