I agree that the negative outcomes from technological unemployment do not get enough attention but my model of how the world will implement Transformative AI is quite different to yours.
Our current society doesn’t say “humans should thrive”; it says “professional humans should thrive”.
Let us define workers to be the set of humans whose primary source of wealth comes from selling their labour. This is a very broad group that includes people colloquially called working class (manual labourers, baristas, office workers, teachers etc) but we are also including many people who are well remunerated for their time, such as surgeons, senior developers or lawyers.
Ceteris paribus, there is a trend that those who can perform more valuable, difficult and specialised work can sell their labour at a higher value. Among workers, those who earn more are usually “professionals”. I believe this is essentially the same point you were making.
However, this is not a complete description of who society allows to “thrive”. It neglects a small group of people with very high wealth. This is the group of people who have moved beyond needing to sell their labour and instead are rewarded for owning capital. It is this group that society says should thrive, and one of the strongest predictors of whether you will be a member is the amount of wealth your parents give you.
The issue is that this small group owns a disproportionate share of the equity in frontier AI companies.
Assuming we develop techniques to reliably align AGIs to arbitrary goals, there is little reason to expect private entities to intentionally give up power (doing so would be acting contrary to the interests of their shareholders).
Workers unable to compete with artificial agents will find themselves relying on the charity and goodwill of a small group of elites. (And of course, as technology progresses, this group will eventually include all workers.)
Those lucky enough to own substantial equity in AI companies will thrive as the majority of wealth generated by AI workers flows to them.
In itself, this scenario isn’t an existential threat. But I suspect many humans would consider it a very bad outcome for their descendants to be trapped in serfdom.
I worry a focus on preventing the complete extinction of the human race means that we are moving towards AI Safety solutions which lead to rather bleak futures in the majority of timelines.[1]
My personal utility function rates permanent techno-feudalism, forever removing the agency of the majority of humans, as only slightly better than everyone dying.
I suspect that some fraction of humans currently alive also consider a permanent loss of freedom to be only marginally better (or even worse) than death.
It’s somewhat intentional that I say “professionals” instead of “workers”, because if I understand correctly, by now the majority of the workforce in the most developed countries is made up of white-collar workers. I think AI is especially relevant to us because professionals rely almost exclusively on intelligence, knowledge and connections, whereas e.g. cooks also rely on dexterity. (Admittedly, AI will probably progress simultaneously with robots, which will hit people who do more hands-on work too.)
I think in the scenario you describe, one of the things that matters most is how well the police/military can keep up with the AI frontier. If states maintain sovereignty (or at least, the US maintains hegemony), people can “just” implement a wealth tax to distribute the goods from AI.
Under that model, the question then becomes who to distribute the goods to. I guess the answer would end up being “all citizens”. Hm, I was about to say “in which case we are back to the ‘but wouldn’t people’s capabilities degenerate into vestigiality?’ issue”, but really if people have a guaranteed source of income to survive, as long as we are only moderately more expensive than the AI and not extremely more expensive, the AI would probably want to offer us jobs because of comparative advantages. Maybe that’s the sort of scenario Habryka was getting at...
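The comparative-advantage point can be made concrete with toy numbers (entirely made up for illustration): even if the AI is absolutely better at every task, trade can still benefit both sides whenever their relative costs differ.

```python
# Hypothetical productivity figures (units of output per hour) chosen
# purely to illustrate comparative advantage. The AI is absolutely
# better at both tasks, yet specialisation and trade still pay off.
productivity = {
    "ai":    {"research": 100.0, "paperwork": 50.0},
    "human": {"research": 1.0,   "paperwork": 5.0},
}

def opportunity_cost(agent: str, task: str, other_task: str) -> float:
    """Units of `other_task` forgone per unit of `task` produced."""
    p = productivity[agent]
    return p[other_task] / p[task]

# The AI gives up 0.5 units of paperwork per unit of research;
# the human gives up 5.0. So the AI's comparative advantage is research.
ai_cost = opportunity_cost("ai", "research", "paperwork")        # 0.5
human_cost = opportunity_cost("human", "research", "paperwork")  # 5.0

# Conversely, the human forgoes less research per unit of paperwork
# (0.2 vs 2.0), so both gain if the AI specialises in research and
# "hires" the human for paperwork, despite the AI's absolute advantage.
assert (opportunity_cost("human", "paperwork", "research")
        < opportunity_cost("ai", "paperwork", "research"))
```

The sketch only shows the standard Ricardian logic; whether the AI's cost of human labour (wages, coordination overhead) stays below the gains from trade is exactly the "only moderately more expensive" condition above.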
(Admittedly, AI will probably progress simultaneously with robots, which will hit people who do more hands-on work too.)
This looks increasingly unlikely to me. It seems to me (from an outsider’s perspective) that the current bottleneck in robotics is the low dexterity of existing hardware, far more than the software to animate robot arms or even the physics-simulation software to test it. And on the flip side, current proto-AGI research makes the embodied cognition thesis seem very unlikely.