This would require a scenario a lot like the one in the podcast we were talking about: there's a government-led project to get to transformative AI, and then rather than using that AI to dramatically help all humanity, the government instead decides to ban using AI to dramatically help all humanity (as a side effect of affirming the status quo and banning all uses of AI that threaten its own power), while still allowing the wealthy and powerful limited access to the technology.
I actually don’t think this is that likely, even though some people claim to be aiming for this future (or some similar future where humans remain in control and capitalism doesn’t suffer a discontinuity). Even assuming the AI project doesn’t kill everyone or otherwise go wrong, I think in an egalitarian setting there’s overwhelming pressure to take transformative actions (saving people’s lives, etc.), and even in a dictatorial or plutocratic setting there’s a lot of pressure to take transformative dictatorial actions: a basic hedonist might kill off everyone they don’t care about to save resources, while a more refined dictator might subtly arrange events so that their preferred political decisions work wonderfully and produce a flourishing civilization full of people who view them as a great leader.
(Edited because my previous reply was a bit off the mark.)
I don’t think this scenario depends on government. If AI is better at all jobs and can make more efficient use of all resources, then “AI does all jobs and uses all resources” is the efficient market outcome. All that’s needed is for companies to align their AIs with their own money interests, and for people to use and adapt AI in pursuit of money interests. Which is what’s happening now.
A single AI taking dramatic transformative action seems less likely to me, because any such action would have to take place in a world already planted thick with AIs and near-AIs following money interests.