I thought job loss was a short-term but not a long-term risk, and only a secondary cause of x-risks, which is why we don’t worry about it much around here. FWIW, being enslaved permanently doesn’t sound any better to me than being extinct, so I wouldn’t frame that distinction as a problem.
An AI wanting to permanently enslave us would change the game board in poorly explored ways. I could imagine, e.g., that it would be more plausible for such an AI to reliably form concepts of human values, which could then be used to align it and prevent slavery. Also, its ability to control us as a slavemaster would likely vastly exceed that of human slavemasters, so it probably wouldn’t need things like violence or keeping us fearful in order to control us. I don’t have a clear picture of what this would look like, partly because the scenario isn’t realistic: the AI would realistically have an absolute economic advantage over us, and would therefore rather want us dead than enslaved.
I thought the standard plan was: we figure out how to ensure that an aligned AGI/ASI takes over; it helps humans because it wants to. It’s way better than us at making stuff, so keeping humanity in good conditions is actually super easy, barely an inconvenience.
That is the standard rationalist plan, but “take over the world, for the greater good (of course)” is such a classic villain move that we shouldn’t be surprised when all the ways we’ve explored seem to lead to horrible outcomes.
Anyone who’s attended a Burning Man event knows that humans have a blast working on collaborative projects, even ones that don’t need to be done at all, so I have no concerns that we’ll all get sad without necessary or “meaningful” work to do.
Idk, Burning Man consists of people who have been shaped by society and who go out of their way to participate. I could imagine that without society existing in the background, people would not maintain the ambition or skills necessary for something like Burning Man to exist.