I thought job loss was a short-term but not a long-term risk and only a secondary cause of x-risks. Which is why we don’t worry about it around here much. FWIW being enslaved permanently doesn’t sound better to me than being extinct, so I wouldn’t frame that distinction as a problem.
I thought the standard plan was: we figure out how to ensure that an aligned AGI/ASI takes over; it helps humans because it wants to. It’s way better than us at making stuff, so keeping humanity in good conditions is actually super easy, barely an inconvenience.
That could be either because we got value alignment right, or because we put personal-intent-aligned AGI into the hands of someone with the common decency to help all of humanity with it. And it/they wisely decided to prevent the creation of other AGIs.
The ASI makes sub-sentient safe minds to do the grunt work; humanity’s biggest challenge is deciding what to do with unlimited time and vast resources.
Anyone who’s understood a Burning Man event knows that humans have a blast working on collaborative projects, even when they don’t need to be done at all, so I have no concerns that we’ll all get sad without necessary or “meaningful” work to do.
So job loss seems like a problem only if it makes us freak out and do something even stupider.
Perhaps you’re thinking of the scenario where AGI takes over the economy but doesn’t otherwise directly care about humans. I think job loss is a weird way to frame humans being utterly obsoleted and eventually having their oxygen “purchased” for better uses. An AGI that doesn’t care about humans but does care about property rights would be strange. FWIW, I think people sometimes see AGI proliferation as a potentially good outcome, but that’s exactly wrong; the multipolar scenario, with personal-intent-aligned AGIs serving different competing masters, seems likely to result in doom too.
I thought job loss was a short-term but not a long-term risk and only a secondary cause of x-risks. Which is why we don’t worry about it around here much. FWIW being enslaved permanently doesn’t sound better to me than being extinct, so I wouldn’t frame that distinction as a problem.
An AI that wanted to permanently enslave us would change the game board in poorly explored ways. For example, I could imagine it would be more plausible that it reliably forms concepts of human values, which could be used to align it in a way that rules out slavery. Also, its ability to control us as a slavemaster would likely vastly exceed that of human slavemasters, so it probably wouldn’t need violence or fear to keep us in line. I don’t have a clear picture of what it would look like, partly because the scenario isn’t realistic: the AI would have an absolute economic advantage over us and would therefore rather have us dead than enslaved.
I thought the standard plan was: we figure out how to ensure that an aligned AGI/ASI takes over; it helps humans because it wants to. It’s way better than us at making stuff, so keeping humanity in good conditions is actually super easy, barely an inconvenience.
That is the standard rationalist plan, but “take over the world, for the greater good (of course)” is such a classic villain move that we shouldn’t be surprised when all the ways we’ve explored seem to lead to horrible outcomes.
Anyone who’s understood a Burning Man event knows that humans have a blast working on collaborative projects, even when they don’t need to be done at all, so I have no concerns that we’ll all get sad without necessary or “meaningful” work to do.
Idk, Burning Man consists of people who have been shaped by society and who go out of their way to participate. I could imagine that without that society existing in the background, people wouldn’t maintain the ambition or skills needed for something like Burning Man to exist.