Did Eliezer actually say that artificial superintelligence will inevitably take over human society? I thought his take was mostly “we are made of atoms...”; the “society” part is kind of irrelevant, except insofar as it is a convenient way to take over the physical world. Maybe it will mind-control a few humans to do its short-term bidding; humans are notoriously easy to mind-control.
I don’t think he says verbatim that ASI will “take over” human society, as far as I remember, but it’s definitely there in the subtext when he says something akin to: when we create an ASI, we must align it, and we must nail it on the first try.
The reasoning is that all an AI ever does is work on its optimization function. If we optimize an ASI to prove the Riemann hypothesis, or to produce identical strawberries, without aligning it first, we’re all toast, because we’re either turned into computing resources or into fertilizer to grow strawberries. At that point we can count human society as taken over, because it no longer exists.
I think he says that ASI will killallhumans or something like that; the exact mechanism is left unspecified, because we cannot predict how it will go, especially given how easy it is to deal with humans once you are smarter than they are.
And I think that the “all an AI ever does is work on its optimization function” reasoning has been rather soundly falsified: none of the recent ML models resembles an optimizer. So we are most likely still toast, but in other, more interesting ways.