I have not a shred of a doubt that something smarter than us could kill us all easily, should it choose to. Humans are ridiculously easy to kill; a few well-placed words and they will even kill each other. I also have no doubt that keeping something smarter than you confined is a doomed idea. What I am not convinced of is that this smarter something will try to eradicate humans. I am not arguing against the orthogonality thesis here, but against the point that “AGI will have a single-minded utility function and, to achieve its goal, it will destroy humanity in the process (because we are made of atoms, etc.).” In fact, were that the case, it would have happened somewhere in our past light cone already, with rather visible consequences, something I refer to as a Fermi AGI paradox. I am not sure what I am missing here.
In fact, were it the case, it would have happened somewhere in our past light cone already, with rather visible consequences, something I refer to as a Fermi AGI paradox.
There’s work [1, 2] suggesting that there’s actually a reasonable chance of us being the first in the universe, in which case there’s no paradox.
Yes, if we are the first in the universe, then there is no paradox. But the AGI Fermi paradox is stricter than the usual Fermi paradox: in the usual one, other “civilizations” may simply not have reached a cosmic expansion phase yet, i.e. they are not yet in the grabby aliens phase. The premise of an AGI is that it would “foom” and take over the galaxy as fast as it can. So either a universe-altering AGI is not a thing, or it is not inevitable once a civilization can create artificial evolution, or maybe something else is going on.
Alien civilizations existing in large numbers but never having left their original planets isn’t a solution to the Fermi paradox, because if civilizations were numerous, some of them would have left their original planets. So removing that scenario from the solution space doesn’t add any notable constraints. The grabby aliens model, however, does solve the Fermi paradox.
I think the risk level becomes clearer when stepping back from stories of how pursuing a specific utility function leads to humanity’s demise. An AGI will have many powerful levers on the world at its disposal, and very few combinations of lever pulls result in a good outcome for humans.
From the perspective of ants in an anthill, the humans’ actual utility function(s) are of minor relevance; the ants will be destroyed by a nuclear bomb in much the same way as by a new construction site or a group of mischievous kids playing around.
(I think your Fermi AGI paradox is a good point, I don’t quite know how to factor that into my AGI risk assessment.)