From what I know, the danger of UFAI isn’t that such an AI would be evil like in fiction (anthropomorphized AIs), but rather that it wouldn’t care about us and would want to use our resources to achieve goals other than what humans would want (“all that energy and those atoms, I need them to make more computronium, sorry”).
I presume he was referring to dystopias and wireheading scenarios that he could hypothetically consider worse than death.
That was my understanding, but I think that any world containing an AGI that isn’t Friendly probably won’t be very stable. If that happens, it seems far more likely that humanity would be destroyed quickly and you would never be woken up than that a stable but “worse than death” world would form and decide to wake you up.
But maybe I’m missing something that makes such “worse than death” worlds plausible.
I think you’re right. The main risk would be Friendly to Someone Else AI.