Delaying AI doesn’t make sense unless the extra time gives us a better shot at solving the problem. If it’s just “there is nothing we can do to make AI safer”, then there is no reason to postpone the inevitable (or at least very little reason: only the net value of 8 billion lives for however many years we have left). Unless we can delay AGI indefinitely (which at this point seems fanciful), at some point we’re going to have to face the problem.
I strongly disagree-voted (but upvoted). Even if there is nothing we can do to make AI safer, there is value in delaying AGI by even a few days: good things remain good even if they only last a finite time. Of course, if P(AI is not controllable) is low enough, the ongoing deaths matter more.