Anyone up for creating thousands of clones of von Neumann and raising them to think that AI alignment is a really important problem? I’d trust them over me!
I don’t think they’d even need to be raised to think that; they’d figure it out on their own. Unfortunately we don’t have enough time.
Setting aside this proposal’s, ah, logistical difficulties, I certainly don’t think we should ignore interventions that target only the (say) 10% of the probability space in which superintelligence takes longest to appear.
So, is it now the consensus opinion round here that we’re all dead in less than twenty years? (Sounds about right to me, but I’ve always been a pessimist...)
It’s not consensus. Ajeya, Richard, Paul, and Rohin are prominent examples of people widely considered to have expertise on this topic who think it’s not true. (I think they’d say something more like 10% chance? IDK)
The author of this piece proposes exactly that: https://fantasticanachronism.com/2021/03/23/two-paths-to-the-future/

“I believe the best choice is cloning. More specifically, cloning John von Neumann one million times”