I now think the probabilities of AI risk have steeply declined to only 0.1-10%, and all of that probability mass is plausibly reducible to ridiculously low numbers by going to the stars and speeding up technological progress.
I think this is wrong (in that, how does speeding up reduce risk? What exactly do you want to speed up?). I'd actually be interested in the case for this that the title promised.
Specifically, it’s the fact that one of the most intractable problems, arguably the core reason why AI safety is so hard to iterate on, is likely a non-problem, and the fact that abstractions are at least interpretable and often converge to human abstractions is a good sign for the natural abstractions hypothesis. Thus, capabilities work shifts from being net-negative to net-positive in expectation.
I will change the title to reflect that it is capabilities work that is net positive, and that while increasing AI capabilities is one goal, other goals may be evident as well.
Thus, capabilities work shifts from being net-negative to net-positive in expectation.
This feels too obvious to say: I am not against building AGI ever, but because the stakes are so high and the incentives are aligned all wrong, I think speeding up is bad on the margin. I do see the selfish argument, and I understand that not everyone would want to sacrifice themselves, their loved ones, or anyone likely to die before AGI arrives for the sake of humanity. Also, making AGI happen sooner is, on the margin, not good for taking over the galaxy, I think (somewhere on the EA Forum there is a good estimate of this; the basic argument is that the value reachable by space colonization grows only polynomially, roughly O(n^2) or O(n^3) in time, so delaying AGI by a few years forfeits only a tiny fraction of the eventual endowment, whereas rushing raises the risk of losing all of it).
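For intuition, here is a minimal back-of-the-envelope sketch of that polynomial-growth argument (the billion-year horizon and the cubic exponent are my own illustrative assumptions, not numbers from the post or the linked estimate): if reachable value grows roughly like t^3, a delay of d years out of a horizon T costs on the order of 3d/T of the total.

```python
# Back-of-the-envelope sketch (illustrative numbers, not from the post):
# if reachable value grows like t^exponent (e.g. an expanding sphere of
# colonization), starting d years late forfeits only a tiny fraction.

def fraction_lost(delay_years: float, horizon_years: float, exponent: int = 3) -> float:
    """Fraction of t^exponent-growing value lost by starting `delay_years` late."""
    return 1 - ((horizon_years - delay_years) / horizon_years) ** exponent

if __name__ == "__main__":
    for d in (10, 50, 100):
        print(f"delay {d:>3} yr over a 1e9 yr horizon: "
              f"~{fraction_lost(d, 1e9):.2e} of reachable value lost")
```

Under these assumptions, even a century of delay costs only about 3e-7 of the reachable value, which is why the marginal case for speeding up looks weak compared to the risk.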
Also, if you are very concerned about yourself, cryonics seems like the more prosocial option. And 0.1-10% still seems kinda high for my personal risk preferences.