Specifically, it’s the fact that one of the most intractable problems, arguably the core reason why AI safety is so hard to iterate on, is likely a non-problem, and the fact that abstractions are at least interpretable and often converge to human abstractions is a good sign for the natural abstractions hypothesis. Thus, capabilities work shifts from being net-negative to net-positive in expectation.
I will change the title to reflect that it’s capabilities work that is net positive, and that while increasing AI capabilities is one goal of such work, other goals may be evident as well.
This feels too obvious to say, but I am not against ever building AGI; rather, because the stakes are so high and the incentives are aligned all wrong, I think that speeding up is bad on the margin. I do see the selfish argument and understand that not everyone would want to sacrifice themselves, their loved ones, or anyone likely to die before AGI arrives for the sake of humanity. Also, making AGI happen sooner is, on the margin, not good for taking over the galaxy, I think (somewhere on the EA Forum there is a good estimate of this; the basic argument is that space colonization grows only as O(n^2) or O(n^3) in time, so it is very slow and a delay costs relatively little of the long-run future, see the sketch below).
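To make that parenthetical concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes (my assumption, not a claim from the linked estimate) that reachable resources grow roughly cubically with time, so a fixed delay forfeits only a tiny fraction of the long-run future; the function name and the numbers are purely illustrative.

```python
# Back-of-the-envelope for the parenthetical above. Assumption (illustrative, not
# from the original comment): reachable resources grow roughly like the volume of
# a sphere expanding at constant speed, i.e. proportional to t**3.

def fraction_of_future_lost(delay_years: float, horizon_years: float, exponent: int = 3) -> float:
    """Fraction of eventually-reachable resources given up by starting
    colonization `delay_years` late, measured at a far-future horizon,
    if reachable resources scale like t**exponent."""
    return 1.0 - ((horizon_years - delay_years) / horizon_years) ** exponent

# A 50-year delay against a billion-year horizon loses about 1.5e-7 of the
# future, which is tiny next to a 0.1-10% chance of losing everything.
print(fraction_of_future_lost(delay_years=50, horizon_years=1e9))  # ~1.5e-07
```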
Also, if you are very concerned about yourself, cryonics seems like the more prosocial option. A 0.1-10% risk still seems kinda high for my personal risk preferences.