sadism and the will to power are baked into almost every human mind (with the exception of outliers, of course). force-multiplying those instincts is much worse than an AI that simply decides to repurpose the atoms in a human for something else.
I don’t think the result of intelligence enhancement would be “multiplying those instincts” for the vast majority of people; humans don’t seem to end up more sadistic as they get smarter and have more options.
i would argue that everyone dying is actually a pretty great ending compared to hyperexistential risks; relative to those outcomes it's effectively +inf utility.
I’m curious what value you assign to the ratio [U(paperclipped) - U(worst future)] / [U(best future) - U(paperclipped)]? It can’t be literally infinity unless U(paperclipped) = U(best future).
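To make that concrete with purely illustrative numbers (not values I'm attributing to you): taking $U(\text{worst future}) = -100$, $U(\text{paperclipped}) = 0$, and $U(\text{best future}) = 1$ gives

$$\frac{U(\text{paperclipped}) - U(\text{worst future})}{U(\text{best future}) - U(\text{paperclipped})} = \frac{0 - (-100)}{1 - 0} = 100,$$

a large but finite ratio; it only diverges to $+\infty$ as $U(\text{paperclipped}) \to U(\text{best future})$.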
with humans you’d need the will and capability to engineer in at least +5 SD empathy and −10 SD sadism into every superbaby.
So your model is that we need to eradicate any last trace of sadism before superbabies is a good idea?
That’s variance explained. I was talking about effect size attenuation, which is what we care about for editing.
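To spell out the distinction under a simple additive model (hypothetical attenuation factor $k$, not a number from the paper): if the direct, within-family effect of a variant is $k\hat\beta$ rather than the population GWAS estimate $\hat\beta$, then

$$\Delta_{\text{edit}} \propto k\,\hat\beta \qquad \text{but} \qquad \Delta R^2 \propto k^2\,\hat\beta^2\,2p(1-p),$$

so variance explained shrinks by $k^2$ (e.g. $k = 0.8$ gives $0.64$) while the expected gain from editing a variant shrinks only by $k$. That's why effect size attenuation, not variance explained, is the relevant quantity for editing.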
Supplementary table 10 looks at direct and indirect effects of the EA PGI on other phenotypes. The results for the Cog Perf PGI are in supplementary table 13.