I think it is far from clear that we want fewer ‘maladaptive’ people in the world, at least in the sense measurable by personality traits such as the Big Five.
Would reducing the number of outliers in neuroticism also reduce the number of people emotionally invested in X-risk? The downstream effects of such a modification do not seem clear at all.
It seems like producing a more homogeneous personality distribution would also reduce the robustness of society.
The core weirdness of this post, to me, is that it first conditions on LLMs/AI doing all the IQ-loaded tasks, with humans not involved in auditing those systems even in cases where high IQ matters. Personally, I think assuming that AI does all the IQ tasks makes the question moot: in that world, we are pets or dead.