I don’t think anyone is saying this outright, so I suppose I will: pushing forward the frontier on intelligence enhancement as a solution to alignment is not wise. The second-order effects of pushing that particular frontier (both the capabilities and the Overton window) are disastrous, and our intelligence outpacing our wisdom is what got us into this mess in the first place.
I absolutely agree. Because so much is happening in the brain, you can’t amplify intelligence without tearing down a lot of Chesterton-Schelling fences. Making a community wealthy or powerful will make all the people, structures, and norms inside it OOD.
But at the same time, we need nuanced calculations comparing the expected costs and the expected benefits. We will need to do those calculations as we go along, so we can update based on which tech and projects turn out to be low hanging fruit. Staying the course also doesn’t seem to be a winning strategy.
You’re not going to be able to just stop the train at the moment the costs outweigh the benefits. The majority of negative consequences will most likely come from grey swans that won’t show up in your nuanced calculations of costs and benefits.
EY specifically means “intelligence in the broad sense”, including wisdom. One of his proposals was to “identify the structures that make people rationalize, and disable them”.
This is not what I mean by wisdom.