I agree that if you hyper-focus on a very small number of people, the plan would be much more tractable (though still not really tractable in my view).
My stance is that you will get significantly more leverage by using AIs to augment alignment researchers than by trying to make those researchers slightly smarter via reading a document or gene therapy. One of my agendas focuses on the former, so that's probably worth factoring in.