There may be highly targeted interventions (other than x-risk reduction efforts) which can cause big trajectory changes (including by indirectly improving humans’ ability to address x-risks).
This is, more or less, the intended purpose behind spending all this energy on studying rationality rather than directly researching FAI. I’m not saying I agree with that reasoning, by the way, but it was the initial reasoning behind Less Wrong, for better or worse. Would we be farther ahead if, rather than working on rationality, Eliezer had started working immediately on FAI? Maybe, but likely not; I could see it being argued both ways. In any case, this is an actual, very concrete example of this kind of intervention.