The biggest problem with attempting to work on existential risk isn’t the expenditure of resources. The problem is that our beliefs on the matter are—more or less necessarily—far mode. And the problem with far mode beliefs is that they have no connection to reality. To us smart, rational people it seems as though intelligence and rationality should be sufficient to draw true conclusions about reality itself even in the absence of immediate data, but when this is actually tried outside narrow technical domains like physics, the accuracy of the results is generally worse than random chance.
The one thing that protects us from the inaccuracy of far mode beliefs is compartmentalization. We believe all sorts of stories that have no connection to reality, but this is usually quite harmless because we instinctively keep them in a separate compartment, and only allow near mode beliefs (which are usually reasonably accurate, thanks to feedback from the real world) to influence our actions.
Unfortunately this can break down for intellectuals, because we import the idea of consistency—which certainly sounds like a good thing on the face of it—and sometimes allow it to persuade us to override our compartmentalization instincts, so that we start actually behaving in accordance with our far mode beliefs. The results range from wasted resources (e.g. SETI) to the horrific (e.g. 9/11).
I’ve come to the conclusion that the take-home message probably isn’t so much “hold accurate far mode beliefs”; I have no evidence, after all, that this is even possible (and yes, I’ve looked hard, with strong motivation to find something). The take-home message, I now think, is that compartmentalization evolved for good reason, that it is a vital safeguard, and that whatever else we do we cannot afford to let it fail.
In short, the best advice I can give is this: whatever you do, do it in a domain where there is some real-world feedback on the benefits, a domain where you can make decisions on the basis of near mode beliefs that are probably reasonably accurate. Low-leverage benefit is better than high-leverage harm.
This is something of an overstatement, to say the least. And your conclusion seems to rely on that overstatement.
I agree!
I also disagree.
But the parts of me that disagree are kept separate in my mind from the parts that agree, so that’s all right.
But parts of me endorse consistency, so I suppose it’s only a matter of time before this ends badly.