I agree with this comment. There is one point that I think we can extend usefully, which may dissolve the distinction with Homo Novis:
I think there’s a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments.
While I agree, I also fully expect the list of environments in which we are able to think clearly to expand over time as the art advances. There are two areas where I think shaping the environment will fail as an alternative strategy: the first is that we cannot advance the art's power over a new environment without testing ourselves in that environment; the second is that there are tail risks to consider, which is to say such environments will inevitably be imposed on us at some point. Consider events like car accidents, weather like tornadoes, malevolent action like a robbery, or medical issues like someone else choking or having a seizure.
I strongly expect that the ability to think clearly in extreme environments would have payoffs in less extreme environments. For example, a lot of the stress in a bad situation comes from the worry that it will turn into a worse situation; if we are confident in our ability to make good decisions in the worse situation, we should be less worried in the merely bad one, which should allow for better decisions in the merely bad one, thus making the worse situation less likely, and so on.
Also, consider the case of tail opportunities rather than tail risks; it seems like a clearly good idea to work on extending rationality to extremely good situations that also compromise clear thought. Things like: winning the lottery; getting hit on by someone you thought was out of your league; landing an interview with a much-sought-after investor. In fact, I feel like all of the discussion around entrepreneurship falls into this category: the whole pitch is seeking out high-risk/high-reward opportunities. The idea that basic execution becomes harder when the rewards get huge is a common trope, but if we apply the test from the quote, it comes back as "avoid environments with huge upside," which clearly doesn't scan (though that advice is itself also a trope).
As a final note (and I emphasize up front that I don't know how to square this exactly), I feel like there should be some correspondence between bad environments and bad problems. Consider that one of the motivating problems for our community is X-risk, which is a suite of problems that are by default too huge to wrap our minds around, too horrible to grapple with emotionally, and so on. In short, they also meet the criteria for reliably causing rationality to fail, yet this motivates us to improve our arts to deal with them rather than to avoid them. Why should problems be treated in the opposite way from environments?
So I think the Homo Novis distinction comes down to them being in possession of a fully developed art already; we are having to make do with an incomplete one.
For now.
Tl;dr for the last two comments:
Know your limits.
Expand your limits.