The most important traits of the new humans are that… they prize rationality under all circumstances—to be accepted by them you have to retain clear thinking and problem-solving capability even when you’re stressed, hungry, tired, cold, or in combat
Interestingly, as a LessWronger, I don’t think of myself in quite this way. I think there’s a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments. Knowing your limits, and using that knowledge when making plans.
One environment that I’ve dealt with, and that I think is pertinent for a lot of people, is social media, which can destroy my attention and leave me feeling quite socially self-conscious. Bringing it into my environment damages my ability to think.
On the one hand, becoming able to think clearly and make good decisions while using social media is valuable, and for many people necessary. Here are some of the ways I try to do that, in the style of the Homo Novis:
I notice when I’m being encouraged to use the wrong concepts (e.g. PR rather than honor) or to believe deeply bad theories of ethics (e.g. the Copenhagen interpretation of ethics)
I keep my identity small / use my identity carefully
I build a better model of my social environment, how knowledge propagates, and the narrative forces pushing on me (especially the forces of blandness), so I can see threats coming
On the other hand, one of the most important tools I have is simply avoiding those environments. I keep very strict rules around Sabbath/Rest Days so I can clear my head, I generally don’t carry a phone, and I install content blockers on my laptop. These approaches are more like “avoiding situations where I cannot think clearly” than “learning to think clearly in difficult situations”.
There’s a balance between the two strategies. “Learn to think clearly in more environments” and “shape your environment to help you think clearly / not hinder your ability to think clearly”. In response to a situation where I can’t think clearly, sometimes I pick the one, and sometimes the other.
All that said, Gulf is totally added to my reading list. I read both The Moon is a Harsh Mistress and Stranger in a Strange Land for the first time this year and that was a thrill.
It would certainly be a mistake to interpret your martial art’s principle of “A warrior should be able to fight well even in unfavourable combat situations” as “A warrior should always immediately charge into combat, even when that would lead to an unfavourable situation”, or “There’s no point in trying to manoeuvre into a favourable situation”.
Great point. A few (related) examples come to mind:
Paul Graham’s essay The Top Idea in Your Mind. “I realized recently that what one thinks about in the shower in the morning is more important than I’d thought. I knew it was a good time to have ideas. Now I’d go further: now I’d say it’s hard to do a really good job on anything you don’t think about in the shower.”
Trying to figure out dinner is the worst when I’m already hungry. I still haven’t reached a level of success I’m satisfied with, but I’ve had some success with 1) planning out meals for the next ~2 weeks, so that instead of deciding what to make for dinner I just pick something off the list, 2) meal prepping, and 3) having Meal Squares as a backup.
Grooming meetings vs. (I guess you can call it) asynchronous grooming. In scrum, you have meetings where ~15 people get in a room (or a virtual “room”), look at the tasks that need to be done, go through each of them, and try to plan each task out and address any questions about it. With so many people and a fast pace, things can get a little chaotic, and I find it difficult to contribute much value. However, we’re trying something new where tickets are assigned to people before the grooming meeting, and developers have a little “homework assignment” to groom their ticket beforehand. Then during the meeting you present your ticket and give others a chance to comment or ask questions. We’re starting this week, so I don’t know yet whether it will be more effective, but I have a strong sense that it will be.
Arguments. It’s hard to be productive when things get heated. Probably better to take a breather and come back to it.
I have come to believe that people’s ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a large variety of other domains in which their incentives are not well optimized.
People’s rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don’t endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that “having accurate beliefs in one domain” doesn’t straightforwardly generalize to “will have accurate beliefs in other domains”.
One is strongly predictive of the other, and that’s in part due to general thinking skills and broad cognitive ability. But another major piece of the puzzle is the person’s ability to build and seek out environments with good incentive structures.
Everyone is highly irrational in their beliefs about at least some aspects of reality, and positions of power in particular tend to create strong incentives that are poorly aligned with the truth. This means that highly competent people in positions of power often have less accurate beliefs than competent people who are not in positions of power.
The design of systems that hold people with power and influence accountable, in a way that aligns their interests both with forming accurate beliefs and with the interests of humanity at large, is a really important problem, and a major determinant of the overall quality of a community’s decision-making. General rationality training helps, but for collective decision-making, the creation of accountability systems, the tracking of outcome metrics, and the design of incentives are at least as big a factor as the degree to which the individual members of the community can come to accurate beliefs on their own.
I think this comment would make for a good top-level post almost as it is.
This post of mine feels closely related: https://www.lesswrong.com/posts/xhE4TriBSPywGuhqi/integrity-and-accountability-are-core-parts-of-rationality
Hah, I was thinking of replying to say I was largely just repeating things you said in that post.
Nonetheless, thanks both Kaj and Eric; I might turn it into a little post. It’s not bad to have two posts saying the same thing (slightly differently).
Agreed.
Similarly, for instrumental rationality, I’ve been trying to lean harder on putting myself in environments that induce me to be more productive, rather than working on strategies to stay productive when my environment is making that difficult.
I agree with this comment. There is one point that I think we can extend usefully, which may dissolve the distinction with Homo Novis:
I think there’s a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments.
While I agree, I also fully expect the list of environments in which we are able to think clearly to expand over time as the art advances. There are two areas where I think shaping the environment fails as an alternative strategy: the first is that we cannot advance the art’s power over a new environment without testing ourselves in that environment; the second is that there are tail risks to consider, which is to say we will inevitably have such environments imposed on us at some point. Consider events like car accidents, weather like tornadoes, malevolent action like a robbery, or medical issues like someone else choking or having a seizure.
I strongly expect that the ability to think clearly in extreme environments would have payoffs in less extreme environments. For example, a lot of the stress in a bad situation comes from the worry that it will turn into a worse situation; if we are confident of the ability to make good decisions in the worse situation, we should be less worried in the merely bad one, which should allow for better decisions in the merely bad one, thus making the worse situation less likely, and so on.
Also, consider the case of tail opportunities rather than tail risks; it seems like a clearly good idea to work on extending rationality to extremely good situations that also compromise clear thought. Things like: winning the lottery; getting hit on by someone you thought was out of your league; landing an interview with a much-sought-after investor. In fact, I feel like all of the discussion around entrepreneurship falls into this category: the whole pitch is seeking out high-risk/high-reward opportunities. The idea that basic execution becomes harder when the rewards get huge is a common trope, but if we apply the test from the quote, it comes back as “avoid environments with huge upside”, which clearly doesn’t scan (but is itself also a trope).
As a final note (and I emphasize up front that I don’t know how to square this exactly), I feel like there should be some correspondence between bad environments and bad problems. Consider that one of the motivating problems for our community is X-risk, a suite of problems that are by default too huge to wrap our minds around, too horrible to emotionally grapple with, and so on. In short, they also meet the criteria for reliably causing rationality to fail, but this motivates us to improve our art to deal with them. Why should problems be treated in the opposite way from environments?
So I think the Homo Novis distinction comes down to them being in possession of a fully developed art already; we are having to make do with an incomplete one.
For now.
Tl;dr for last two comments:
Know your limits.
Expand your limits.