I have had similar discussions, but I’m worried this is not a good way to think about the situation. IMO the best part of both ‘rationality’ and ‘effective altruism’ is often the overlap—people who largely belong to both communities and do not treat the labels as central to their identity.
Rationality asks the question “How do I think clearly?”. For many people who start to think more clearly, this leads to an update of their goals toward the question “How can we do as much good as possible, thinking rationally?”, and acting on the answer—which is effective altruism.
Effective altruism asks the question “How can we do as much good as possible, thinking rationally and based on data?”. For many people who actually start thinking about that question, this leads to the update “the ability to think clearly is critical to answering it”—which is rationality.
This is also, to some extent, predictive of failure modes. “Rationality without the EA part” can deteriorate into something like a high-IQ discussion club that has trouble taking action. “EA without the rationality part” can become something like a group of high-scrupulosity people who are personally very nice and donate to effective charities, but who look away from the things that actually matter.
This is not to say that organizations identified with either of the brands are flawless.
Also—we now have several geographically separated experiments in what the EA / LW / rationality / longtermist communities can look like outside the Bay Area. My sense is that places where the core of the communities is shared are healthier and produce more good than places where the overlap is small, and that this sharing is better than having a lot of distrust.