I’ve thought through an explanation for why some people are not effective altruists. I think it’s important for EAs to understand these viewpoints if they want to win more people over to their side.
As an added bonus, I think this explanation generalizes to many cases where a person’s actions contradict their knowledge; thinking this through helped me better understand why I sometimes take actions that contradict my own knowledge.
Summary: people’s gut feel (which actually governs most decision-making) takes time, thought, and effort to catch up to their systematic reasoning (which can absorb new information much more quickly). This explains phenomena such as “why not everyone who has heard of EA is an EA” and “why not everyone who has heard of factory farming is a vegan”.
Outcome / why this was useful for me to think about: this framework of “systematic reasoning” vs. “gut feel” helps me think about what I know, how well I know it, and whether I act on that knowledge. In particular, it distinguishes between two types of “this person is acting contrary to knowledge they have”: 1) the person’s actions disagree with both their gut feel and their systematic reasoning (= a lack of self-control), or 2) the person’s actions agree with their systematic reasoning but not their gut feel (= they are still processing the knowledge).
Full explanation: People’s views on career choices, moral principles, and, most generally, the moral value of particular actions are quite rarely the product of systematic reasoning. Instead, people automatically develop priors on these things by interacting with society and make most decisions according to gut feel.
Making gut feel decisions instead of using systematic reasoning is generally a good move. At any moment, we are deciding not to take an enormous number of technically feasible actions, and evaluating all of them explicitly is computationally intractable. (For arguments along these lines, see Algorithms to Live By.)
When people are introduced to EA, they will usually not object to premises such as “we should make choices that do more good at the margin” and “some charities are 10-100x more effective than others”. However, agreeing with these premises doesn’t mean they will immediately become EAs. In other words, anybody can quickly grasp EA concepts through their systematic reasoning, but that doesn’t mean the ideas have also reached their gut feel, which is what actually becoming an EA requires.
A person’s gut feel on EA topics is the sum of their priors on charitable giving, global problems, career advice, and doing good in general. Even the best-worded argument isn’t going to sway those priors so much that the person becomes an EA on the spot. But over time, a person’s priors can be updated through repeated exposure and internal reflection. So maybe you explain EA to someone and they’re initially skeptical, but they keep engaging carefully with EA ideas and become more and more of an EA.
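(One loose way to picture this gradual updating, purely as an analogy rather than a claim about how minds actually work, is repeated Bayesian updating: a single piece of weak evidence barely moves a skeptical prior, but many exposures compound. Below is a minimal Python sketch; the starting prior of 0.05 and the likelihood ratio of 1.8 are made-up numbers chosen only for illustration.)

```python
# Toy model: repeated exposure gradually shifting a prior.
# All numbers are invented for illustration; this is an analogy,
# not a model of actual human belief formation.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """One Bayesian update on a binary hypothesis.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis)
    """
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.05  # skeptical starting prior, e.g. "charities differ hugely in impact"
for exposure in range(1, 11):
    belief = bayes_update(belief, likelihood_ratio=1.8)  # each exposure is weak evidence
    print(f"after exposure {exposure:2d}: belief = {belief:.2f}")

# No single exposure is decisive, but the belief crosses 0.5 around the
# sixth exposure and reaches ~0.95 by the tenth: "more and more of an EA".
```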
This framework is actually quite general. Here’s another example: consider a person who knows that factory farming is cruel but regularly eats meat. On this view, their gut feel about whether eating meat is OK simply hasn’t caught up to their systematic reasoning, which concludes that factory farming is unethical.
Just as in the EA example above, there is often no perfect argument that can instantly turn somebody into a gut feel vegan. Rather, they have to put in the work of reflecting on the pro-vegan evidence presented to them.
(N.B.: the terms “systematic reasoning” and “gut feel” are not as thoughtfully chosen as they could be; I’d appreciate pointers to better or more standard terms!)
The standard terms: gut feel = “System 1”, systematic reasoning = “System 2” :)
Ah, I googled those and the results mostly mentioned “Thinking, Fast and Slow”. The book has been on my list for a while, but it sounds like I should give it higher priority. Thanks for the pointer!