Being more rational makes rationalization harder. When confronted with thought experiments such as Peter Singer’s drowning child example, it becomes harder to invent reasons for not changing one’s actions while still maintaining a self-image of being caring. While non-rationalists often object to EA with bad arguments (e.g. by misunderstanding expected utility theory or decision-making under uncertainty), rationalists are more likely to draw radical conclusions. They might accept the extreme conclusion that they want to be more effectively altruistic, or the opposite extreme conclusion that they don’t share the premise the thought experiment relies on, namely that they care significantly about others for their own sake. Increased rationality weeds out the middle-ground positions that are held in place by rationalization or a simple lack of further thought (which is not to say all middle-ground positions are like that, of course).
It would be interesting to figure out the factors that determine which way the bullet will be bitten. I would predict that the vast majority of EA-rationalists have other EAs in their close social environment.
I wouldn’t suggest that people’s response to dilemmas like Singer’s is rationalization. Rather, I’d say that people have principles but are not very good at articulating them. If they say they should save a dying child because of some principle, that “principle” is just their best attempt to approximate the actual principle that they can’t articulate.
If the principle doesn’t fit when applied to another case, fixing up the principle isn’t rationalization; it’s recognizing that the stated principle was only ever an approximation, and trying to find a better one. (And if the fix-up is based on bad reasoning, that’s just “trying to find a better approximation, and making a mistake doing so”.)
It may be easier to see when not talking about saving children. If you tell me you don’t like winter days, and I point out that Christmas is a winter day and you like Christmas, and you then respond “well, I meant a typical winter day, not a special one like Christmas”, that’s not a rationalization, that’s just revising what was never a 100% accurate statement and should not have been expected to be.
Yes, this is a good point.