Welcome, Ozyrus.

This is moral philosophy you’re getting into, so I don’t think that there’s a community-wide consensus. LessWrong is big, and I’ve read more of the material on psychology and philosophy of language than on moral philosophy, but I’ll take a swing at this.
> Let’s imagine a hypothetical scenario: there is a guy, Steve, who really does not feel anything when he helps other people or does other “good” things generally; he does this only because his philosophy or religion tells him to. Say this guy is introduced to the ideas of rationality and is thus no longer bound by his philosophy/religion. And what if Steve also does not feel bad about other people’s suffering (or even takes pleasure in it)?
>
> What I wanted to say is that rationality is a gun that can point both ways, and it is a good thing that LessWrong “sells” this gun with a safety mechanism (if there is such a “safety mechanism”; once again, maybe I missed something really critical that explains why altruism and “being good” is the most rational strategy).
>
> In other words, Steve does not really care about humanity; he cares about his own well-being and will use all the knowledge he has gained just to meet his own ends (people are different, aren’t they? And their ends are different, too).
It seems that your implicit question is, “If rationality makes people more effective at doing things that I don’t value, then should the ideas of rationality be spread?” That depends on how many people there are with values that are inconsistent with yours, and it also depends on how much it makes people do things that you do value. And I would contend that a world full of more rational people would still be a better world than this one even if it means that there are a few sadists who are more effective for it. There are murderers who kill people with guns, and this is bad; but there are many, many more soldiers who protect their nations with guns, and the existence of those nations allows much higher standards of living than would be otherwise possible, and this is good. There are more good people than evil people in the world. But it’s also true that sometimes people can for the first time follow their beliefs to their logical conclusions and, as a result, do things that very few people value.
> Or take another case: an average rationalist, Jack, estimates that his own net gain will be significantly bigger if he hurts or kills someone (considering his emotions, his feelings about humanity’s overall net gain, and all other possible factors). Does that mean he must carry on? Or is that a taboo here? Or maybe it is a problem of this site’s demographics and nobody has even considered this scenario (which I really doubt).
Jack doesn’t have to do anything. If ‘rationality’ doesn’t get you what you want, then you’re not being rational. Forget about Jack; put yourself in Jack’s situation. If you had already made your choice, and you killed all of those people, would you regret it? I don’t mean “Would you feel bad that all of those people had died, but you would still think that you did the right thing?” I mean, if you could go back and do it again, would you do it differently? If you wouldn’t change it, then you did the right thing. If you would change it, then you did the wrong thing. Rationality isn’t a goal in itself, rationality is the way to get what you want, and if being ‘rational’ doesn’t get you what you want, then you’re not being rational.
> It seems that your implicit question is, “If rationality makes people more effective at doing things that I don’t value, then should the ideas of rationality be spread?” That depends on how many people there are with values that are inconsistent with yours, and it also depends on how much it makes people do things that you do value. And I would contend that a world full of more rational people would still be a better world than this one even if it means that there are a few sadists who are more effective for it. There are murderers who kill people with guns, and this is bad; but there are many, many more soldiers who protect their nations with guns, and the existence of those nations allows much higher standards of living than would be otherwise possible, and this is good. There are more good people than evil people in the world. But it’s also true that sometimes people can for the first time follow their beliefs to their logical conclusions and, as a result, do things that very few people value.
Excellent answer! Yes, you deduced the implicit question correctly. I also agree that this is a rather abstract field of moral philosophy, though I did not see that at first. However, I don’t think that your argument for the world being a better place with everyone being rational holds up, especially this point:
> There are more good people than evil people in the world.
Even if there are, there is no proof that after becoming “rational” they will not become “bad” (quotation marks because “bad” is not defined sufficiently, but that will do). I can imagine some interesting prospects for experiments in this field, by the way. I also think that the result would vary if the subject were placed in a society of only rationalists vs. the usual society, with “bad” actions carried out more often in the second case, as there is much less room for cooperation.
But of course that is a pointless discussion, as the scenario is not really grounded in reality and we can’t really tell what would happen. :)