Mr. Yudkowsky, why, throughout all of your posts, do you continue to speak of altruistic action as good or praiseworthy? Evolutionary psychology disproves ethical cognitivism (the position that moral or value propositions such as “X is right” or “One ought to do X” admit of truth or falsehood) as much as it disproves religion. Just as there’s no invisible dragon in my garage, there’s also no such thing as a value or a moral obligation. To be sure, the implausibility of ethical cognitivism doesn’t give you reason to turn into a selfish, raging nihilist. At the same time, it doesn’t give you any reason NOT to. So, I ask, why do you still speak of altruistic actions as somehow better than selfish actions? I submit that this is a bias affecting your own ethical thinking, and that its source is either cultural habituation or an innate disposition.
ECL
Mr. Yudkowsky, it is because purported moral facts add nothing to a description of a state of affairs, and have no explanatory or predictive power, that they are not facts at all. Moral statements such as “X-actions are wrong” or “One ought to do X-actions” are simply expressions of preferences, or pro attitudes, toward X-actions. If an agent has these preferences, then those preferences, combined with the belief that a particular action A is an X-action, give the agent a reason to perform A. If an agent does not have a pro attitude toward X-actions, however, then the belief that A is an X-action gives the agent no reason to perform A. So the fact that an action is altruistic gives me no reason to perform it unless I already have a pro attitude toward altruistic actions.

What I can’t conceive of is a reason for adopting one set of pro attitudes (such as pro attitudes toward altruistic actions) over another (such as pro attitudes toward selfish actions), since a set of pro attitudes can only be judged good or bad in light of another, higher set of pro attitudes. So I can’t conceive of an agent-independent reason for acting altruistically; all reasons are necessarily agent-relative, since a reason for performing a particular action can only be explicated in terms of an agent’s already existing preferences. Yet most ethical theories claim to provide people with just such an agent-independent reason for being altruistic. I think this position is latent in your argument: you seem to think that there are good, agent-independent, rationally derivable reasons for being altruistic, and I want to know what those purported reasons are.