Mr. Yudkowsky,
It is precisely because purported moral facts add nothing to a description of a state of affairs, and have no explanatory or predictive power, that they are not facts at all.
Moral statements such as “X-actions are wrong” or “One ought to do X-actions” are simply expressions of attitudes (con or pro, respectively) toward X-actions. If one has a pro attitude toward X-actions, then that attitude, combined with the belief that a particular action A is an X-action, gives one a reason to perform A. If an agent lacks a pro attitude toward X-actions, however, the belief that A is an X-action gives the agent no reason to perform A.
So the fact that an action is altruistic gives me no reason to perform it unless I already have a pro attitude toward altruistic actions.
What I cannot conceive of is a reason for adopting one set of pro attitudes (say, toward altruistic actions) over another (say, toward selfish actions), since any set of pro attitudes can only be judged good or bad in light of some other, higher-order set of pro attitudes.
So I cannot conceive of an agent-independent reason for acting altruistically; all reasons are necessarily agent-relative, since a reason for performing a particular action can only be explicated in terms of the agent’s already existing preferences. Yet most ethical theories claim to provide just such an agent-independent reason for being altruistic. And I think this position is latent in your argument: you seem to hold that there are good, agent-independent, rationally derivable reasons for being altruistic, and I want to know what those purported reasons are.