If it’s true that moral decisions cannot be made on a rational basis, then it should be impossible for me to find an example of a moral decision which was made rationally, right?
All decisions are in a sense “moral decisions”. You should distinguish the process of decision-making from the question of figuring out your values. You can’t define values “on a rational basis”, but you can use a rational process to figure out what your values actually are, and to construct a plan for achieving those values (based, in particular, on an epistemically rational understanding of the world).
I think a lot of confusion here comes from people lumping together ultimate and intermediate goals in their definitions of morality. Ultimate goals are parts of your utility function: what you really want. As you said, you can’t derive these rationally; they’re just there. Intermediate goals, on the other hand, are mental shortcuts, things that you want as a proxy for some deeper desire. An example would be the goal that violent criminals get thrown in jail or otherwise separated from society; the ultimate goal that this serves is our desire to avoid things like being savagely beaten by random ne’er-do-wells when we go to the 7-11 to buy delicious melon bread. But if there were a more effective, or cheaper, or more humane way to prevent violent crime, rationality could help you figure out that you should prefer it.
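To make the proxy point concrete, here’s a toy sketch in Python. The intervention names and all of the numbers are made up; the only point is the structure: hold the ultimate goal fixed (here, minimizing expected harm at acceptable cost) and let rationality rank the intermediate goals by how well they serve it.

```python
# Toy sketch (made-up numbers): given a fixed ultimate goal -- minimizing
# expected harm from violent crime at acceptable cost -- we can compare
# intermediate goals (interventions) as proxies, even though the ultimate
# goal itself isn't derived rationally.

interventions = {
    # hypothetical intervention: (expected violent crimes prevented per year, annual cost in $)
    "incarceration":      (1000, 50_000_000),
    "rehabilitation":     (1200, 30_000_000),
    "community_programs": ( 900, 10_000_000),
}

HARM_PER_CRIME = 100_000  # assumed dollar-equivalent disvalue of one violent crime

def utility(prevented: int, cost: float) -> float:
    """Expected utility of an intervention under the assumed ultimate goal."""
    return prevented * HARM_PER_CRIME - cost

best = max(interventions, key=lambda name: utility(*interventions[name]))
print(best)  # -> "rehabilitation" under these made-up numbers
```

Swap in different (equally made-up) numbers and the preferred intermediate goal changes, while the ultimate goal stays put; that’s the whole distinction.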
Rationality can and should define your intermediate goals, but can’t define your ultimate goals. But when most people talk about morality, they make no distinction between the two; once you do distinguish them, the question tends to dissolve. Just look at all the flak that Sam Harris is getting for saying that science can answer moral questions. What he’s really saying is that science can help us determine our utility functions and figure out how to optimize them. The criticism he gets would probably evaporate if he would taboo “morality” for a little while, but he gets way more media attention by talking this way.
I’m not sure I follow. Are you using “values” in the sense of “terminal values”? Or “instrumental values”? Or perhaps something else?