There is no need for morality to be grounded in emotional effects alone. After all, there is also a part of you that thinks that there is, or might be, something “horrible” about this, and that part also has input into your decision-making process.
Similarly, I’d be wary of your point about utility maximisation. You’re not really a simple utility-maximising agent, so it’s not like there’s any simple concept that corresponds to “your utility”. Also, the concept of maximising “utility generally” doesn’t really make sense; there is no canonical way of adding your own utility function together with everyone else’s.
Nonetheless, if you were to cash out your concepts of what things are worth and how things ought to be, then in principle it should be possible to turn them into a utility function. However, there is no a priori reason that that utility function has to be defined only over your own feelings and emotions.
If you could obtain the altruistic high without doing any of the actual altruism, would it still be just as worthwhile?
The high is a mechanism by which values are established. Reward or punishment in the past, even if absent in the present, is sufficient to make you value something now. Because of our limited memories, introspection is pretty useless for figuring out whether you value something because of the high or not.
If you have the values already and you don’t have any reason to believe the values themselves could be problematic, does it matter how you got them?
It may be that an altruistic high in the past has led you to value altruism in the present, but what matters in the present is whether you value the altruism itself over and above the high.