There are LOTS of ways that saving the drowning child could be self-regarding. It could be part of your self-image and feeling good. It could be to make you look good to others when talking about it later (or to avoid criticism for failing when someone notices you were near the dead kid earlier). It could be you’re hoping to set a precedent for someone saving you later. Singer’s example encourages you to ignore most of these considerations, but then doesn’t acknowledge that your intuitions about “correct action” are built from these considerations.
I think whether you say “right vs wrong” or “better vs worse”, you’re still comparing on only one dimension, and you still fall prey to the generalization/scaling problem: real actions always involve variations in context, expected results, and available resources that a thought experiment can’t capture, and those variations will make a given intuition differ across instances. You can’t take one easy choice and then assert that it generalizes universally.
Yes, these are the self-regarding reasons I imagined you had in mind. My point stands, however, that the behaviour is at least seemingly other-regarding, and it is still action to which the term ‘moral’ appropriately applies. The kinds of things you are surmising about here belong to the realm of meta-ethics and moral psychology, not normative and applied ethics. It might well be that I am only motivated by self-interest to act seemingly morally in accordance with consistency (crudely, that ‘egoism’ is true), but this says nothing as to what this moral system, or consistency itself, requires.
Maybe I’m confused, but I think there’s a crux in there. Sure, you CAN define “moral” as “other-regarding”, and it’s an operational-ish classification. But that doesn’t resolve the problem that actual decisions and behaviors do not make this distinction very sharply. Both practical/self-regarding and other-regarding aspects are part of decision criteria, but even worse, MOST considerations blend them in confusing-to-introspect ways.
You end up with EITHER “separate moral reasoning domain is unhelpful” or “morals may be fairly straightforward, but they don’t fully apply to most situations”.