‘if it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.’
Singer seems to take a small, unquantified intuition and apply it generally, without regard to scaling or repeatability. The hidden assumption is that ‘comparable moral importance’ is obvious or even meaningful. And what work is the word “moral” doing in that sentence? Why not just “comparable personal importance”? It seems like a mechanism to keep you from noticing that tradeoffs are the actual hard part, and saying “moral” tries to keep the decision out of the realm of logic and choice.
It does not seem to me that the reasons to save the drowning child could be ‘personal’ or self-regarding, and even if they could be, other imperatives would follow from them that are at least seemingly other-regarding, and to which the term ‘moral’ would, I think, appropriately apply.
As for the scaling objection, it is a good one, and one that has appeared in the comment section of my link-post on the EA forum. I will say here what I said there: it seems very counter-intuitive to me to suppose there are no ‘rights’ and ‘wrongs’, only things that are ‘better’ and ‘worse’; and even if this is true, it would sometimes be useful to suppose the former exist, and to distinguish between actions that fall into the former category and those that fall into the latter.
There are LOTS of ways that saving the drowning child could be self-regarding. It could feed your self-image and make you feel good. It could make you look good to others when you talk about it later (or let you avoid criticism for failing, if someone notices you were near the dead kid earlier). It could be that you’re hoping to set a precedent for someone saving you later. Singer’s example encourages you to ignore most of these considerations, but then doesn’t acknowledge that your intuitions about “correct action” are built from these considerations.
I think whether you say “right vs wrong” or “better vs worse”, you’re still comparing on only one dimension, and you still fall prey to the generalization/scaling problem: there are, in real actions, always variations in context, expected results, and resources available that a thought experiment can’t capture, and that will make a given intuition differ across instances. You can’t take an easy choice and then assert it’s universal.
Yes, these are the self-regarding reasons I imagined you had in mind. My point stands, however, that the behaviour is at least seemingly other-regarding, and it is still action to which the term ‘moral’ appropriately applies. The kinds of things you are surmising about here belong to meta-ethics and moral psychology, not to normative and applied ethics. It might well be that I am only motivated by self-interest to act seemingly morally and consistently (crudely, that ‘egoism’ is true), but this says nothing about what that moral system, or consistency, actually requires.
Maybe I’m confused, but I think there’s a crux in there. Sure, you CAN define “moral” as “other-regarding”, and it’s an operational-ish classification. But that doesn’t resolve the problem that actual decisions and behaviors do not make this distinction very sharply. Both practical/self-regarding and other-regarding aspects are part of decision criteria, but even worse, MOST considerations blend them in confusing-to-introspect ways.
You end up with EITHER “a separate moral reasoning domain is unhelpful” OR “morals may be fairly straightforward, but they don’t fully apply to most situations”.