Whether it’s the right choice is a function of your moral system. Under some moral systems it is, and under some it isn’t. However notice the “everyone knows” part. Everyone does know. Which percentage of the population do you expect to agree that letting the child drown was the right thing to do?
Of course the scenario is ridiculous anyway
Any more than the trolley one? Hypotheticals aren’t known for their realism.
Under some moral systems it is, and under some it isn’t.
Right. And provided some of the latter moral systems are ones endorsed by actual people, it cannot be true that “Everyone knows …”.
Which percentage of the population [...]
Oh, I’m sorry. I’d thought we were having a discussion about ethics, not a popularity contest. What percentage of the population has even heard of utilitarianism? What proportion has heard of it and has a reasonably accurate idea what it is?
Any more than the trolley one?
Nope, ridiculous to a similar extent and in similar ways. This is relevant not because there’s anything wrong with using unrealistic hypothetical questions to explore moral systems, but because there’s something wrong with making a naked appeal to intuition when addressing an unrealistic hypothetical question (that being what entirelyuseless just did). Our intuitions are not calibrated for weird hypothetical situations, and we shouldn’t expect what they tell us about such situations to be very enlightening.
Whether it’s the right choice is a function of your moral system. Under some moral systems it is, and under some it isn’t. However notice the “everyone knows” part. Everyone does know. Which percentage of the population do you expect to agree that letting the child drown was the right thing to do?
A while back, a lot of people would have agreed that setting cats on fire for entertainment was totally cool.
Any more than the trolley one? Hypotheticals aren’t known for their realism.
The idea is that the argument sneaks in intuitions about the situation that have been explicitly stipulated away.
Yes, and which conclusion do you draw from this observation?
I don’t see how defining morality as the popular vote doesn’t entail moral progress being a random walk, and don’t think that that definition provides any kind of answer to most of the questions that we pose within the cultural category ‘moral philosophy’.
I am not sure I understand. Which intuitions have been explicitly stipulated away and where?
There’s implicit uncertainty about how to compare the moral weight of children and adults. Is there not always some number of adults that would be better to save than a fixed number of children? Would you sacrifice ten million adults for one child? There’s some number. People have unique intuitions about the moral weight of children, as opposed to adults, and most utilitarians don’t make any kind of concrete judgments about what the weights should be. If you throw in something like this, then you’re not countering a claim that anyone has actually made.
There are other intuitions that implicitly affect the judgment, like pleasure, social reputation, and uncertainty about the assumptions themselves. In particular, it’s hard to suspend your disbelief in a thought experiment. If it really were the case that you knew with certainty that you could live and save two people instead of dying trying to save someone else and failing, then yes, you should pick the action that leads to the outcome with the greatest number of people safe. And finally, these things never actually happen. You seem to champion pragmatism constantly; I don’t see how being able to save a life for $4,000 instead of $100,000, setting aside quirks in my ability to perceive large scopes and distant localities, and concluding that, yes, in fact, I should save twenty-five lives instead of one, is counterintuitive, unpragmatic, or morally indefensible. I see thought experiments against utilitarianism as counterintuition porn, pitting a jury-rigged human brain against the most alien, unrealistic situation you can possibly devise.
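(For completeness, the arithmetic behind the twenty-five figure, assuming those per-life cost estimates: $100,000 / $4,000 = 25, so the budget that saves one life at the expensive rate saves twenty-five at the cheaper one.)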
I don’t see how defining morality as the popular vote doesn’t entail moral progress being a random walk
You imply that the empirically observed (“popular”) morality of different societies at different times is a random walk. Is that a bullet you wish to bite?
The point I had in mind, though, wasn’t defining morality through democracy. If you think that your moral opinions about cats on fire are better than those of some fellows a century or two ago, you have a couple of ways to argue for this.
One would be to claim that moral progress exists and is largely monotonic and inescapable, thus your morality is better just because it comes later in time. Another would be to claim that you are in some way exceptional (in terms of your position in space and/or time), for example that you can see the Truth better than those other folks because they were deficient in some way.
As you are probably well aware, such claims tend to be controversial and have issues. I was wondering which path you want to take. I’m guessing the moral progress path, am I right?
There’s implicit uncertainty about … other intuitions that implicitly affect the judgment, like pleasure …
Sure, but what has been explicitly stipulated away?
I don’t see how being able to save a life for $4,000 instead of $100,000 … is counterintuitive, unpragmatic, or morally indefensible.
That’s not what we are talking about, is it? We are talking more about immediate, visceral-reaction kinds of actions versus far-off, unconnected, statistical-averages kinds. In some ways it’s an emotion-vs-intellect sort of conflict, or, put in different terms, hardwired biological imperatives vs. abstract calculations.
You are saying that abstract calculations provide the right answer, but I don’t see it as self-evident: see my post above about putting all your trust in a single maximization.