Yes, and which conclusion do you draw from this observation?
I don’t see how defining morality as the popular vote doesn’t entail moral progress being a random walk, and don’t think that that definition provides any kind of answer to most of the questions that we pose within the cultural category ‘moral philosophy’.
I am not sure I understand. Which intuitions have been explicitly stipulated away and where?
There’s implicit uncertainty about how to compare the moral weight of children and adults. Is there not always some number of adults that would be better to save than a fixed number of children? Would you sacrifice ten million adults for one child? There’s some number. People have unique intuitions about the moral weight of children, as opposed to adults, and most utilitarians don’t make any kind of concrete judgments about what the weights should be. If you throw in something like this, then you’re not countering a claim that anyone has actually made.
There are other intuitions that implicitly affect the judgment, like pleasure, social reputation, uncertainty about the assumptions themselves. In particular, it’s hard to suspend your disbelief in a thought experiment. If it really were the case that you knew with certainty that you could live and save two people instead of dying trying to save someone else and failing, then yes, you should pick the action that leads to the outcome with the greatest number of people safe. And finally, these things never actually happen. You seem to champion pragmatism constantly; I don’t see how being able to save a life for $4,000 instead of $100,000 and ignoring quirks about my ability to perceive large scopes and distant localities to come to the conclusion that, yes, in fact, I should save twenty-five lives instead of one life, is counterintuitive, unpragmatic, or morally indefensible. I see thought experiments against utilitarianism as counterintuition porn, pitting a jury-rigged human brain against the most alien, unrealistic situation you can possibly devise.
I don’t see how defining morality as the popular vote doesn’t entail moral progress being a random walk
You imply that the empirically observed (“popular”) morality of different societies at different times is a random walk. Is that a bullet you wish to bite?
The point I had in mind, though, wasn’t defining morality through democracy. If you think that your moral opinions about cats on fire are better than those of some fellows a century or two ago, you have a couple of ways to argue for this.
One would be to claim that moral progress exists and is largely monotonic and inescapable, and thus your morality is better just because it comes later in time. Another would be to claim that you are in some way exceptional (in terms of your position in space and/or time), for example that you can see the Truth better than those other folks because they were deficient in some way.
As you are probably well aware, such claims tend to be controversial and have issues. I was wondering which path you want to take. I’m guessing the moral progress path, am I right?
There’s implicit uncertainty about … other intuitions that implicitly affect the judgment, like pleasure …
Sure, but what has been explicitly stipulated away?
I don’t see how being able to save a life for $4,000 instead of $100,000 … is counterintuitive, unpragmatic, or morally indefensible.
That’s not what we are talking about, is it? We are talking more about immediate, visceral-reaction kinds of actions versus far-off, unconnected, and statistical-averages kinds. In some way it’s an emotion vs intellect sort of a conflict, or, put in different terms, hardwired biological imperatives vs abstract calculations.
You are saying that abstract calculations provide the right answer, but I don’t see it as self-evident: see my post above about putting all your trust into a single maximization.