I would assume that detecting a framing bias, such as “200 of 600 people will be saved” vs. “400 of 600 people will die”, is elementary enough that an aspiring moral philosopher ought to learn to recognize and avoid it before she can be allowed to practice in the field. Otherwise all her research is very much suspect.
Being able to detect a bias and actually being able to circumvent it are two different skills.
Realize what’s occurring here, though. It’s not that individual philosophers are being asked the question both ways and answering differently in each case. That would be an egregious error, and one would hope philosophical training would prevent it. What’s actually happening is that philosophers presented with the “save” formulation (but not the “die” formulation) react differently from philosophers presented with the “die” formulation (but not the “save” formulation). This is still an error, but an extremely insidious one, and one that is hard to correct for. I’m perfectly aware of the error, and I know I wouldn’t give conflicting responses if presented with both options, but I’m also reasonably confident that I would in fact make the error if presented with just one option: my responses would quite probably differ from those in the counterfactual where I was only given the other option. In either case, if you subsequently presented me with the second framing, I would immediately recognize that I ought to give the same answer I gave for the first framing, but what that answer is would, I anticipate, be shaped by the initial framing.
I have no idea how to avoid that sort of error, beyond basing my answers on some artificially created algorithm rather than my moral judgment. I mean, I could, when presented with the “save” formulation, think to myself “What would I say in the ‘die’ formulation?” before coming up with a response, but that procedure is still susceptible to framing effects. The answer I come up with might not be the same as what I would have said if presented with the “die” formulation in the first place.
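To make the “artificially created algorithm” idea a bit more concrete, here is the sort of thing I have in mind, purely as a toy sketch (the names are invented for illustration and nothing more): reduce either phrasing to one canonical description of the outcome before any judgment is applied, so the judging step never sees the “save”/“die” wording at all.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    """Canonical, framing-free description of a result: how many live, how many die."""
    survivors: int
    deaths: int

def from_save_framing(total: int, saved: int) -> Outcome:
    # "Of 600 people, 200 will be saved."
    return Outcome(survivors=saved, deaths=total - saved)

def from_die_framing(total: int, die: int) -> Outcome:
    # "Of 600 people, 400 will die."
    return Outcome(survivors=total - die, deaths=die)

# Both phrasings of the same scenario collapse to the identical canonical outcome,
# so any rule applied to Outcome values cannot be swayed by the wording.
assert from_save_framing(600, 200) == from_die_framing(600, 400)
```

Of course this only relocates the problem: whatever rule you then apply to those canonical outcomes is the artificial algorithm, and choosing that rule is where the real moral work happens.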
Thanks, that makes sense.
Do you think that this is what utilitarianism is, or ought to be?
So, do you think that, absent a formal algorithm, a (properly trained) philosopher presented with a “save” formulation should immediately detect the framing effect and recast the problem in the “die” formulation (or some framing-neutral formulation) before even attempting to solve it, so as to avoid anchoring and other biases? If so, has this approach been advocated by any moral philosopher you know of?
Utilitarianism does offer the possibility of a precise, algorithmic approach to morality, but we don’t have anything close to that as of now. People disagree about what “utility” is, how it should be measured, and how it should be aggregated. And of course, even if they did agree, actually performing the calculation in most realistic cases would require powers of prediction and computation well beyond our abilities.
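For the toy 200-of-600 case, the calculation itself is trivial once you stipulate a utility measure; the stipulation is exactly the contested part. Just as an illustration of my own (taking “expected number of survivors” to stand in for utility, and pairing the sure option with the gamble the standard version of the problem uses), the two options come out identical however they are worded:

```python
# Illustration only: "expected number of survivors" is used as the utility measure,
# which is precisely the kind of stipulation people disagree about.

def expected_survivors(lottery):
    """lottery: list of (probability, number of survivors) pairs."""
    return sum(p * n for p, n in lottery)

sure_option = [(1.0, 200)]          # 200 of 600 saved for certain (400 die)
gamble = [(1/3, 600), (2/3, 0)]     # 1/3 chance all 600 saved, 2/3 chance none

print(expected_survivors(sure_option))  # 200.0
print(expected_survivors(gamble))       # 200.0 -- equal under this measure,
                                        # however the options are worded
```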
The reason I used the phrase “artificially created”, though, is that I think any attempt at systematization, utilitarianism included, will end up doing considerable violence to our moral intuitions. Our moral sensibilities are the product of a pretty hodge-podge process of evolution and cultural assimilation, so I don’t think there’s any reason to expect them to be neatly systematizable. One response is that the benefits of having a system (such as bias mitigation) are strong enough to justify biting the bullet, but I’m not sure that’s the right way to think about morality, especially if you’re a moral realist. In science, it might often be worthwhile using a simplified model even though you know there is a cost in terms of accuracy. In moral reasoning, though, it seems weird to say “I know this model doesn’t always correctly distinguish right from wrong, but its simplicity and precision outweigh that cost”.
Something like this might be useful, but I’m not at all confident it would work. Sounds like another research project for the Harvard Moral Psychology Research Lab. I’m not aware of any moral philosopher proposing something along these lines, but I’m not extremely familiar with that literature. I do philosophy of science, not moral philosophy.