Similarly, while ‘Bayesian reasoning’ might in principle get you the right answer to ethical questions, it’s not a feasible way for humans to answer them, and it likely won’t help at all.
Maybe I’m missing something, but this analogy seems pretty weak. In general, I suspect that a pretty important factor in our ability to learn effective heuristics without reasoning them out from first principles is that we are consistently given clear feedback on the quality of our actions/decisions. (There’s a good bit on this in Jonah Lehrer’s The Decisive Moment.)
It’s generally pretty obvious whether you’ve managed to catch a baseball, but there’s no equivalent feedback mechanism for making-the-right-moral-decision, so there seems little reason to think that we’ll just stumble onto good heuristics, especially outside contexts in which particular heuristics might have conferred a selection advantage.
Do you have concrete reasons for thinking that Bayesian reasoning “likely won’t help at all” in answering ethical questions such as “what steps we should take to mitigate the effects of global warming?” It seems pretty useful to me.
ethical questions such as “what steps we should take to mitigate the effects of global warming?”
While I don’t often say this, that question doesn’t strike me as an ethical question. It seems to turn entirely on questions of what steps would be most effective at producing the desired effect.
When primitives performed human sacrifice to ensure that the sun would rise tomorrow, they were not mistaken about ethics—they were mistaken about astronomy.
there’s no equivalent feedback mechanism for making-the-right-moral-decision
I disagree—it’s usually pretty obvious. While I usually prefer not to talk in terms of “right moral decisions”, acting in accord with ethics gets you exactly what you’d expect from it. Ethics specifies criteria for determining what one has most reason to do or want. While what that ends up being is still a matter of disagreement, here are a couple of examples:
Consequentialist: do whatever maximizes overall net utility. If you try to make someone feel good and instead make them feel bad, you get immediate feedback as direct and profound as catching a baseball.
Virtue ethics: act as the good man does. If you go around acting in a vicious manner, it’s obvious to everyone around you that you’re nothing like a good person.
While I don’t often say this, that question doesn’t strike me as an ethical question. It seems to turn entirely on questions of what steps would be most effective at producing the desired effect.
Entirely? It depends on things like how we should weigh present vs. future generations, how we should weigh rich vs. poor, and whether we’re working under ethical constraints other than pure utility maximization. All those are ethical questions.
When primitives performed human sacrifice to ensure that the sun would rise tomorrow, they were not mistaken about ethics—they were mistaken about astronomy.
If the probability of the sun rising tomorrow is anything other than a unit step function of the number of humans sacrificed, ethics comes in again. Do you sacrifice victim number 386,264 for an added 0.0001% chance of sunrise? Ethical question.
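To put numbers on that, here is a minimal expected-utility sketch of the marginal-sacrifice question (the 0.0001% and the victim number come from the example above; the utility figures are made-up assumptions):

```python
# Toy expected-utility check for the marginal sacrifice.
# The utility numbers are made-up assumptions, purely for illustration.

def worth_sacrificing(delta_p, u_sunrise, u_life):
    """True iff the expected gain from one more sacrifice
    outweighs the loss of one life."""
    return delta_p * u_sunrise > u_life

delta_p = 0.0001 / 100   # victim 386,264 buys an extra 0.0001% chance of sunrise
u_life = 1.0             # normalize one life to one unit of utility
u_sunrise = 10**9        # assumed value of the sun rising (civilization survives)

print(worth_sacrificing(delta_p, u_sunrise, u_life))  # True with these numbers
# Drop u_sunrise to 10**5 and the answer flips to False: delta_p is a
# question of astronomy, but the utilities are a question of ethics.
```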
Entirely? It depends on things like how we should weigh present vs. future generations, how we should weigh rich vs. poor, and whether we’re working under ethical constraints other than pure utility maximization. All those are ethical questions.
I’m not sure who the ‘we’ here are. Ethical questions are questions about what I should do. I see no reason to ‘weigh’ rich or poor people, or different generations.
There are political questions about what sorts of institutions should be set up, and those questions might concern collectives of people, or whether the poor get to count for more than the rich. But while in some sense ‘what political system should I prefer?’ is an ethical question, the questions relevant to analyzing what institutions to set up are political.
If ethical questions are limited to determining criteria for normative evaluation, then your claim that we receive feedback on ethical issues appears false. We receive feedback on the instrumental questions (e.g. what makes people feel good), not the ethical ones.
On the other hand, adopting my broader sense of what constitutes an ethical question seems to falsify my claim that we do not get feedback on “rightness”. We do, for the reasons you explain.* (Actually, I think your virtue ethics example is weak, but the consequentialist one is enough to make your point.)
I would still claim that ethical feedback is generally weaker than in the baseball case, particularly once you’re thinking about trying to help dispersed groups of individuals with whom you do not have direct contact (e.g. future generations). But my claim that there is no feedback whatsoever was overstated.
Another question: If we define ethics as being just about criteria, is there any reason to think Bayesian reasoning, which is essentially instrumental, should help us reach answers even in principle? (I guess you might be able to make an Aumann-style agreement argument, but it’s not obvious it would work.)
* It looks like we both illegitimately altered our definition of “ethical” halfway through our comments. Mmmm… irony.
EDIT:
[what to do about global warming] seems to turn entirely on questions of what steps would be most effective at producing the desired effect.
It turns pretty seriously on what you think the desired effect is as well. Indeed, much of the post-Stern debate was on exactly that issue.