I agree that the true PD never happens in human existence, and that’s yet another reason why I’m outraged at using a mathematically flawed decision theory to teach incoming students of rationality that they ought to betray their friends. (C, C) FTW!
(Actually, that would make a nice button.)
But I defend the use of simple models for the sake of understanding problems with mathematical clarity; if you can’t model simple hypothetical things correctly, how would it help to start by trying to model complex real things? In real life, no one is an economic agent; in real life, no laws except basic physics and theorems therefrom have universal force; in real life, an asteroid can always strike at any time; in real life, we can never use Bayesian reasoning… but knowing a bit of math still helps, even if it never applies perfectly above the level of quarks.
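For concreteness, here is a minimal sketch of the one-shot Prisoner’s Dilemma payoff structure under discussion. The payoff values are illustrative, chosen only to satisfy the standard T > R > P > S ordering:

```python
# One-shot Prisoner's Dilemma with the standard ordering T > R > P > S.
# Payoffs are (row player, column player); the exact values are illustrative.
PAYOFFS = {
    ("C", "C"): (3, 3),  # R: reward for mutual cooperation
    ("C", "D"): (0, 5),  # S, T: sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # P: punishment for mutual defection
}

for other in ("C", "D"):
    mine_if_c = PAYOFFS[("C", other)][0]
    mine_if_d = PAYOFFS[("D", other)][0]
    print(f"Opponent plays {other}: I get {mine_if_c} for C, {mine_if_d} for D")
# D is strictly better for me whatever the opponent does (dominance), yet
# (C, C) pays each player 3 while (D, D) pays each only 1 -- the tension
# behind "(C, C) FTW".
```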
Agree completely. I wasn’t advocating ignorance or promoting complex models over simple ones a priori. Only well-fitting and robust simple models over poorly fitting and brittle ones.
There are many small daily problems I can’t imagine addressing with math, and most people just cruise on intuition most of the time. Where we set the threshold for using math concepts seems to vary a lot with cognitive ability and our willingness to break out the graphing calculator when it might be of use.
It might be useful to lay down some psychological triggers so that we are reminded to be rational in situations where we too often operate intuitively. Conversely, a systematic account of things that are too trivial to rationalize and best left to our unconscious would be helpful. I’m not sure either sort of rule would be generalizable beyond the individual mind.
Conversely, a systematic account of things that are too trivial to rationalize and best left to our unconscious would be helpful.
This is only helpful if the subconscious reaction is reasonably good. Finding a way to improve the heuristics applied by the subconscious mind would be ideal for this type of thing.
Well, people do do better on the Wason selection task when it’s presented in terms of ages and drinks than in terms of letters and numbers.
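For anyone who hasn’t seen the task: a rule of the form ‘if P then Q’ can only be falsified by a card showing P or not-Q. A minimal sketch, using the standard illustrative card sets, shows that the abstract and drinking-age framings are logically identical even though people perform very differently on them:

```python
# Wason selection task: which cards must you flip to test "if P then Q"?
# Only a card that could pair P with not-Q can falsify the rule.

def cards_to_flip(cards, is_p, is_not_q):
    """A visible face must be flipped if it shows P (its back might be
    not-Q) or shows not-Q (its back might be P)."""
    return [c for c in cards if is_p(c) or is_not_q(c)]

# Abstract framing: "if a card has a vowel on one side, it has an even
# number on the other". Most people wrongly flip the even number.
print(cards_to_flip(["A", "K", "4", "7"],
                    is_p=lambda c: c in "AEIOU",
                    is_not_q=lambda c: c.isdigit() and int(c) % 2 == 1))
# -> ['A', '7']

# Social framing: "if someone is drinking beer, they must be over 18".
# Same logic, but here nearly everyone checks the beer and the 16-year-old.
print(cards_to_flip([("beer", None), ("coke", None), (None, 25), (None, 16)],
                    is_p=lambda c: c[0] == "beer",
                    is_not_q=lambda c: c[1] is not None and c[1] < 18))
# -> [('beer', None), (None, 16)]
```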
But we can use judgement, a faculty we have been developing for millennia that lets us do amazing things that would take far more effort to work out mathematically. While it’s possible that you could catch a baseball using only some calculus and an understanding of Newtonian physics, it’s not a feasible way for humans to do it, and ‘knowing some math’ is not likely to make you any better at it. Similarly, while ‘Bayesian reasoning’ might in principle get you the right answer to ethical questions, it’s not a feasible way for humans to do it, and it will likely not help at all.
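To make the contrast concrete, here is roughly what the ‘calculus and Newtonian physics’ route would involve. This is a minimal, drag-free sketch with invented initial conditions; it is exactly the computation no fielder actually performs:

```python
import math

# Newtonian route to catching a fly ball: solve the projectile equations
# for where and when the ball lands (no air drag; numbers are invented).
g = 9.81                   # gravitational acceleration, m/s^2
v0 = 30.0                  # launch speed, m/s (invented)
angle = math.radians(45)   # launch angle (invented)

vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
t_flight = 2 * vy / g      # time until the ball returns to launch height
x_landing = vx * t_flight  # horizontal distance travelled

print(f"Ball lands {x_landing:.1f} m away after {t_flight:.1f} s")
# A fielder would also need the launch speed and angle to high precision,
# plus drag and spin corrections -- which is reportedly why humans rely on
# simpler cues, such as running so the ball's angle of gaze stays constant.
```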
Similarly, while ‘Bayesian reasoning’ might in principle get you the right answer to ethical questions, it’s not a feasible way for humans to do it, and it will likely not help at all.
Maybe I’m missing something, but this analogy seems pretty weak. In general, I suspect that a pretty important factor in our ability to learn effective heuristics without reasoning them out from first principles is that we are consistently given clear feedback on the quality of our actions/decisions. (There’s a good bit on this in Jonah Lehrer’s The Decisive Moment.)
It’s generally pretty obvious whether you’ve managed to catch a baseball, but there’s no equivalent feedback mechanism for making-the-right-moral-decision, so there seems little reason to think that we’ll just stumble onto good heuristics, especially outside contexts in which particular heuristics might have conferred a selection advantage.
Do you have concrete reasons for thinking that Bayesian reasoning will “likely not help at all” in answering ethical questions such as “What steps should we take to mitigate the effects of global warming?” It seems pretty useful to me.
ethical questions such as “What steps should we take to mitigate the effects of global warming?”
While I don’t often say this, that question doesn’t strike me as an ethical question. It seems to turn entirely on questions of what steps would be most effective at producing the desired effect.
When primitives performed human sacrifice to ensure that the sun would rise tomorrow, they were not mistaken about ethics—they were mistaken about astronomy.
there’s no equivalent feedback mechanism for making-the-right-moral-decision
I disagree—it’s usually pretty obvious. While I usually prefer not to talk in terms of “right moral decisions”, acting in accord with ethics gets you exactly what you’d expect from it. Ethics specifies criteria for determining what one has most reason to do or want. While what that ends up being is still a matter of disagreement, here are a couple of examples:
consequentialist: do whatever maximizes overall net utility. If you do something to make someone feel good, and you make them feel bad instead, you get immediate feedback as direct and profound as catching a baseball (see the sketch after this list).
virtue ethics: act as the good man does. If you go around acting in a vicious manner, it’s obvious to all around that you’re nothing like a good person.
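As a toy illustration of the consequentialist criterion in the first item above (all utility numbers invented), ‘do whatever maximizes overall net utility’ amounts to an argmax over candidate actions:

```python
# Toy consequentialist decision rule: sum everyone's utility for each
# candidate action and pick the maximum. All utilities are invented.
outcomes = {
    "tell a comforting truth": {"alice": +2, "bob": +1},
    "tell a flattering lie":   {"alice": +3, "bob": -4},
    "say nothing":             {"alice":  0, "bob":  0},
}

best = max(outcomes, key=lambda a: sum(outcomes[a].values()))
print(best)  # -> "tell a comforting truth" (net +3 beats -1 and 0)
```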
While I don’t often say this, that question doesn’t strike me as an ethical question. It seems to turn entirely on questions of what steps would be most effective at producing the desired effect.
Entirely? It depends on things like how we should weigh the present vs future generations, how we should weigh rich vs poor, whether we’re working under ethical constraints other than pure utility maximization. All those are ethical questions.
When primitives performed human sacrifice to ensure that the sun would rise tomorrow, they were not mistaken about ethics—they were mistaken about astronomy.
If the probability of the sun rising tomorrow is anything other than a unit step function of the number of humans sacrificed, ethics comes in again. Do you sacrifice victim number 386,264 for an added 0.0001% chance of sunrise? Ethical question.
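To spell out the arithmetic (every quantity below is invented for illustration), a naive expected-utility reading says the marginal sacrifice is ‘worth it’ only when the probability gain times the value of sunrise exceeds the cost of a life. Whether those quantities are even commensurable is the ethical question:

```python
# Naive expected-value framing of the marginal sacrifice (invented numbers).
p_gain = 0.000001          # +0.0001% chance of sunrise per extra victim
value_of_sunrise = 1e12    # utility of the sun rising (invented)
cost_of_a_life = 1e7       # utility cost of one sacrifice (invented)

marginal_benefit = p_gain * value_of_sunrise
print(marginal_benefit > cost_of_a_life)  # False with these numbers -- but
# the real dispute is whether these quantities trade off against each other
# at all, and that is an ethical question, not an astronomical one.
```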
Entirely? It depends on things like how we should weigh the present vs future generations, how we should weigh rich vs poor, whether we’re working under ethical constraints other than pure utility maximization. All those are ethical questions.
I’m not sure who the ‘we’ here is. Ethical questions are questions about what I should do. I see no reason to ‘weigh’ rich or poor people, or different generations.
There are political questions about what sorts of institutions should be set up, and those might concern collectives of people, or whether the poor should count for more than the rich. But while ‘what political system should I prefer’ is in some sense an ethical question, the questions relevant to analyzing what institutions to set up are political.
If ethical questions are limited to determining criteria for normative evaluation, then your claim that we receive feedback on ethical issues appears false. We receive feedback on the instrumental questions (e.g. what makes people feel good), not the ethical ones.
On the other hand, adopting my broader sense of what constitutes an ethical question seems to falsify my claim that we do not get feedback on “rightness”. We do, for the reasons you explain.* (Actually, I think your virtue ethics example is weak, but the consequentialist one is enough to make your point.)
I would still claim that ethical feedback is generally weaker than in the baseball case, particularly once you’re thinking about trying to help dispersed groups of individuals with whom you do not have direct contact (e.g. future generations). But my claim that there is no feedback whatsoever was overstated.
Another question: If we define ethics as being just about criteria, is there any reason to think Bayesian reasoning, which is essentially instrumental, should help us reach answers even in principle? (I guess you might be able to make an Aumann-style agreement argument, but it’s not obvious it would work.)
* It looks like we both illegitimately altered our definition of “ethical” half way through our comments. Mmmm… irony.
EDIT:
[what to do about global warming] seems to turn entirely on questions of what steps would be most effective at producing the desired effect.
It turns pretty seriously on what you think the desired effect is as well. Indeed, much of the post-Stern debate was on exactly that issue.
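Since much of that debate reduced to how heavily to discount harms to future generations, here is a minimal illustration. The two rates are roughly those associated with the Stern Review and its critics, and are used purely as examples:

```python
# Present value of climate damage occurring 100 years from now, under two
# discount rates roughly representative of the post-Stern debate.
damage = 100.0   # damage in 100 years, in arbitrary units
years = 100

for label, rate in [("low (Stern-style)", 0.014), ("high (critics)", 0.055)]:
    pv = damage / (1 + rate) ** years
    print(f"{label:18s} rate {rate:.1%}: present value = {pv:.2f}")
# The two valuations differ by more than an order of magnitude, so what the
# "desired effect" is really does hinge on this ethical parameter.
```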