Are you saying that because people are affected by a bias, a moral theory that correctly predicts their feelings must be affected by the bias in the same way?
This would preclude (or falsify) many actual moral theories on the grounds that most people find them unintuitive or simply wrong. I think most moral philosophers aren’t looking for this kind of theory, because if they were, they would agree much more by now: it shouldn’t take thousands of years to empirically discover how average people feel about proposed moral problems!
No. The feelings are not a truth-seeking device, so bias is not applicable: they are part of the terrain.
> it shouldn’t take thousands of years to empirically discover how average people feel about proposed moral problems!
It is not as if they were working on it every day for thousands of years. In the Christian period, for instance, what God says about morals mattered more than how people feel about it. Fairly big gaps. There is a classical era and a modern era; the two add up to a few hundred years, with all sorts of gaps.
IMHO the core issue is that our moral feelings are inconsistent, and this is why we need philosophy. If someone murders someone in a fit of rage, he still feels that most murders committed by most people are wrong, and maybe he regrets his own later on, but in that moment he did not feel it. Even public opinion can have such wide mood swings that you cannot just reduce morality to a popularity contest; yet in essence it is one, only a more abstract kind of popularity contest. This is why IMHO philosophy is trying to algorithmize moral feelings.
So is philosophy trying to describe moral feelings, inconsistent and biased as they are? Or is it trying to propose explicit moral rules and convince people to follow them even when they go against their feelings? Or both?
If moral philosophers are affected by presentation bias, that means they aren’t reasoning according to explicit rules. Are they trying to predict the moral feelings of others (who? the average person?)
If their meta-level reasoning, their actual job, hasn’t told them which rules to follow, or has told them not to follow rules, why should they follow rules?
By “rules” I meant what the parent comment referred to as trying to “algorithmize” moral feelings.
Moral philosophers are presumably trying to answer some class of questions. These may be “what is the morally right choice?” or “what moral choice do people actually make?” or some other thing. But whatever it is, they should be consistent. If a philosopher might give a different answer every time the same question is asked of them, then surely they can’t accomplish anything useful. And to be consistent, they must follow rules, i.e. have a deterministic decision process.
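To make “deterministic decision process” concrete, here is a minimal Python sketch; the Dilemma type, the judge() function, and its toy rule are all hypothetical, invented for illustration, not anything from the article:

    # A minimal sketch of what a "deterministic decision process" means here.
    # The Dilemma type, judge(), and its toy rule are made up for
    # illustration; they are not from the article or any real method.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Dilemma:
        """The morally relevant content of a problem, presentation stripped."""
        at_risk: int          # people at risk if nothing is done
        saved_by_action: int  # people saved if the action is taken

    def judge(d: Dilemma) -> str:
        # A pure function: the same Dilemma always yields the same verdict.
        # Irrelevant context (time of day, interviewer's accent, phrasing)
        # never reaches this function, so it cannot influence the answer.
        # The rule itself is a stand-in; the point is only determinism.
        return "act" if d.saved_by_action > 0 else "abstain"

    # Asking the same question twice, in any context, gives the same answer:
    assert judge(Dilemma(600, 200)) == judge(Dilemma(600, 200))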
These rules may not be explicitly known to the philosophers themselves, but if they are in fact consistent, other people could study the answers they give and deduce these rules. The problem presented by the OP is that they are in fact giving inconsistent answers; either that, or they all happen to disagree with one another in just the way that presentation bias would predict in this case.
A possible objection is that the presentation is an input which is allowed to affect the (correct) response. But every problem statement has some irrelevant context. No one would argue that a moral problem might have different answers between 2 and 3 AM, or that the solution to a moral problem should depend on the accent of the interviewer. And to understand what problem is actually being posed (i.e. to correctly pose the same problem to different people), we need to know what is and isn’t relevant.
In this case, the philosophers act as if the choice of phrasing “200 of 600 live” vs. “400 of 600 die” is relevant to the problem. If we accepted this conclusion, we might well ask ourselves what else is relevant. Maybe one shouldn’t be a consequentialist between 2 and 3 AM?
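To illustrate why the two phrasings pose the same problem, here is a toy normalization; the normalize() function is hypothetical, but the numbers are the ones from the thread, assuming the usual reading that everyone not saved dies:

    # A toy normalization showing the two framings describe one outcome.
    # normalize() is hypothetical; only the 200/400/600 figures come from
    # the thread.
    def normalize(total: int, count: int, framing: str) -> tuple[int, int]:
        """Return (lives_saved, lives_lost) regardless of framing."""
        if framing == "live":
            return count, total - count
        if framing == "die":
            return total - count, count
        raise ValueError(f"unknown framing: {framing}")

    # "200 of 600 live" and "400 of 600 die" normalize to the same outcome,
    # so a consistent evaluator must return the same answer for both:
    assert normalize(600, 200, "live") == normalize(600, 400, "die") == (200, 400)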
You haven’t shown that they are producing inconsistent theories in their published work. The result only shows that, like scientists, individual philosophers can’t live up to their own cognitive standards in certain situations.
This is true. But it is significant evidence that they are inconsistent in their work too, absent an objective standard by which their work can be judged.
It can be hard to find a formalization of these empirical systems, though, especially since formalizing is going to be very complicated and muddy in a lot of cases. That will cover a lot of the ‘… and therefore, the right answer emerges’ steps. Not all, to be sure, but a fair amount.