Philosophy professors fail on basic philosophy problems
Imagine someone finding out that “Physics professors fail on basic physics problems”. This, of course, would never happen. To become a physicist in academia, one has to (among a million other things) demonstrate proficiency on far harder problems than that.
Philosophy professors, however, are a different story. Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers. This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.
Abstract:
We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.
Some quotes (emphasis mine):
When scenario pairs were presented in order AB, participants responded differently than when the same scenario pairs were presented in order BA, and the philosophers showed no less of a shift than did the comparison groups, across several types of scenario.
[...] we could find no level of philosophical expertise that reduced the size of the order effects or the framing effects on judgments of specific cases. Across the board, professional philosophers (94% with PhD’s) showed about the same size order and framing effects as similarly educated non-philosophers. Nor were order effects and framing effects reduced by assignment to a condition enforcing a delay before responding and encouraging participants to reflect on “different variants of the scenario or different ways of describing the case”. Nor were order effects any smaller for the majority of philosopher participants reporting antecedent familiarity with the issues. Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation. Nor were order effects any smaller for the minority of philosopher participants reporting that before participating in our experiment they had stable views about the issues under investigation.
I am confused… I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily… What is going on?
I was under the impression that the research into biases by people like Kahneman and Tversky generally found that eliminating them was incredibly hard, and that expertise, and even familiarity with the biases in question generally didn’t help at all. So this is not a particularly surprising result; what would be more interesting is if they had found anything that actually does reduce the effect of the biases.
Overcoming these biases is very easy if you have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn’t care about.
Mathematicians aren’t biased by being told “I colored 200 of 600 balls black” vs. “I colored all but 400 of 600 balls black”, because the question “how to color the most balls” has a correct answer in the model used. This is true even if the model is unique to the mathematician answering the question: the most important thing is consistency.
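To make the point concrete, here is a trivial check (a Python sketch of my own, not anything from the study) that the two descriptions pick out the same coloring:

    TOTAL_BALLS = 600

    framing_a = 200                # "I colored 200 of 600 balls black"
    framing_b = TOTAL_BALLS - 400  # "I colored all but 400 of 600 balls black"

    # Both framings denote the same number of black balls, so any rule that
    # depends only on the counts must give the same answer for both.
    assert framing_a == framing_b == 200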
If a moral theory can’t prove the correctness of an answer to a very simple problem—a choice between just two alternatives, trading off clearly morally significant stakes (lives), without any complications (e.g. the different people who may die don’t have any distinguishing features), then it probably doesn’t give clear answers to most other problems either, so what use is it?
If a moral theory can’t be proved correct in itself, what use is it? Given that theories are tested against intuition, and that no theory has been shown to be completely satisfactory, it makes sense to use intuition directly.
Moral theories predict feelings, mathematical theories predict different things. Moral philosophy assumes you already know genocide is wrong and it tries to figure out how your subconscious generates this feeling: http://lesswrong.com/lw/m8y/dissolving_philosophy/
Are you saying that because people are affected by a bias, a moral theory that correctly predicts their feelings must be affected by the bias in the same way?
This would preclude (or falsify) many actual moral theories on the grounds that most people find them un-intuitive or simply wrong. I think most moral philosophers aren’t looking for this kind of theory, because if they were, they would agree much more by now: it shouldn’t take thousands of years to empirically discover how average people feel about proposed moral problems!
No—the feelings are not a truth-seeking device so bias is not applicable: they are part of the terrain.
It is not as though they were working on it every day for thousands of years. In the Christian period, for instance, it mattered more what God says about morals than how people feel about them. Fairly big gaps. There is a classical era and a modern era; the two add up to a few hundred years, with all sorts of gaps.
IMHO the core issue is that our moral feelings are inconsistent, and this is why we need philosophy. If someone murders someone in a fit of rage, he still feels that most murders committed by most people are wrong, and maybe he regrets his own later on, but in that moment he did not feel it. Even public opinion can have such wide mood swings that you cannot just reduce morality to a popularity contest—yet in essence it is one, though more of an abstract popularity contest. This is why, IMHO, philosophy is trying to algorithmize moral feelings.
So is philosophy trying to describe moral feelings, inconsistent and biased as they are? Or is it trying to propose explicit moral rules and convince people to follow them even when they go against their feelings? Or both?
If moral philosophers are affected by presentation bias, that means they aren’t reasoning according to explicit rules. Are they trying to predict the moral feelings of others (who? the average person?)
If their meta-level reasoning, their actual job, hasn’t told them which rules to follow, or has told them not to follow rules, why should they follow rules?
By “rules” I meant what the parent comment referred to as trying to “algorithmize” moral feelings.
Moral philosophers are presumably trying to answer some class of questions. These may be “what is the morally right choice?” or “what moral choice do people actually make?” or some other thing. But whatever it is, they should be consistent. If a philosopher might give a different answer every time the same question is asked of them, then surely they can’t accomplish anything useful. And to be consistent, they must follow rules, i.e. have a deterministic decision process.
These rules may not be explicitly known to themselves, but if they are in fact consistent, other people could study the answers they give and deduce these rules. The problem presented by the OP is that they are in fact giving inconsistent answers; either that, or they all happen to disagree with one another in just the way that the presentation bias would predict in this case.
A possible objection is that the presentation is an input which is allowed to affect the (correct) response. But every problem statement has some irrelevant context. No one would argue that a moral problem might have different answers between 2 and 3 AM, or that the solution to a moral problem should depend on the accent of the interviewer. And to understand what the problem being posed actually is (i.e. to correctly pose the same problem to different people), we need to know what is and isn’t relevant.
In this case, the philosophers act as if the choice of phrasing “200 of 600 live” vs. “400 of 600 die” is relevant to the problem. If we accepted this conclusion, we might well ask ourselves what else is relevant. Maybe one shouldn’t be a consequentialist between 2 and 3 AM?
You haven’t shown that they are producing inconsistent theories in their published work. The result only shows that, like scientists, individual philosophers can’t live up to their own cognitive standards in certain situations.
This is true. But it is significant evidence that they are inconsistent in their work too, absent an objective standard by which their work can be judged.
It can be hard to find a formalization of the empirical systems, though. Especially since formalizing is going to be very complicated and muddy in a lot of cases. That’ll cover a lot of ‘… and therefore, the right answer emerges’. Not all, to be sure, but a fair amount.
No. This is what theories of moral psychology do. Philosophical ethicists do not consider themselves to be in the same business.
I would assume that detecting the danger of the framing bias, such as “200 of 600 people will be saved” vs “400 of 600 people will die”, is elementary enough, and so is something an aspiring moral philosopher ought to learn to recognize and avoid before she can be allowed to practice in the field. Otherwise all their research is very much suspect.
Being able to detect a bias and actually being able to circumvent it are two different skills.
Realize what’s occurring here, though. It’s not that individual philosophers are being asked the question both ways and are answering differently in each case. That would be an egregious error that one would hope philosophical training would allay. What’s actually happening is that when philosophers are presented with the “save” formulation (but not the “die” formulation) they react differently than when they are presented with the “die” formulation (but not the “save” formulation). This is an error, but also an extremely insidious error, and one that is hard to correct for. I mean, I’m perfectly aware of the error, I know I wouldn’t give conflicting responses if presented with both options, but I am also reasonably confident that I would in fact make the error if presented with just one option. My responses in that case would quite probably be different than in the counterfactual where I was only provided with the other option. In each case, if you subsequently presented me with the second framing, I would immediately recognize that I ought to give the same answer as I gave for the first framing, but what that answer is would, I anticipate, be impacted by the initial framing.
I have no idea how to avoid that sort of error, beyond basing my answers on some artificially created algorithm rather than my moral judgment. I mean, I could, when presented with the “save” formulation, think to myself “What would I say in the ‘die’ formulation?” before coming up with a response, but that procedure is still susceptible to framing effects. The answer I come up with might not be the same as what I would have said if presented with the “die” formulation in the first place.
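For concreteness, here is a toy sketch (my own Python, with a hypothetical scenario encoding) of what such an “artificially created algorithm” could look like: every presentation is first reduced to a canonical description, and only then is a fixed rule applied, so the verdict cannot depend on whether the case arrived in the “save” framing or the “die” framing.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Outcome:
        """Canonical description of one option: how many are at risk, how many survive."""
        total: int
        saved: int

    def canonicalize(total: int, saved: int | None = None, died: int | None = None) -> Outcome:
        # Accept either framing ("saved" or "died") and return the same canonical form.
        if saved is None:
            saved = total - died
        return Outcome(total=total, saved=saved)

    def judge(option_a: Outcome, option_b: Outcome) -> Outcome:
        # A fixed toy rule: prefer the option with more survivors.  Any rule
        # defined only on Outcome objects is framing-invariant by construction.
        return max(option_a, option_b, key=lambda o: o.saved)

    # The "save" framing and the "die" framing of the same option canonicalize
    # to the same object, so judge() cannot tell them apart.
    assert canonicalize(600, saved=200) == canonicalize(600, died=400)

Whether anything like this captures what moral judgment should be doing is exactly what is in dispute; the sketch only shows that framing-invariance is cheap once you commit to an explicit representation.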
Thanks, that makes sense.
Do you think that this is what utilitarianism is, or ought to be?
So, do you think that, absent a formal algorithm, when presented with a “save” formulation, a (properly trained) philosopher should immediately detect the framing effect, recast the problem in the “die” formulation (or some alternative framing-free formulation), all before even attempting to solve the problem, to avoid anchoring and other biases? If so, has this approach been advocated by a moral philosopher you know of?
Utilitarianism does offer the possibility of a precise, algorithmic approach to morality, but we don’t have anything close to that as of now. People disagree about what “utility” is, how it should be measured, and how it should be aggregated. And of course, even if they did agree, actually performing the calculation in most realistic cases would require powers of prediction and computation well beyond our abilities.
The reason I used the phrase “artificially created”, though, is that I think any attempt at systematization, utilitarianism included, will end up doing considerable violence to our moral intuitions. Our moral sensibilities are the product of a pretty hodge-podge process of evolution and cultural assimilation, so I don’t think there’s any reason to expect them to be neatly systematizable. One response is that the benefits of having a system (such as bias mitigation) are strong enough to justify biting the bullet, but I’m not sure that’s the right way to think about morality, especially if you’re a moral realist. In science, it might often be worthwhile using a simplified model even though you know there is a cost in terms of accuracy. In moral reasoning, though, it seems weird to say “I know this model doesn’t always correctly distinguish right from wrong, but its simplicity and precision outweigh that cost”.
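As a toy illustration of what an “algorithmic” utilitarian calculation might look like, and of why the choice of utility function matters, here is a Python sketch of mine using the Asian-disease numbers from the abstract (the utility functions are arbitrary examples, not anyone’s endorsed theory):

    import math

    # Each option is a list of (probability, lives_saved) outcomes.
    certain_option = [(1.0, 200)]               # "200 of the 600 will be saved"
    gamble_option = [(1 / 3, 600), (2 / 3, 0)]  # one-third chance everyone is saved

    def expected_utility(option, utility):
        return sum(p * utility(lives) for p, lives in option)

    def linear(lives):       # utility = lives saved
        return lives

    def risk_averse(lives):  # an arbitrary concave alternative
        return math.sqrt(lives)

    # With linear utility the two options tie (both give 200 expected lives saved)...
    assert math.isclose(expected_utility(certain_option, linear),
                        expected_utility(gamble_option, linear))

    # ...but a different (equally "precise") utility function breaks the tie,
    # which is one reason "just calculate" is not yet a settled algorithm.
    print(expected_utility(certain_option, risk_averse))  # about 14.1
    print(expected_utility(gamble_option, risk_averse))   # about 8.2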
Something like this might be useful, but I’m not at all confident it would work. Sounds like another research project for the Harvard Moral Psychology Research Lab. I’m not aware of any moral philosopher proposing something along these lines, but I’m not extremely familiar with that literature. I do philosophy of science, not moral philosophy.
I find this amusing and slightly disturbing—but the Trolley Problem seems like a terrible example. A rational person might answer based on political considerations, which “order effects” might change in everyday conversations.
Are you suggesting that moral philosophers, quizzed about their viewpoints on moral issues, answer non-truthfully in order to be politically correct or to avoid endorsing unpopular moral views?
If true, then we shouldn’t listen to anything moral philosophers ever say about their subject.
Very possibly. But I’m saying this seems more likely to happen with the Trolley Problem than with most philosophical questions, and even many disputed moral questions. It’s not a question of “endorsing unpopular moral views” in some abstract sense, but the social message that even a smart human being might take from the statement in an ordinary conversation.
So I’m guessing LW would also fail the problems.
Anyone want to organize an experiment?
No need. Order effects are one of the biases tested on YourMorals.org: http://lesswrong.com/lw/lt3/poll_lesswrong_group_on_yourmoralsorg_2015/ http://lesswrong.com/lw/8lk/poll_lesswrong_group_on_yourmoralsorg/
Not sure, possibly. Then again, few of the regulars are professional moral philosophers.
I think some people are missing the issue. It’s not that they have a problem with the Trolley Problem, but that their answers vary according to irrelevant framing effects like order of presentation.
Where did that assumption come from?
If you ask physics professors questions that go counter to human intuition, I wouldn’t be too sure that they get them right either.
This assumption comes from expecting an expert to know the basics of their field.
A trained physicist’s intuition is rather different from “human intuition” on physics problems, so that’s unlikely.
You need to do some tweaking of your faith in experts. Experts tend to be effective in fields where they get immediate and tight feedback about whether they’re right or wrong. Physics has this, philosophy does not. You should put significantly LESS faith in experts from fields where they don’t have this tight feedback loop.
That’s a good point. I’ll continue discounting anything ancient and modern moral philosophers say, then. From Aristotle to Peter Singer, they are full of it, by your criteria.
Heh, you’re right. I suppose I didn’t specify sufficient criteria.
I think that philosophers who have stood the test of time have already undergone post-hoc feedback. Aristotle, Nietzsche, Hume, etc. all had areas of faulty reasoning, but for the most part this has been teased out, brought to light, and is common knowledge now. All of them were also exceptionally talented and gifted, and made better arguments than the norm. The fact that their work HAS stood the test of time is an expert vetting process in itself.
In terms of a random philosophy professor on the street, they haven’t gone through this process of post-hoc feedback to nearly the same degree, and likely haven’t gotten enough real time feedback to have developed these sorts of rationality processes automatically. Singer perhaps has had a bit more post-hoc feedback simply because he’s popular and controversial, but not nearly as much as these other philosophers, and I suspect he still has lots of faulty reasoning to be picked up on :).
Heh, you’re right, I suppose I didn’t correctly specify those criteria.
The point was not that “every expert in these fields is untrustworthy”. Singer/Aristotle/Nietzsche etc. have already been vetted by generations as having good thinking.
However, you should be far more skeptical of the random philosophy professor on the street: they haven’t gone through that post-hoc feedback process, and they haven’t gotten (as much of) the real-time feedback that would cause them to get things right merely from their training.
I think in Aristo
I wouldn’t characterize the failure in this case as reflecting a lack of knowledge. What you have here is evidence that philosophers are just as prone to bias as non-philosophers at a similar educational level, even when the tests for bias involve examples they’re familiar with. In what sense is this a failure to “know the basics of their field”?
A relevantly similar test might involve checking whether physicists are just as prone as non-physicists to, say, the anchoring effect, when asked to estimate (without explicit calculation) some physical quantity. I’m not so sure that a trained physicist would be any less susceptible to the effect, although they might be better in general at estimating the quantity.
Take, for instance, evidence showing that medical doctors are just as susceptible to framing effects in medical treatment contexts as non-specialists. Does that indicate that doctors lack knowledge about the basics of their fields?
I think what this study suggests is that philosophical training is no more effective at de-biasing humans (at least for these particular biases) than a non-philosophical education. People have made claims to the contrary, and this is a useful corrective to that. The study doesn’t show that philosophers are unaware of the basics of their field, or that philosophical training has nothing to offer in terms of expertise or problem-solving.
There’s quite a difference between knowing the basics on a System 2 level and being able to apply them on System 1.
So if you were to poll physicists about, say, string theory vs. quantum loop gravity, or about the interpretations of quantum mechanics, do you think there would be no order or framing effects? That would be quite surprising to me.
I didn’t realize that identifying “200 out of 600 people die” with “400 of 600 people survive” requires quantum gravity-level expertise.
Maybe they just thought about it in a vaguely Carrollian way, like ‘if 200 of 600 people die, then we cannot say anything about the state of the other 400, because no information is given on them’?
Is every philosopher supposed to be a moral philosopher?
Edit: Just noticed the study contains this (which I missed in the OP):
...which is pretty disconcerting. However asking people to determine for themselves whether they’re experts in a particular problem area doesn’t strike me as particularly hygienic.
So here’s an article linking the poor thinking of philosophers with another study showing unscientific thought by scientists....
The under-pressure qualification is really important. It’s known that people don’t fire on all cylinders under pressure; it’s one of the bases of Derren Brown-style Dark Arts. Scientists and philosophers, unlike ER doctors or soldiers, don’t produce their professional results as pressured individuals. The results are psychologically interesting, but have no bearing on how well anyone is doing their job.
So what you’re saying is that 60% of the reduction in magical thinking that scientists show compared to the general population is at the 3 second level?
That… seems pretty impressive to me, but I’m not sure what I would have expected it to be.
Remember that you need to put a > in front of each paragraph to do a blockquote in comments.
There are four elephant-in-the-room issues surrounding rationality.
1. Rationality is more than one thing
2. Biases are almost impossible to overcome
3. Confirmation bias is adaptive to group discussion
4. If biases are so harmful, why don’t they get selected out?
If biases are so harmful, why don’t they get selected out?
We have good reason to believe that many biases are the result of cognitive shortcuts designed to speed up decision making, but not in all cases. Mercier and Sperber’s Argumentative Theory of Rationality suggests that confirmation bias is an adaptation to arguing things out in groups: that’s why people adopt a single point of view and stick to it in the face of almost all opposition. You don’t get good-quality discussion from a bunch of people saying “There Are Arguments on Both Sides”.
“Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others’ arguments. M&S also plead for the “rehabilitation” of confirmation bias as playing an adaptive, useful role in the production of arguments in favor of an intuitively preferred view.”
Societies have systems and structures in place for ameliorating and leveraging confirmation bias. For instance, replication and cross-checking in science ameliorate the tendency of research groups to succumb to bias. Adversarial legal processes and party politics leverage the tendency, in order to get good arguments made for both sides of a question. Values such as speaking one’s mind (as opposed to agreeing with leaders) and offering and accepting criticism also support rationality.
Now, teaching rationality, in the sense of learning to personally overcome bias, has a problem in that it may not be possible to do fully, and it has a further problem in that it may not be a good idea. Teaching someone to overcome confirmation bias, to see two or more sides of the story, is, in a sense, teaching them to internalise the process of argument, to be solo rationalists. And while society perhaps needs some people like these, it perhaps also doesn’t need many. Forms of solo rationality training have existed for a long time, e.g. philosophy, but they do not suit a lot of people’s preferences, and not a lot of people can succeed at them, since they are cognitively difficult.
If you plug solo rationalists into systems designed for the standard human, you are likely to get an impedance mismatch, not improved rationality. If you wanted to increase overall rationality by increasing average rationality, assuming that is feasible in the first place, you would have to redesign systems. But you could probably increase overall rationality by improving systems anyway... we live in a world where medicine, of all things, isn’t routinely based on good-quality evidence.
Some expansion of point 4: If biases are so harmful, why don’t they get selected out?
“During the last 25 years, researchers studying human reasoning and judgment in what has become known as the “heuristics and biases” tradition have produced an impressive body of experimental work which many have seen as having “bleak implications” for the rationality of ordinary people (Nisbett and Borgida 1975). According to one proponent of this view, when we reason about probability we fall victim to “inevitable illusions” (Piattelli-Palmarini 1994). Other proponents maintain that the human mind is prone to “systematic deviations from rationality” (Bazerman & Neale 1986) and is “not built to work by the rules of probability” (Gould 1992). It has even been suggested that human beings are “a species that is uniformly probability-blind” (Piattelli-Palmarini 1994). This provocative and pessimistic interpretation of the experimental findings has been challenged from many different directions over the years. One of the most recent and energetic of these challenges has come from the newly emerging field of evolutionary psychology, where it has been argued that it’s singularly implausible to claim that our species would have evolved with no “instinct for probability” and, hence, be “blind to chance” (Pinker 1997, 351). Though evolutionary psychologists concede that it is possible to design experiments that “trick our probability calculators,” they go on to claim that “when people are given information in a format that meshes with the way they naturally think about probability,”(Pinker 1997, 347, 351) the inevitable illusions turn out to be, to use Gerd Gigerenzer memorable term, “evitable” (Gigerenzer 1998). Indeed in many cases, evolutionary psychologists claim that the illusions simply “disappear” (Gigerenzer 1991).” http://ruccs.rutgers.edu/ArchiveFolder/Research%20Group/Publications/Wars/wars.html
Heh
I think you’re committing the category error of treating philosophy as science :-D
Nitpick: Sarunas mentioned it first
Yep.
So three people independently posted the same thing to LW: first as a comment in some thread, then as a top-level comment in the open thread, and finally as a post in Discussion :-)
Coming up: the post is promoted to Main; it is re-released as a MIRI whitepaper; Nick Bostrom publishes a book-length analysis; The New Yorker features a meandering article illustrated by a tasteful watercolor showing a trolley attacked by a Terminator.
Followed by a blockbuster movie where Hollywood kicks the tasteful watercolor to the curb and produces an hour-long battle around the trolley between a variety of Terminators, Transformers, and X-Men, led by Shodan on one side and GlaDOS on the other, while in a far-off Tibetan monastery the philosophers meditate on the meaning of the word mu.
Yes, that is funny. I’m glad the paper is garnering attention, as I think it’s a powerful reminder that we are ALL subject to simple behavioral biases.
I reject the alternative explanation that philosophy and philosophers are crackpots.
Take a field that requires a PhD to work in, purports to do research, has multiple journals with peer-reviewed publications, runs multiple conferences… would you characterize a field like that as art or science?
All of these are plausibly true of art departments at universities as well. (The first two are a bit iffy.)
Let me remind you of Feynman’s description:
As to
Neither. I would call it mental masturbation.
Framing effect in math: the axiom of choice is obviously true, the well-ordering principle is obviously false, and who can tell about Zorn’s lemma?
It might be worth saying explicitly what these three (equivalent) axioms say.
Axiom of choice: if you have a set A of nonempty sets, then there’s a function that maps each element a of A to an element of a. (I.e., a way of choosing one element f(a) from each set a in A.)
Well-ordering principle: every set can be well-ordered: that is, you can put a (total) ordering on it with the property that there are no infinite descending sequences. E.g., < is a well-ordering on the positive integers but not on all the integers, but you can replace it with an ordering where 0 < −1 < 1 < −2 < 2 < −3 < 3 < −4 < 4 < … which is a well-ordering. The well-ordering principle implies, e.g., that there’s a well-ordering on the real numbers, or the set of sets of real numbers.
Zorn’s lemma: if you have a partially ordered set, and every subset of it on which the partial order is actually total has an upper bound, then the whole thing has a maximal element.
The best way to explain what Zorn’s lemma is saying is to give an example, so let me show that Zorn’s lemma implies the (“obviously false”) well-ordering principle. Let A be any set. We’ll try to find a well-ordering of it. Let O be the set of well-orderings of subsets of A. Given two of these -- say, o1 and o2 -- say that o1 <= o2 if o2 is an “end-extension” of o1 -- that is, o2 is a well-ordering of a superset of whatever o1 is a well-ordering of, o1 and o2 agree where both are defined, and everything o2 orders but o1 doesn’t comes after everything o1 orders. Now, this satisfies the condition in Zorn’s lemma: if you have a subset of O on which <= is a total order, this means that for any two things in the subset one is an end-extension of the other, and then the union of all of them is an upper bound (and still a well-ordering, because anything added later comes after everything added earlier). So if Zorn’s lemma is true then O has a maximal element: a well-ordering W of some subset of A that cannot be properly end-extended. Now W must actually be defined on the whole of A, because if some element a of A were missing from W’s domain, we could end-extend W by putting a after everything W already orders, contradicting maximality. So W well-orders all of A.
(A few bits of terminology that I didn’t digress to define above. A total ordering on a set is a relation < for which if a<b and b<c then a<c, and for which exactly one of a<b, b<a, a=b holds for any a,b. OR a relation <= for which if a<=b and b<=c then a<=c, and for which for any a,b either a<=b or b<=a, and for which a<=b and b<=a imply a=b. A partial ordering is similar except that you’re allowed to have pairs for which a<b and b<a (OR: a<=b and b<=a) both fail. We can translate between the “<” versions and the “<=” versions: “<” means “<= but not =”, and “<=” means “< or =”. Given a partial ordering, an upper bound for a set A is an element b for which a<=b for every a in A. A maximal element of a partially ordered set is an element m such that no element of the set is strictly greater than m; unlike a greatest element, m need not be comparable to everything else.)
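For reference, here is one standard symbolic rendering of the three statements in LaTeX (a sketch of mine; these are common formulations among several equivalent ones, and the well-ordering clause uses the “least element” phrasing, which is classically equivalent to the “no infinite descending sequence” phrasing above):

    % Axiom of choice: every family A of nonempty sets admits a choice function f.
    \varnothing \notin A \;\Longrightarrow\; \exists f\colon A \to \textstyle\bigcup A \;\; \forall a \in A,\ f(a) \in a

    % Well-ordering principle: every set X carries a total order \le in which
    % every nonempty subset S has a least element.
    \forall X\, \exists{\le}\ \bigl( {\le}\text{ totally orders } X \ \wedge\
      \forall S \subseteq X,\ S \neq \varnothing \Rightarrow \exists s \in S\, \forall t \in S,\ s \le t \bigr)

    % Zorn's lemma: if every chain C in a nonempty poset (P, \le) has an upper
    % bound u in P, then P has a maximal element m.
    \bigl( \forall C \subseteq P \text{ a chain},\ \exists u \in P\, \forall c \in C,\ c \le u \bigr)
      \;\Longrightarrow\; \exists m \in P\ \neg\exists x \in P,\ m < x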
This doesn’t really bother me. Philosophers’ expertise is not in making specific moral judgements, but in making arguments and counterarguments. I think that is a useful skill that collectively gets us closer to the truth.
Do you think there is a right answer to the Trolley problem?
I’ll let shminux answer that, but it’s worth pointing out that the answer doesn’t need to be yes for the results in this paper to indicate a problem. The point isn’t that they gave bad answers, it’s that their answers were strongly affected by demonstrably irrelevant things.
Unless your carefully considered preference between one death caused by you and five deaths not caused by you in the trolley scenario is that which happens should depend on whether you were asked about some other scenario first, or that which happens should depend on exactly how the situation was described to you, then something is wrong with your thinking if you give the answers the philosophers did, even if your preferences are facts only about you and not about any sort of external objective moral reality.
And the other issue is that overcoming those biases is regarded as all but impossible by experts in the field of cognitive bias... but I guess that “philosophers are imperfect rationalists, along with everybody else” isn’t such a punchy headline.
Whatever the reason, if they cannot overcome it, doesn’t that make all their professional output similarly useless?
However, I don’t agree with what you’re saying; overcoming these biases is very easy. Just have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn’t care about.
After all, mathematicians aren’t confused by being told “I colored 200 of 600 balls black” and “I colored all but 400 of 600 balls black”.
If no one can overcome bias, does that make all their professional output useless? Do you want to buy “philosophers are crap” at the expense of “everyone is crap”?
That’s the consistency. What about the correctness?
Note that biases might affect the meta-level reasoning that leads to the choice of algorithm. Unless you think it’s algorithms all the way down.
Which would make mathematicians the logical choice to solve all real-world problems... if only real-world problems were as explicitly and unambiguously statable, as free of indeterminism, as free of incomplete information and mess, as math problems.
No, for just the reason I pointed out. Mathematicians, “hard” scientists, engineers, etc. all have objective measures of correctness. They converge towards truth (according to their formal model). They can and do disprove wrong, biased results. And they certainly can’t fall prey to a presentation bias that makes them give different answers to the same, simple, highly formalized question. If such a thing happened, and if they cared about the question, they would arrive at the correct answer.
Consistency is more important than correctness. If you believe your theory is right, you may be wrong, and if you discover this (because it makes wrong predictions) you can fix it. But if you accept inconsistent predictions from your theory, you can never fix it.
A problem, or area of study, may require a lot more knowledge than that of simple logic. But it shouldn’t ever be contrary to simple logic.
I think I’m going to disagree with that.
Why?
Because correct results or forecasts are useful and incorrect are useless or worse, actively misleading.
I can use a theory which gives inconsistent but mostly correct results right now. A theory which is consistent but gives wrong results is entirely useless. And if you can fix an incorrect theory to make it right, in the same way you can fix an inconsistent theory to make it consistent.
Besides, it’s trivially easy to generate false but consistent theories.
Within their domains.
So when Kahneman et al. tested hard scientists for presentation bias, they found them, out of the whole population, to be uniquely free from it? I don’t recall hearing that result.
You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers got published. Interviewing individual philosophers isn’t comparable to the long-term, en masse behaviour of science.
Even if it’s too simple?
Where is the evidence that philosophy, as a field, has converged towards correctness over time?
Where is the need for it? The question is whether philosophers are doing their jobs competently. Can you fail at something you don’t claim to be doing? Do philosophers claim to have The Truth?
That’s basically what they’re for, yes, and certainly they claim to have more Truth than any other field, such as “mere” sciences.
Is that what they say?
ETA
Socrates rather famously said the opposite... he only knows that he does not know.
The claim that philosophers sometimes make is that you can’t just substitute science for philosophy, because philosophy deals with a wider range of problems. But that isn’t the same as claiming to have The Truth about them all.
Consistency shouldn’t be regarded as more important than correctness, in the sense that you check for consistency, and stop.
But the inconsistency isn’t in the theory, and, in all likelihood, they are not running off an explicit theory in the first place.
Exactly. And if philosophers don’t have such measures within their domain of philosophy, why should I pay any attention to what they say?
I haven’t checked, but I strongly expect that hard scientists would be relatively free of presentation bias in answering well-formed questions (that have universally agreed correct answers) within their domain. Perhaps not totally free, but very little affected by it. I keep returning to the same example: you can’t confuse a mathematician, or a physicist or engineer, by saying “400 out of 600 are white” instead of “200 out of 600 are black”.
What results has moral philosophy, as a whole, achieved in the long term? What is as universally agreed on as first-order logic or natural selection?
If moral philosophers claim that, uniquely of all human fields of knowledge, theirs requires not just going beyond formal logic but being contrary to it, I’d expect to see some very extraordinary evidence. “We haven’t been able to make progress otherwise” isn’t quite enough; what are the results they’ve accomplished with whatever a-logical theories they’ve built?
The critical question is whether they could have such measures.
That’s completely beside the point. The point is that you allow that the system can outperform the individuals in the one case, but not the other.
Do you mean they might create such measures in the future, and therefore we should keep funding them? But without such measures today, how do we know if they’re moving towards that goal? And what’s the basis for thinking it’s achievable?
Is there an empirical or objective standard by which the work of moral philosophers is judged for correctness or value, something that can be formulated explicitly? And if not, how can ‘the system’ converge on good results?
Of course it’s algorithms all the way down! “Lens That Sees Its Flaws” and all that, remember?
How is a process of reasoning based on an infinite stack of algorithms concluded in a finite amount of time?
You can stop recursing whenever you have sufficiently high confidence, which means that your algorithm terminates in finite time with probability 1, while also querying each algorithm in the infinite stack with non-zero probability.
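A minimal sketch (my own Python; the decide/audit functions are hypothetical stand-ins, and “sufficient confidence” is crudely modeled by a fixed continuation probability) of the stopping rule described above: the procedure halts in finite time with probability 1, yet every level of the infinite stack is reached with non-zero probability.

    import random

    def audited_decision(decide, audit, continue_prob=0.5, rng=random.random):
        """Make a decision, then keep applying further meta-level audits,
        continuing to the next level with probability continue_prob each time.
        Expected depth is continue_prob / (1 - continue_prob), so this halts
        with probability 1, while depth d is still reached with probability
        continue_prob ** d > 0 for every d."""
        answer = decide()
        depth = 0
        while rng() < continue_prob:
            depth += 1
            answer = audit(answer, depth)  # one more level checks the level below
        return answer

    # Hypothetical usage: the base decision is a fixed guess, and each audit
    # merely records that one more meta-level looked at it.
    print(audited_decision(lambda: "initial answer",
                           lambda ans, d: f"{ans} (checked at depth {d})"))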
Bingo. And combining that with a good formalization of bounded rationality tells you how deep you can afford to go.
But of course, you’re the expert, so you know that ^_^.
Re: everyone is crap
But that is not a problem. Iff everyone is crap, I want to believe that everyone is crap.
It’s a problem if you want to bash one particular group.
My professional input does not depend on bias in moral (or similarly fuzzy) questions. As for other biases, I definitively determine success or failure on a time scale ranging from minutes to weeks.
These are rather different from how a philosopher can operate.
But that doesn’t make philosophy uniquely broken. If anything it is the other way around: disciplines that deal with the kind of well-defined abstract problems where biases can’t get a grip, are exceptional.
“Can operate” was carefully phrased. If the main role of philosophers were to answer urgent object-level moral quandaries, then the OP would have pointed out a serious real-world problem... but philosophers typically don’t do that; they typically engage in long-term meta-level thought on a variety of topics.
Philosophers can operate in a way that approximates the OP scenario, for instance, when they sit on ethics committees. Of course, they sit alongside society’s actual go-to experts on object-level ethics, religious professionals, who are unlikely to be less biased.
Philosophers aren’t the most biased or most influential people in society... worry about the biases of politicians, doctors, and financiers.
I can’t dismiss politicians, doctors and financiers. I can dismiss philosophers, so I’m asking why should I listen to them.
You can dismiss philosophy if it doesn’t suit your purposes, but that is not at all the same as the original claim that philosophers are somehow doing their job badly. Dismissing philosophers without dismissing philosophy is dangerous, as it means you are doing philosophy without knowing how. You are unlikely to be less biased, whilst being likely to misunderstand questions, reinvent broken solutions, and so on. Consistently avoiding philosophy is harder than it seems. You are likely to be making a philosophical claim when you say scientists and mathematicians converge on truth.
I didn’t mean to dismiss moral philosophy; I agree that it asks important questions, including “should we apply a treatment where 400 of 600 survive?” and “do such-and-such people actually choose to apply this treatment?” But I do dismiss philosophers who can’t answer these questions free of presentation bias, because even I myself can do better. Hopefully there are other moral philosophers out there who are both specialists and free of bias. The OP’s suggestion that philosophers are untrustworthy obviously depends on how representative that survey is of philosophers in general. However, I don’t believe it’s unrepresentative merely because a PhD in moral philosophy sounds very wise.
Meaning you dismiss their output, even though it isn’t prepared under those conditions and is prepared under conditions allowing bias reduction, e.g. by cross-checking.
Under the same conditions? Has that been tested?
Scientists have been shown to have failings of their own, under similarly artificial conditions. Are you going to reject scientists, because of their individual untrustworthiness... or trust the system?
It hasn’t been tested, but I’m reasonably confident in my prediction. Because, if I were answering moral dilemmas, and explicitly reasoning in far mode, I would try to follow some kind of formal system, where presentation doesn’t matter, and where answers can be checked for correctness.
Granted, I would need some time to prepare such a system, to practice with it. And I’m well aware that all actually proposed formal moral systems go against moral intuitions in some cases. So my claim to counterfactually be a better moral philosopher is really quite contingent.
Other sciences deal with human fallibility by having an objective standard of truth against which individual beliefs can be measured. Mathematical theories have formal proofs, and with enough effort the proofs can even be machine-checked. Physical, etc. theories produce empirical predictions that can be independently verified. What is the equivalent in moral philosophy?
So in short, you are answering your rhetorical question with ‘no’, which rather undermines your earlier point—no, DanArmak did not ‘prove too much’.
Shminux did.
If you answer the rhetorical question as ‘no’ then no, Shminux didn’t prove too much either.
This is roughly the point where some bloody philosopher invokes Hume’s Fork, mutters something about meta-ethics, and tells you to fuck off back to the science departments where you came from.
One might reasonably hope that professional philosophers would be better reasoners than the population at large. That is, after all, a large fraction of their job.
Overcoming these biases completely may well be impossible, but should we really expect that years of training in careful thinking, plus further years of practice, on a population that’s supposedly selected for aptitude in thinking, would fail to produce any improvement?
(Maybe we should, either on the grounds that these biases really are completely unfixable or on the grounds that everyone knows academic philosophy is totally broken and isn’t either selecting or training for clearer more careful thinking. I think either would be disappointing.)
Well, if they weren’t explicitly trained to deal with cognitive biases, we shouldn’t expect that they’ve magically acquired such a skill from thin air.
Yes: what we learn from trolley problems is that human moral intuitions are absolute crap (technical term). Starting with even the simplest trolley problems, you find that many people have very strong but inconsistent moral intuitions. Others immediately go to a blue screen when presented with a moral problem with any causal complexity. The answer is that trolley problems are primarily system diagnostic tools that identify corrupt software behaving inconsistently.
Back to the object level, the right answer is dependent on other assumptions. Unless someone wants to have claimed to have solved all meta-ethical problems and have the right ethical system, “a right answer” is the correct framing rather than “the right answer,” because the answer is only right in a given ethical framework. Almost any consequentialist system will output “save the most lives/QALYs.”
I remember long ago, when somebody wanted to emulate a small routine for adding big numbers, using a crowd of people as an arithmetic unit. The task was simple for everyone in the crowd: just add your two given digits (from 0 to 9) and the carry reported to you, report the integer part of the result divided by 10 (the new carry) to the next person in the crowd, and remember your result modulo 10.
The crowd was assembled of mathematicians. Still, at every attempt someone made an error, while adding 5 and 7 or something.
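For reference, here is a little Python simulation of the routine as I understand it (my reconstruction of a ripple-carry adder, not the original code; each “person” handles one digit position, least significant first):

    def crowd_add(digits_a, digits_b):
        """digits_a and digits_b are equal-length lists of decimal digits,
        least significant digit first.  Each person adds their two digits plus
        the incoming carry, remembers the result mod 10, and reports the carry
        (the integer part of the sum divided by 10) to the next person."""
        carry = 0
        remembered = []
        for a, b in zip(digits_a, digits_b):
            total = a + b + carry
            remembered.append(total % 10)  # what this person remembers
            carry = total // 10            # what they report onward
        if carry:
            remembered.append(carry)       # a final person records the leftover carry
        return remembered

    # 478 + 256 = 734, with digits written least significant first.
    assert crowd_add([8, 7, 4], [6, 5, 2]) == [4, 3, 7]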
Oh, and “94% of professors report that they are above average teachers, …”
Sure, but this is a different issue, experts being untrustworthy in a different field, in this case evaluating teaching skills.
Oh, and “94% of professors report that they are above average teachers, …”
Why do you think it’s possible to be an expert at a barely-coherent subject?
On the other hand, in the last 100-120 years very little interesting philosophy was produced by non-professors. My favorites, Thomas Nagel, Philippa Foot, etc., are/were all profs. It seems like being a professor is a necessary, but not sufficient, condition. Or maybe it’s not so much a condition as universities being good at recognizing good ones and throwing jobs at them, but they seem to have too many jobs and not enough good candidates.
It might be necessary for making your philosophical thoughts visible. I dare say Bill Gates has given some thought to philosophical questions. For all I know, he may have had exceptionally clear and original thoughts about them. But I’ve read books about philosophy by Nagel and Foot and not by Gates because Nagel and Foot have had philosophy books published. Bill Gates probably hasn’t had time to write a philosophy book, and would have more difficulty than Nagel and Foot in getting one published by the sort of publisher readers take seriously.
… Actually, maybe Gates is famous enough that he could find a good publisher for anything he wants; I don’t know. So maybe choose someone a few notches down in fame and influence, but still exceptionally smart. Random examples: Bill Atkinson (software guy; wrote a lot of the graphics code in the original Apple Macintosh), Thomas Ades (composer; any serious classical music aficionado will know who he is, but scarcely anyone else), Vaughan Jones (Fields-medal-winning mathematician). If any of those had done first-rate philosophical thinking, I bet no one would know.
Why not give a precise title??
OK, I’ll fix that. Just wanted to show the contrast.
Professional physicists are empirically no less likely to fail to solve quantum gravity than non-physicists.
No it does not. The trolley problem is a genuinely hard problem with no generally accepted satisfactory solution.
They weren’t testing for the ability to solve the trolley problem. They were testing for framing effects. You can’t test for framing effects if everybody gives the same answer, so they had to use an unsolved problem rather than a solved one.
But if you were to test physicists on an unsolved physics problem, would you detect no framing effects? This seems not obvious to me.
I bet you would. It wouldn’t have to be an unsolved problem; one to which they couldn’t too-quickly work out the answer would suffice. The sort of problem you’d need would be one for which there’s a plausible-seeming argument for each of two conclusions—e.g., the “Feynman sprinkler” problem—and then you’d frame the question so as to suggest one or other of the arguments.
But it would be disappointing and surprising if physics professors turned out to do no better at such questions than people with no training in physics.
(If you make the question difficult enough and give them little enough time, that might happen. Maybe the Feynman sprinkler problem with 30 seconds’ thinking time would do. Question: How closely analogous is this to the trolley problem for philosophers? Question: If you repeat the study we’re describing here but encourage the philosophers to spend several minutes thinking about each question, do the framing effects decrease a lot? More or less than for people who aren’t professional philosophers?)
Physicists don’t claim to be experts in logical consistency.