I’ll let shminux answer that, but it’s worth pointing out that the answer doesn’t need to be yes for the results in this paper to indicate a problem. The point isn’t that they gave bad answers, it’s that their answers were strongly affected by demonstrably irrelevant things.
Unless your carefully considered preference in the trolley scenario, between one death caused by you and five deaths not caused by you, is that the outcome should depend on whether you were asked about some other scenario first, or on exactly how the situation was described to you, then something is wrong with your thinking if you give the answers the philosophers did, even if your preferences are facts only about you and not about any sort of external objective moral reality.
And the other issue is that overcoming those biases is regarded as all but impossible by experts in the field of cognitive bias... but I guess that “philosophers are imperfect rationalists, along with everybody else” isn’t such a punchy headline.
Whatever the reason, if they cannot overcome it, doesn’t that make all their professional output similarly useless?
However, I don’t agree with what you’re saying; overcoming these biases is very easy. Just have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn’t care about.
After all, mathematicians aren’t confused by being told “I colored 200 of 600 balls black” and “I colored all but 400 of 600 balls black”.
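To make that concrete, here is a toy sketch (hypothetical, not from the thread) of what such presentation invariance looks like: both framings of the same facts are normalized into one canonical state before any decision rule ever sees them, so the wording cannot change the answer.

```python
# Toy sketch of a presentation-invariant decision procedure. Both framings
# of the same fact ("200 of 600 saved" vs. "all but 400 of 600 saved") are
# reduced to one canonical state, so presentation cannot affect the answer.

def normalize(total, saved=None, lost=None):
    """Reduce either framing to the canonical (saved, total) tuple."""
    if saved is None:
        saved = total - lost
    return (saved, total)

def decide(outcome, threshold=0.5):
    """A fixed rule applied only to the canonical state, never the wording."""
    saved, total = outcome
    return "apply treatment" if saved / total >= threshold else "withhold"

framing_a = normalize(600, saved=200)   # "200 of 600 are saved"
framing_b = normalize(600, lost=400)    # "400 of 600 are lost"

assert framing_a == framing_b == (200, 600)
assert decide(framing_a) == decide(framing_b)  # same answer either way
```

The threshold rule here is an arbitrary stand-in; the point is only that the decision function never sees the framing, so it cannot be swayed by it.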
If no one can overcome bias, does that make all their professional output useless? Do you want to buy “philosophers are crap” at the expense of “everyone is crap”?
That’s the consistency. What about the correctness?
Note that biases might affect the meta-level reasoning that leads to the choice of algorithm. Unless you think it’s algorithms all the way down.
Which would make mathematicians the logical choice to solve all real-world problems... if only real-world problems were as explicitly and unambiguously statable, as free of indeterminism, and as free of incomplete information and mess, as math problems.
No, for just the reason I pointed out. Mathematicians, “hard” scientists, engineers, etc. all have objective measures of correctness. They converge towards truth (according to their formal model). They can and do disprove wrong, biased results. And they certainly can’t fall prey to a presentation bias that makes them give different answers to the same, simple, highly formalized question. If such a thing happened, and if they cared about the question, they would arrive at the correct answer.
Consistency is more important than correctness. If you believe your theory is right, you may be wrong, and if you discover this (because it makes wrong predictions) you can fix it. But if you accept inconsistent predictions from your theory, you can never fix it.
A problem, or area of study, may require a lot more knowledge than that of simple logic. But it shouldn’t ever be contrary to simple logic.
I think I’m going to disagree with that.
Why?
Because correct results or forecasts are useful, and incorrect ones are useless or worse: actively misleading.
I can use a theory which gives inconsistent but mostly correct results right now. A theory which is consistent but gives wrong results is entirely useless. And if you can fix an incorrect theory to make it right, then in the same way you can fix an inconsistent theory to make it consistent.
Besides, it’s trivially easy to generate false but consistent theories.
Within their domains.
So when Kahneman et al. tested hard scientists for presentation bias, they found them, out of the whole population, to be uniquely free of it? I don’t recall hearing that result.
You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers got published. Interviewing individual philosophers isn’t comparable to the long-term, en masse behaviour of science.
Even if it’s too simple?
Where is the evidence that philosophy, as a field, has converged towards correctness over time?
Where is the need for it? The question is whether philosophers are doing their jobs competently. Can you fail at something you don’t claim to be doing? Do philosophers claim to have The Truth?
That’s basically what they’re for, yes, and certainly they claim to have more Truth than any other field, such as “mere” sciences.
Is that what they say?
ETA: Socrates rather famously said the opposite... he knows only that he does not know.
The claim that philosophers sometimes make is that you can’t just substitute science for philosophy, because philosophy deals with a wider range of problems. But that isn’t the same as claiming to have The Truth about them all.
Consistency shouldn’t be regarded as more important than correctness, in the sense that you check for consistency, and stop.
But the inconsistency isn’t in the theory, and, in all likelihood, they are not running off an explicit theory in the first place.
Exactly. And if philosophers don’t have such measures within their domain of philosophy, why should I pay any attention to what they say?
I haven’t checked, but I strongly expect that hard scientists would be relatively free of presentation bias in answering well-formed questions (that have universally agreed correct answers) within their domain. Perhaps not totally free, but very little affected by it. I keep returning to the same example: you can’t confuse a mathematician, or a physicist or engineer, by saying “400 out of 600 are white” instead of “200 out of 600 are black”.
What results has moral philosophy, as a whole, achieved in the long term? What is as universally agreed on as first-order logic or natural selection?
If moral philosophers claim that, uniquely of all human fields of knowledge, theirs requires not just going beyond formal logic but being contrary to it, I’d expect to see some very extraordinary evidence. “We haven’t been able to make progress otherwise” isn’t quite enough; what are the results they’ve accomplished with whatever a-logical theories they’ve built?
The critical question is whether they could have such measures.
That’s completely beside the point. The point is that you allow that the system can outperform the individuals in the one case, but not the other.
Do you mean they might create such measures in the future, and therefore we should keep funding them? But without such measures today, how do we know if they’re moving towards that goal? And what’s the basis for thinking it’s achievable?
Is there an empirical or objective standard by which the work of moral philosophers is judged for correctness or value, something that can be formulated explicitly? And if not, how can ‘the system’ converge on good results?
Of course it’s algorithms all the way down! “Lens That Sees Its Flaws” and all that, remember?
How is a process of reasoning based on an infinite stack of algorithms concluded in a finite amount of time?
You can stop recursing whenever you have sufficiently high confidence, which means that your algorithm terminates in finite time with probability 1, while also querying each algorithm in the infinite stack with non-zero probability.
Bingo. And combining that with a good formalization of bounded rationality tells you how deep you can afford to go.
But of course, you’re the expert, so you know that ^_^.
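The stopping rule described above can be sketched as follows (a hypothetical toy, where an arbitrary continuation probability stands in for "not yet sufficiently confident"): the recursion halts in finite time with probability 1 because the depth is geometrically distributed, yet every level of the infinite stack is reached with non-zero probability.

```python
import random

# Toy sketch of probabilistic recursion over an "infinite stack" of
# meta-level algorithms. CONTINUE_P is a stand-in for "still not confident
# enough": each level descends one step deeper with that probability.

CONTINUE_P = 0.5

def query_stack(rng):
    """Return how many meta-levels were consulted before stopping."""
    depth = 0
    while rng.random() < CONTINUE_P:
        depth += 1  # consult the next algorithm down the stack
    return depth

rng = random.Random(0)
depths = [query_stack(rng) for _ in range(10_000)]

# Every sampled run stopped at a finite depth, the mean depth is
# CONTINUE_P / (1 - CONTINUE_P) = 1.0 here, and level k is reached with
# probability CONTINUE_P ** k > 0 for every k.
mean_depth = sum(depths) / len(depths)
```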
Re: everyone is crap
But that is not a problem. Iff everyone is crap, I want to believe that everyone is crap.
It’s a problem if you want to bash one particular group.
My professional output does not depend on bias in moral (or similarly fuzzy) questions. As for other biases, I definitively determine success or failure on a time scale ranging from minutes to weeks.
These are rather different from how a philosopher can operate.
But that doesn’t make philosophy uniquely broken. If anything it is the other way around: disciplines that deal with the kind of well-defined abstract problems where biases can’t get a grip are exceptional.
“Can operate” was carefully phrased. If the main role of philosophers were to answer urgent object-level moral quandaries, then the OP would have pointed out a serious real-world problem... but philosophers typically don’t do that; they typically engage in long-term meta-level thought on a variety of topics.
Philosophers can operate in a way that approximates the OP scenario, for instance, when they sit on ethics committees. Of course, they sit alongside society’s actual go-to experts on object-level ethics, religious professionals, who are unlikely to be less biased.
Philosophers aren’t the most biased or most influential people in society... worry about the biases of politicians, doctors, and financiers.
I can’t dismiss politicians, doctors and financiers. I can dismiss philosophers, so I’m asking why should I listen to them.
You can dismiss philosophy if it doesn’t suit your purposes, but that is not at all the same as the original claim that philosophers are somehow doing their job badly. Dismissing philosophers without dismissing philosophy is dangerous, as it means you are doing philosophy without knowing how. You are unlikely to be less biased, while being likely to misunderstand questions, reinvent broken solutions, and so on. Consistently avoiding philosophy is harder than it seems. You are likely to be making a philosophical claim when you say scientists and mathematicians converge on truth.
I didn’t mean to dismiss moral philosophy; I agree that it asks important questions, including “should we apply a treatment where 400 of 600 survive?” and “do such-and-such people actually choose to apply this treatment?” But I do dismiss philosophers who can’t answer these questions free of presentation bias, because even I myself can do better. Hopefully there are other moral philosophers out there who are both specialists and free of bias. The OP’s suggestion that philosophers are untrustworthy obviously depends on how representative that survey is of philosophers in general. However, I don’t believe it’s unrepresentative merely because a PhD in moral philosophy sounds very wise.
Meaning you dismiss their output, even though it isn’t prepared under those conditions and is prepared under conditions allowing bias reduction, e.g. by cross-checking.
Under the same conditions? Has that been tested?
Scientists have been shown to have failings of their own, under similarly artificial conditions. Are you going to reject scientists because of their individual untrustworthiness... or trust the system?
It hasn’t been tested, but I’m reasonably confident in my prediction. Because, if I were answering moral dilemmas, and explicitly reasoning in far mode, I would try to follow some kind of formal system, where presentation doesn’t matter, and where answers can be checked for correctness.
Granted, I would need some time to prepare such a system, to practice with it. And I’m well aware that all actually proposed formal moral systems go against moral intuitions in some cases. So my claim to counterfactually be a better moral philosopher is really quite contingent.
Other sciences deal with human fallibility by having an objective standard of truth against which individual beliefs can be measured. Mathematical theories have formal proofs, and with enough effort the proofs can even be machine-checked. Physical, etc. theories produce empirical predictions that can be independently verified. What is the equivalent in moral philosophy?
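As a minimal illustration of the machine-checking point (a hypothetical toy, not a claim about moral philosophy), even a fact as small as the agreement of the two ball-colouring framings can be verified by a proof assistant rather than by a reader:

```lean
-- Lean 4: "200 of 600" and "all but 400 of 600" denote the same number;
-- the kernel accepts this by definitional equality (rfl).
theorem framings_agree : (200 : Nat) = 600 - 400 := rfl
```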
So in short, you are answering your rhetorical question with ‘no’, which rather undermines your earlier point—no, DanArmak did not ‘prove too much’.
Shminux did.
If you answer the rhetorical question as ‘no’ then no, Shminux didn’t prove too much either.
This is roughly the point where some bloody philosopher invokes Hume’s Fork, mutters something about meta-ethics, and tells you to fuck off back to the science departments where you came from.
One might reasonably hope that professional philosophers would be better reasoners than the population at large. That is, after all, a large fraction of their job.
Overcoming these biases completely may well be impossible, but should we really expect that years of training in careful thinking, plus further years of practice, on a population that’s supposedly selected for aptitude in thinking, would fail to produce any improvement?
(Maybe we should, either on the grounds that these biases really are completely unfixable, or on the grounds that everyone knows academic philosophy is totally broken and isn’t either selecting or training for clearer, more careful thinking. I think either would be disappointing.)
Well, if they weren’t explicitly trained to deal with cognitive biases, we shouldn’t expect that they’ve magically acquired such a skill from thin air.