You can dismiss philosophy if it doesn’t suit your purposes, but that is not at all the same as the original claim that philosophers are somehow doing their job badly.
I didn’t mean to dismiss moral philosophy; I agree that it asks important questions, including “should we apply a treatment where 400 of 600 survive?” and “do such-and-such people actually choose to apply this treatment?” But I do dismiss philosophers who can’t answer these questions free of presentation bias, because even I myself can do better. Hopefully there are other moral philosophers out there who are both specialists and free of bias. The OP’s suggestion that philosophers are untrustworthy obviously depends on how representative that survey is of philosophers in general. However, I won’t assume it’s unrepresentative merely because a PhD in moral philosophy sounds very wise.
I didn’t mean to dismiss moral philosophy; I agree that it asks important questions, including “should we apply a treatment where 400 of 600 survive?” and “do such-and-such people actually choose to apply this treatment?” But I do dismiss philosophers who can’t answer these questions free of presentation bias,
Meaning you dismiss their output, even though it isn’t prepared under those conditions and is instead prepared under conditions that allow for bias reduction, e.g. by cross-checking.
because even I myself can do better.
Under the same conditions? Has that been tested?
Hopefully there are other moral philosophers out there who are both specialists and free of bias. The OP’s suggestion that philosophers are untrustworthy obviously depends on how representative that survey is of philosophers in general. However, I won’t assume it’s unrepresentative merely because a PhD in moral philosophy sounds very wise.
Scientists have been shown to have failings of their own under similarly artificial conditions. Are you going to reject scientists because of their individual untrustworthiness... or trust the system?
It hasn’t been tested, but I’m reasonably confident in my prediction, because if I were answering moral dilemmas while explicitly reasoning in far mode, I would try to follow some kind of formal system in which presentation doesn’t matter and answers can be checked for correctness.
Granted, I would need some time to prepare such a system, to practice with it. And I’m well aware that all actually proposed formal moral systems go against moral intuitions in some cases. So my claim to counterfactually be a better moral philosopher is really quite contingent.
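For concreteness, here is a minimal sketch (my own illustration, not anything from the survey or the earlier comments; the no-treatment baseline of 300 survivors is an invented number) of the kind of presentation-invariance I have in mind: a rule that compares expected outcomes gives the same answer whether the treatment is described as “400 of 600 survive” or as “200 of 600 die”.

```python
# Illustrative sketch: a formal expected-value rule treats "400 of 600 survive"
# and "200 of 600 die" as the same outcome, so its answer cannot depend on
# which phrasing it is given.

def expected_survivors(group_size: int, framing: str) -> int:
    """Expected survivors of the treatment under two descriptions of one outcome."""
    if framing == "survive":        # "400 of 600 survive"
        return 400
    if framing == "die":            # "200 of 600 die"
        return group_size - 200
    raise ValueError(f"unknown framing: {framing}")

def apply_treatment(treatment_survivors: int, baseline_survivors: int) -> bool:
    """Decision rule: compare outcomes, not descriptions."""
    return treatment_survivors > baseline_survivors

BASELINE = 300  # hypothetical no-treatment outcome, chosen purely for illustration

for framing in ("survive", "die"):
    survivors = expected_survivors(600, framing)
    print(framing, survivors, apply_treatment(survivors, BASELINE))
# Both framings yield 400 expected survivors and the same decision, which is
# the sense in which presentation "doesn't matter" under a formal system.
```

The point is only that once the outcomes are written down explicitly, the framing effect has nowhere to hide; the hard part, as I said, is that any such formal system will clash with intuitions somewhere.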
Scientists have been shown to have failings of their own under similarly artificial conditions. Are you going to reject scientists because of their individual untrustworthiness... or trust the system?
Other sciences deal with human fallibility by having an objective standard of truth against which individual beliefs can be measured. Mathematical theories have formal proofs, and with enough effort the proofs can even be machine-checked. Physical and other empirical theories produce predictions that can be independently verified. What is the equivalent in moral philosophy?