On rationalization, aka the giant sucking cognitive black hole.
Though [Ben Franklin] had been a vegetarian on principle, on one long sea crossing the men were grilling fish, and his mouth started watering:
I balanc’d some time between principle and inclination, till I recollected that, when the fish were opened, I saw smaller fish taken out of their stomachs; then thought I, “if you eat one another, I don’t see why we mayn’t eat you.” So I din’d upon cod very heartily, and continued to eat with other people, returning only now and then occasionally to a vegetable diet.
Franklin concluded: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do.”
-Jonathan Haidt, “The Happiness Hypothesis”
I really like the quote about cod but I’m not particularly inspired by the moral given for the story. I’d prefer “I eliminated a non-terminal ethical principle when I realised my thinking was pretentious bullshit, moving towards a more coherent ethical framework. Yay me!”
I noticed that too; of course not eating fish is an ethical non-issue given how much other low-hanging consequentialist fruit there is.
However, note that his justification for his change of heart is pure rationalization. Whatever good reasons there might be for eating fish, or for abandoning vegetarianism, “they eat each other” is a bad one, a confabulation.
Fish and other animals are not capable of reflecting ethically on their actions, so they are ethically blameless for whatever they do. That does not mean their suffering doesn’t count. Franklin knew that.
I know I’m bringing Drescher up a lot recently, but this exchange reminds me of some of his points, and how, after reading Good and Real, I see Haidt’s work (among other people’s) in a different light.
Drescher’s theory of ethics and decision making is, “You should do what you [self-interestedly] wish all similarly situated beings would do” on the basis that “if you would regard it as the optimal thing to do, then-counterfactually they would too”.
He claims it implies you should cast a wide net in terms of which beings you grant moral status, but not too wide: you draw the line at beings that don’t make choices (in the sense of evaluating alternatives and picking one for the sake of a goal), as that breaks a critical symmetry between you and them.
Taking your premise that fish don’t reflect on their actions, this account would claim that they likewise do not have the moral status of humans. But it would also agree with you that it’s insufficient to point to how they eat each other, because “I would not want some superbeing to eat me simply on the basis that I eat less intelligent beings.”
Also, Drescher accounts for our moral intuitions by saying that they are a case of us being choice machines which recognize acausal means-end links (i.e. relationships between our choices and the achievement of goals that do not require the choice to [futurewardly] cause the goal). This doesn’t necessarily contradict Haidt’s argument that we judge things as right because of e.g. ingroup/outgroup distinctions (he says that functional equivalence to acausal means-end links is all that matters, even if the agent simply feels that they “care” about others), but it does tend to obviate that kind of supposition. [/show-off]
I’ve got Good and Real on hold at the library. :) Currently working through Cialdini’s Influence, muahaha...
Drescher’s theory of ethics and decision making is, “You should do what you [self-interestedly] wish all similarly situated beings would do” on the basis that “if you would regard it as the optimal thing to do, then-counterfactually they would too”.
He claims it implies you should cast a wide net in terms of which beings you grant moral status, but not too wide: you draw the line at beings that don’t make choices (in the sense of evaluating alternatives and picking one for the sake of a goal), as that breaks a critical symmetry between you and them.
This sounds to me like a modernized version of Kantian deontology… interesting.
Where I really trip up with this argument is in the ‘granting moral status’ step. What does it mean if I decide to say ‘a fish has no moral status?’
Let’s do a reductio. Say fish have no moral status. Does that mean it’s permissible to torture them, say by superstimulating pain centres in their brains? I don’t think so, even if the torture achieved some small useful end.
I don’t think suffering should be taken out of the equation in favour of symmetries. The latter have no obvious moral weight.
I don’t have a good answer for the rest of your comment, but I can answer this:
Where I really trip up with this argument is in the ‘granting moral status’ step. What does it mean if I decide to say ‘a fish has no moral status?’
Drescher does a good job of making sure that nothing depends on choice of terminology. In this case, “a fish has no moral status” cashes out to “I should not count a fish’s disutility/pain/etc. against the optimality of actions I am considering.”
You can take “should” to mean anything under Drescher’s account, and, as long as you’re consistent with its usage, it has non-absurd implications. Under common parlance, you can take “should” to mean “the action that I will choose” or “the action I regard as optimal”. Then, you can see how this sense of the term applies:
“If I would regard it as optimal to kill weaker beings, then-counterfactually beings who are stronger than me would regard it as optimal to kill me, to the extent that their relation to me mirrors my relation to the weaker beings under consideration.”
I didn’t give a full exposition of how exactly you apply such reasoning to fish, but under this account, you would need to look at what is counterfactually entailed by the reasoning that leads you to cause pain to fish.
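The “then-counterfactually they would too” clause can be made concrete with a toy model. This is my own sketch, not anything from Drescher’s book, and the payoff numbers are arbitrary: take a symmetric Prisoner’s-Dilemma-style payoff matrix and compare a causal best response (treating the other agent’s move as fixed) with a choice made under the assumption that a similarly situated agent mirrors whatever you choose.

```python
# Toy illustration (my construction, not Drescher's) of choosing under the
# assumption that similarly situated agents make the mirror-image choice.
# Payoff numbers are arbitrary but have the standard dilemma structure.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def causal_best_response(their_move):
    """Treat the other agent's move as fixed; pick my best reply to it."""
    return max("CD", key=lambda m: PAYOFF[(m, their_move)])

def acausal_choice():
    """Assume a similarly situated agent then-counterfactually mirrors me."""
    return max("CD", key=lambda m: PAYOFF[(m, m)])

# Causally, defection dominates no matter what the other agent does...
assert causal_best_response("C") == "D"
assert causal_best_response("D") == "D"
# ...but under the counterfactual symmetry, cooperation comes out optimal.
assert acausal_choice() == "C"
```

The point of the sketch is only that the same payoff table yields opposite verdicts depending on whether the other agent’s choice is held fixed or counterfactually tied to yours; the moral-status question is about which beings that symmetry assumption legitimately extends to.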
Whatever good reasons there might be for eating fish, or for abandoning vegetarianism, “they eat each other” is a bad one, a confabulation.
No, that isn’t implied. There are all sorts of coherent value systems which make ethical distinctions between killing things that kill other things and killing things that don’t kill other things. Maybe Franklin was confabulating, but again, that moral does not inspire me. In most cases the reasoning is sound and does move the values a step towards coherence.
There is a difference between dastardly rationalisation and updating your ethical position by eliminating obviously poor thinking.
Fish and other animals are not capable of reflecting ethically on their actions, so they are ethically blameless for whatever they do.
A lot of people are good at not reflecting ethically too, and it does help them get away with stuff (via more effective signalling). This is not a feature of the universe over which I rejoice, nor is it one that I encourage via my ethical signalling.
His comment on the matter suggests he thought he was.
The context does not record whether he returned to vegetarianism once away from the temptation.
Yes. Hence the lack of inspiration. It’s the same old moral: “Thoughts and ethical intuitions are enemies. Ethical intuitions are good and you should follow them. Thinking your ethics through is bad. Submit to the will of the tribe!”
I say if subjecting your ethical intuitions to rational analysis doesn’t lead you to change them in some way then you are probably doing it wrong.
How subject ethical intuitions should be to rational analysis (in the sense of being changed by it) depends on how much you endorse the fact-value distinction and how fundamental the intuition is.
Reason leads me (though perhaps my reasoning is flawed) to conclude that “others’ abject suffering is bad” isn’t any more justified a desire than “others’ abject suffering is good;” they’re as equivalent as a preference for chocolate or vanilla ice cream. But so what? I don’t abandon my preference for vanilla just because it doesn’t follow from reason. Morality works the same way, except that ideally, I care about it enough to force my preferences on others.
How subject ethical intuitions should be to rational analysis (in the sense of being changed by it) depends on how much you endorse the fact-value distinction and how fundamental the intuition is.
Yes. It is non-terminal ethical intuitions that I expect to be updated. “Should not do X because Y” should be discarded when it becomes obvious that Y is bullshit.