I hear ya, but I think this is missing something important. Basically, I’m thinking of the post Ends Don’t Justify Means (Among Humans).[1][2]
Doing things that are virtuous tends to lead to good outcomes. Doing things that aren't virtuous tends to lead to bad outcomes. For you, and for others. It's hard to predict what those outcomes, good and bad, actually are. If you were a perfect Bayesian with unlimited information, time, and computing power, then yes, go ahead and do the consequentialist calculus. But we humans lack those things, enough so that consequentialist calculus frequently becomes unreliable, and the good track record of virtue becomes a huge consideration.
So, I agree with you that “lying leads to mistrust” is one of the reasons why vegan advocates shouldn’t lie. But I think that the main reason they shouldn’t lie is simply that lying has a pretty bad track record.
And then another huge consideration is that people who come up with reasons why they, at least in this particular circumstance, are special snowflakes who are justified in lying are frequently deluding themselves.[3]
Well, that post is about ethics, and I think the conversation we're having isn't really limited to ethics. It's more about, pragmatically, what the EA community should do if it wants to win.
Here’s my slightly different(?) take, if anyone’s interested: Reflective Consequentialism.
I cringe at how applause-light-y this comment is. Please don't upvote if you feel like you might be non-trivially reacting to an applause light.
I understood the original comment to be making essentially the same point you’re making—that lying has a bad track record, where ‘lying has a bad track record of causing mistrust’ is a case of this. In what way do you see them as distinct reasons?
I see them as distinct because what I’m saying is that lying generally tends to lead to bad outcomes (for both the liar and society at large) whereas mistrust specifically is just one component of the bad outcomes.
Other components that come to my mind:
- People don't end up with accurate information.
- Expectations that people will cooperate (different from "tell you the truth") go down.
- Expectations that people will do things because they are virtuous go down.
But a big thing here is that it's difficult to know exactly why lying leads to bad outcomes. The gears are hard to model. However, I think there's solid evidence that it does.