I don’t have a good handle on what you’re saying here—is it the old epistemic vs. instrumental rationality thing?
I’ve been accused of using “Dark Arts” when I’m not necessarily trying to deceive my readers. I believe I was making the argument that “You will have better luck getting your aims achieved if you talk about them with great confidence, as if you know you’re right.” I think this is absolutely true. On the other hand, yeah, I’m endorsing lying. (In this case, a subtle lie, a lie about philosophy, not a lie like “Officer, there’s nothing in the trunk of my car.”)
I’ve been grappling with this matter for years. Some points:
1. I think that whether talking about one’s beliefs with great confidence, as if one knows one is right, is conducive to achieving one’s aims depends on the situation. Sometimes presenting a one-sided or overstated view can damage one’s credibility and make others less receptive to what one has to say.
2. I think that presenting information asymmetrically can promote epistemic rationality. It sometimes happens that people have previously been exposed to information asymmetrically, so that presenting information asymmetrically in the opposite direction conveys accurate information to them faster than a balanced view would.
3. I think that people with high credibility, influence, or authority should hesitate to state with great confidence claims they are uncertain about, as there’s a danger of such people being taken more seriously than others who have better information than they do.
I would add to multifoliaterose’s points that lying for the greater good works best when you are very confident that you won’t be found out. It sounds like someone noticed your exaggeration of confidence and called you on it, and that undermined what you were trying to achieve. This is the usual risk of lying.
On a side note, I wonder about the situation where one is so confident in one’s goal as to be willing to bend the truth to accomplish it, but not so confident that one can convince anyone else to help without bending the truth.
I wonder about this too.
One source of examples of net harm done by distorting the truth ostensibly for the greater good is the fact that many charities distort the truth in order to fundraise. See for example the GiveWell blog postings:
• Donor Illusions
• When is a charity’s logo a donor illusion?
• Robin Hood, Smile Train and the “0% overhead” donor illusion
Presumably the people responsible for these illusions tell themselves that using them is justified for the greater good. I suspect that these illusions do more harm than good (by some sort of utilitarian metric), because they misdirect funds and damage the philanthropic sector’s credibility. As Elie Hassenfeld says in “Why are we always criticizing charities?”:
The problem is: because the nonprofit sector is saturated with unsubstantiated claims of impact and cost-effectiveness, it’s easy to ignore me when I tell you (for example), “Give $1,000 to the Stop Tuberculosis Partnership, and you’ll likely save someone’s life (perhaps 2 or 3 lives).” It’s easy to respond, “You’re just a cheerleader” or “Why give there when Charity X makes an [illusory] promise of even better impact?”
On the other hand, maybe my belief here is influenced by generalizing from one example and by selection effects that have given me a misleading impression of what most donors are like; I’m not sure.
I also object to the idea that any time you approach a question like “will arguing X right now advance my goals?” by rationally evaluating all the terms in the expected utility equation, you’re being evil, à la “Dark Arts”.
I admit I wasn’t very clear—let me clarify: I see people making decisions to act based solely on their degree of belief in a particular statement (see the four examples in the original post). To figure out whether a particular action is in your interests, it’s never sufficient just to evaluate probabilities: you can see in the expected utility equation that you simply can’t get away from evaluating your utility function.
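To make that concrete, here is a minimal sketch in Python (with made-up numbers, not anything from the original post) of why probabilities alone can’t settle the decision: two agents with the same degree of belief in a statement can rationally choose opposite actions once their utility functions enter the calculation.

```python
# Hypothetical illustration: the same degree of belief can justify opposite
# actions, because expected utility depends on the utilities as well as the
# probability.

def expected_utility(p_true, utility_if_true, utility_if_false):
    """Expected utility of acting on a claim believed true with probability p_true."""
    return p_true * utility_if_true + (1 - p_true) * utility_if_false

p = 0.7  # both agents assign the same probability to the statement

# Agent A: being wrong is mildly costly, being right is quite valuable.
eu_a = expected_utility(p, utility_if_true=10, utility_if_false=-2)   # 6.4

# Agent B: same belief, but being wrong is very costly.
eu_b = expected_utility(p, utility_if_true=10, utility_if_false=-50)  # -8.0

baseline = 0.0  # utility of not acting at all

print(f"Agent A: EU(act) = {eu_a:.1f} vs {baseline} -> act")
print(f"Agent B: EU(act) = {eu_b:.1f} vs {baseline} -> don't act")
```

Same probability, different utility functions, opposite decisions; the degree of belief alone couldn’t have told you which action to take.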