I agree with most of this, but I think the “Let me call this for what it is: lying for personal gain” section is silly and doesn’t help your case.
The only sense in which it’s clear that it’s “for personal gain” is that it’s lying to get what you want. Sure, I’m with you that far—but if what someone wants is [a wonderful future for everyone], then that’s hardly what most people would describe as “for personal gain”. By this logic, any instrumental action taken towards an altruistic goal would be “for personal gain”.
That’s just silly. It’s unhelpful too, since it gives people a somewhat legitimate reason to dismiss the broader point.
Of course it’s possible that the longer-term altruistic goal is just a rationalization, and people are after power for its own sake, but I don’t buy that this is often true—at least not in any clean [they’re doing this and only this] sense. (one could have similar altruistic-goal-is-rationalization suspicions about your actions too)
In many cases, I think overconfidence is sufficient explanation. And if we get into “Ah, but isn’t it interesting that this overconfidence leads to power gain”, then I’d agree—but then I claim that you should distinguish [conscious motivations] from [motivations we might infer by looking at the human as a whole, deep shadowy subconscious included]. If you’re pointing at the latter, please make that clear. (and we might also ask “What actions are not for personal gain, in this sense?”)
Again, entirely with you on the rest. I’m not against accusations that may hurt feelings—but I think that more precision would be preferable here.
The only sense in which it’s clear that it’s “for personal gain” is that it’s lying to get what you want. Sure, I’m with you that far—but if what someone wants is [a wonderful future for everyone], then that’s hardly what most people would describe as “for personal gain”.
If Alice lies in order to get influence, with the hope of later using that influence for altruistic ends, it seems fair to call the influence Alice gets ‘personal gain’. After all, it’s her sense of altruism that will be promoted, not a generic one.
This is not what most people mean by “for personal gain”. (I’m not disputing that Alice gets personal gain)
Insofar as the influence is required for altruistic ends, aiming for it doesn’t imply aiming for personal gain. Insofar as the influence is not required for altruistic ends, we have no basis to believe Alice was aiming for it.
“You’re just doing that for personal gain!” is not generally taken to mean that you may be genuinely doing your best to create a better world for everyone, as you see it, in a way that many would broadly endorse.
In this context, an appropriate standard is the post’s own: Does this “predictably lead people to believe false things”? Yes, it does. (if they believe it)
“Lying for personal gain” is a predictably misleading description, unless much stronger claims are being made about motivation (and I don’t think there’s sufficient evidence to back those up).
The “lying” part I can mostly go along with. (though based on a contextual ‘duty’ to speak out when it’s unusually important; and I think I’d still want to label the two situations differently: [not speaking out] and [explicitly lying] may both be undesirable, but they’re not the same thing) (I don’t really think in terms of duties, but it’s a reasonable shorthand here)
By this logic, any instrumental action taken towards an altruistic goal would be “for personal gain”.
I think you are making a genuine mistake, and that I could have been clearer.
There are instrumental actions that favour everyone (raising epistemic standards), and instrumental actions that favour you (making money).
The latter are for personal gain, regardless of your end goals.
Sorry for not getting deeper into it in this comment. This is quite a vast topic. I might instead write a longer post about the interactions of deontology & consequentialism, and egoism & altruism.
(With “this logic” I meant to refer to [“for personal gain” = “getting what you want”]. But this isn’t important)
If we’re sticking to instrumental actions that do favour you (among other things), then the post is still incorrect:
[y is one consequence of x] does not imply [x is for y]
The “for” says something about motivation. Is an action that happens to be to my benefit necessarily motivated by that? No. (though more often than I’d wish to admit, of course)
If you want to claim that it’s bad to [Lie in such a way that you get something that benefits you], then make that claim (even though it’d be rather silly—just “lying is bad” is simpler and achieves the same thing).
If you’re claiming that people doing this are necessarily lying in order to benefit themselves, then you are wrong. (or at least the only way you’d be right is by saying that essentially all actions are motivated by personal gain)
If you’re claiming that people doing this are in fact lying in order to benefit themselves, then you should either provide some evidence, or lower your confidence in the claim.
If it’s clearer with an example, suppose that the first action on the [most probable to save the world] path happens to get me a million dollars. Suppose that I take this action.
Should we then say that I did it “for personal gain”? That I can only have done it “for personal gain”?
This seems clearly foolish. That I happen to have gained from an instrumentally-useful-for-the-world action does not imply that this motivated me. The same applies if I only think this path is the best for the world.
I think it still makes sense to have a heuristic of the form "I should have a particularly high bar of confidence if I do something deontologically bad that happens to be good for me personally".
Agreed—though I wouldn’t want to trust that heuristic alone in this area, since in practice the condition won’t be [if I do something deontologically bad] but rather something like [if I notice that I’m doing something that I’m inclined to classify as deontologically bad].