Thanks for trying to supply reasons for the norm!

I agree that hidden motives are even more concerning, but I don’t think that’s the heuristic I see people applying in practice. A hidden-motives heuristic would explain why people scramble to un-hide the fact that their words and actions disagree, but it doesn’t explain why people themselves act as if their advice matters less when it doesn’t match their actions, or why others discount advice from people whose words don’t match their actions.
As with other possible problems an anti-hypocrisy norm may prevent, I think I’d rather deal with “hidden motives” as its own problem. After all, hidden motives can be a problem even when no mismatch between words and deeds is apparent.
> Nobody’s going to think twice if you say “I don’t have a license because Y, which doesn’t apply to you, and you should probably get one”. Only if you say “getting a license is great and worth sacrificing for, but I haven’t bothered” will people notice the apparent contradiction and downweight your opinion (and possibly judge you poorly).
If I did advise someone to get a license, it could well be in the second category. Left unspoken, but not difficult to infer, would be that I haven’t gotten a license because I’m lazy, or I have an ugh field around it, or something along those lines. That interpretation of your example goes against what you say in the previous paragraph: if the norm is only against hidden or unexplained variance, why discount my advice? Or, if I interpret the example as hiding or refusing to state my own reasons for not getting a license, why is that so relevant? If my case _is_ different from yours, it might be helpful to have a discussion about that to clarify my models, but it’s not automatically the most relevant discussion.
My advice should be discounted on general beware-other-optimizing grounds, but not specifically because I lack a license with no justification to differentiate my case.
Others might think poorly of me for my akrasia, but that’s true whether or not I advise other people to do better.
My motives may be suspect, but they won’t always be; I think extra outside reasons are needed for that to be an important consideration (and I see people having this flinch response even in cases where no such reasons apply). Flinching away from hypocrisy without those extra reasons seems epistemically unhygienic; it may block an important update, or leave you with a lasting cached thought that a certain argument was invalid.
So it seems like we can separate all the concerns, and each is better dealt with on its own rather than with one broad anti-hypocrisy norm. While each of these provides a somewhat reasonable post-hoc justification for anti-hypocrisy, none of them seems like the kind of thing that would make me invent hypocrisy as the cluster which cuts reality at the joints if I hadn’t heard of it before.
> These options aren’t exclusive. I can discount hypocritical advice _BOTH_ on other-optimizing grounds _AND_ on grounds that self-contradiction indicates error somewhere.
Well, I agree that they aren’t exclusive, but in this case it seems to me that the other-optimizing grounds explain away the need for the self-contradiction grounds.