Thanks for taking the time to write up all your thoughts.
The nature of this “should” is that status evaluations are not why I am sharing the information.
I object to “status evaluations” being the stand-in term for all the “side-effects” of sharing information. I think we’re talking about a lot more here—consequences is a better, more inclusive term that I’d prefer. “Status evaluations” trivializes what we’re talking about in the same way I think “tone” diminishes the sheer scope of how information-dense the non-core aspects of speech are.
If I am reading you right, you are effectively saying that one shouldn’t have to bear responsibility for the consequences of one’s speech over and above ensuring that what one says is accurate. If what you are saying is accurate and is only causing accurate updates, you shouldn’t have to worry about what other effects it will have (because such worry gets in the way of sharing true and relevant information, and of creating clarity).
The power of this “should” is that I’m denying the legitimacy of coercing me into doing something in order to maintain someone else’s desire for social frame control.
In my mind, this discussion isn’t about whether you (the truth-speaker) should be coerced by some outside regulating force. I want to discuss what you (and I) should judge for ourselves to be the correct approach to saying things. If you and all your fellow seekers of clarity got together to create a new community of clarity-seekers, what would the correct norms be? If you are trying to accomplish things with your speech, how best should you go about it?
I believe it is neither virtuous nor good decision theory to saddle people with additional burdens in order to do this, and to make those doing so worry about being accused of violating such burdens.
You haven’t explicitly stated the decision theory/selection of virtues which leads to this conclusion, but I think I can infer it. Let me know if I’m missing something or getting it wrong. 1) If you create any friction around doing something, you reduce how much it happens. 2) Particularly in this case, if you allow reasons to silence truth, people will actively use them to stifle truths they don’t like—as we do see in practice. Overall, truth-seeking is something precious to be guarded, something that needs to be protected from our own rationalizations and the rationalizations/defensiveness of others. Any rules, regulations, or norms which restrict what you say are actually quite dangerous.
I think the above position is true, but it ignores key considerations which make the picture more complicated. I’ll put my own position/response in the next comment for threading.
This might have gotten lost in the convo, and likely I should have mentioned it again, but I advocated for the behavior under discussion being supererogatory/a virtue [1]: not something to be enforced, but still something individuals ought to do of their own volition. Hence “I want to talk about why you freely should want to do this” and not “why I should be allowed to make you do this.”
Even when talking about norms, though, my instinct is to first clarify what’s normative/virtuous for individuals. I expect disagreements there to be cruxes for disagreements about groups. I guess that’s because I expect one’s beliefs about what’s good for individuals to do and one’s beliefs about what’s good for groups to do both to arise from the same underlying models of what makes actions generally good.
Norms are outside regulating forces, though. (Otherwise, they would just be heuristics.)
Huh, that’s a word choice I wouldn’t have considered. I’d usually say “norms apply to groups” and “there’s such a thing as ideal/virtuous/optimal behavior for individuals relative to their values/goals.” I guess it’s actually hard to determine what is ideal/virtuous/optimal, and so you only have heuristics? In that case virtues really are heuristics. This doesn’t feel like a key point, but let me know if you think there’s an important difference I’m missing.
____________________
[1] I admit that there are dangers even in just having something as a virtue/encouraged behavior, and that your point expressed in this comment to Ray is a legitimate concern.
I worry that saying certain ways of making criticisms are good/bad results in people getting silenced/blamed even when they’re saying true things, which is really bad.
I think that’s a very real risk, and it’s really bad when it happens. But I think there are large costs in the other direction too. I’d be interested in thinking through together the costs/benefits of saying vs. not saying that certain ways of saying things are better. I think even marginal thought/discussion could cause me to update on where the final balance lies.
Before I read (2), I want to note that a universal idea that one is responsible for all the consequences of one’s accurate speech—applied in an inevitably Asymmetric Justice / CIE fashion—seems effectively a way to ban truth-seeking entirely, and perhaps all speech of any kind. And the fact that there might be other consequences of true speech that one may not like and might want to avoid does not make it unreasonable to point out that the subclass of such consequences in play in these examples seems much less worth worrying about avoiding. But yes, Kant saying you should tell the truth to an axe murderer seems highly questionable, and all that.
And I echo Jessica that it’s not reasonable to say that all of this is voluntary within the frame you’re offering, if the response to not doing it is to be unwelcome, or to be socially punished, regardless of what standards one chooses.
I think that is a far from complete description of my decision theory and selection of virtues here. Those are two important considerations, and they point in the right direction for the rest, but there are lots of others too. Margin too small to contain full description.
At some point I hope to write a virtue ethics sequence, but it’s super hard to describe in written form, and every time I think about it I assume that even if I do get it across, people who speak philosopher better than I do will pick anything I say to pieces, and all that gives me an ugh field around the whole operation, and I assume it won’t really work at getting people to reconsider. Alas.