But you seem to be suggesting first (a) surrendering to inevitable error, not even trying to not err
Certainly not. Recalibrating one’s intuitions to better reflect reality is an admirable aim, and one in which we should all be engaged. However, as far as norms of discourse go, there is more to the matter than that: different people will unavoidably have differences of intuition regarding their interlocutor’s goodwill, with certain individuals quicker to draw the line than others. How best to participate in (object-level) discourse in spite of these differences of (meta-level) opinion, without having to arbitrate that meta-level disagreement from scratch each time, is its own, separate question.
(b) correcting, not by precisely as much as is necessary (or some attempt at approximating that amount), but simply by… some arbitrary amount (trusting that it’s enough? trusting that it’s not too much?).
One of the consequences of being the type of agent that errs at all is that estimating the precise magnitude of your error, and hence the precise size of the corrective factor to apply, is unlikely to be possible.
This does not, however, mean that we are left in the dark, with no recourse but to correct by an arbitrary amount—for two reasons:
One of the further consequences of being the type of agent that errs in a predictable direction is that whatever corrective factor you generate, by dint of having been produced by the same fallible intuitions that produced the initial misjudgment, is more likely to be too little than too much. And here I do in fact submit that humans are, by default, more likely to overestimate how persecuted they are than to underestimate it.
Two errors of equal magnitude but opposite sign do not carry equally negative consequences. Behaving as if your interlocutor is engaged in cooperative truthseeking when in fact they are not will, at worst, waste time and effort on persuading someone who cannot be persuaded. Conversely, misidentifying a cooperative interlocutor as some kind of bad actor will, at minimum, preemptively kill a potentially fruitful discussion, while also carrying a nonzero risk of alienating your interlocutor and any observers.
Given these two observations—that we err in a predictable direction, and that the consequences of opposing errors are not of equal magnitude—it becomes clear that if we, in applying our corrective factor, were to (God forbid) miss the mark, it would be better to miss by overshooting than by undershooting. This leads directly to the discourse norm of assuming goodwill.
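To make the shape of this argument concrete, here is a toy numerical sketch in Python. It is entirely illustrative: the low-biased estimate distribution and the 3:1 cost ratio are assumptions invented for the example, not anything established above. It shows that when your estimate of the needed correction is biased low (observation one) and undershooting costs more than overshooting (observation two), deliberately adding a margin on top of your own estimate reduces expected loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (invented for illustration): the correction that would exactly
# cancel our bias is 1.0, but our self-generated estimates of it are noisy
# and, per observation one, biased low.
true_correction = 1.0
estimates = rng.normal(0.7, 0.3, 100_000)

def expected_loss(applied, needed=true_correction,
                  undershoot_cost=3.0, overshoot_cost=1.0):
    """Asymmetric loss (assumed 3:1): undershooting, i.e. treating a
    cooperative interlocutor as hostile, costs more than overshooting,
    i.e. extending goodwill to a bad actor and wasting some effort."""
    gap = needed - applied  # positive gap means we undershot
    return np.where(gap > 0, undershoot_cost * gap, overshoot_cost * -gap).mean()

# A flat "assume goodwill" margin added on top of one's own estimate.
for margin in (0.0, 0.3, 0.6):
    print(f"margin {margin:.1f}: expected loss {expected_loss(estimates + margin):.3f}")
```

Under these made-up numbers, the loss-minimizing margin lies above the one that would merely cancel the bias (with a 3:1 cost ratio it sits at the 75th percentile of the error distribution), which is the above argument in miniature: given asymmetric costs, the rational policy is to aim past your own best guess, in the direction of goodwill.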
How best to participate in discourse in spite of these differences of (meta-level) opinion, without having to arbitrate that meta-level disagreement from scratch each time, is its own, separate question.
Sure, but why should this question have an answer like “we just can’t not err, or even reduce how much we err”? Why would we expect this?
Also (and perhaps more importantly):
different people will unavoidably have differences of intuition regarding their interlocutor’s goodwill
Hold on, hold on. How did we get to “intuitions regarding their interlocutor’s goodwill”?
We started at “some people perceive disagreements as hostility”. This is true; some (indeed, many) people do this. The solution to this problem on an individual level is “don’t do that”. The solution to this problem on a social level is “have norms that firmly oppose doing that”.
Why are we suddenly having to invoke “goodwill”, to try to divine how much “goodwill” other people have, etc.? We identified a problem and then we identified the solution. Seems like we’re done.
One of the consequences of being the type of agent that errs at all is that estimating the precise magnitude of your error, and hence the precise size of the corrective factor to apply, is unlikely to be possible.
How is that a consequence of “being the type of agent that errs at all”? I don’t see it—please elaborate.
And here I do in fact submit that humans are, by default, more likely to overestimate how persecuted they are than to underestimate it.
Yes, I agree. The solution to this is… as I said above. Stop perceiving disagreement as hostility; discourage others from doing so.
The rest of your comment, from that point, seems to continue conflating perception of others’ behavior with one’s own behavior. I think it would be good to disentangle these two things.
It seems possible at this point that some of our disagreement may stem from a difference in word usage.
When I say “goodwill” (or, more accurately, when I read “goodwill” in the context of Rob Bensinger’s original post), what I take it to mean is something along the lines of “being (at least in the context of this conversation, and possibly also in the broader context of participation on LW as a whole) interested in figuring out true things, and having that as a primary motivator during discussions”.
The alternative to this (of which your use of “hostility” appears to be a special case) is any situation in which that is not the case, i.e. someone is participating in the discussion with some aim other than arriving at truth. Possible alternative motivations here are too numerous to list comprehensively, but (broadly speaking) include classes such as: wanting confirmation of their existing beliefs, wanting to assert the status of some individual or group, wanting to lower the status of some individual or group, etc.
(That last class seems to map onto your use of “hostility”, specifically in cases where the individual or group in question includes one of the discussion’s participants.)
This being the case, my response to what you say in your comment, e.g. here
We started at “some people perceive disagreements as hostility”. This is true; some (indeed, many) people do this. The solution to this problem on an individual level is “don’t do that”. The solution to this problem on a social level is “have norms that firmly oppose doing that”.
and here
Yes, I agree. The solution to this is… as I said above. Stop perceiving disagreement as hostility; discourage others from doing so.
is that I agree, but that I don’t see how (on your view) Rob’s proposed norm of “assuming goodwill” isn’t essentially a restatement of your “don’t perceive disagreements as hostility”. (Perhaps you think the former generalizes further than the latter, and take issue with some of the edge cases?)
In any case, I think it’d be beneficial to know where and how exactly your usage and perception of these terms differ from mine, and how those differences concretely lead to our disagreement about Rob’s proposed norm.