I think I’ve lost the thread of your point. It seems a LOT like you’re looking at motivation and systemic issues _WAY_ too soon in situation B. Start with “I think that statement is incorrect, Alice”, and work to crux the disagreement and find out what’s going on. _THEN_ decide if there’s something motivated or systemic that needs to be addressed.
Basically, don’t sit bolt upright in alarm for situation B. That’s the common case for anything complicated, and you need to untangle it as part of deciding if it’s important.
Ah, sorry for not being clearer. Yes, that’s actually the point I meant to be making. It’s inappropriate (and factually wrong) for Bob to lead with “hey Alice, you lied here”. (I was trying to avoid editorializing too much about what seemed appropriate, and to focus instead on why the two situations are different.)
I agree that the correct opening move is “that statement is incorrect”, etc.
One further complication, though: it might be that Alice and Bob have talked a lot about whether Alice is incorrect, looked for cruxes, etc, and after several months of this Bob still thinks Alice is being motivated, while Alice still thinks her model just makes sense. (This was roughly the situation in the OP.)
From Bob’s epistemic state, he’s now in a world where it looks like Alice has a pattern of motivated reasoning that needs to be addressed, and Alice is non-cooperative because Alice disagrees (and it’s hard to tell the difference between “Alice actually disagrees” and “Alice is feigning disagreement for political convenience”). I don’t think there’s any simple thing that can happen next, and [for good or for ill] what happens next is probably going to have something to do with Alice and Bob’s respective social standing.
I think there are practices and institutions one could develop to help keep the topic in the domain of epistemics instead of politics, and there are meta-practices Alice and Bob can try to follow if they both wish for it to remain in the domain of epistemics rather than politics. But there is no special trick for it.
I think it’s a little clearer in the comment, but I’m confused about the main post. In the case of subtle disagreements that _aren’t_ clearly wrong or intended to mislead, why do you want a word or concept that makes people sit up in alarm? Only after you’ve identified the object-level reasoning that shows the statement to be both incorrect and object-important should you examine the process-importance of why Alice is saying it (though in reality, you’re evaluating this all along, just as she’s evaluating yours).
The biggest confounding issue, in my experience, is that for deep models Alice has used for a long time, her prior that Bob is the one with a problem is MUCH higher than her prior that her model is inappropriate for this question. And in exactly the same way, Bob’s beliefs point in the opposite direction, with the true reasons for them equally defying introspection.
If you’re finding this after a fair bit of discussion, and it’s a topic without fairly straightforward empirical resolution, you’re probably in the “agree to disagree” state (admitting that on this topic you don’t have sufficient mutual knowledge of each other’s rational beliefs to agree). And then you CERTAINLY don’t want a word that makes people “sit up in alarm”, since which of you gets deemed biased becomes entirely a matter of politics.
There are other cases where Alice is uncooperative and you’re willing to assume her motives or process are so bad that you want others not to be infected. That’s more a warning to others than a statement Alice should be expected to respond to. And it’s also going to hit politics and backfire on Bob, at least some of the time. This case comes up a lot in public statements by celebrities or authorities: there’s no room for discussion at the object level, so you kind of jump to assuming bad faith if you disagree with the statements. The reaction of those who disagree with Paul Krugman’s NYT columns is an example of this—“he’s got a Nobel in Economics, he must be intentionally misleading people by ignoring all the complexity in his bad policy advice”.
I’ll try to write up a post that roughly summarizes the overall thesis I’m trying to build towards here, so that it’s clearer how individual pieces fit together.
But a short answer to “why would I want a clear handle for ‘sitting upright in alarm’” is that I think it’s at least sometimes necessary (or, at the very least, inevitable) for this sort of conversation to veer into politics, and what I want is to eventually be able to discuss politics-qua-politics sanely and truth-trackingly.
My current best guess (although very lightly held) is that politics will go better if it’s possible to pack rhetorical punch into things for a wider variety of reasons, so people don’t feel pressure to say misleading things in order to get attention.
I don’t think I agree with any of that—I think that rational discussion needs to have less rhetorical punch and more specific clarity of proposition, which tends not to meet political/other-dominating needs. And most of what we’re talking about in this subthread isn’t “need to say misleading things”, but “have a confused (or just different from mine) worldview that feels misleading without lots of discussion”.
I look forward to further iterations—I hope I’m wrong and only stuck in my confused model of how rationalists approach such disagreements.
(I edited the comment, curious if it’s clearer now)