I think it's a little clearer in the comment, but I'm confused about the main post: in the case of subtle disagreements that _aren't_ clearly wrong nor intended to mislead, why do you want a word or concept that makes people sit up in alarm? Only after you've identified the object-level reasoning that shows a claim to be both incorrect and object-important should you examine the process-importance of why Alice is saying it (though in reality, you're evaluating this the whole time, just as she's evaluating yours).
The biggest confounding issue in my experience is that for models deep enough, and used long enough, Alice's prior that Bob is the one with a problem is MUCH higher than her prior that her model is inappropriate for this question. And Bob's beliefs point in exactly the inverse direction, equally defying introspection into the true reasons he holds them.
If you're finding this after a fair bit of discussion, and it's a topic without fairly straightforward empirical resolution, you're probably in the "agree to disagree" state (admitting that on this topic you don't have sufficient mutual knowledge of each other's rational beliefs to agree). And then you CERTAINLY don't want a word that makes people "sit up in alarm," since which of you is deemed biased becomes entirely a matter of politics.
There are other cases where Alice is uncooperative, and you're willing to assume her motives or process are so bad that you want others not to be infected. That's more a warning to others than a statement that Alice should be expected to respond to. And it's also going to hit politics and backfire on Bob, at least some of the time. This case comes up a lot in public statements by celebrities or authorities: there's no room for discussion at the object level, so you kind of jump to assuming bad faith if you disagree with the statements. Reactions from those who disagree with Paul Krugman's NYT columns are an example of this: "he's got a Nobel in Economics, he must be intentionally misleading people by ignoring all the complexity in his bad policy advice."
I’ll try to write up a post that roughly summarizes the overall thesis I’m trying to build towards here, so that it’s clearer how individual pieces fit together.
But a short answer to "why would I want a clear handle for 'sitting upright in alarm'" is that I think it's at least sometimes necessary (or at the very least inevitable) for this sort of conversation to veer into politics, and what I want is to eventually be able to discuss politics-qua-politics sanely and truth-trackingly.
My current best guess (although very lightly held) is that politics will go better if it’s possible to pack rhetorical punch into things for a wider variety of reasons, so people don’t feel pressure to say misleading things in order to get attention.
"if it's possible to pack rhetorical punch into things for a wider variety of reasons, so people don't feel pressure to say misleading things in order to get attention."
I don't think I agree with any of that. I think rational discussion needs less rhetorical punch and more specific clarity of proposition, which tends not to meet political or other dominance-seeking needs. And most of what we're talking about in this subthread isn't "needing to say misleading things," but "having a confused (or just different-from-mine) worldview that feels misleading without lots of discussion."
I look forward to further iterations—I hope I’m wrong and only stuck in my confused model of how rationalists approach such disagreements.