If you’re not actually on the same side as the people who disagree with you, why would you (as a very strong but defeasible default) role-play otherwise?
Because there’s ambiguity, and there are self-fulfilling prophecies. When there’s potential for self-fulfilling prophecies, there’s a free variable that isn’t a purely epistemic question, e.g. “Are we on the same side?”. Giving any answer to that question is, in some cases, implicitly deciding to add your weight to the existence of a conflict.
You role-play to add some driving force to the system, driving it towards fixed points that involve sustained actual discourse. This leads to more correct conclusions. But you’re right that this is a very different sort of justification than Bayesian information processing; it needs better theorizing, and it is mixed in with behavior that’s simply deceptive.