In discourse, drilling down into each individual step of the update process, and into each precondition for its validity, is likely to be more fruitful for pinpointing (if not resolving) areas of disagreement and confusion than trying to reach consensus, or even to operationalize a disagreement, merely by requiring that the end results of some unspecified underlying mental motions be expressed in Bayesian terms.
I don’t understand this. What two things are being contrasted here? Is it “inhabiting the other’s hypothesis” vs. “finding something to bet on”?
EDIT: Also, this is a fantastic post.
But if I had to say one thing, it’s that requirements 1 and 2 are actually just the requirements for clear thinking on a topic. If your “belief” about X doesn’t satisfy them, then I’d say your thinking on X is muddled, incoherent, splintered, or coarse; it doesn’t bind to reality, or it’s just belief in a shibboleth/slogan.
Is it “inhabiting the other’s hypothesis” vs. “finding something to bet on”?
Yeah, sort of. I’m imagining two broad classes of strategy for resolving an intellectual disagreement:
- Look directly for concrete differences of prediction about the future, in ways that can be suitably operationalized for experimentation or betting. The strength of this method is that it almost-automatically keeps the conversation tethered to reality; the weakness is that it can lead to a streetlight effect of only looking in places where the disagreement can be easily operationalized.
- Explore the generators of the disagreement in the first place, by looking at existing data and mental models in different ways. The strength of this method is that it enables the exploration of less-easily operationalized areas of disagreement; the weakness is that it can pretty easily degenerate into navel-gazing.
An example of the first bullet is this comment by TurnTrout.
An example of the second would be a dialogue or post exploring how differing beliefs and ways of thinking about human behavior generate different starting views on AI, or lead to different interpretations of the same evidence.
Both strategies can be useful in different places, and I’m not trying to advocate for one over the other. I’m saying specifically that the rationalist practice of applying the machinery of Bayesian updating in as many places as possible (e.g. thinking in terms of likelihood ratios, conditioning on various observations as Bayesian evidence, tracking allocations of probability mass across the whole hypothesis space) works at least as well with the second strategy as with the first, and often better. Thinking in Bayesian terms works well with the second strategy because it can help pinpoint the area of disagreement and keep the conversation from drifting into navel-gazing, even if it doesn’t actually produce any operationalizable differences in prediction.
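(For concreteness, here’s a minimal sketch of the kind of bookkeeping I mean by “tracking allocations of probability mass across the whole hypothesis space.” The hypotheses, priors, and likelihoods below are entirely made up for illustration; the point is only the shape of the update, not any particular numbers.)

```python
# Minimal sketch of Bayesian bookkeeping over a small hypothesis space.
# All hypotheses, priors, and likelihoods here are hypothetical.

def update(priors, likelihoods):
    """Return posterior probabilities given priors and P(observation | hypothesis)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Three toy hypotheses about what generates a disagreement.
priors = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}

# How strongly each hypothesis predicts some piece of evidence that isn't
# easily bet on, e.g. "the other person reads the same study very differently".
likelihoods = {"model_A": 0.2, "model_B": 0.6, "model_C": 0.5}

posteriors = update(priors, likelihoods)
print(posteriors)
# The likelihood ratio between any two hypotheses (e.g. 0.6 / 0.2 = 3 for B vs. A)
# is what shifts probability mass between them, even with nothing concrete to bet on.
```

Even in a conversation with no operationalizable bets, keeping this kind of ledger in mind makes it easier to say exactly which observation is doing the work and for which hypotheses, which is what keeps the second strategy from sliding into navel-gazing.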