What makes you think A and B are mutually exclusive? Or even significantly anticorrelated? If there are enough very different models built out of legitimate facts and theories for everyone to have one of their own, how can you tell they aren’t picking them for political reasons?
Not saying they’re exclusive.
Note (not sure if you had this in mind when you made your comment): the OP comment here wasn’t meant to be an argument per se – it’s meant to articulate what’s going on in my mind and what sort of motions would seem necessary for it to change. It’s more descriptive than normative.
My goal here is to expose the workings of my belief structure, partly so others can help untangle things where applicable, and partly to demonstrate what doublecrux feels like when I do it (to help provide some examples for my current doublecrux sequence).
There are a few different (orthogonal?) ways I can imagine my mind shifting here:
A: increase my prior on how motivated people are, as a likely explanation of why they seem obviously wrong – even people-whose-epistemics-I-trust-pretty-well*.
B: increase my prior on the collective epistemic harm caused by people-whose-epistemics-I-trust, regardless of how motivated they are (i.e. if people are concealing information for strategic reasons, I might respect those strategic reasons as valid, but still eventually conclude that the concealment is damaging enough that it isn’t worth the cost, even if they weren’t motivated at all).
C: refine the manner in which I classify people into “average epistemics” vs “medium epistemics” vs “epistemics I trust pretty well.” (For example, an easy mistake is assuming that because one person at an organization has good epistemics, the whole org must have good epistemics. I think I still fall prey to this more than I’d like.)
D: decrease my prior on how much I should assume people-whose-epistemics-I-trust-pretty-well are coming from importantly different background models – models which might be built on important insights, or to which I should assign a non-trivial chance of being a good model of the world.
E: change my policy so that, socially, in conversation, I reduce the degree to which I advocate approaches along the lines of “try to understand people’s background models before forming (or stating publicly) judgments about their degree of motivation.”
All of these are knobs that can be tweaked, rather than booleans to be flipped. And (hopefully this is obvious) this isn’t an exhaustive list of the ways my mind might change – I’m just trying to articulate some of the more salient options.
It seems plausible that I should do A, B, or C (though I have not yet been persuaded that my current weights are wrong). It does not currently seem plausible that I should do D. E is sufficiently complicated that I’m not sure I have a sense of how plausible it is, but the arguments I’ve encountered so far haven’t seemed that overwhelming.