In addition to epistemic priors, there are also ontological priors and teleological priors to cross-compare, each with their own problems. On top of that, people are even worse at comparing non-epistemic priors than they are at comparing epistemic priors. As such, attempts to point out that these are an issue will be read as a battle tactic, and resisted: an effort to move the argument from a domain in which they have the upper hand (from their perspective) to unfamiliar territory in which you’ll have the advantage.
You may share the experience I’ve had that most attempts at discussion don’t go anywhere. We mostly repeat our cached knowledge at each other. If two people who are earnestly trying to grok each other’s positions drill down for long enough, they’ll get to a bit of ontology comparison, where it turns out they have different intuitions because they are using different conceptual metaphors for different moving parts of their models. But this takes so long that, by the time it happens, only a few bits of information get exchanged before one or both parties are too tired to continue. The workaround seems to be that if two people have a working relationship, then over time they can accrue enough bits to get to real cruxes, and this can occasionally suggest novel research directions.
My main theory of change is therefore to find potentially productive pairings of people faster, and to create the conditions under which they can speedrun getting to useful cruxes. Unfortunately, Eli Tyre tried this theory of change and reported that it mostly didn’t work, after a number of good-faith efforts from a number of people. I’m not sure what’s next. I personally believe more progress could be made if people were trained in consciousness of abstraction (per Korzybski), but this is a sufficiently difficult ask that it runs afoul of people’s priors on how much effort to spend on novel skills with unclear payoffs. And a theory of change that halts on the curiosity stopper “other people should do this thing that they clearly aren’t going to do” is not very useful either.
Our sensible Chesterton fences
His biased priors
Their inflexible ideological commitments