“The central problem [of coordination between rationalists] is that people use beliefs for many purposes—including tracking what is true. But another, practically important purpose is coordination. We think it’s likely that if an aspiring rationalist decides to “stop bullshitting”, they lose some of the social technology often used for successfully coordinating with other people. How exactly does this dynamic affect coordination? Can we do anything about it?”
Also: based on experience from the Epistea Summer Experiment (ESE), I have some “rich data but small sample size” research on rationalists failing at coordination in experimental settings. Based on this, I don’t think rationalists would benefit most from, e.g., more advanced and complex S2-level coordination schemes, but rather from something like improving the “S1/S2” interface / getting better at coordination between their S1 and S2 coordination models. (In a somewhat similar way, I believe most people’s epistemic rationality benefits more from learning things like “noticing confusion” than from, e.g., “learning more from the heuristics and biases literature”.)
(Also, as a sidenote … we have developed a few group rationality techniques/exercises for ESE. I’m unlikely to write them up for LW myself, but if someone were interested in something like “writing things up in a legible way based on conversations”, I would be happy to spend time on that (it could also likely be paid work).)