Curated.

I like that this post took a very messy, complicated subject and picked one facet of it to gain a really crisp understanding of. (MIRI's 2018 Research Direction update goes into some thoughts on why you might want to become deconfused about a subject, and the Rocket Alignment Problem is a somewhat more narrativized version.)
I personally suspect that the principles Zack points to here aren't the primary principles at play in why epistemic factions form. But it is interesting that even when you strip away tons of messy human-specific cognition (e.g. a propensity for tribal loyalty for ingroup-protection reasons), a very simple model of purely epistemic agents may still form factions.
I also really liked that Zack lays out his reasoning very clearly, with coding steps that you can follow along with. I should admit that I haven't followed along all the way through (I got about a third of the way through before realizing I'd need to set aside more time to really process it). So, this curation is not an endorsement that all his coding checks out. The bar for Curated is, unfortunately, not the bar for Peer Review. (But! Later on, when we get to the 2020 LessWrong Review, I'd want this sort of thing checked more thoroughly.)
It is still relatively uncommon on LessWrong for someone to even rise to the bar of "clearly laying out their reasoning in a very checkable way", and when someone does that while making a point that seems interesting and important-if-true, it seems good to curate it.
"lays out his reasoning very clearly [...] the bar for Peer Review"
I mean, it’s not really my reasoning. This academic-paper-summary-as-blog-post was basically me doing “peer” review for Synthese (because I liked the paper, but was annoyed that someone would publish a paper based on computer simulations without publishing their code).