OK, thanks; this sounds reasonable.
That said, I fear that people in my position—viz., students who don’t really know non-student EAs*—don’t have the information to “figure out what’s going on and what that means.” So I want to note here that it would be valuable for people like me if you or someone else someday wrote a post explaining more about what’s going on in organized EA (and I’ll finish reading this post carefully, since it seems relevant).
*I run my college’s EA group; even relative to other student groups, I/we are relatively detached from organized EA.
Sidenote: my Zvi-model is consistent with Zvi being worried about organized EA both for reasons that would also worry me (e.g., “I have deep worries that important things are deeply, deeply wrong, especially epistemically, and results in an increasingly Goodharted and inherently political and insider-biased system”) and for reasons that would not worry me much (e.g., that EA is quite demanding or quite utilitarian, or something related to “doing good better” or the definition of EA being bad). So I’m not well-positioned to infer much from the mere fact that Zvi (or someone else) has concerns. Of course, it’s much healthier to form beliefs on the basis of understanding rather than deference anyway, so it doesn’t really matter. I just wanted to note that I can’t infer much from your and others’ affect for this reason.