I intentionally dodged giving more details in these spots, because I want people to reason from the information and figure out what’s going on and what that means, and I don’t think updating ‘against’ (or for) things is the way one should be going about updating.
Also because Long Post Is Long and getting into those other things would be difficult to write well, make things much longer, and be a huge distraction from actually processing the information.
I think there’s a much better chance of people actually figuring things out this way.
That doesn’t mean you’re not asking good questions.
I’d give the following notes.
“Doing good better” implies a lot of framework already in ways worth thinking about.
The EA definition above has even more implicit framework, and my instinctive answer to whether I roughly endorse it would be Mu. My full answer is at least one post.
EA definitely has both shared moral frameworks that are like water to a fish, and also implied moral frameworks that fall out of actions and revealed preferences, many of which wouldn’t be consciously endorsed if made explicit. I disagree with much of both, but I want readers to be curious, ask what those frameworks are, and figure that out rather than taking my word for it. I’ll leave whether I disagree with them for another time, if and when I have the time and method to explain properly.
As for disagreements with EA’s modes of operation, I believe I do my best to largely answer those through the full content of the post.
Apologies that I can’t more fully answer, at least for now.
OK, thanks; this sounds reasonable.
That said, I fear that people in my position—viz., students who don’t really know non-student EAs*—don’t have the information to “figure out what’s going on and what that means.” So I want to note here that it would be valuable for people like me if you or someone else someday wrote a post explaining more about what’s going on in organized EA (and I’ll finish reading this post carefully, since it seems relevant).
*I run my college’s EA group; even relative to other student groups I/we are relatively detached from organized EA.
Sidenote: my Zvi-model is consistent with Zvi being worried about organized EA both for reasons that would also worry me (e.g., “I have deep worries that important things are deeply, deeply wrong, especially epistemically, and results in an increasingly Goodharted and inherently political and insider-biased system”) and for reasons that would not worry me much (e.g., EA is quite demanding or quite utilitarian, or something related to “doing good better” or the definition of EA being bad). So I’m not well-positioned to infer much from the mere fact that Zvi (or someone else) has concerns. Of course, it’s much healthier to form beliefs on the basis of understanding rather than deference anyway, so it doesn’t really matter. I just wanted to note that I can’t infer much from your and others’ affects for this reason.