So you’d be upset to, say, see research proposals prioritized for funding using explicitly utilitarian criteria? How would you rather see them prioritized?
I have had on the back burner for… probably six months now a post on why I am turned off by / leery of EA, despite donating 10% of my income to charity, caring about x-risk, and so on. One of the reasons that post has stayed on the back burner is “Why Our Kind Can’t Cooperate” plus “The Virtue of Silence”—given how few of the issues are methodological, it seemed better to just silently let EA be, or to swallow my disagreements and endorse it, than to spell out my disagreements and expect them to be taken seriously.
But this is suggesting to me that I probably should put them forward, in order to make this conversation easier if nothing else.
After talking with some EAs at the SF Solstice, I think it would be net positive to write this post. Expect it by the end of December if all goes well.
If there’s some reason to avoid broadcasting your thinking, you could just leave a comment in this thread instead of making a toplevel post. (Or send me a private message.) Anyway, you’ve got me curious already… is your objection to EA in principle, what the EA movement looks like in practice, or what the EA movement might become in practice? Does it extend to any explicit utilitarian calculations in general? (Feel free not to answer if you don’t want to.) Personally I’m a bit apprehensive about what the EA movement might become, but the EA leadership seems apprehensive too, so that’s reassuring.
Why not post it as username2? (If this is an equivalent to username, that is. I think LW shouldn’t disregard confessionals, since clearly people talk much more freely there.)
Please do.