Thanks for sharing your reasoning, that was very interesting to read! I kind of agree with the worldview outlined in the quoted messages from the “Closing-Office-Reasoning” channel. Something like “unless you go to extreme lengths to cultivate integrity and your ability to reason in truth-tracking ways, you’ll become a part of the incentive-gradient landscape around you, which kills all your impact.”
Seems like a tough call, having to decide whether an ecosystem has failed vs. whether it’s still better than starting from scratch despite its flaws. (I could imagine that there’s an instinct to just not think about it.)
Sometimes we also just get unlucky, though. (I don’t think FTX was just bad luck, but e.g., with some of the ways the AI stuff played out, I find it hard to tell. Of course, just because I find it hard to tell doesn’t mean it’s objectively hard to tell. Maybe some decisions really were stupid at the time, not just in hindsight.)
I’m curious whether you think there are “good EA orgs” whose leadership clears the threshold needed to predictably be a force for good in the world (my view is yes!). If yes, do you think that this isn’t necessarily enough for “building the EA movement” to be net positive? E.g., maybe you think growth boosts the not-so-good orgs just as much as the good ones, and “burns the brand” in the process?
I’d say that, if there are some “good EA orgs,” that’s a reason for optimism: we can emulate what’s good about them and their culture. (It could still make sense to oppose further growth if you believe the ratio has become too skewed.) Whereas, if there aren’t any, then we’re already in trouble, so there’s a bit of a wager in favor of assuming they exist.