Thanks for cross-posting! You didn’t realize because I didn’t think to cross-post until after you had commented there. (Sorry for being unclear.) I’ve added a link to this cross-post to the text on benkuhn.net for people who want to comment.
First, good on you for attempting a serious critique of your views. I hope you don’t mind if I’m a little unkind in responding to your critique, as that makes it easier and more direct.
Go ahead! Obviously this is important enough that Crocker’s Rules apply.
Second, the cynical bit: to steal Yvain’s great phrase, this post strikes me as the “we need two Stalins!” sort of apostasy that lands you a cushy professorship. (The pretending to try vs. actually trying distinction seems relevant here.) The conclusion, “we need to be sufficiently introspective,” looks self-serving from the outside. Would being introspective happen to be something you consider a comparative advantage? Is the usefulness of the Facebook group how intellectually stimulating and rigorous you find the conversations, or how many dollars are donated as a result of its existence?
You’ve correctly detected that I didn’t spend as much time on the conclusion as the criticisms. I actually debated not proposing any solutions, but decided against it, for a couple reasons:
The solution is essentially “we need to actually care about these problems I just listed” but phrased more nicely. I think any solution to the problems I listed involves actually caring about them more than we currently do.
The end of this post is the best place I could think of to propose a solution that would actually get people’s attention.
I didn’t want to end without saying anything constructive.
Incidentally, I don’t actually consider being thoughtful about social dynamics a comparative advantage. I think we need more, like, sociologists or something—people who are actually familiar with the pitfalls of being a movement.
Third, the helpful bit: instead of saying “this is what I think would make EA slightly less bad,” consider an alternative prompt: ten years from now, you look back at your EA advocacy as a huge waste of your time. Why?
I see now that it’s not obvious from the finished product, but this was actually the prompt I started with. I removed most of the doom-mongering (of the form “these problems are so bad that they are going to sink EA as a movement”) because I found it less plausible than the actual criticisms and wanted to maximize the chance that this post would be taken seriously by effective altruists. But I stand by these criticisms as the things that I think are most likely to torpedo EA right now. I’m less concerned about one of the principles failing than I am that the principles won’t be enough—that people won’t apply them properly because of failures of epistemology.
Incidentally, I don’t actually consider being thoughtful about social dynamics a comparative advantage. I think we need more, like, sociologists or something—people who are actually familiar with the pitfalls of being a movement.
That deflates that criticism. For the object-level social dynamics problem, I think that people will not actually care about those problems unless they are incentivised to care about those problems, and it’s not clear to me that that is possible.
What does the person who EA is easy for look like? My first guess is a person who gets warm fuzzies from rigor. But then that suggests they’ll overconsume rigor and underconsume altruism.
I’m less concerned about one of the principles failing than I am that the principles won’t be enough—that people won’t apply them properly because of failures of epistemology.
Is epistemology the real failing, here? This may just be the communism analogy, but I’m not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to actually get things done. Do you have a good model of the incentive structure of EA?
I see now that it’s not obvious from the finished product, but this was actually the prompt I started with. I removed most of the doom-mongering (of the form “these problems are so bad that they are going to sink EA as a movement”) because I found it less plausible than the actual criticisms and wanted to maximize the chance that this post would be taken seriously by effective altruists.
Interesting. The critique you’ve written strikes me as more “nudging” than “apostasy,” and while nudging is probably more effective at improving EA, keeping those concepts separate seems useful. (The rest of this comment is mostly meta-level discussion of nudging vs. apostasy, and can be ignored by anyone interested in just the object-level discussion.)
I interpreted the idea of apostasy along the lines of Avoiding Your Belief’s Real Weak Points. Suppose you knew that EA being a good idea was conditional on there being a workable population ethics, and you were uncertain if a workable population ethics existed. Then you would say “well, the real weak spot of EA is population ethics, because if that fails, then the whole edifice comes crashing down.” This way, everyone who isn’t on board with EA because they’re pessimistic about population ethics says “aha, Ben gets it,” and possibly people in EA say “hm, maybe we should take the population ethics problem more seriously.” This also fits Bostrom’s idea: you could tell your past self “look, past Ben, you’re not taking this population ethics problem seriously, and if you do, you’ll realize that it’s impossible and EA is wasted effort.” (And maybe another EAer reads your argument and is motivated to find that workable population ethics.)
I think there’s a moderately strong argument for sorting beliefs by badness-if-true rather than badness-if-true times plausibility because it’s far easier to subconsciously nudge your estimate of plausibility than your estimate of badness-if-true. I want to say there’s an article by Yvain or Kaj Sotala somewhere about “I hear criticisms of utilitarianism and think ‘oh, that’s just uninteresting engineering, someone else will solve that problem’ but when I look at other moral theories I think ‘but they don’t have an answer for X!’ and think that sinks their theory, even though its proponents see X as just uninteresting engineering,” which seems to me a good example of what differing plausibility assumptions look like in practice. Part of the benefit of this exercise seems to be listing out all of the questions whose answers could actually kill your theory/plan/etc., and then looking at them together and saying “what is the probability that none of these answers go against my theory?”
Now, it probably is the case that the total probability is small. (This is a belief you picked because you hold it strongly and you’ve thought about it a long time, not one picked at random!) But the probability may be much higher than it seems at first, because you may have dismissed an unpleasant possibility without fully considering it. (It also may be that by seriously considering one of these questions, you’re able to adjust EA so that the question no longer has the chance of killing EA.)
As an example, let’s switch causes to cryonics. My example of cryonics apostasy is “actually, freezing dead people is probably worthless; we should put all of our effort into making it legal to freeze live people once they get a diagnosis of a terminal condition or a degenerative neurological condition” and my example of cryonics nudging is “we probably ought to have higher fees / do more advertising and outreach.” The first is much more painful to hear, and that pain is both what makes it apostasy and what makes it useful to actually consider. If it’s true, the sooner you know the better.
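To make the “what is the probability that none of these answers go against my theory?” step concrete, here’s a minimal sketch; the weak points and plausibilities below are invented purely for illustration, and the independence assumption is a simplification:

```python
# Toy illustration (invented weak points and numbers): even if each individual
# weak point seems unlikely to be fatal, the chance that at least one of them
# is fatal can be surprisingly high, assuming rough independence.
plausibility_of_being_fatal = {
    "population ethics is unworkable": 0.10,
    "top interventions don't survive scrutiny": 0.05,
    "measurement misses most of the real impact": 0.15,
}

p_none_fatal = 1.0
for p in plausibility_of_being_fatal.values():
    p_none_fatal *= 1 - p

print(f"P(no weak point is fatal)           = {p_none_fatal:.2f}")      # ~0.73
print(f"P(at least one weak point is fatal) = {1 - p_none_fatal:.2f}")  # ~0.27
```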
I think there’s a moderately strong argument for sorting beliefs by badness-if-true rather than badness-if-true times plausibility
This seems to encourage Pascal’s mugging. In fact, it’s even worse than Pascal’s mugging; in Pascal’s mugging, at least the possible damage has to be large enough that the expected value is large even after considering its small probability. Here, the possible damage just has to be large, and it doesn’t even matter that the plausibility is small.
(If you think plausibility can’t be substituted for probability here, then replace “Pascal’s mugging” with “problems greatly resembling Pascal’s mugging”).
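To make this concrete, here’s a toy sketch (invented items and numbers) of how the two sorting rules diverge when one item is catastrophic but wildly implausible:

```python
# Toy comparison (invented items and numbers) of the two sorting rules:
# ranking weak points by badness-if-true alone vs. by badness-if-true times plausibility.
beliefs = [
    # (description, badness_if_true, plausibility)
    ("population ethics turns out to be unworkable", 100, 0.10),
    ("an exotic catastrophe makes all effort worthless", 10_000, 0.0001),
    ("current top charities are merely mediocre", 10, 0.50),
]

by_badness = sorted(beliefs, key=lambda b: b[1], reverse=True)
by_expected_badness = sorted(beliefs, key=lambda b: b[1] * b[2], reverse=True)

# Sorting by badness alone puts the wildly implausible catastrophe first;
# sorting by expected badness (10 vs. 5 vs. 1) demotes it to last.
print([b[0] for b in by_badness])
print([b[0] for b in by_expected_badness])
```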
Re your meta point (sorry for taking a while to respond): I now agree with you that this should not be called a “(hypothetical) apostasy” as such. Evidence which updated me in that direction includes:
Your argument
Referencing a “hypothetical apostasy” seems to have already led to some degradation of the meaning of the term; cf. Diego also calling his counter-argument an apostasy. (Though this may be a language barrier thing?)
This article got a far more positive response than my verbal anticipations expected (though possibly not than System 1 predicted).
Thanks for calling this out. Should I edit with a disclaimer, do you think?
Probably. If you want to do the minimal change, I would rewrite the “how to read this” section to basically be just its last paragraph, with a link to something that you think is a better introduction to EA, and maybe a footnote explaining that you originally wrote this as a response to the apostasy challenge but thought the moderate critique was better.
If you want to do the maximal change, I would do the minimal change and also post the “doom-mongering” parts you deleted, probably as a separate article. (Here, the disclaimer is necessary, though it could be worded so that it isn’t.)
That deflates that criticism. For the object-level social dynamics problem, I think that people will not actually care about those problems unless they are incentivised to care about those problems, and it’s not clear to me that that is possible.
Is epistemology the real failing, here? This may just be the communism analogy, but I’m not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to actually get things done. Do you have a good model of the incentive structure of EA?
I don’t think EA has to worry about incentive structure in the same way that communism does, because EA doesn’t want to take over countries (well, if it does, that’s a different issue). Fundamentally we rely on people deciding to do EA on their own, and thus having at least some sort of motivation (or, like, coherent extrapolated motivation) to actually try. (Unless you’re arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it’s a separate problem from incentives.)
The problem is more that this motivation gets co-opted by social-reward-seeking systems and we aren’t aware of that when it happens. One way to fix this is to fix incentives, it’s true, but another way is to fix the underlying problem of responding to social incentives when you intended to actually implement utilitarianism. Since the reason EA started was to fix the latter problem (e.g. people responding to social incentives by donating to the Charity for Rare Diseases in Cute Puppies), I think that that route is likely to be a better solution, and involve fewer epicycles (of the form where we have to consciously fix incentives again whenever we discover other problems).
I’m also not entirely sure this makes sense, though, because as I mentioned, social dynamics isn’t a comparative advantage of mine :P
(Responding to the meta-point separately because yay threading.)
I don’t think EA has to worry about incentive structure in the same way that communism does, because EA doesn’t want to take over countries (well, if it does, that’s a different issue)
GiveWell is moving into politics and advocacy, there are 80,000 Hours people in politics, and GWWC principals like Toby Ord do a lot of advocacy with government and international organizations, and have looked at aid advocacy groups.
In a more general sense, telling some large, ideologically-cohesive group of people to take as much of their money as they can stand to part with and throw it all at some project, and expecting them to obey, seems like an intrinsically political act.
Unless you’re arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it’s a separate problem from incentives.
I think that the EA system will be both more robust and more effective if it is designed with the assumption that the people in it do not share the system’s utility function, but that win-win trades are possible between the system and the people inside it.
Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.
Insofar as utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, and so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome it. As the community gets bigger, it is less weird and there is more positive support, and so it’s less of a social feedback hit.
This is partially good, because it makes it easier to “get into” trying to implement utilitarianism, but it’s also bad because it means that newer EAs need to care about utilitarianism relatively less.
Saying that incentives don’t matter as long as you remove social-approval-seeking seems to ignore the question of why the remaining incentives would actually push people towards actually trying.
It’s also unclear what’s left of the incentives holding the community together after you remove the social incentives. Yes, talking to each other probably does make it easier to implement utilitarian goals, but at the same time it seems that the accomplishment of utilitarian goals is not in itself a sufficiently powerful incentive; otherwise there wouldn’t be effectiveness problems to begin with. If it were, then EAs would just be incentivized to effectively pursue utilitarian goals.
The critique you’ve written strikes me as more “nudging” than “apostasy,” and while nudging is probably more effective at improving EA, keeping those concepts separate seems useful.
Arguably trying for apostasy, failing due to motivated cognition, and producing only nudging is a good strategy that should be applied more broadly.
A good strategy for what ends?
Finding good nudges!
This seems to encourage Pascal’s mugging. In fact, it’s even worse than Pascal’s mugging; in Pascal’s mugging, at least the possible damage has to be large enough that the expected value is large even after considering its small probability. Here, the possible damage just has to be large, and it doesn’t even matter that the plausibility is small.
(If you think plausibility can’t be substituted for probability here, then replace “Pascal’s mugging” with “problems greatly resembling Pascal’s mugging”).
This is one reason why I think the argument is only moderately strong.
Maybe include plausibility, but put some effort into coming up with pessimistic estimates?
Thanks for calling this out. Should I edit with a disclaimer, do you think?
No problem!
That’s what I like to hear! :P
I think that this is an effective list of real weak spots. If these problems can’t be fixed, EA won’t do much good.
I don’t think EA has to worry about incentive structure in the same way that communism does, because EA doesn’t want to take over countries (well, if it does, that’s a different issue)
“Take over countries” is such an ugly phrase. I prefer “country optimisation”.
I think that attempting effectiveness points towards a strong attractor of taking over countries.