[Link] Review of “Doing Good Better”
The book is by William MacAskill, co-founder of 80,000 Hours and Giving What We Can. Excerpt:
Effective altruism takes up the spirit of Singer’s argument but shields us from the full blast of its conclusion; moral indictment is transformed into an empowering investment opportunity...
Either effective altruism, like utilitarianism, demands that we do the most good possible, or it asks merely that we try to make things better. The first thought is genuinely radical, requiring us to overhaul our daily lives in ways unimaginable to most... The second thought – that we try to make things better – is shared by every plausible moral system and every decent person. If effective altruism is simply in the business of getting us to be more effective when we try to help others, then it’s hard to object to it. But in that case it’s also hard to see what it’s offering in the way of fresh moral insight, still less how it could be the last social movement we’ll ever need.
I’d like to play the devil’s advocate here for a moment. I’m not entirely sure how I should respond to the following argument.
That raises the question: people often disagree on what is a better state of things. (And of course people say that those who disagree with them are not “decent”.)
Don’t ignore the fact that people agree on only a very small set of altruistic acts. And even then, many people are neutral about them, or nearly so, or they only support them while ignoring the opportunity cost of, e.g., giving money to these people rather than to other, less fortunate people.
The great majority of things people want, they don’t want in common. Do you want to improve technology and medicine, or prevent unfriendly AI, or convert people to Christianity, or allow abortion, or free slaves, or prevent use of birth control, or give women equal legal rights, or make atheism legal, or prevent the disrespect and destruction of holy places, or remove speech restrictions, or allow free market contracts? Name any change you think was a great historical moral advance, and you’ll find people who fought against it.
Most great causes have people fighting both for and against them. This is unsurprising: when everyone is on the same side, the problem tends to be resolved quickly. The only things everyone agrees are bad, but which keep existing for decades, are those that people are apathetic about—not the greatest moral causes of the day.
Does selecting causes for the widest moral consensus mean selecting the most inoffensive ones? If not, why not? Do you believe that impersonal and accidental forces of history generate as much misery for you to fight against as the deliberate efforts of people who disagree with you? Wouldn’t that be surprising if it were true?
I don’t think that’s the case. Karma-based moral systems work quite well without it.
There’s a scene in “Way of the Peaceful Warrior” where the protagonist asks the wise man why he doesn’t do something substantial with his life instead of working at a filling station. He replies that he is “of service” at the filling station. The act of being “of service” matters more than the value it creates. From that perspective, it is especially better than “trying” to do something.
Agreed. Is there a particular reason this is a reply to my comment and not at the top level? Is it intended to support my point via another line of argument?
You are right, it would have been better at the top level.
“Do you believe that impersonal and accidental forces of history generate as much misery for you to fight against as the deliberate efforts of people who disagree with you? Wouldn’t that be surprising if it were true?”
Yes, I believe that, and no, it is not surprising. Issues where people disagree are likely to be mixed issues, where making changes will do harm as well as benefit. That is exactly why people disagree. So working on those issues will tend to produce less benefit than working on the issues everyone agrees on, which are likely to be much less mixed.
Harm and benefit are two-place words; harm is always to someone, and according to someone’s values or goals.
If two people have different values—which can be as simple as each wanting the same resource for themselves, or as complex as different religious beliefs—then harm to the one can be benefit to the other. It might not be a zero-sum game because their utility functions aren’t exact inverses, but it’s still a tradeoff between the two, and each prefers their own values over the other’s.
On this view, such issues where people disagree are tautologically those where each change one of them wants benefits themselves and harms the other. Any changes that benefit everyone are quickly implemented until there aren’t any left.
If you share the values of one of these people, then working on the problem will result in benefit (by your values), and you won’t care about the harm (by some other person’s values).
If on most or all such divisive issues, you don’t side with any established camp, that is a very surprising fact that makes you an outlier. Can you build an EA movement out of altruists who don’t care about most divisive issues?
A disagreement could resolve into one side being mostly right and another mostly wrong, so actual harm+benefit isn’t necessary, only expected harm+benefit. All else equal, harm+benefit is worse than pure benefit, but usually there are other relevant distinctions, so that the effect of a harm+benefit cause could overwhelm available pure benefit causes.
The disagreements I was talking about—which I claim are many, perhaps most, disagreements—are not about unknown or disputed facts, but about conflicting values and goals. Such disagreements can’t be resolved into sides being objectively right or wrong (unless you’re a moral realist). If you side with one of them, that’s the same as saying their desires are ‘right’ to you, and implementing their desires usually (in most moral theories in practice) outweighs the cost of the moral outrage suffered by those who disagree. (E.g., I would want to free one slave even if it made a million slave-owners really angry, very slightly increasing the incidence of heart attacks and costing more QALYs in aggregate than the one slave gained.)
This is true in principle, but since I take disagreements pretty seriously I think it is normally false in practice. In other words there is actual harm and actual benefit in almost every real case.
Of course the last part of your comment is still true, namely that a mixed cause could still be better than a pure benefit cause. However, this will not be true on average, especially if I am always acting on my own opinion, since I will not always be right.
That’s the question: what is the base rate among the options you are likely to notice? If visible causes come in equivalent pairs, one with harm in it and one without, all other traits similar, that would be true. Similarly if pure benefit causes tend to be stronger. But it could be the case that the best pure benefit causes have less positive impact than the best mixed benefit causes.
How does your taking disagreements seriously (what do you mean by that?) inform the question of whether most real (or just contentious?) causes involve actual harm as well as benefit? (Or do you mean to characterize your use of the term “disagreement”, i.e. which causes you count as involving disagreement? For example, global warming could be said to involve no disagreement that’s to be taken seriously.)
Yes, it could be the case that the best pure benefit causes have less positive impact than the best mixed benefit causes. But I have no special reason to believe this is the case. If the benefit of the doubt is going to go to one side without argument, I would put it on the side of pure benefit causes, since they don’t have the additional negative factor.
By taking disagreements seriously, I mean that I think that if someone disagrees with me, there is a good chance that there is something right about what he is saying, especially on issues of policy (i.e. I don’t expect people to advocate policies that are 100% bad, with extremely rare exceptions).
That global warming is happening, and that human beings are a substantial part of the cause, is certainly true. This isn’t an issue of policy in itself, and I don’t take disagreement about it very seriously in comparison to most disagreements. However, there still may be some truth in the position of people who disagree, e.g. there is a good chance that the effects will end up being not as bad as generally predicted. A broad outside view also suggests this, as for example with previous predicted disasters such as the Kuwait oil fires or the Y2K computer issue.
In any case the kind of disagreement I was talking about was about policy, and as I said I don’t generally expect people other than Hitler to advocate purely evil policies. Restricting carbon emissions, for example, may be a benefit overall, but it has harmful effects as well, and that is precisely the reason why some people would oppose it.
Do you disagree with the point you are making, or merely with the pro-book/anti-book side where it fits? I think being a devil’s advocate is about the former, not the latter. (There is also the move of steelmanning a flaw, looking for a story that paints it as clearly bad, to counteract the drive to excuse it, which might be closer to what you meant.)
Btw, Scott recently wrote a post about issues with admitting controversial causes in altruism.
Like I said, I’m not sure if I agree with it yet. It’s novel to me and it seems valid (pending empirical data I don’t have yet), but I’m pretty sure I haven’t thought through all its implications yet, or the other theories in its class. That’s why I seek other opinions, particularly from anyone who has encountered this idea before.
“Devil’s advocate” was referring to the fact that this is an argument against EA, while I am generally in favor of EA.