I see in the “Recent on Rationality Blogs” panel an article entitled “Why EA is new and obvious”. I’ll take that as a prompt to list my three philosophical complaints about EA:
I believe in causality as a basic moral concept. My ethical system absolutely requires me to avoid hurting people, but is much less adamant about helping people. While some people claim to be indifferent to this distinction, in practice people’s revealed moral preferences suggest that they agree with me (certainly the legal system agrees with me).
I also believe in locality as an ontologically primitive moral issue. I am more morally obligated to my mother than to a random stranger in Africa. Finer gradations are harder to tease out, but I still feel more obligation to a fellow American than to a citizen of another country, ceteris paribus.
I do not believe a good ethical system should rely on moral exhortation, at least not to the extent that EA does. Such systems will never succeed in solving the free-rider problem. The best strategy to produce ethical behavior is simply to appeal to self-interest, by offering people membership in a community that confers certain benefits, if the person is willing to follow certain rules.
The best strategy to produce ethical behavior is simply to appeal to self-interest
This is only true of ethical behaviours that can be produced by appealing to self-interest. That might not be all of them. I don’t see how you can claim to know that the best strategies are all in this category without actually doing the relevant cost-benefit calculations.
My claim is based on historical analysis. Historically, the ideas that benefit humanity the most in the long term are things like capitalism, science, the criminal justice system, and (to a lesser extent) democracy. These ideas are all based on aligning individual self-interest with the interests of the society as a whole.
Moral exhortation, it must be noted, also has a hideous dark side, in that it delineates an ingroup/outgroup distinction between those who accept the exhortation and those who reject it, and that distinction is commonly used to justify violence and genocide. Judaism, Christianity and Islam are all based on moral exhortation and were all used in history to justify atrocities against the infidel outgroup. The same is true of communism. Hitler spent a lot of time on his version of moral exhortation. The French revolutionaries had an inspiring creed of “liberty, equality and fraternity” and then used that creed to justify astonishing bloodshed, first within France and then throughout Europe.
I find your list of historical examples less than perfectly convincing. The single biggest success story there is probably science, but (as ChristianKl has also pointed out) science is not at all “based on aligning individual self-interest with the interests of the society as a whole”; if you asked a hundred practising scientists and a hundred eminent philosophers of science to list twenty things each that science is “based on” I doubt anything like that would appear in any of the lists.
(Nor, for that matter, is science based on pursuing the interests of others at the cost of one’s own self-interest. What you wrote is orthogonal to the truth rather than opposite.)
I do agree that when self-interest can be made to lead to good things for everyone it’s very nice, and I don’t dispute your characterization of capitalism, criminal justice, and democracy as falling nicely in line with that. But it’s a big leap from “there are some big examples where aligning people’s self-interest with the common good worked out well” to “a good moral system should never appeal to anything other than self-interest”.
Yes, moral exhortation has sometimes been used to get people to commit atrocities, but atrocities have been motivated by self-interest from time to time too. (And … isn’t your main argument against moral exhortation that it’s ineffective? If it turns out to be a more effective way to get people to commit atrocities than appealing to self-interest is, doesn’t that undermine that main argument?)
The distrust of individual scholars found in science is in fact an example of aligning individual incentives, by making success and prestige dependent on genuine truth-seeking.
But it’s a big leap from “there are some big examples where aligning people’s self-interest with the common good worked out well” to “a good moral system should never appeal to anything other than self-interest”.
The claim is not so much that moral appeals should never be used, but that they should only happen when strictly necessary, once incentives have been aligned to the greatest possible extent. Promoting efficient giving is an excellent example, but moral appeals are of course also relevant on the very small scale. Effective altruists are in fact very good at using self-interest as a lever for positive social change, whenever possible—this is the underlying rationale for the ‘earning to give’ idea, as well as for the attention paid to extreme poverty in undeveloped countries.
The distrust of individual scholars found in science is in fact an example of aligning individual incentives, by making success and prestige dependent on genuine truth-seeking.
Scientists generally do trust scientific papers not to lie about the results they report. Even an organisation like the FDA frequently gives companies the presumption of correct data reporting, as demonstrated well in the Ranbaxy case.
“I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. And never a year passes but I get some surprise that pushes my limit a little farther.” (Charlie Munger)

His favorite example is Federal Express. Of course, in a business like Federal Express, self-interest incentives are the biggest driver of performance. That doesn’t mean that they are the biggest driver in a project like Wikipedia.
Historically, the ideas that benefit humanity the most in the long term are things like capitalism, science, the criminal justice system, and (to a lesser extent) democracy. These ideas are all based on aligning individual self-interest with the interests of the society as a whole.
What does science have to do with self-interest? Making one’s claims in a way that they can be falsified by others isn’t normally in people’s self-interest.
Science appeals to sacred values of truth to prevent people from publishing results based on fabricated data. If it didn’t, and people faked data whenever it was in their self-interest, the scientific system wouldn’t get anywhere.
There may be an ethically relevant distinction between a rule that tells you to avoid being the cause of bad things and a rule that says you should cause good things to happen. However, I am not convinced that causality is relevant to this distinction; as far as I can tell, both concepts are about causality. We may be using words differently. Could you explain why you think this distinction is about causality?
In my understanding, consequentialism doesn’t accept a moral distinction between sins of omission and sins of action. If a person dies whom I could have saved through some course of action, I’m just as guilty as I would be if I murdered the person. In my view, there must be a distinction between murder (=causing a death) and failure to prevent a death.
If you want to be more formal, here’s a good rule. Given a death, would the death still have occurred in a counterfactual world where the potentially guilty person did not exist? If the answer is yes, the person is innocent. Since lots of poor people would still be dying if I didn’t exist, I’m thereby exonerated of their deaths (phew). I still feel bad about eating meat, though.

Have you read Scott Alexander’s piece on Newtonian ethics?
If we look at this issue from the angle of “ethics is a memetic system evolved by cultural group selection”, then I guess it makes sense that (1) systems promoting helping your cultural group would have an advantage over systems promoting helping everyone to the same degree, and (2) systems that allow people to reach an “ethical enough” state reasonably fast would have an advantage over systems where no one can realistically become “ethical enough”.

The problem appears when someone tries to extrapolate that concept.
I am not sure how to answer the question “should we extrapolate our ethical concepts?”. Because “should” itself is within the domain of ethics, and the question is precisely about whether that “should” should also be extrapolated.
I won’t talk about your first two points—I kind of agree (but I’m an anti-realist and think you’re a bit strong in your beliefs). I’d like to hear more about
I do not believe a good ethical system should rely on moral exhortation,
I don’t get it. Ethical systems exist in people regardless of transmission and enforcement mechanisms. Put another way, what mechanism would you add to EA that would make it better? EA + force doesn’t seem an improvement. EA + rejection of heretics likewise seems a limitation rather than an improvement.
I’d also like to point out that “the free rider problem” isn’t fundamental. My preference for solving it is to be so productive that I just don’t care if someone is riding free—as long as they and I are happy, it’s all good.
Ignoring the free rider problem until we get the holodeck doesn’t seem to be a serious solution. If you have a deadbeat brother sponging off you it is all well and good to think that one day you’ll win the lottery and you won’t care. That only works with your own money though. DB is talking about a system that you are trying to get other people to buy into. They won’t do that if your system is transparently rob-able. They’ll rob it instead. People aren’t dumb. Give em the choice of pulling the cart or sitting on it and you pull alone.
I mostly meant that “free rider” isn’t a problem in altruism (where you pay for things if you think it improves the world), only in capitalist financing (where you pay only for things where you expect to capture more value than your costs).
ALL recipients of charity and social support are free riders: they’re taking more value than they’re contributing. And I don’t care, and neither should you. Calling them “deadbeats” implies you know and can judge WHY they’re in need of help, and you are comparing deservedness rather than effectiveness. I recommend not doing that; deciding what people deserve pretty much cannot be done rationally.
I mostly meant that “free rider” isn’t a problem in altruism
Actually, it is. The problem with ‘free riding’ is not that it’s somehow unfair to the people who are picking up the slack, it’s that it distorts behavior. You don’t want to give money to beggars if this just incents more people to beg and begging is a horrible job—and this is true even if you’re altruistic towards people who might beg. You’ll need to find a way to give money that doesn’t have these bad consequences, even if that means expending some resources.
I was going for a specific common situation, a family member who is mooching. I didn’t mean that all recipients of charity are deadbeats. Obviously, that’s going to depend on the individuals in question.
This is a “teach/give fish” issue here. If you give people stuff they don’t “earn”, then they have, in a way, earned it. I mean, value judgement aside, they have it now, right? They were miserable enough in front of you that they got it off you. Good on ’em. Mad beggar skills.
But that’s just on a personal level. If you expand that, and you aren’t just a dude who is a soft touch, but actually build an organization on the principle of “see cry, give hanky”, then your charity is vulnerable to a free rider attack. You gotta fix that, if you actually want to do good and not just create a client group.
If you’ve ever seen the situation of “it would actually be bad for me to get a job because I’d lose X benefit” you get what I’m talking about here. It is a real problem, and the fact that it takes a hard heart to look at it doesn’t make it less real. You have to solve the free rider problem if you want to do charity well, like you have to solve the impostor problem if you want to do encryption.
Even if you believe that locality matters, EA principles like room for more funding, or judging interventions by their effectiveness rather than by the effort put into them, still apply.
The best strategy to produce ethical behavior is simply to appeal to self-interest
That depends largely on your audience. For some people self-interest is very important. For other people fairness is more important. It’s a mistake to generalize too much from one example in that regard.
Clare Graves’s developmental theory, for example, groups people into different stages, and people at different stages are motivated differently.