I’m hoping that “more babies should be born altruists” is something almost everyone can agree on.
Sorry, nope :-/
Do either of these apply to you?
Don’t think so. I don’t foresee any difficulties in sitting in my wheelchair, shaking my cane and yelling “You kids get off my lawn!” :-D And I rather doubt the conversion to altruism is going to be so total that I won’t be able to find anyone to be friends with.
But yes, I suspect that a world full of altruists is going to have a few unpleasant failure modes.
First, what are we talking about? The opposite of “altruistic” is “selfish”—so we are talking about people who don’t care much about their personal satisfaction, success, or well-being, but care greatly about the well-being of some greater community. There are other words usually applied to such people. If we approve of them and their values (and, by implication, goals) we call them “idealists”. If we disapprove of them, we call them “fanatics”.
Early communists, for example, were altruists—they were building a paradise for all workers everywhere. That didn’t stop them from committing a variety of atrocities and swiftly evolving into the most murderous regimes in human history.
The problem, basically, is that if you think that the needs and wants of an individual are insignificant in the face of the good that can accrue to the larger community, you are very willing to sacrifice individuals for that greater good. That is a well-trod path and we know where it leads.
If we approve of them and their values (and, by implication, goals) we call them “idealists”. If we disapprove of them, we call them “fanatics”.
Sure… and if they operate using reason and evidence, we call them “scientists”, “economists”, etc. (Making the world better is an implicit value premise in lots of academic work, e.g. there’s lots of Alzheimer’s research being done because an aging population is going to mean lots of Alzheimer’s patients. Most economists write papers on how to facilitate economic growth, not economic crashes. Etc.) I agree that releasing a bunch of average-intelligence, average-reflectiveness altruists on the world is not necessarily a good idea, and I didn’t propose it.
The problem, basically, is that if you think that the needs and wants of an individual are insignificant in the face of the good that can accrue to the larger community, you are very willing to sacrifice individuals for that greater good.
I mean, the Allied soldiers that died during WWII were sacrificed for the greater good in a certain sense, right? I feel like the real problem here might be deeper, e.g. willingness of the population to accept any proposal that authorities say is for the greater good (not necessarily quite the same thing as altruism… see below).
I think there are a bunch of concepts here that are orthogonal in principle but related in practice, and it’s important to separate them:
Individualism vs collectivism (as a sociological phenomenon, e.g. “America’s culture is highly individualistic”). Maybe the only genetic tinkering that’s possible would also increase collectivism and cause problems.
Looking good vs being good. Maybe due to the conditions human altruism evolved in (altruistic punishment etc.), altruists tend to be more interested in seeming good (e.g. obsess about not saying anything offensive) than being good (e.g. figure out who’s most in need and help that person without telling anyone). It could be that you are sour on altruism because you associate it with people who try to look good (self-proclaimed altruists), which isn’t necessarily the same group as people who actually are altruists (anything from secretly volunteering at an animal shelter to a Fed chairman who thinks carefully, is good at their job, and helps more poor people than 100 Mother Teresas). Again, in principle it seems like these axes are orthogonal but maybe in practice they’re genetically related.
Utilitarianism vs deontology (do you flip the lever in the trolley problem). EY wrote a sequence about how deontological rules are a useful safeguard on utilitarianism. I specified that my utopia would have people who were highly reflective, so they should understand this suggestion and either follow it or improve on it.
Whatever dimension this quiz measures. Orthogonal in theory, maybe related in practice.
A little knowledge is a dangerous thing—sometimes people are just wrong about things. Even non-communists thought communist economies would outdo capitalist ones. I think in a certain sense the failure of communism says more about the fact that society design is a hard problem than about the dangers of altruism. Probably a good consideration against tinkering with society in general, which includes genetic engineering. However, it sounds like we both agree that genetic engineering is going to happen, and the default seems bad. I think the fundamental consideration here is how much to favor the status quo vs some new, unproven but promising idea. Again, seems theoretically orthogonal to altruism but might be related in practice.
Gullibility. I’d expect that agreeable people are more gullible. Orthogonal in theory, maybe related in practice.
And finally, altruism vs selfishness (insofar as one is a utilitarian, what’s the balance of your own personal utility vs that of others). I don’t think making people more altruistic along this axis is problematic ceteris paribus (as long as you don’t get into pathological self-sacrifice territory), but maybe I’m wrong.
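The altruism-vs-selfishness balance in that last item can be made concrete as a single weight in a utility function. A minimal, purely illustrative sketch (the function and parameter names are my own, not anything proposed in this thread):

```python
# Hypothetical formalization of the altruism-selfishness axis: a weight
# a in [0, 1] mixing one's own utility with the average utility of others.
# a = 0 is pure selfishness; a = 1 is pure (pathological?) self-sacrifice.

def total_utility(own: float, others: list[float], a: float) -> float:
    """Weighted blend of own utility and the mean utility of everyone else."""
    if not 0.0 <= a <= 1.0:
        raise ValueError("altruism weight must lie in [0, 1]")
    others_avg = sum(others) / len(others) if others else 0.0
    return (1 - a) * own + a * others_avg

# A fully selfish agent (a = 0) ignores others entirely:
print(total_utility(10.0, [0.0, 0.0], a=0.0))  # 10.0
```

On this toy model, “making people more altruistic along this axis” just means nudging `a` upward, without touching any of the other (in-principle orthogonal) dimensions in the list above.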
This is a useful list of failure modes to watch for when modifying genes that seem to increase altruism but might change other stuff, so thanks. Perhaps it’d be wise to prioritize reflectiveness over altruism. (Need for cognition might be the construct we want. Feel free to shoot holes in that proposal if you want to continue talking :P)
I agree that releasing a bunch of average intelligence, average reflectiveness altruists on the world is not necessarily a good idea
I am relieved :-P
And yes, I think the subthread has drifted sufficiently far so I’ll bow out and leave you to figure out by yourself the orthogonality of being altruistic and being gullible :-)
Do you think the effective altruist movement is likely to run into the same failure modes that the communist movement ran into?
If it gets a sufficient amount of power (which I don’t anticipate happening) then yes.