I think it’s quite clear that having children is less moral than donating the equivalent amount of money to effective charity, under the average LessWrong-user morality.
I don’t think I “should” be giving my money to charity instead of having kids. So either I’m not an average LessWronger, or you’re wrong about the beliefs of LessWrongers. In any case, I don’t think it’s “quite clear”.
That’s not how morality is defined, for me and I think most others. It’s not about what you would do. It’s about how you wish people would act in a world where you personally were out of the picture. (So “people shouldn’t hurt each other” is a moral instinct, since you are out of the picture; “people should give me money” is not a moral instinct, since you are in the picture.)
Egg A contains an upper-middle-class Westerner who will one day wish to have a child and be able to carry out that wish. Egg B contains an upper-middle-class Westerner who will one day wish to donate the equivalent amount of money to charity and be able to carry out that wish. Only one egg can get fertilized and become a person. Which egg would you have hatch?
I would choose Egg A. I am interested in knowing whether LessWrong users agree.
[pollid:570]
I don’t think I “should” be giving my money to charity instead of having kids.
You use the word “should”. That’s precisely the misunderstanding I was hoping to dissolve. I, too, do not wish you to feel compelled to give money to charity instead of having kids out of some sense of moral duty.
That’s why I’m making a distinction between “immoral” and “less moral”. It’s usually not immoral to spend money on things that you like, but it’s less moral than minimizing your consumption and donating all the money to charity. I would admire a person who took the latter path more than a person who took the former path - and this despite the fact that I am currently on the former path (as in, I still eat out sometimes and stuff). I’d consider that person to be more good than I currently am, because their actions reveal that they have a preference function which weights morality more highly than mine does...but that doesn’t make me bad, just less good.
Tautologically, I prefer to satisfy all my preferences, not just the moral ones. Tautologically, my aim is to be as good as I prefer to be, no more and no less. This should be true of all agents. For any given individual, having children is probably not the most moral thing, but it might be the most preferred thing.
That’s not how morality is defined, for me and I think most others. It’s not about what you would do. It’s about how you wish people would act in a world where you personally were out of the picture.
We understand morality differently.
For me morality is defined as a set of my own axiomatic values (I generally think of morals as a set of values, and of ethics as the consequences of morals in terms of behavior). Other people have their own morality, of course. Many moralities are sufficiently similar that we can talk about systems of morality (which in the West used to be mostly the province of religion).
I certainly do not think of morality as how I would like the world to be.
For me morality is defined as a set of my own axiomatic values
I’m parsing “morals” and “values” as equivalent terms (and I think you are too), so this statement doesn’t convey any information to me about your definition of “moral” and “value”. I share your reading of “ethical” as more behaviorally focused.
I don’t perceive the point at which we disagree or diverge. Can you elaborate on what “values” mean to you and what distinguishes them from other preferences?
Well, the most obvious divergence is that for you morality is “not about what you would do. It’s about how you wish people would act...”. For me morality is mostly about what I would do or would not do.
As to differences between values and other preferences, hmm… Let’s see:
Values are axiomatic. They are not internally derived from other preferences (though, of course, you can explain them externally).
Values are important.
Values are mostly stable and their change is usually seen as a big deal.
In case of a conflict between a value and a mere preference, the value wins.
Well, the most obvious divergence is that for you morality is “not about what you would do. It’s about how you wish people would act...”. For me morality is mostly about what I would do or would not do.
For me, “acting morally” is acting in a way that is consistent with how you would have others behave (after removing the pathological “I wish others would give me cash” cases). It applies to myself and to others.
If for you morality is only about what you would do, then you have no basis to judge the morality of others. This causes your definition of morality to diverge from the common one. Most people treat morality as something by which all people can be judged. You’ve got an unusual definition.
As to differences between values and other preferences, hmm… Let’s see:
By your description, your “values” are analogous to my “terminal preferences”. The difference is that I have terminal preferences unrelated to morality (my learning, my fun, my loved ones’ happiness, etc.) as well as terminal preferences related to morality (human learning, human comfort, etc., with extension to some non-human entities), whereas all your values seem to class as moral. In my terms, you define yourself as one whose terminal preferences are all moral - a person who prefers to be maximally good by their own standards.
Unless you wish to be a perfectly moral person, with every action crafted to bring about good and none towards personal gain, either your values cannot be equivalent to morality, or your definition of morality includes selfish behavior.
(If you really do strive towards perfect morality, and if your morality is similar enough to mine, then that’s admirable. That implies that you are a force for an incredible amount of good.)
For me, “acting morally” is acting in a way that is consistent with how you would have others behave
I see major problems with the Golden Rule (mostly stemming from the fact that people are different) but that’s a separate discussion.
you have no basis to judge the morality of others
Mostly correct. I can still judge the internal consistency of their morals as well as the match (or lack thereof) between what they say and what they do.
This causes your definition of morality to diverge from the common one.
Yep. That’s fine.
Most people treat morality as something by which all people can be judged.
Most people also treat morality as a set of rules sent from above. And, of course, I can and do judge people on the basis of my own morals. I just accept that they can and do have morals different from mine.
your “values” are analogous to my “terminal preferences”.
Yes, that’s close enough.
all your values seem to class as moral
Yes, but remember that my understanding of morality is different from yours.
a person who prefers to be maximally good by their own standards.
Well, of course, but I think I understand that sentence a bit differently from you. The problem is in the word “good”, which I treat as pretty meaningless unconditionally and which has meaning only conditional on some specific morality that defines what is good and what is evil. Different moralities define good and evil differently. So technically speaking this sentence is correct, but in practice people with different morals will not perceive me as “preferring to be maximally good”.
or your definition of morality includes selfish behavior.
I am of the opinion that non-standard, completely personalized definitions of words should be avoided whenever possible, or it becomes impossible to communicate. My definition of morality stems from the way the word is commonly used.
And, of course, I can and do judge people on the basis of my own morals. I just accept that they can and do have morals different from mine.
This statement applies to me as well. However, earlier you said, “For me morality is mostly about what I would do or would not do.” This means you cannot even judge others on the basis of your own morals! (When I say that moral instincts are the way one would prefer a disinterested party to behave, that doesn’t preclude other people having different morals. It’s just a way to separate moral instincts from other instincts.)
Why, yes, it does. I am not an altruist.
Yet you must have altruistic impulses sometimes, right? Sometimes you want to be nice to people. And sometimes, you want to do things for no reason other than that you personally benefit.
The definition I gave classes the former preferences as usually moral and the latter as usually morally neutral (a definition which is in keeping with common use). Your definition seems to just lump everything together under “moral”. I like my definition of morality better because it seems to draw more useful distinctions and is also in keeping with the common tongue.
non-standard, completely personalized definitions of words should be avoided
It’s not a definition problem here, it’s a concept problem. My concept of morality differs from the standard one. I could, of course, start inventing new words for it or decorate the word with qualifiers, but that doesn’t seem to be called for in this case.
Words are used for communication—did I make myself sufficiently clear about what I mean by the word “morality”?
This means you cannot even judge others on the basis of your own morals!
I should have expressed myself better. What I mean is that morality for me is local rather than global. It’s a personal, individual yardstick, not a universally agreed-upon measure. That’s why it’s applied to me (or, for any given person, to her) and not to the entire world. Having said that, I see no problem with judging other people’s behavior on the basis of my own morals. If I believe doing X is bad, that is still true when person A does X.
Your definition seems to just lump everything together under “moral”.
Not really. Again, I probably should have been clearer. Notice how I talked about values (which are similar to your terminal preferences) and wasn’t keen on using terms like good and evil? That’s basically the reason—you can say that I lump everything under “moral”, but then my “moral” is much wider and less judgemental than the standard “moral”.
We can use the more common definition of morality, but in the territory of my mind there is no bright line between values which are “moral” and values which are “terminal preferences”. So it’s not particularly useful for describing my beliefs.
By “average Lesswrong-user morality” I read “utilitarianism, but without utility being well understood”. Briefly, what moral system do you follow?
I don’t have a clear enough idea of what utilitarianism entails exactly (what counts as utility? “happiness” is too simplified … how do you aggregate?); but overall I consider it more useful for thinking about, say, public policy than about individual choices.
I don’t really know which moral system I follow, and am even slightly suspicious of the idea of trying to put it down formally as a “system”, since there’s a risk of changing one’s judgements to fit the system one has professed, whereas it should go the other way around. I think it’s more useful to try to understand things like incentives or happiness or lost purposes or mechanism design or institutions or the history of morality than it is to try to describe/verbalize one’s moral “system”.
I don’t have a clear enough idea of what utilitarianism entails exactly
While there are several flavors of utilitarianism, they all involve some definition of utility which is computed per individual and then aggregated over the whole society. When making choices, the moral option is the one that gives the highest aggregate utility. The most common variants for utility are “happiness” and “preference satisfaction”, while the most common methods of aggregation are summing and averaging. Wikipedia may be helpful.
Note that Utilitarianism isn’t required for the argument in the post. You just need to think that others matter and do the multiplication.
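For concreteness, here is a minimal sketch of the aggregation step described above, in Python. The function name and all the utility numbers are invented for illustration; nothing here comes from the thread itself.

```python
# Minimal sketch of utilitarian aggregation: each option assigns a utility
# to every individual, and the "moral" option is the one whose aggregate
# utility is highest. All numbers are made up for illustration.

def aggregate_utility(utilities, method="sum"):
    """Aggregate per-individual utilities into a single societal score."""
    if method == "sum":        # total utilitarianism
        return sum(utilities)
    if method == "average":    # average utilitarianism
        return sum(utilities) / len(utilities)
    raise ValueError(f"unknown aggregation method: {method}")

# Two hypothetical options, with per-person utilities.
option_a = [0.9, 0.8, 0.1]  # great for two people, poor for one (total 1.8)
option_b = [0.5, 0.5, 0.5]  # moderate for everyone (total 1.5)

best = max([option_a, option_b], key=aggregate_utility)
print(best)  # -> [0.9, 0.8, 0.1]; summing favors option A here
```

Summing and averaging agree in this example, but they can come apart when the options involve different numbers of people, which is exactly where the flavors of utilitarianism diverge.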
overall I consider it more useful for thinking about, say, public policy than about individual choices
It is widely used in public health, but I don’t see why we should have a different morality at the large scale than at the small.
I think it’s more useful to try to understand things like incentives or happiness or lost purposes or mechanism design or institutions or the history of morality than it is to try to describe/verbalize one’s moral “system”.
So how do you go about determining whether something is moral?
By “average Lesswrong-user morality” I read “utilitarianism, but without utility being well understood”.
Oh...that’s not what I meant, but I can see why you thought that. My fault for phrasing it that way. Bad communication on my part.
I initially phrased it as “average Human morality”, but then I realized that I lacked confidence in the resulting statement. There are humans who see the maintenance of the reproductive family unit as an intrinsic good, and there might be a sufficient number of such people to make the average human morality more reproductively centered.
I’ll edit the parent comment. Would WEIRD+liberal suffice to capture what I mean?
I would estimate that more than 90% of the human population would disagree with the statement “It is more moral to give money to an effective charity than to have children”.
I’d guesstimate that 5%-60% would disagree with that statement; that range is my 95% confidence interval.
Our species has a long history of people who aim for moral perfection forgoing family life and becoming ascetics, nuns, etc., in pursuit of that goal. Such people have been historically admired, and the sacrifice has been associated with morality.
I’m estimating based on a question in the following format:
“Person A does not donate to charity. He earns $Y/time, devotes $X/time to running the family, and spends the rest on himself. His actions have created Q happy and well-cared-for children.”
How moral are this person’s actions?
[pollid:568]
“Person B has no children. She earns $Y/time, gives $X/time to charity, and spends the rest on herself. Her money has done good stuff P and saved Q lives.”
How moral are this person’s actions?
[pollid:569]
The answer would obviously depend on what the precise numbers are, and you’d ideally want to ask the questions separately and counterbalance, so that you could see what people said to each question without any reference to the other question. (A direct comparison might trigger motivated cognition.)
This is not intended as a real poll, just an illustration...although feel free to vote if you like.
Are you sampling from general humanity or from the LW crowd? They are very, very different.
Let’s just say for now that my estimate is for everyone with sufficient English to understand that poll. Americans would be an acceptable sample population.
Among LessWrong users, I suspect only 1%-40% (that range being my 95% confidence interval) would disagree with the statement.
However, for my estimate it is required that the questions be posed separately, so that a given respondent only sees one of the two questions and so must make a judgement on an absolute scale rather than a side-by-side comparison. (Asking the questions one at a time and counterbalancing would achieve this; see the sketch below.)
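As a minimal sketch of the counterbalanced, between-subjects design just described (in Python; the respondent IDs, the seed, and the question labels are all hypothetical, invented for illustration):

```python
import random

# Each respondent is randomly shown exactly one of the two questions, so
# every rating is made on an absolute scale rather than as a side-by-side
# comparison. Question texts abbreviate the hypothetical poll above.
QUESTIONS = {
    "A": "How moral are this person's actions? (parent, non-donor)",
    "B": "How moral are this person's actions? (donor, no children)",
}

def assign_conditions(respondent_ids, seed=0):
    """Randomly split respondents into two equal-sized groups,
    one per question (counterbalancing)."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {rid: ("A" if i < half else "B") for i, rid in enumerate(ids)}

# Example: assign ten hypothetical respondents.
assignments = assign_conditions(range(10))
for rid, condition in sorted(assignments.items()):
    print(rid, condition, QUESTIONS[condition])
```

This keeps the two groups the same size; comparing the groups’ average ratings then yields the relative judgement without ever showing any one respondent both cases.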
A post-edit comment: “liberal morality” is not utilitarianism. Classical liberalism is concerned with individual rights and liberties, not with self-sacrifice to improve the lot of others. I don’t believe that having children instead of donating to a charity is “less moral” under liberal morality.
In fact, doing good works instead of having children sounds like straightforward traditional Christian morality: enter the monastery and do as much good as you can.
That’s because Christianity as practiced is a religion of WEIRD-liberal people, as are Buddhism, Islam, post-classical Hinduism, etc. The environments that produced those religions were relatively affluent and cosmopolitan.
For an example of non WEIRD-liberal thinking, read the Old Testament, or Norse texts, or the Rig Veda...all produced in harsh, scarce environments.
I know that neither “Liberal” nor “WEIRD” is the right word, but what is? I’m talking about people who care less about in-group/out-group boundaries, less about loyalty, less about tradition, less about retributive justice, and more about avoiding pain, increasing pleasure, keeping things fair, and preventing coercion.
I’m talking about the sorts of values which tend to increase with plentiful resources and education. Such values are over-represented on LessWrong, and over-represented within the social bubble that LessWrongers tend to inhabit.