This type of procedure may look inelegant to folks who expect population ethics to have an objectively correct solution. However, I think it’s confused to expect there to be such an objective solution. In my view at least, this makes the procedure described in the original post look pretty attractive as a way to move forward.
Because it includes considerations very similar to those presented in the original post here, I’ll try (for those who are curious enough to bear with me) to describe the framework I’ve been using for thinking about population ethics:
Ethical value is subjective in the sense that, if someone’s life goal is to strive toward state x, it’s no one’s business to tell them that they should focus on y instead. (There may be exceptions, e.g., when someone’s life goals are the result of brainwashing.)
For decisions that do not involve the creation of new sentient beings, preference utilitarianism or “bare-minimum contractualism” seem like satisfying frameworks. Preference utilitarians are ambitiously cooperative/altruistic and scale back their other life goals for the sake of maximal preference satisfaction for everyone, whereas “bare-minimum contractualists” obey principles like “do no harm” while mostly focusing on their own life goals. A benevolent AI should follow preference utilitarianism, whereas individual people are free to choose anything on the spectrum between full preference utilitarianism and bare-minimum contractualism. (Bernard Williams’s famous objection to utilitarianism is that it undermines a person’s “integrity” by alienating them from their own life goals. By focusing all their resources and attention on doing what’s best from everyone’s point of view, people don’t get to do anything that’s good for themselves. This seems okay if one consciously chooses altruism as a way of life, but overly demanding as an all-encompassing morality.)
When it comes to questions that affect the creation of new beings, the principles behind preference utilitarianism or bare-minimum contractualism fail to constrain all of the option space. In other words: population ethics is under-defined.
That said, it’s not the case that “anything goes.” Just because present populations have all the power doesn’t mean that it’s morally permissible to ignore other-regarding considerations about the well-being of possible future people. We can conceptualize a bare-minimum version of population ethics as a set of appeals or principles by which newly created beings can hold their creators accountable. This could include principles such as:
All else equal, it seems objectionable to create minds that lament their existence.
All else equal, it seems objectionable to create minds and place them in situations where their interests are only somewhat fulfilled, if one could have easily provided them with better circumstances.
All else equal, it seems objectionable to create minds destined to constant misery, yet with a strict preference for existence over non-existence.
(While the first principle is about which minds to create, the second two principles apply to how to create new minds.)
Is it ever objectionable to fail to create minds – for instance, in cases where they’d have a strong interest in their existence?
This type of principle would go beyond bare-minimum population ethics. It would be demanding to follow in the sense that it doesn’t just tell us what not to do, but also gives us something to optimize (the creation of new happy people); it would take up all our caring capacity.
Just because we care about fulfilling actual people’s life goals doesn’t mean that we care about creating new people with satisfied life goals. These two things are different. Total utilitarianism is a plausible or defensible version of a “full-scope” population-ethical theory, but it’s not a theory that everyone will agree with. Alternatives like average utilitarianism or negative utilitarianism are on equal footing. (As are non-utilitarian approaches to population ethics that say that the moral value of future civilization is some complex function that doesn’t scale linearly with increased population size.)
So what should we make of moral theories such as total utilitarianism, average utilitarianism, or negative utilitarianism? The way I think of them, they are possible morality-inspired personal preferences, rather than personal preferences inspired by the correct all-encompassing morality. In other words, a total/average/negative utilitarian is someone who holds strong moral views related to the creation of new people, views that go beyond the bare-minimum principles discussed above. Those views are defensible in the sense that we can see where such people’s inspiration comes from, but they are not objectively true, in the sense that the intuitions behind them won’t appeal to everyone in the same way.
How should people with different population-ethical preferences approach disagreement?
One pretty natural and straightforward approach would be the proposal in the original post here.
Ironically, this would amount to “solving” population ethics in a way that’s very similar to how common sense would address it. Here’s how I’d imagine non-philosophers to approach population ethics:
Parents are obligated to provide a very high standard of care for their children (bare-minimum principle).
People are free to decide against becoming parents (principle inspired by personal morality).
Parents are free to want to have as many children as possible (principle inspired by personal morality), as long as the children are happy in expectation (bare-minimum principle).
People are free to try to influence other people’s stances and parenting choices (principle inspired by personal morality), as long as they remain within the boundaries of what is acceptable in a civil society (bare-minimum principle).
For decisions that are made collectively, we’ll probably want some type of democratic compromise.
I get the impression that a lot of effective altruists have negative associations with moral theories that leave things underspecified. But think about what it would imply if nothing were underspecified: As Bernard Williams pointed out, if the true morality left nothing underspecified, then morally-inclined people would have no freedom to choose what to live for. I no longer think it’s possible or even desirable to find such an all-encompassing morality.
One may object that the picture I’m painting cheapens the motivation behind some people’s strongly held population-ethical convictions. The objection could be summarized this way: “Total utilitarians aren’t just people who self-orientedly like there to be a lot of happiness in the future! Instead, they want there to be a lot of happiness in the future because that’s what they think makes up the most good.”
I think this objection has two components. The first component is inspired by a belief in moral realism, and to that, I’d reply that moral realism is false. The second component of the objection is an important intuition that I sympathize with. I think this intuition can still be accommodated in my framework. This works as follows: What I labelled “principle inspired by personal morality” wasn’t a euphemism for “some random thing people do to feel good about themselves.” People’s personal moral principles can be super serious and inspired by the utmost desire to do what’s good for others. It’s just important to internalize that there isn’t just one single way to do good for others. There are multiple flavors of doing good.
Thank you very much for this comment; it explained my thoughts better than I could have ever written them.
Yes, I think moral realism is false and didn’t realize that was not a mainstream position in the EA community. I had trouble accepting it myself for the longest time and I was incredibly frustrated that all evidence seemed to point away from moral realism. Eventually I realized that freedom could only exist in the arbitrary and that a clockwork moral code would mean a clockwork life.
I’m only a first-year student so I’ll be very interested in seeing what a professional (like yourself) could extrapolate from this idea. The rough draft you showed me is already very promising and I hope you get around to eventually making a post about it.
I’m not entirely sure what moral realism even gets you. Regardless of whether morality is “real,” I still have attitudes towards certain behaviors and outcomes, and attitudes towards other people’s attitudes. I suspect the moral realism debate is confused altogether.
I’m not entirely sure what moral realism even gets you.
Here’s what I wrote in Six Plausible Meta-Ethical Alternatives: “Most intelligent beings in the multiverse share similar preferences. This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.”
Regardless of whether morality is “real,” I still have attitudes towards certain behaviors and outcomes, and attitudes towards other people’s attitudes.
In the above scenario, once you become intelligent enough and philosophically sophisticated enough, you’ll realize that your current attitudes are wrong (or right, as the case may be) and change them to better fit the relevant moral facts.
Most intelligent beings in the multiverse share similar preferences.
I mean this could very well be true, but at best it points to some truths about convergent psychological evolution.
This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations
Sure, there are facts about what preferences would best enable the emergence of an intergalactic civilization. I struggle to see these as moral facts.
Also, there’s definitely a manifest-destiny-evoking, unquestioned moralizing of space exploration going on right now, almost as if morality’s importance is only as an instrument to our becoming hegemonic masters of the universe. The angle from which you approached this question is value-laden in an idiosyncratic way (not in a particularly foreign way here on LessWrong, but value-laden nonetheless).
One can recognize that one would be “better off” with a different preference set without the alternate set being better in some objective sense.
change them to better fit the relevant moral facts.
I’m saying the self-reflective process that leads to increased parsimony between moral intuitions does not require objective realism of moral facts, or even the belief in moral realism. I guess this puts me somewhere between relativism and subjectivism according to your linked post?
Sure, there are facts about what preferences would best enable the emergence of an intergalactic civilization. I struggle to see these as moral facts.
There’s a misunderstanding/miscommunication here. I wasn’t suggesting that “what preferences would best enable the emergence of an intergalactic civilization” are moral facts. Instead, I was suggesting that in that scenario, building an intergalactic civilization may require a certain amount of philosophical ability and willingness/tendency to be motivated by normative facts discovered through philosophical reasoning, and that this philosophical ability could eventually enable that civilization to discover and be motivated by moral facts.
In other words, it’s [high philosophical ability/sophistication causes both intergalactic civilization and discovery of moral facts], not [discovery of “moral facts” causes intergalactic civilization].
Well, I struggle to articulate what exactly we disagree on, because I find no real issue with this comment. Maybe I would say “high philosophical ability/sophistication causes both intergalactic civilization and moral convergence”? I hesitate to call the result of that moral convergence “moral fact,” though I can conceive of that convergence.
I’m not entirely sure what moral realism even gets you.
It gets you something that error theory doesn’t get you, which is that moral claims have truth values. And it gets you something that subjectivism doesn’t get you, which is that some people are actually wrong, and not just different from you.
Regardless of whether morality is “real,” I still have attitudes towards certain behaviors and outcomes, and attitudes towards other people’s attitudes.
That’s parallel to pointing out that people still have opinions when objective truth is available. People should believe the truth (this site, passim) and similarly should follow the true morality.
Uh… I guess I cannot get around the regress involved in claiming my moral values superior to competing systems in an objective sense? I hesitate to lump the kind of missteps involved in a mistaken conception of reality (a misapprehension of non-moral facts) together with whatever goes on internally when two people arrive at different values.
I think it’s possible to agree on all mind-independent facts without that entailing perfect accord on all value propositions, and that moral reflection is fully possible without objective moral truth. Perhaps I do not get to point at a repulsive actor and say they are wrong in the strict sense of believing falsehoods, but I can deliver a verdict on their conduct all the same.
Uh… I guess I cannot get around the regress involved in claiming my moral values superior to competing systems in an objective sense?
It looks like some people can, since the attitudes of professional philosophers break down as:
Meta-ethics: moral realism 56.4%; moral anti-realism 27.7%; other 15.9%.
I can see how the conclusion would be difficult to reach if you make assumptions that are standard around here, such as:
Morality is value
Morality is only value
All value is moral value.
But I suppose other people are making other assumptions.
Perhaps I do not get to point at a repulsive actor and say they are wrong in the strict sense of believing falsehoods, but I can deliver a verdict on their conduct all the same.
Some verdicts lead to jail sentences. If Alice does something that is against Bob’s subjective value system, and Bob does something that is against Alice’s subjective value system, who ends up in jail? Punishments are things that occur objectively, so need an objective justification.
Subjective ethics allows you to deliver a verdict in the sense of “tut-tutting,” but morality is something that connects up with laws and punishments, and that’s where subjectivism is weak.
To make Wei Dai’s answer more concrete, suppose something like the symmetry theory of valence is true; in that case, there’s a crisp, unambiguous formal characterization of all valence. Then add open individualism to the picture, and it suddenly becomes a lot more plausible that many civilizations converge not just towards similar ethics, but exactly identical ethics.
I’m immensely skeptical that open individualism will ever be more than a minority position (among humans, at least). But at any rate, convergence on an ethic doesn’t demonstrate the objective correctness of that ethic from outside that ethic.
didn’t realize that was not a mainstream position in the EA community.
My impression is that moral realism based on irreducible normativity is more common in the broader EA community than on LessWrong. But it comes in different versions. I also tend to refer to it as (a version of) “moral realism” if someone holds the belief that humans will reach a strong consensus about human values / normative ethical theories (if only they had ample time to reflect on the questions). Such convergence doesn’t necessarily require there to be irreducibly normative facts about what’s good or bad, but it still sounds like moral realism. The “we strongly expect convergence” position seemed to be somewhat prevalent on LessWrong initially, though my impression was that this was more of a probable default assumption rather than something anyone confidently endorsed, and my impression is that people have tentatively moved away from it over time.
I’m usually bad at explaining my thoughts too, but I’m persistent enough to keep trying. :P
if the true morality left nothing underspecified, then morally-inclined people would have no freedom to choose what to live for. I no longer think it’s possible or even desirable to find such an all-encompassing morality.
Consider the system “do what you want.” While we might not accept this system completely (perhaps rejecting that it is okay to harm others if you don’t care about their wellbeing), it is an all-encompassing system, and it gives you complete freedom (including choosing what to live for).
You’re right that the system of ‘do what you want’ is an all-encompassing system. But it also leaves a lot of things underspecified (basically everything), which was (in my opinion) the more important insight.