The argument isn’t that simply having more people alive is better. That’s why I spend time arguing that people’s lives are worthwhile.
I mention two intuitions. The intuition that it’s good to be alive is quite widely shared, no? Even people who claim to disagree often act as if they agree. (My uncle repeatedly said he didn’t want to live any more, yet he carefully avoided Covid.)
The intuition that people’s lives have value in themselves, and not in relation to what else is going on, isn’t just a gut feeling. It relates to the idea that what has value is consciousness—feelings of joy or contentment, say—so that if someone experiences a lifetime reasonably worth living, then that life is reasonably worth living whatever else is going on, because the experiences are the same.
You may be right that adding up utilons is crazy, but my claims don’t depend on that. Any moral framework will do, if it positively values the fact of a person leading a reasonably good life.
Lastly, I’m surprised you see any aggression here.
The thing is that I don’t give imaginary people equal weight to real ones. It seems obvious to me that somebody who doesn’t exist anywhere in space or time doesn’t get any consideration. And that means that I am under no obligation to bring them into existence or to care whether anybody else does.
As for aggression, all I can say is that I processed it that way.
As a basis for purely personal morality that may be fine, but as a way of evaluating policy choices or comparing societies it won’t be enough. Consider the question “How much should we reduce global warming?” Any decision involves alternative futures involving billions of people who haven’t been born yet. We have to consider their welfare. Put another way, the word “imaginary” is bearing a lot of weight in your argument: people who are imaginary in one scenario become real in another.
Well, that’s true, but I think it’s less a problem for me than it is for a lot of people here, because I don’t think there’s any respectable moral/ethical metric that you can maximize to begin with.
Ethics as a philosophical subject is on very shaky ground because it basically deals with creating pretty, consistent frameworks to systematize intuitions… but nobody ever told the intuitions that they had to be amenable to that. All forms of utilitarianism, specifically, have horrible problems with the lack of any defensible way to aggregate utilities. There are also issues about whose utility should count. Some people would include imaginary people, some would include animals, etc. But the alternatives to utilitarianism have their own problems.
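To make the aggregation worry concrete, here is a toy sketch (the numbers are made up for illustration, not anyone’s actual framework): different aggregation rules can disagree about which of the same two outcomes is better.

```python
# Toy illustration only: made-up per-person utilities, not a real framework.
# The point is that different aggregation rules rank the same outcomes differently.

outcome_x = [9, 9, 1]   # three people, one of them badly off
outcome_y = [7, 7]      # two people, both moderately well off

rules = {
    "total":   sum,                          # total utilitarianism
    "average": lambda u: sum(u) / len(u),    # average utilitarianism
    "minimum": min,                          # a crude "worst-off" rule
}

for name, rule in rules.items():
    better = "X" if rule(outcome_x) > rule(outcome_y) else "Y"
    print(f"{name:7s}: X={rule(outcome_x):.2f}  Y={rule(outcome_y):.2f}  -> prefers {better}")
```

Total utility prefers X, while the average and worst-off rules both prefer Y; nothing in the machinery itself tells you which rule is the defensible one, which is the problem being pointed at.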
So I, at least, am free to go for a lot of possible futures and take a lot of things into consideration. I can feel OK about using raw intuitions, or even aesthetic preferences, to choose between courses of action, including creating more-but-somewhat-sadder people versus fewer-but-somewhat-happier people. I don’t have any rigid system to narrow what I can care about, so I can choose to look at things like the diversity or complexity of their experiences, or how good the best ones are, or how bad the worst ones are, or in fact things that aren’t related to human experience at all. Or I can mix and match those at any moment. I’d feel uncomfortable with creating a bunch of people whose experiences were uniformly totally miserable, but that leaves me a lot of room to maneuver.
… and I’m even at least a little bit insulated from feeling like I have to actually do the absolute most I possibly can to predict every possible consequence of every action I take all of the time. I get the sense that that impossible self-demand really eats at a lot of people on Less Wrong.
Anyhow, if you’re like me, and you’ve somehow counterfactually been given the godlike power to knowingly choose between hunter-gatherers and agriculturalists as people to “make real”, even given total knowledge of the consequences, you can take into account not only their experiences, but your own aesthetic view of the kinds of worlds they generate. And I don’t find numbers aesthetically compelling.
It’d be the same for more-globally-warmed and less-globally-warmed people, although in that case you know that there will also be major consequences for people who are actually alive right now.
Sure, I find that take on moral intuitions plausible. But if society has to make a real choice of the order of “how much to tax carbon”, I think that collectively we would not want to make the decision based on people saying “meh, no strong opinions here, future world X just seems kinda prettier”. We need some kind of principled framework, and for that… well, I guess you need moral philosophy!
Sorry, missed this somehow.
I don’t think it’s plausible that there’ll ever be widespread agreement on any philosophical framework to be used to make policy decisions. In fact, I think that it’s much easier to make public policy decisions without trying to have a framework, precisely because the intuitions tend to be more shared than the systematizations.
I’ve never seen an actual political process that spent much time on a specific framework, and I’ve surely never heard of a constitution or other fundamental law or political consensus, anywhere, that said, let alone enforced, anything like “we’re a utilitarian society and will choose policies accordingly” or “we’re a virtue ethics society and will choose policies accordingly” or whatever.
The curious thing about your wording is that you go from ‘we would not want to make’ to ‘we need some kind of principled framework’. The former does not automatically imply the latter.
Additionally, you presuppose the possibility of discovering a ‘principled framework’ without first establishing that such a thing even exists. I think the parent comment was trying to get at this core issue.
“Any decision involves alternative futures involving billions of people who haven’t been born yet. We have to consider their welfare.”

This logic holds only if it is an unassailable given that they will be born. If you remove that presupposition and make their birth optional, then these people can be counted as imaginary, as jbash says. They become a real part of the future, and thus of reality, only once we decide they shall be. We might not. Maybe we opt for the alternative of letting the people currently alive live forever, and decline to make more.
PS: Does anyone know a technical term for the cognitive heuristic that leads us to treat hypothetical entities that don’t exist yet as real things with moral weight, just because we use the same neural circuitry to comprehend them that we use to comprehend real entities?
I don’t think this argument makes sense. Of course the people who will be born are “imaginary”. If I choose between marrying Jane and Judith, then any future children in either event are at present “imaginary”. That would not be a good excuse for marrying Jane, a psychopath with three previous convictions for child murder. More generally, any choice involves two or more different hypothetical (“imaginary”) outcomes, of which all but one will not happen. Obviously, we have to ask “What would happen if I do X?” It would be silly to say that this question is unimportant because “all the outcomes aren’t even real!” That doesn’t change if the outcomes involve different people coming into existence.
I think the technical term you’re looking for is “imagination”.
If they will come into existence later, they have moral weight now. If I may butcher the concept of time, they already exist in some sense, being part of the weave of spacetime. But if they will never exist, it is an error to leap to their defense—there are no rights being denied. Does that make more sense?
The point is whether they exist conditional on us taking a particular action. If we do X a set of people will exist. If we do Y, a different set of people will exist. There’s not usually a reason to privilege X vs. Y as being “what will happen if we do nothing”, making the people in X somehow less conditional. The argument is “if we do X, then these people will exist and their rights (or welfare or whatever) will be satisfied/violated and that would be good/bad to some degree; if we do Y then these other people will exist, etc., and that would be good/bad to some degree.” It’s a comparison of hypothetical goods and bads—that’s the definition of a moral choice! So saying, “all these good/bads are just hypothetical” is not very helpful. It’s as if someone said “shall we order Chinese or pizza” and you refused to answer, because you can’t taste the pizza right now.
Actually, the worldview “it’s NOT good to be alive; the fact that almost everyone thinks it’s good to be alive is just a failure of human reflectivity” is pretty consistent. I don’t endorse it, but my best friend does.
Well, he says he does. I think it would be very sad if he acted on the idea, and I bet you agree.
There are also ethical (even utilitarian) frameworks that consider hypothetical people to be fundamentally different from real, current people. I can say that I think we should maximize the average utility of all current people going into the future, while also thinking that I should choose the future where the hypothetical people have the highest average happiness. How to weigh current people against future hypothetical people is complex, but beyond the scope of this post, I think.
That is, if there are ten people alive today and I’m choosing between an option where those ten people each get 10 utils and one where they each get 100 utils, obviously I should choose the 100. But if I’m choosing between a future where 100 people will exist with 5 utils each and a future where 10 people will exist with 10 utils each, there is no person who is worse off in the second future compared to the first, so no one is harmed by choosing the second future.
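Here is the same comparison worked out numerically (a minimal sketch using the hypothetical util figures above): total utility favors the larger future, while average utility favors the smaller one.

```python
# Minimal sketch with the hypothetical numbers from the comment:
# future A has 100 people at 5 utils each, future B has 10 people at 10 utils each.

future_a = [5] * 100
future_b = [10] * 10

def total(utils):
    return sum(utils)

def average(utils):
    return sum(utils) / len(utils)

print(total(future_a), total(future_b))      # 500 vs 100  -> total utility prefers A
print(average(future_a), average(future_b))  # 5.0 vs 10.0 -> average utility prefers B
```

Which future comes out ahead depends entirely on the aggregation rule, which is exactly the choice at issue between these frameworks.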
Frankly, I don’t think that people have a moral intuition that actually matches your suggestions. Almost any couple in the developed world could probably support raising ten children, and all of those children would be happy to exist, but it just seems wrong to say that couples have a moral imperative to have as many children as possible. (I think that would still hold true even if pregnancy and childbirth were painless and free.)
Saying “it’s good to be alive” is not the same as saying people have a moral imperative to bring children into the world. It would probably improve human welfare if I gave all my assets to the poor and starved to death, but I don’t have a moral imperative to do it. Judgments of overall welfare are ways of deciding what to do collectively, but no individual has an absolute duty to maximize overall welfare at the expense of his own basic desires and life choices.
(This is my personal view, not especially carefully thought-out. Some people probably do think we have an absolute duty to maximize welfare. I think your example of having to have 10 children is a reductio ad absurdum of that view, not of the view that the marginal extra human life is a good thing.)