The thing is that I don’t give imaginary people equal weight to real ones. It seems obvious to me that somebody who doesn’t exist anywhere in space or time doesn’t get any consideration. And that means that I am under no obligation to bring them into existence or to care whether anybody else does.
As for aggression, all I can say is that I processed it that way.
As a basis for purely personal morality that may be fine, but as a way of evaluating policy choices or comparing societies it won’t be enough. Consider the question “How much should we reduce global warming?” Any decision involves alternative futures involving billions of people who haven’t been born yet. We have to consider their welfare. Put another way, the word “imaginary” is bearing a lot of weight in your argument: people who are imaginary in one scenario become real in another.
Well, that’s true, but I think it’s less a problem for me than it is for a lot of people here, because I don’t think there’s any respectable moral/ethical metric that you can maximize to begin with.
Ethics as a philosophical subject is on very shaky ground because it basically deals with creating pretty, consistent frameworks to systematize intuitions… but nobody ever told the intuitions that they had to be amenable to that. All forms of utilitarianism, specifically, have horrible problems with the lack of any defensible way to aggregate utilities. There are also issues about whose utility should count. Some people would include imaginary people, some would include animals, etc. But the alternatives to utilitarianism have their own problems.
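To make the aggregation problem concrete, here is a minimal sketch in Python, with entirely made-up utility numbers: three common aggregation rules rank the same pair of hypothetical populations in different orders, and nothing inside utilitarianism itself tells you which rule to use.

```python
# Illustrative sketch only: populations and utility values are invented.

def total(utils):
    return sum(utils)                  # classical "total" utilitarianism

def average(utils):
    return sum(utils) / len(utils)     # average utilitarianism

def worst_off(utils):
    return min(utils)                  # maximin-flavored aggregation

few_but_happy = [90] * 100             # 100 people, each at utility 90
many_but_sadder = [5] * 100_000        # 100,000 people, each at utility 5

for name, agg in [("total", total), ("average", average), ("worst-off", worst_off)]:
    a, b = agg(few_but_happy), agg(many_but_sadder)
    print(f"{name}: {'few_but_happy' if a > b else 'many_but_sadder'} wins ({a} vs {b})")

# total: many_but_sadder wins (9000 vs 500000)
# average: few_but_happy wins (90.0 vs 5.0)
# worst-off: few_but_happy wins (90 vs 5)
```

The point is not that one rule is right; it is that the choice of aggregator does all the moral work, and the framework itself never justifies that choice.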
So I, at least, am free to go for a lot of possible futures and take a lot of things into consideration. I can feel OK about using raw intuitions, or even aesthetic preferences, to choose between courses of action, including creating more-but-somewhat-sadder people versus fewer-but-somewhat-happier people. I don’t have any rigid system to narrow what I can care about, so I can choose to look at things like the diversity or complexity of their experiences, or how good the best ones are, or how bad the worst ones are, or in fact things that aren’t related to human experience at all. Or I can mix and match those at any moment. I’d feel uncomfortable with creating a bunch of people whose experiences were uniformly totally miserable, but that leaves me a lot of room to maneuver.
… and I’m even at least a little bit insulated from feeling like I have to actually do the absolute most I possibly can to predict every possible consequence of every action I take all of the time. I get the sense that that impossible self-demand really eats at a lot of people on Less Wrong.
Anyhow, if you’re like me, and you’ve somehow counterfactually been given the godlike power to knowingly choose between hunter-gatherers and agriculturalists as people to “make real”, even given total knowledge of the consequences, you can take into account not only their experiences, but your own aesthetic view of the kinds of worlds they generate. And I don’t find numbers aesthetically compelling.
It’d be the same for more-globally-warmed and less-globally-warmed people, although in that case you know that there will also be major consequences for people who are actually alive right now.
Sure, I find that take on moral intuitions plausible. But if society has to make a real choice of the order of “how much to tax carbon”, I think that collectively we would not want to make the decision based on people saying “meh, no strong opinions here, future world X just seems kinda prettier”. We need some kind of principled framework, and for that… well, I guess you need moral philosophy!
Sorry, missed this somehow.
I don’t think it’s plausible that there’ll ever be widespread agreement on any philosophical framework to be used to make policy decisions. In fact, I think that it’s much easier to make public policy decisions without trying to have a framework, precisely because the intuitions tend to be more shared than the systematizations.
I’ve never seen an actual political process that spent much time on a specific framework, and I’ve surely never heard of a constitution or other fundamental law or political consensus, anywhere, that said, let alone enforced, anything like “we’re a utilitarian society and will choose policies accordingly” or “we’re a virtue ethics society and will choose policies accordingly” or whatever.
The curious thing about your wording is that you go from ‘we would not want to make’ to ‘we need some kind of principled framework’. The former does not automatically imply the latter.
Additionally, you presuppose the possibility of discovering a ‘principled framework’ without first establishing that such a thing even exists. I think the parent comment was trying to get at this core issue.
“Any decision involves alternative futures involving billions of people who haven’t been born yet. We have to consider their welfare.”
This logic holds only if it is an unassailable given that they will be born. If you remove that presupposition and make their birth optional, then these people can be counted as imaginary, as jbash says. They become a real part of the future, and thus of reality, only once we decide they shall be. We might not. Maybe we opt for the alternative of just allowing the currently alive human beings to live forever, and decline to make more.
PS: Does anyone know a technical term for the cognitive heuristic that leads us to treat hypothetical entities that don’t exist yet as real things with moral weight, just because we use the same neural circuitry to comprehend them that we use to comprehend real entities?
I don’t think this argument makes sense. Of course the people who will be born are “imaginary”. If I choose between marrying Jane and Judith, then any future children in either event are at present “imaginary”. That would not be a good excuse for marrying Jane, a psychopath with three previous convictions for child murder. More generally, any choice involves two or more different hypothetical (“imaginary”) outcomes, of which all but one will not happen. Obviously, we have to think “What would happen if I do X?” It would be silly to say that this question is unimportant because “all the outcomes aren’t even real!” That doesn’t change if the outcomes involve different people coming into existence.
I think the technical term you’re looking for is “imagination”.
If they will come into existence later, they have moral weight now. If I may butcher the concept of time, they already exist in some sense, being part of the weave of spacetime. But if they will never exist, it is an error to leap to their defense—there are no rights being denied. Does that make more sense?
The point is whether they exist conditional on us taking a particular action. If we do X, one set of people will exist. If we do Y, a different set of people will exist. There’s not usually a reason to privilege X over Y as being “what will happen if we do nothing”, making the people in X somehow less conditional. The argument is “if we do X, then these people will exist and their rights (or welfare or whatever) will be satisfied/violated, and that would be good/bad to some degree; if we do Y, then these other people will exist, etc., and that would be good/bad to some degree.” It’s a comparison of hypothetical goods and bads—that’s the definition of a moral choice! So saying “all these goods and bads are just hypothetical” is not very helpful. It’s as if someone asked “Shall we order Chinese or pizza?” and you refused to answer because you can’t taste the pizza right now.
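To restate that structure as a sketch (hypothetical actions and invented welfare numbers, not anyone’s actual proposal): every available action, including “do nothing”, induces its own hypothetical population, and a moral comparison runs over all of them symmetrically.

```python
# Hypothetical sketch: each action induces a different future population;
# none is privileged as the "default" whose people are somehow more real.
# All welfare numbers are invented for illustration.
outcomes = {
    "do_X":       [70, 80, 75],        # the people who would exist under X
    "do_Y":       [60, 60, 60, 60],    # a different set of people under Y
    "do_nothing": [50, 65],            # "doing nothing" is just another action
}

def score(population):
    return sum(population) / len(population)   # one contestable metric among many

# Every branch is hypothetical until chosen; the comparison is still meaningful.
best = max(outcomes, key=lambda action: score(outcomes[action]))
print(best)  # -> do_X, under this particular scoring rule
```

Swap in a different `score` and the winner can change, which is exactly the aggregation problem raised earlier in the thread.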