simply wanting to create lives without considering living conditions does not seem to take this into account
I don’t think any of the people who support creating more lives believe we should do so regardless of living conditions, though they may assume that most human lives are worth living and that it takes exceptionally bad conditions for someone’s life to become not worth living.
People also typically assume that technological and societal progress will continue, making it even more likely than today that the average person has a life worth living. E.g. Nick Bostrom’s paper Astronomical Waste notes, when discussing a speculative future human civilization capable of settling other galaxies:
I am assuming here that the human lives that could have been created would have been worthwhile ones. Since it is commonly supposed that even current human lives are typically worthwhile, this is a weak assumption. Any civilization advanced enough to colonize the local supercluster would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living.
In general, you easily end up with “maximizing human lives is good (up to a point)” as a conclusion if you accept some simple premises like:
1. It’s good to have humans who have lives worth living
2. Most new humans will have lives that are worth living
3. It’s better to have more of a good thing than less of it
Thus, if it’s good to have lives worth living (1) and most new humans will have lives that are worth living (2), then creating new lives will be mostly good. If it’s better to have more of a good thing than less of it (3), and creating new lives will be mostly good, then it’s better to create new lives than not to.
Now it’s true that at some point we’ll probably run into resource or other constraints so that the median new life won’t be worth living anymore. But I think anyone talking about maximizing life is just taking it as obvious that the maximization goal will only hold up to a certain point.
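One rough way to make this explicit (just a sketch; the notation is mine and nothing in the argument depends on it): write $v_i$ for the value of person $i$’s life, with $v_i > 0$ meaning the life is worth living, and let total value be

\[ V(n) = \sum_{i=1}^{n} v_i . \]

Premise 1 says each life with $v_i > 0$ counts as good, premise 2 says that for most feasible new people $\mathbb{E}[v_{n+1}] > 0$, and premise 3 says more total good is better, so adding a person raises expected total value whenever $\mathbb{E}[v_{n+1}] > 0$. The “up to a point” caveat is then just the population size at which resource or other constraints push $\mathbb{E}[v_{n+1}]$ down to zero or below.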
(Of course it’s possible to dispute some of these premises—see e.g. here or here for arguments against. But it’s also possible to accept them.)
it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with way less effort
Some of the people wanting to create more human lives might indeed agree with this! For instance, when they say “human”, they might actually have in mind some technologically enhanced posthuman species that’s a successor to our current species.
On the other hand, it’s also possible that people who say this just intrinsically value humans in particular.
It seems to me that, separately from whether we accept or reject premises #1 and/or #2,[1] we should notice that premise #3 has an equivocation built into it.
Namely, it does not specify: better for whom?
After all, it makes no sense to speak of things being simply “better”, without some agent or entity whose evaluations we take to be our metric for goodness. But if we attempt to fully specify premise #3, we run into difficulties.
We could say: “it is better for a person to have more of a good[2] thing than to have less of it”. And, sure, it is. But then where does that leave premise #3? For whom is it better, that we should have more humans who have lives worth living?
For those humans? But surely this is a non sequitur; even if, for any individual person, we accept the idea that it’s better for them that they should exist than that they should not (an idea I find to be nonsensical, but that’s another story), still it’s not clear how we get from that to it being better that there should be more people…
Or are we saying that making more humans is a good thing for already existing humans? Well, perhaps it is, but then we also have to claim, and show, that this is the case—and, crucially, this renders premise #2 largely irrelevant, since “how good are the lives of newly created humans” is not necessarily relevant to the question of “how good a thing is it, for already existing humans, that we should make more humans?”
Really, the problem with this whole category of arguments, this whole style of reasoning, is that it seems to consist of an attempt to take a “view from nowhere” on the subject of goodness, desirability, what is “best”, etc. But such a view is impossible, and any attempt to take a view like this must be incoherent.
It could just be that a world with additional happy people is better according to my utility function, just like a world with fewer painlessly killed people per unit of time is better according to my utility function. While I agree that goodness should be “goodness for someone” in the sense that my utility function should be something like a function only of the mental states of all moral patients (at all times, etc.), I disagree with the claim that the same people have to exist in two possible worlds for me to be able to say which is better, which is what you seem to be implying in your comment. One world can be better (according to my utility function) than another because some aggregation of the well-beings of all moral patients within it is larger. I think most people have such utility functions. Without allowing for something like this, I can’t really see a way to construct an ethical model that tells us essentially anything interesting about any decisions at all (at least for people who care about other people), as all decisions probably involve choosing between futures with very different sets of moral patients.
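To be concrete about what “some aggregation” could mean here (this is just one illustrative choice, not the only possible one): for example, total welfare,

\[ U(w) = \sum_{i \in P(w)} u_i(w) , \]

where $P(w)$ is the set of moral patients who ever exist in world $w$ and $u_i(w)$ is $i$’s well-being in that world. A sum like this is well-defined even when $P(w_1) \neq P(w_2)$, which is why the comparison doesn’t require the same people to exist in both worlds.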
… I disagree with the claim that the same people have to exist in two possible worlds for me to be able to say which is better, which is what you seem to be implying in your comment.
Not quite—but I would say that it is not possible to describe one world as “better” than another in any quantifiable or reducible way (as distinct from “better, according to my irreducible and arbitrary judgment”—to which you are, of course, entitled), unless the two worlds contain the same people (which, please note, is only a necessary, not a sufficient, criterion).
I do not believe that aggregation of well-being across individuals is possible or coherent.
(Incidentally, I am also fairly sure that most people don’t have utility functions, period, but I imagine that your use of the term was metaphorical, and in practice should be read merely as “preferences” or something similar.)
Without allowing for something like this, I can’t really see a way to construct an ethical model that tells us essentially anything interesting about any decisions at all (at least for people who care about other people), as all decisions probably involve choosing between futures with very different sets of moral patients.
Come now, this is not a sensible model of how we make decisions. If I must choose between (a) stealing my mother’s jewelry in order to buy drugs and (b) giving a homeless person a sandwich, there are all sorts of ethical considerations we may bring to bear on this question, but “choosing between futures with very different sets of moral patients” is simply irrelevant to the question. If your decision procedure in a case like this involves the consideration of far-future outcomes, requires the construction of utility aggregation procedures across large numbers of people, etc., etc., then your ethical framework is of no value and is almost certainly nonsense.
it makes no sense to speak of things being simply “better”, without some agent or entity whose evaluations we take to be our metric for goodness
If the agent/entity is hypothetical, we get an abstract preference without any actual agent/entity. And possibly a preference can be specified without specifying the rest of the agent. A metric of goodness doesn’t necessarily originate from something in particular.
You can of course define any metric you like, but what makes it a metric of “goodness” (as opposed to a metric of something else, like “badness”, or “flevness”), unless it is constructed to reflect what some agent or entity considers to be “good”?
I see human values as something built by long reflection, a heavily philosophical process where it’s unclear if humans (as opposed to human-adjacent aliens or AIs) doing the work is an important aspect of the outcome. This outcome is not something any extant agent knows. Maybe indirectly it’s what I consider good, but I don’t know what it is, so that phrasing is noncentral. Maybe long reflection is the entity that considers it good, but for this purpose it doesn’t hold the role of an agent: it’s not enacting the values, only declaring them.
[1] Personally, I reject premise #1, partly for reasons similar to the above argument about premise #3, though also for other reasons.
[2] By “good” here we of course mean “good for that person” or “good by that person’s lights”, etc.