There are two major unexamined assumptions underlying this analysis.
The most flagrant is the assumption that the expected value of all work done now on x-risk is positive. You might hope that it is, but you can’t actually know or even have rationally high confidence in it. Without this assumption, you might be able to say that anything we do today is important, but can’t say that it’s equivalent to saving lives. You may equally well be doing something equivalent to ending lives.
Another serious unjustified assumption is that the correct measure is some aggregated utility that is linear in the number of people who come to exist. I have extreme doubts that murdering 7 billion people today is ethically justifiable if it would increase the population capacity of the universe a trillion years from now by 0.0000000000000000000000000000000000000001% even though it means that a lot more people get to live. Likewise I have an expectation that allowing capacity for one more potential person to exist a trillion years from now is morally much less worthwhile than saving an actual person today.
As to your second objection, I think that for many people, the question of whether murdering some people in order to save others could ever be a good idea is a moral question separate from the question of which altruistic actions we should take to have the most positive impact. I am certainly not advocating murdering billions of people.
But whether saving present people or (in expectation) saving many more unborn future people is a better use of altruistic resources seems to be largely a matter of temperament. I have heard a few discussions of this and they never seem to make much sense to me. For me it is literally as simple as people being further away in time, which is just another dimension, not really any different from the spatial dimensions, except that time flows in one direction and so we have much less information about it.
But uncertainty only calls into question whether or not we have impact in expectation; for me it has no bearing on the reality of this impact or on the moral value of these lives. I cannot seem to comprehend why other people value future people less than present people, assuming you have equal ability to influence either. I would really like for there to be some rational resolution, but it always feels like people are talking past each other in these discussions. If one child is tortured today, that cannot somehow be morally equivalent to ten children being tortured tomorrow. If I could ensure one person lives a life overflowing with joy today, I would be willing to forgo this if I knew with certainty that I could ensure one hundred people live lives overflowing with joy in one hundred years. I don't feel like there is a time limit on morality; to be honest, it still confuses me why exactly some people feel otherwise.
You also mentioned something about differing percentages of the population. Many of these questions don't work in reality because there are a lot of flow-through effects, but if you ignore those, I also don't see how 8,000 people today suffering lives of torture might be better than 8 early humans a couple hundred thousand years ago suffering lives of torture, even if that means it was 1/1,000,000 of the population in the first case (just a wild guess) and 1/1,000 of the population in the second case.
These questions might be complicated if you take the average view in population ethics instead of the total view, and I actually do give some credence to the average view, but I nonetheless think the amount of value created by averting x-risk is so huge that it probably outweighs these considerations, at least for the risk-neutral.
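For clarity, the formal distinction I have in mind is roughly the following (my own shorthand, with w_i the welfare of person i and N the number of people who ever live):

$$W_{\text{total}} = \sum_{i=1}^{N} w_i \qquad\qquad W_{\text{avg}} = \frac{1}{N}\sum_{i=1}^{N} w_i$$

On the total view, adding more lives worth living adds value directly; on the average view, added lives only help if they raise the average, which is why the two views can come apart when evaluating enormous future populations.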
I’m not actually talking about “a person being tortured today” versus “a person being tortured tomorrow”. I agree those are equivalent, from some hypothetical external viewpoint and assuming that various types of uncertainty are declared by fiat to be absent.
It’s about “a person who actually exists getting to continue their life that would otherwise be terminated” versus “a person being able to come to exist in the future versus not ever existing”. I have serious doubts that these are morally equivalent, and am inclined to believe that they are not even on a comparable scale. In particular, I think using the term “saving a life” for the latter is not only unjustified, but wilfully deceptive.
Even if there does turn out to be a strong argument for the two outcomes being comparable on some numerical scale, I expect to still strongly disfavour any use of terminology that equates them as this post does.
Ah, thanks for the clarification; this is very helpful. I made a few updates, including changing the title of the piece and adding a note about this in the assumptions. Here are the assumption and footnote I added, which I think explain my views on this:
Whenever I say “lives saved” this is shorthand for “future lives saved from nonexistence.” This is not the same as saving existing lives: the loss of an existing life may cause profound emotional pain for the people left behind, and some may consider it more tragic than a future person never being born.[6]
Here is footnote 6, which I kept out of the main text for brevity:
This post originally used the term “lives saved” without mentioning nonexistence, but JBlack on LessWrong pointed out that the term “lives saved” could be misleading in that it equates saving present lives with creating new future lives. While I take the total view and so feel these are roughly equivalent (if we exclude the flow-through effects, including the emotional pain caused to those left behind by the deceased), those who take other views, such as the person-affecting view, may feel very differently about this.
Here is a related assumption I added based on an EA Forum comment:
I assume a zero discount rate for the value of future lives, meaning I assume the value of a life does not depend on when that life occurs.
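Spelled out as a formula (just a restatement of this assumption, with v the undiscounted value of a life and r the annual discount rate), a life occurring t years from now is valued at

$$V(t) = \frac{v}{(1+r)^{t}}$$

so a zero discount rate (r = 0) gives V(t) = v for every t: a life has the same value no matter when it occurs.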
I hope this shows why I think the term is not unjustified; I certainly was not intending to be willfully deceptive, and I apologize if it seemed that way. I believe quite strongly in the equal value of all conscious experience, and this includes future people, so from my point of view “lives saved” or “lives saved from nonexistence” carries the correct emotional tone and moral connotations. I can definitely respect that other people may feel differently.
I am curious whether this clarifies our difference in intuitions, or if there is some other reason you see the ending of a life as worse than the non-existence of life.
Interesting objections!
I mentioned a few times that some and perhaps most x-risk work may have negative value ex post, and I go into detail in footnote 13 about how this work could plausibly turn out to be negative.
It seems somewhat unreasonable to me, however, to be virtually 100% confident that x-risk work is as likely to have zero or negative value ex ante as it is to have positive value.
I tried to account for the extreme difficulty of influencing the future by giving the work relatively low efficacy: in the moderate case, 100,000 (hopefully extremely competent) people working on x-risk for 1,000 years only cause a 10% reduction in x-risk in expectation, which is effectively a 90% likelihood of failure. In the pessimistic estimate, 100,000 people working on it for 10,000 years only cause a 1% reduction in x-risk.
Perhaps this could be a few orders of magnitude lower still, say 1 billion people working on x-risk for 1 million years only reduce existential risk by 1/1 trillion in expectation (if these numbers seem absurd, you can use fewer people or less time, but this increases the number of lives saved per unit of work). This would make the pessimistic estimate have very low value, but the moderate estimate would still be highly valuable (10^18 lives per minute of work).
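To make the arithmetic behind these per-minute figures explicit, here is a rough back-of-the-envelope sketch. The total-future-lives figure and the work-minutes-per-year figure below are illustrative placeholders rather than the exact inputs used in the post; the total is chosen so that the moderate case lands near the 10^18 lives-per-minute order of magnitude mentioned above.

```python
# Rough sketch of "expected lives saved per minute of x-risk work".
# All inputs are illustrative placeholders, not the exact figures from the post.

def lives_saved_per_work_minute(total_future_lives, risk_reduction,
                                workers, years, work_minutes_per_year=1e5):
    """Expected future lives saved per minute of work, where risk_reduction is the
    total expected reduction in existential risk achieved by `workers` people
    working for `years` years."""
    expected_lives_saved = total_future_lives * risk_reduction
    total_work_minutes = workers * years * work_minutes_per_year
    return expected_lives_saved / total_work_minutes

# Assumed placeholders: 1e32 total future lives, ~1e5 work minutes per person-year.
# Moderate case: 100,000 people working for 1,000 years reduce x-risk by 10%.
print(f"{lives_saved_per_work_minute(1e32, 0.10, 1e5, 1e3):.0e}")  # ~1e+18 lives/minute

# Pessimistic case: 100,000 people working for 10,000 years reduce x-risk by 1%.
print(f"{lives_saved_per_work_minute(1e32, 0.01, 1e5, 1e4):.0e}")  # ~1e+16 lives/minute
```

The ratio depends only on the total expected lives saved divided by the total work minutes, which is why shrinking the workforce or the timescale while holding the risk reduction fixed increases the value of each minute of work.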
All that is to say: while you could be much more pessimistic, I don't think it changes the conclusion by that much, except in the pessimistic case, unless you have extremely high certainty that we cannot predict what is likely to help prevent x-risk. I did give two more pessimistic scenarios in the appendix, which I say may be plausible under certain assumptions, such as 100% certainty that x-risk is inevitable. I will add that the same scenario applies if you assume 100% certainty that we can't predict what will reduce x-risk, as I think this is a valid point.