Right, the first clause is there as a necessary but not sufficient part of the standard reason for focusing on the far future, and the sentence works now that I’ve removed the “therefore.”
The reason I’d rather not phrase things as “morally urgent to bring new people into existence” is that this phrasing suggests presentist assumptions. I’d rather use a sentence that doesn’t rely on presentist assumptions, since presentism is probably rejected by a majority of physicists by now, and is also rejected by me. (It’s also rejected by the majority of EAs with whom I’ve discussed the issue, though that isn’t especially noteworthy, since they’re such a biased sample of EAs.)
Is it the “bringing into existence” and the “new” that suggest presentism to you? (I also reject presentism, by the way, but I don’t think it’s of much relevance to the issue at hand.) Even without the “therefore”, the sentence still seems to me to suggest that the rejection of time preference is what does the crucial work on the way to Bostrom’s and Beckstead’s conclusions, when the crucial work is rather done by the claim that it’s “morally urgent/required to cause the existence of people (with lives worth living) who wouldn’t otherwise have existed”, which is what my alternative sentence was meant to express.
I confess I’m not that motivated to tweak the sentence any further: it seems like a small semantic point, I don’t see the advantages of your phrasing, and I’ve already provided links to more thorough discussions of these issues, for example Beckstead’s dissertation. Maybe it would help if you explained what kind of reasoning you’re using to identify which claims are “doing the crucial work”? Or we could just let it be.
Yeah, I’ve read Nick’s thesis, and I think the moral urgency of filling the universe with people is a more important basis for his conclusion than the rejection of time preference. The sentence suggests that the rejection of time preference is the most important part.
If I get him right, Nick agrees that the time issue is much less important than you suggested in your recent interview.
Sorry to insist! :) But when you disagree with Bostrom’s and Beckstead’s conclusions, people immediately assume that you must be valuing present people more than future ones. And I’m constantly like: “No! The crucial issue is whether the non-existence of people (where there could be some) poses a moral problem, i.e. whether it’s morally urgent to fill the universe with people. I doubt it.”
Okay, so we’re talking about two points: (1) whether current people have more value than future people, and (2) whether it would be super-good to create gazillions of super-good lives.
My sentence mentions both of those, in sequence: “Many EAs value future people roughly as much as currently-living people [1], and think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future [2]...”
And you are suggesting… what? That I switch the order in which they appear, so that [2] appears before [1], and is thus emphasized? Or that I use your phrase “morally urgent to” instead of “nearly all potential value is found in...”? Or something else?
Sorry for the delay!
I forgot to clarify here the rough argument for why (1) “value future people equally” is much less important or crucial than (2) “fill the universe with people”.
If you accept (2), you’re almost guaranteed to be on board with where Bostrom and Beckstead are roughly going (even if you value present people more!). It’s then hardly possible to block their argument on normative grounds; criticism would have to be empirical, e.g. based on the claim that dystopian futures may be likelier than commonly assumed, which would decrease the value of x-risk reduction.
By contrast, if you accept (1), it’s still very much an open question whether you’ll be on board.
Also, intrinsic time preference is really not an issue among EAs. The idea that spatial and temporal distance are irrelevant when it comes to helping others is a pretty core element of the EA concept. What is an issue, though, is the question of what helping others actually means (or should mean). Who are the relevant others? Persons? Person-moments? Preferences? And how are they relevant? Should we ensure the non-existence of suffering? Or promote ecstasy too? Prevent the existence of unfulfilled preferences? Or create fulfilled ones too? Can you help someone by bringing them into existence? Or only by preventing their miserable existence/unfulfilled preferences? These issues are more controversial than the question of time preference. Unfortunately, they’re of astronomical significance.
I don’t really know if I’m suggesting any further specific change to the wording—sorry about that. It’s tricky… If you’re speaking to non-EAs, it’s important to emphasize the rejection of time preference. But there shouldn’t be a “therefore”, which (in my perception) is still implicitly there. And if you’re speaking to people who already reject time preference, it’s even more important to make it clear that this rejection doesn’t imply “fill the universe with people”. One solution could be to simply drop the reference to the (IMO non-decisive) rejection of time preference and go for something like: “Many EAs consider the creation of (happy) people valuable and morally urgent, and therefore think that nearly all potential value...”
Beckstead might object that the rejection of heavy time preference is important to his general conclusion (the overwhelming importance of shaping the far future). But if we’re talking at that level of generality, then the reference to x-risk reduction should probably go or be qualified, since sufficiently negative-leaning EAs (such as Brian Tomasik) believe that x-risk reduction is net negative.
Perhaps the best solution would be to expand the section and start by mentioning how the (EA-uncontroversial) rejection of time preference is relevant to the overwhelming importance of shaping the far future. Once we’ve established that the far future likely dominates, the question arises of how we morally ought to affect it. Depending on the answer, very different conclusions can follow, e.g. with regard to the importance and even the sign of x-risk reduction.
I don’t want to expand the section, because that makes it stand out more than is compatible with my aims for the post. And since the post is aimed at non-EAs and new EAs, I don’t want to drop the point about time preference, as “intrinsic” time-discounting is a common view outside EA, especially for those with a background in economics rather than philosophy. So my preferred solution is to link to a fuller discussion of the issues, which I did (in particular, Beckstead’s thesis). Anyway, I appreciate your comments.