Whether the future “matters more than today” is not a question of impersonal fact. Things, as you no doubt know, do not ‘matter’ intransitively; they matter to someone. So the question is, does “the future” (however construed) matter to me more than “today” (likewise, however construed) does? Does “the future” matter to my hypothetical friend Alice more than today does, or to her neighbor Bob? Etc.
And any of these people are fully within their rights to answer in the negative.
“There are 4 × 10^20 stars out there. You’re in a prime position to make sure they’re used for something valuable to you: as in, you’re currently experiencing the top 10^-30% most influential hours of human experience because of your early position in human history, etc. Are you going to change your plans and leverage your unique position?”
Note that you’re making a non-trivial claim here. In past discussions, on Less Wrong and in adjacent spaces, it has been pointed out that our ability to predict future consequences of our actions drops off rapidly as our time horizon recedes into the distance. It is not obvious to me that I am in any particularly favorable position to affect the course of the distant future in any but the most general ways (such as contributing to, or helping to avert, human extinction—and even there, many actions I might feasibly take could plausibly affect the likelihood of my desired outcome in either the one direction or the other).
“No, I think I’ll spend most of my effort doing the things I was already going to do.”
Really? Is that your final answer? What position would you need to be in to decide that planning for the long-term future is worth most of your effort?
I would need to (a) have different values than those I currently have, and (b) gain (implausibly, given my current understanding of the world) the ability to predict the future consequences of my actions with an accuracy vastly greater than that which is currently possible (for me or for anyone else).
“Seeing as how a couple’s baby does not yet exist, it makes very little sense to say that saving money for their clothes and crib is something that they would be doing ‘for’ them.” No, wait, that’s ridiculous: it does make sense to say that you’re doing things “for” people who don’t exist.
Sorry, no. There is a categorical difference between bringing a person into existence and affecting a person’s future life, contingent on them being brought into existence. It of course makes sense to speak of doing the latter sort of thing “for” the person-to-be, but such isn’t the case for the former sort of thing.
There’s some more complicated discussion to be had on the specific merits of making sure that people exist, but I’m not (currently) interested in having that discussion. My point isn’t really related to that …
To the contrary: your point hinges on this. You may of course discuss or not discuss what you like, but by avoiding this topic, you avoid one of the critical considerations in your whole edifice of reasoning. Your conclusion is unsupportable without committing to a position on this question.
Also, in the context of artificial intelligence research, it’s an open question as to what the border of “Future Humanity” is.
Quite so—but surely this undermines your thesis, rather than supporting it?
Whether the future “matters more than today” is not a question of impersonal fact. Things, as you no doubt know, do not ‘matter’ intransitively; they matter to someone. So the question is, does “the future” (however construed) matter to me more than “today” (likewise, however construed) does? Does “the future” matter to my hypothetical friend Alice more than today does, or to her neighbor Bob? Etc.
And any of these people are fully within their rights to answer in the negative.
Eh… We can draw conclusions about the values of individuals from the ways in which they would act in the limit of additional time and information, from the origins of humanity (selection for inclusive genetic fitness), from thought experiments constructed to elicit revealed beliefs, etc.
Other agents are allowed to claim that they have more insight than you into certain preferences of yours, and they often do. Consider the special cases in which you can prove that the stated preferences of some humans allow you to siphon unbounded amounts of money off of them. Also consider the special cases in which someone says something completely incoherent, such as “I prefer each of two things to the other under all conditions.” We know that they’re wrong. They can refuse to admit it, but they can’t properly do so without either handing over all of their money or, in some sense, putting their fingers in their ears.
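To make the money-pump case concrete, here’s a minimal toy sketch (my own illustration; the goods, the one-unit fee, and the helper names are all made up for the example): an agent whose stated preferences over three goods form a cycle will pay a small fee for every swap it “prefers”, and so can be walked around the cycle back to its starting good indefinitely, losing money on every loop.

```python
# Toy money-pump illustration: an agent with cyclic preferences (A < B < C < A)
# pays a small fee for every swap it "prefers", so a trader can cycle it back
# to its starting good while extracting money on every loop.

FEE = 1  # illustrative fee the agent is willing to pay for any preferred swap

# (current, offered) pairs in which the agent prefers the offered good.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # note the cycle

def agent_accepts(current: str, offered: str) -> bool:
    """The agent accepts a swap (and pays FEE) iff it prefers the offered good."""
    return (current, offered) in PREFERS

def run_pump(start: str = "A", loops: int = 3):
    """Offer swaps around the cycle; return the agent's final good and total fees paid."""
    holding, paid = start, 0
    next_offer = {"A": "B", "B": "C", "C": "A"}
    for _ in range(3 * loops):  # three accepted swaps complete one full loop
        offer = next_offer[holding]
        if agent_accepts(holding, offer):
            holding = offer
            paid += FEE
    return holding, paid

if __name__ == "__main__":
    good, total = run_pump()
    # After three full loops the agent holds its original good but has paid 9 units.
    print(f"Agent ends holding {good!r} after paying {total} units in fees.")
```

Refusing the trades at some point is exactly the “fingers in the ears” move: it amounts to abandoning the stated preferences.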
These special cases are just special cases. In general, values are highly entangled with concrete physical information. You may say that you want to put your hand on that (unbeknownst to you) searing plate, but we can also know that you’re wrong. You don’t want to do that, and you’d agree if only you knew that the plate was searing hot.
They are fully within their rights to answer in the negative, but they’re not thereby entitled to decide that they’re correct. There is a correct answer to what they value, and they don’t necessarily have perfect insight into it.
Note that you’re making a non-trivial claim here. In past discussions, on Less Wrong and in adjacent spaces, it has been pointed out that our ability to predict future consequences of our actions drops off rapidly as our time horizon recedes into the distance. It is not obvious to me that I am in any particularly favorable position to affect the course of the distant future in any but the most general ways (such as contributing to, or helping to avert, human extinction—and even there, many actions I might feasibly take could plausibly affect the likelihood of my desired outcome in either the one direction or the other).
You don’t need to be able to predict the future with omniscient accuracy to realize that you are in an unusually important position for affecting the future.
If it’s not obvious, here we go: you’re a person of above-average intelligence living in the small period directly before humanity is expected (by top experts, and with good cause) to develop artificial general intelligence. This technology will allow us to break the key scarcities of civilization:
Allowing vastly more efficient conversion of matter into agency through the fabrication of computer hardware. Given the advent of artificial general intelligence, this process will soon far surpass the efficiency with which we can construct human agency: humans take a very long time to make, and each individual human must be trained separately. You can’t directly copy human software, and the indirect copying is very, very slow.
Allowing agents with intelligence vastly above that of the most intelligent humans (whose brains must all fit in a container of relatively limited size) in all strategically relevant regards: speed, quality, modularity, I/O speed, multitasking ability, adaptability, transparency, etc.
Allowing us to build agents with a much more direct method of recursively improving their own intelligence, by buying or fabricating new hardware and directly improving their own code, triggering an extremely exploitable feedback loop.
The initial conditions of the first agent(s) we deploy that possess these radical, simultaneously new options will, on account of the overwhelming importance of these limitations to the existing state of affairs, precisely and “solely” determine the future.
This is a fairly common opinion among the popular rationalist writers; I pass the torch on to them.
Sorry, no. There is a categorical difference between bringing a person into existence and affecting a person’s future life, contingent on them being brought into existence. It of course makes sense to speak of doing the latter sort of thing “for” the person-to-be, but such isn’t the case for the former sort of thing.
I was aware of the difference. The point (which I stated directly at the end, conveniently enough) is that “It does make sense to say that you’re doing things ‘for’ people who don’t exist.” If this doesn’t directly address your point, the proper response would have been “OK, I think you misunderstood what I was saying.” I think that I did misunderstand what you were saying, so disregard.
Aside from that, I still think it makes sense to say you’re bringing someone into existence “for” them. Your claim that it doesn’t “make sense” strikes me as unfairly dismissive and overly argumentative. If someone said that they weren’t going to have an abortion “for” their baby, or (if you disagree with me about where the lines of personhood fall) that they were stopping a pain-relieving experimental drug that was damaging their fertility “for” their future children, you’d receive all of the information they meant to convey about their motivations. It would definitely make sense. You might disagree with that reasoning, but it’s coherent: they have an emotional connection with their as-yet not locally instantiated children.
I personally do happen to disagree with this reasoning, for reasons I will explain later, but it does make sense.
To the contrary: your point hinges on this. You may of course discuss or not discuss what you like, but by avoiding this topic, you avoid one of the critical considerations in your whole edifice of reasoning. Your conclusion is unsupportable without committing to a position on this question.
It isn’t, and I just told you that it isn’t. You should have tried to understand why I was saying that before arguing with me; I’m the person who made the comment in the first place, and I just told you directly that you were misinterpreting me.
My point is the one I stated: that we should be spending most of our effort on planning for the long-term future. See below for an elaboration.
Quite so—but surely this undermines your thesis, rather than supporting it?
No: I’m not actually arguing for the specific act of ensuring that future humans exist. I think that all humans already exist, perhaps in infinite supply, and I thus see (tentatively) zero value in bringing about future humans in and of itself. My first comment used a rhetorical flourish intended to convey my general strategy for planning for the future; I’m more interested in solving the AI alignment problem (and otherwise avoiding human extinction and s-risks) than in the currently politically popular long-term planning efforts and the problems they address, such as climate change and conservation.
I do think that we should be interested in manipulating the relative ratios (complicated stuff) of future humans, which means that we should still be interested in “ensuring the existence of” (read: manipulating the ratios of different types of) “Future Humanity”, a nebulous phrase meant to convey the sort of outcome I want to see from the value achievement dilemma. Personally, I think the most promising plan for this is engineering an aligned AGI and supporting it throughout its recursive self-improvement process.
Your response was kind of sour, so I’m not going to continue this conversation.
I read this comment with interest, and with the intent of responding to your points—it seemed to me that there was much confusion to be resolved here, to the benefit of all. Then I got to your last line.
It is severely rude to post a detailed fisking of an interlocutor’s post/comment, and to then walk away. If you wish to bow out of the discussion, that is your right, but it is both self-indulgent and disrespectful to first get in a last word (much less a last several hundred words).
Strongly downvoted.
You were welcome to write an actual response, and I definitely would have read it. I was merely announcing, in advance, my intent not to respond in detail to any following comments, and explaining why in brief, conservative terms. This seems strictly better: it gives you new information which you can use to decide whether or not you want to respond. If I were being intentionally mean, I would have let you write a detailed comment and never responded, potentially wasting your time.
If your idea of rudeness is constructed in this (admittedly inconvenient) way, I apologize.