…the core concern of avoiding a human extinction scenario.
That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.
Conventionally, there’s a difference between death and extinction.
If we can’t stop dying, we can’t stop extinction. Logically, if everyone dies, and there are only finitely many humans, there will necessarily be a last human who dies.
[edit] To those down-voting me: I take my lumps willingly, but could you at least tell me why you think I’m wrong?
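Made explicit, the argument runs as follows (a sketch; the set H and the function d are notation introduced here, and the finiteness premise carries all the weight):
\[
\begin{aligned}
&\text{Let } H \text{ be the set of all humans who ever live, and } d : H \to \mathbb{R} \text{ each human's time of death.}\\
&\text{Premise 1: every } h \in H \text{ dies, i.e., } d(h) \text{ is defined for all } h \in H.\\
&\text{Premise 2: } H \text{ is finite and nonempty.}\\
&\text{Conclusion: } \max_{h \in H} d(h) \text{ exists, so some human dies last; after that time the species is extinct.}
\end{aligned}
\]
If Premise 2 fails, because new humans keep being born without end, universal mortality no longer yields a last death; that gap is exactly what the replies below press on.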
That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.
. . .
If we can’t stop dying, we can’t stop extinction. . . . To those down-voting me: I take my lumps willingly, but could you at least tell me why you think I’m wrong?
To solve the problem of death, you have to solve the problem of extinction and you have to solve the problem of death from old age.
But to solve the problem of extinction, you do not have to solve the problem of death from old age (as long as couples continue to have children at the replacement rate).
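A toy model makes the point concrete (a sketch; \(N_t\), \(b_t\), and \(d_t\) are notation introduced here for the population, births, and deaths in period \(t\)):
\[
N_{t+1} = N_t + b_t - d_t, \qquad N_0 > 0.
\]
If births at least match deaths in every period, \(b_t \ge d_t\), then \(N_t \ge N_0 > 0\) for all \(t\): every individual still dies eventually, yet the population never reaches zero, so mortality alone never forces extinction.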
My guess is that the reason you did not immediately make the distinction between the problem of death and the problem of extinction is that, under your way of valuing things, if every human individual now living dies, the human species may as well go extinct for all you care. In other words, you do not assign intrinsic value to individuals not yet born or to the species as a whole distinct from its members. It would help me learn to think better about these issues if you would indicate how accurate my guess was.
My second guess, if my first guess is wrong, is that you failed to distinguish between the following two statements. The first is true, the second is what you wrote.
If we can’t stop extinction, we can’t stop dying.
If we can’t stop dying, we can’t stop extinction.
The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently-careful AGI research.
I’m not talking about old age, I’m talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer. I didn’t say “cure death” or “cure old age” but “[solve] the problem of death”. And for the record, to my mind, the likeliest solution involves AGI, developed extremely carefully—but as quickly as possible under that condition.
Having refreshed, I see you’ve changed the course of your reply to some degree. I’d like to respond further but I don’t have time to think it through right now. I will just add that while I don’t assign intrinsic value to individuals not yet born, I do intrinsically value the human species as a present and future entity—but not as much as I value individuals currently alive. That said, I need to spend some time thinking about this before I add to my answer. I may have been too hasty and accidentally weakened the implication of “extinction” through a poor turn of phrase.
I don’t assign intrinsic value to individuals not yet born
Note that this is dynamically inconsistent: given the opportunity, this value implies that at time T, you would want to bind yourself so that at all times greater than T, you would still only intrinsically care about people who were alive at time T. (Unless you have ‘overriding’ values of not modifying yourself, or of your intrinsic valuations changing in certain ways, etc., but that sounds awfully messy and possibly unstable.)
(Also, that’s assuming causal decision theory. TDT/UDT probably gives a different result due to negotiations with similar agents binding themselves at different times, but I don’t want to work that out right now.)
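To make the dynamic inconsistency concrete (a sketch; the utility \(U_T\) and the set \(A(T)\) are notation introduced here):
\[
U_T(\omega) = \sum_{i \in A(T)} v_i(\omega),
\]
where \(A(T)\) is the set of people alive at time \(T\) and \(v_i(\omega)\) is the intrinsic value assigned to person \(i\)'s welfare in outcome \(\omega\). An agent evaluating at \(T_1\) maximizes \(U_{T_1}\); its unbound future self at \(T_2 > T_1\) maximizes \(U_{T_2}\), which also counts people born during \((T_1, T_2]\), people to whom the \(T_1\)-self assigns no intrinsic value. So the \(T_1\)-self would, given the chance, bind all later selves to keep maximizing \(U_{T_1}\): the self-binding described above.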
The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently-careful AGI research.
I’m not talking about old age, I’m talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer.
. . .
Having refreshed, I see you’ve changed the course of your reply to some degree.
I did, when I realized my first reply was vulnerable to the response which you in fact made and which I quote above. (I should probably let my replies sit for 15 minutes before submitting/uploading them to reduce the probability of situations like this one, which can get confusing.)
(And thank you for your reply to my question about your values.)
I don’t see why you’re being downvoted either, but one obvious point (besides Richard’s) is that if for some reason there can only be finitely many humans, probably the same reason means humans can only live finitely long.
Every human being in history so far has died, and yet humans are not extinct. Not sure what you mean.