I realize that I am being voted down here, but I am not sure why. This site is dedicated to rationality and the core concern of avoiding a human extinction scenario. So far, Rand and Less Wrong seem a pretty close match. Don’t you think it would be nice to know exactly where Rand took a wrong turn, so that it can be explicitly avoided in this project? Rand making some random remarks on music taste surely does not invalidate her recognition that being rational and avoiding extinction are of crucial importance.
So where exactly did she take a wrong turn, and how is that wrong turn avoided here? Is nobody interested in finding out?
the core concern of avoiding a human extinction scenario.
That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.
Conventionally, there’s a difference between death and extinction.

If we can’t stop dying, we can’t stop extinction. Logically, if everyone dies, and only a finite number of humans ever exist, there will necessarily be a last human who dies.
[edit] To those down-voting me: I take my lumps willingly, but could you at least tell me why you think I’m wrong?
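A minimal formalization of the argument above (my own sketch; it assumes exactly what the comment assumes, namely a finite, non-empty set of humans, each of whom dies): let $H$ be the set of all humans who will ever live, assumed finite and non-empty, and let $d : H \to \mathbb{R}$ give each human’s time of death. Because $H$ is finite and non-empty, $t^{*} = \max_{h \in H} d(h)$ exists; any $h^{*}$ with $d(h^{*}) = t^{*}$ is a last human to die, and after $t^{*}$ no humans remain.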
That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.
. . .
If we can’t stop dying, we can’t stop extinction. . . . To those down-voting me: I take my lumps willingly, but could you at least tell me why you think I’m wrong?
To solve the problem of death, you have to solve the problem of extinction and you have to solve the problem of death from old age.
But to solve the problem of extinction, you do not have to solve the problem of death from old age (as long as couples continue to have children at the replacement rate).
My guess is that the reason you did not immediately make the distinction between the problem of death and the problem of extinction is that, under your way of valuing things, if every human individual now living dies, the human species may as well go extinct for all you care. In other words, you do not assign intrinsic value to individuals not yet born, or to the species as a whole as distinct from its members. It would help me learn to think better about these issues if you would indicate how accurate my guess was.
My second guess, if my first guess is wrong, is that you failed to distinguish between the following two statements. The first is true; the second is what you wrote.

If we can’t stop extinction, we can’t stop dying.

If we can’t stop dying, we can’t stop extinction.

The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently careful AGI research.
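To spell out the logical distinction (my own gloss, not part of the comment above): write $D$ for “we can stop dying” and $E$ for “we can stop extinction”. The first statement is $\lnot E \Rightarrow \lnot D$, which is equivalent to $D \Rightarrow E$; the second is $\lnot D \Rightarrow \lnot E$, which is equivalent to $E \Rightarrow D$. Each is the converse of the other, so neither follows from the other.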
I’m not talking about old age, I’m talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer. I didn’t say “cure death” or “cure old age” but “[solve] the problem of death”. And for the record, to my mind, the likeliest solution involves AGI, developed extremely carefully—but as quickly as possible under that condition.
Having refreshed, I see you’ve changed the course of your reply to some degree. I’d like to respond further but I don’t have time to think it through right now. I will just add that while I don’t assign intrinsic value to individuals not yet born, I do intrinsically value the human species as a present and future entity—but not as much as I value individuals currently alive. That said, I need to spend some time thinking about this before I add to my answer. I may have been too hasty and accidentally weakened the implication of “extinction” through a poor turn of phrase.
I don’t assign intrinsic value to individuals not yet born
Note that this is dynamically inconsistent: given the opportunity, this value implies that at time T, you would want to bind yourself so that at all times greater than T, you would still only intrinsically care about people who were alive at time T. (Unless you have ‘overriding’ values of not modifying yourself, or of your intrinsic valuations changing in certain ways, etc., but that sounds awfully messy and possibly unstable.)
(Also, that’s assuming causal decision theory. TDT/UDT probably gives a different result due to negotiations with similar agents binding themselves at different times, but I don’t want to work that out right now.)
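A toy sketch of the inconsistency being described; the names, dates, and the simplified rule “intrinsically value exactly the people already born at the time of evaluation” are hypothetical illustrations, not anyone’s stated values:

# Toy illustration: an agent whose rule is "intrinsically value only people
# already born at the time of evaluation" endorses different value sets at
# different times, so its earlier self would want to bind its later self.

birth_years = {"alice": 1990, "bob": 2010, "carol": 2032}  # hypothetical people

def intrinsically_valued(t):
    """People the rule picks out when it is applied at time t."""
    return {name for name, born in birth_years.items() if born <= t}

valued_now = intrinsically_valued(2025)    # {'alice', 'bob'}
valued_later = intrinsically_valued(2040)  # {'alice', 'bob', 'carol'}

# The 2025 self assigns no intrinsic value to carol, but can predict that its
# 2040 self, applying the very same rule, will. Given the chance, the 2025 self
# would therefore precommit the 2040 self to the 2025 set -- which is the
# dynamic inconsistency the comment above points out.
print(valued_later - valued_now)  # {'carol'}: where the two selves disagree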
The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently careful AGI research.
I’m not talking about old age, I’m talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer.
. . .
Having refreshed, I see you’ve changed the course of your reply to some degree.
I did, when I realized my first reply was vulnerable to the response which you in fact made and which I quote above. (I should probably let my replies sit for 15 minutes before submitting/uploading them to reduce the probability of situations like this one, which can get confusing.)
(And thank you for your reply to my question about your values.)
I don’t see why you’re being downvoted either, but one obvious point (besides Richard’s) is that if for some reason there can only be finitely many humans, the same reason probably means humans can only live finitely long.

Every human being in history so far has died, and yet humans are not extinct. Not sure what you mean.
I realize that I am being voted down here, but I am not sure why.
I’ve downvoted your comments in this thread because I don’t think serious discussion of the relevance of Objectivism to existential risk reduction meets Less Wrong’s quality standard; Ayn Rand just doesn’t have anything useful to teach us. Nothing personal, just a matter of “I would like to see fewer comments like this one.” (I do hope to see comments from you in the future.)
Rand making some random remarks on music taste surely does not invalidate her recognition that being rational and avoiding extinction are of crucial importance.
Ayn Rand would hardly be alone in assenting to the propositions that “Rationality is good” and “The end of the world would be bad.” A more relevant question would be whether Rand’s teachings make a significant contribution to this community’s understanding of how to systematically achieve more accurate beliefs and a lower probability of doom. As dearly as I loved Atlas Shrugged, I’m still going to have to answer no.
Well, to begin with, I don’t really think Rand was concerned about human extinction, though I haven’t read much, so maybe you can enlighten me. She also used the word “reason” a lot. But it doesn’t really follow that she was actually employing the concept that we call reason. If she wasn’t, then that’s where she went wrong. Her writing is seriously chock-full of obfuscation, conflation, and close to every logical fallacy. Even the quotes you gave above are either inane trivialities or unsupported assertions. There is never an attempt to empirically justify her claims about human nature. If you tried to program an AI using Objectivism, it would be a disaster. I don’t think you could ever get the thing running, because all the terms are so poorly defined.
So it just seems like a waste of time to listen to Eliezer talk about this.
Edit: I think I only voted down the initial suggestion though. Not the ensuing discussion.