Objectivist ethics claims to be grounded in rational thought alone. Are you familiar enough with the main tenets of that particular philosophy, and would you like to comment on how you see it being of possible use with regard to FAI theory?
In practice, most people inspired by Objectivism have not been able to achieve the sort of things that Rand and her heroes achieved. As far as I can tell, other than Rand herself, no dogmatic Objectivists have done so. Most strikingly, the most influential Objectivist came to head the Federal Reserve Bank. Given this, I conclude that Objectivism isn’t the stuff that makes you win, so it’s not rationality. That said, I’m very interested in discussing rationality with reflective people who ARE trying to win.
“Given this, I conclude that Objectivism isn’t the stuff that makes you win, so it’s not rationality.”
Do you think it is worthwhile to find out where exactly their rationality broke down to avoid a similar outcome here? How would you characterize ‘winning’ exactly?
Winning = FAI before UFAI, though there are lots of sub-goals to that.
It’s definitely worth understanding where other people’s rationality breaks down, but I think I understand it reasonably well, both in terms of general principles and the specific history of Objectivism, which has been pretty well documented. We do have a huge amount of written material on rationality breaking down, and I think I know rather more than we have published. Major points include Rand’s lack of interest in science, especially science that felt mystical to her, like modern physics or hypnosis, and her failure to notice her foundational confusions and respond with due skepticism to long inferential chains built on them.
That said, I’d be happy to discuss the topic with Nathaniel Branden some time if he’s interested in doing so. I’m sure that his life experience would contribute usefully to my understanding and that it isn’t all found in existing bodies of literature either.
To be clear, MV, are you saying that for you, or others with a similar Ultimate Value, winning is defined that way? I was under the impression that, around here, Winning meant more generally “achieving that which is beyond rationality and which motivates its use.”
Relevant Eliezer post: Guardians of Ayn Rand
There was an entire thread below this level—with reply and counter reply. Any idea what happened to it?
Hmm—interesting. I thought this could be of interest, considering the large overlap between this site’s interest in being rational and its concern with combating the existential risk a rogue AI poses. Reason and existence are central to Objectivism too, after all:
“it is only the concept of ‘Life’ that makes the concept of ‘Value’ possible,” and, “the fact that a living entity is, determines what it ought to do.” She writes: “there is only one fundamental alternative in the universe: existence or non-existence—and it pertains to a single class of entities: to living organisms.” And also: “Man knows that he has to be right. To be wrong in action means danger to his life. To be wrong in person – to be evil – means to be unfit for existence.”
I did not find an analysis in Guardians of Ayn Rand that concerned itself with those basic virtues of ‘existence’ and ‘reason’. I personally find Objectivism flawed for focusing on the individual and not on the group, but that is a different matter.
Objectivism claims to be grounded in rational thought, but that doesn’t mean it is. Ayn Rand said a lot of things that I’ve personally found interesting or inspiring, but taking Objectivism seriously as a theory of how the world really works is just silly. The rationality I know is grounded in an empiricism at which Rand utterly fails. She makes all these fascinating pronouncements on the nature of “man” (sic) and economic organization seemingly without even considering the drop-dead basic questions: What if I’m wrong? What would we expect to see, and not see, if my theory is right?
I realize that I am being voted down here, but am not sure why, actually. This site is dedicated to rationality and to the core concern of avoiding a human extinction scenario. So far, Rand and Less Wrong seem a pretty close match. Don’t you think it would be nice to know exactly where Rand took a wrong turn, so that it can be explicitly avoided in this project? Rand making some random remarks on music taste surely does not invalidate her recognition that being rational and avoiding extinction are of crucial importance.
So where did she take a wrong turn exactly and how is this wrong turn avoided here? Nobody interested in finding out?
“the core concern of avoiding a human extinction scenario.”

That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is one of urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.
Conventionally, there’s a difference between death and extinction.
If we can’t stop dying, we can’t stop extinction. Logically, if everyone dies, and there are only finitely many humans, there will necessarily be a last human who dies.
[edit] To those down-voting me: I take my lumps willingly, but could you at least tell me why you think I’m wrong?
To solve the problem of death, you have to solve the problem of extinction and you have to solve the problem of death from old age.
But to solve the problem of extinction, you do not have to solve the problem of death from old age (as long as couples continue to have children at the replacement rate).
My guess is that the reason you did not immediately make the distinction between the problem of death and the problem of extinction is that, under your way of valuing things, if every human individual now living dies, the human species may as well go extinct for all you care. In other words, you do not assign intrinsic value to individuals not yet born, or to the species as a whole as distinct from its members. It would help me learn to think better about these issues if you would indicate how accurate my guess was.
My second guess, if my first guess is wrong, is that you failed to distinguish between the following two statements. The first is true; the second is what you wrote.
If we can’t stop extinction, we can’t stop dying.
If we can’t stop dying, we can’t stop extinction.

The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently-careful AGI research.
I’m not talking about old age, I’m talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer. I didn’t say “cure death” or “cure old age” but “[solve] the problem of death”. And for the record, to my mind, the likeliest solution involves AGI, developed extremely carefully—but as quickly as possible under that condition.
Having refreshed, I see you’ve changed the course of your reply to some degree. I’d like to respond further but I don’t have time to think it through right now. I will just add that while I don’t assign intrinsic value to individuals not yet born, I do intrinsically value the human species as a present and future entity—but not as much as I value individuals currently alive. That said, I need to spend some time thinking about this before I add to my answer. I may have been too hasty and accidentally weakened the implication of “extinction” through a poor turn of phrase.
“I don’t assign intrinsic value to individuals not yet born”

Note that this is dynamically inconsistent: given the opportunity, this value implies that at time T, you would want to bind yourself so that at all times greater than T, you would still only intrinsically care about people who were alive at time T. (Unless you have ‘overriding’ values of not modifying yourself, or of your intrinsic valuations changing in certain ways, etc., but that sounds awfully messy and possibly unstable.)
(Also, that’s assuming causal decision theory. TDT/UDT probably gives a different result due to negotiations with similar agents binding themselves at different times, but I don’t want to work that out right now.)
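To make the dynamic-inconsistency point concrete, here is a minimal toy sketch (not part of the original discussion); the names, the equal-split allocation rule, and the additive welfare measure are all illustrative assumptions. It only shows that an agent whose intrinsic values at any time are “the people alive at that time” would, at time 0, prefer to bind its future self to the time-0 population:

```python
# Toy model: the agent allocates a fixed budget among everyone it intrinsically
# values at allocation time; its values at time t are "people alive at time t".
# Names, numbers, and the equal-split rule are illustrative assumptions.

BUDGET = 100.0
ALIVE_AT_TIME_0 = ["Alice", "Bob"]   # alive when the agent forms its values
BORN_BY_TIME_1 = ["Carol"]           # born between time 0 and time 1

def allocate_equally(people):
    """Split the budget equally among everyone currently valued."""
    return {person: BUDGET / len(people) for person in people}

def utility_to_time0_self(allocation):
    """Utility as judged from time 0: only the time-0 cohort counts."""
    return sum(share for person, share in allocation.items()
               if person in ALIVE_AT_TIME_0)

# Unbound: at time 1 the agent values everyone then alive, Carol included.
unbound = allocate_equally(ALIVE_AT_TIME_0 + BORN_BY_TIME_1)
# Bound: the time-0 self locks its future self onto the time-0 population.
bound = allocate_equally(ALIVE_AT_TIME_0)

print(round(utility_to_time0_self(unbound), 1))  # 66.7 -- some budget goes to Carol
print(round(utility_to_time0_self(bound), 1))    # 100.0 -- the time-0 self prefers binding
```

Both selves run the same valuation rule, just over different populations, which is exactly the conflict that makes precommitment (or self-modification) look attractive from the earlier vantage point.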
“I’m not talking about old age, I’m talking about death. . . . Having refreshed, I see you’ve changed the course of your reply to some degree.”

I did, when I realized my first reply was vulnerable to the response which you in fact made and which I quote above. (I should probably let my replies sit for 15 minutes before submitting them, to reduce the probability of situations like this one, which can get confusing.)
(And thank you for your reply to my question about your values.)
I don’t see why you’re being downvoted either, but one obvious point (besides Richard’s) is that if for some reason there can only be finitely many humans, probably the same reason means humans can only live finitely long.
Every human being in history so far has died, and yet humans are not extinct. Not sure what you mean.
I’ve downvoted your comments in this thread because I don’t think serious discussion of the relevance of Objectivism to existential risk reduction meets Less Wrong’s quality standard; Ayn Rand just doesn’t have anything useful to teach us. Nothing personal, just a matter of “I would like to see fewer comments like this one.” (I do hope to see comments from you in the future.)
Ayn Rand would hardly be alone in assenting to the propositions that “Rationality is good” and “The end of the world would be bad.” A more relevant question would be whether Rand’s teachings make a significant contribution to this community’s understanding of how to systematically achieve more accurate beliefs and a lower probability of doom. As dearly as I loved Atlas Shrugged, I’m still going to have to answer no.
Well, to begin with, I don’t really think Rand was concerned about human extinction, though I haven’t read much, so maybe you can enlighten me. She also used the word reason a lot. But it doesn’t really follow that she was actually employing the concept that we call reason. If she wasn’t, then that’s where she went wrong. Her writing is seriously chock-full of obfuscation, conflation, and close to every logical fallacy. Even the quotes you gave above are either inane trivialities or unsupported assertions. There is never an attempt to empirically justify her claims about human nature. If you tried to program an AI using Objectivism, it would be a disaster. I don’t think you could ever get the thing running, because all the terms are so poorly defined.
So it just seems like a waste of time to listen to Eliezer talk about this.
Edit: I think I only voted down the initial suggestion though. Not the ensuing discussion.