To the extent that they can be considered to be in good faith (and not just verbal tokens intended to influence), some of them only support the position you used them for if you genuinely do not understand them (which would imply that there is no answer).
I’m probably too tired to parse this right now. I believe there probably is an answer, but it is buried under hundreds of posts about marginal issues. As for all those writings on rationality, there is nothing in them I disagree with, and many people know about all this even outside of the LW community. But what is it that they don’t know that EY and the SIAI know? What I was trying to say is that if I have come across it, it was not convincing enough to take as seriously as some people here obviously do.
It looks like I’m not alone: Goertzel, Hanson, Egan, and lots of other people don’t see it either. So what are we missing? What is it that we haven’t read or understood?
Goertzel: I could and will list the errors I see in his arguments (if nobody there has done so first). For now I’ll just say his response to claim #2 seems to conflate humans and AIs. But unless I’ve missed something big, which certainly seems possible, he didn’t make his decision based on those arguments; they don’t seem good enough on their face to convince anyone. For example, I don’t think he could really believe that he and other researchers would unconsciously restrict the AI’s movement in the space of possible minds to the safe area(s), but if we reject that possibility, some version of #4 seems to follow logically from #1 and #2.
Egan: don’t know. What I’ve seen looks unimpressive, though certainly he has reason to doubt ‘transhumanist’ predictions for the near future. (SIAI instead seems to assume that if humans can produce AGI, then either we’ll do so eventually or we’ll die out first. Also, that we could produce artificial X-maximizing intelligence more easily than we can produce artificial nearly-any-other-human-trait, which seems likely based on the tool I use to write this and the history of said tool.) Do you have a particular statement or implied statement of his in mind?
Hanson: maybe I shouldn’t point any of this out, but EY started by pursuing a Heinlein Hero quest to save the world through his own rationality. He then found himself compelled to reinvent democracy and regulation (albeit in a form closely tailored to the case at hand and without any strict logical implications for normal politics). His conservative/libertarian economist friend called these new views wrongheaded despite verbally agreeing with him that EY should act on those views. Said friend also posted a short essay about “heritage” that allowed him to paint those who disagreed with his particular libertarian vision as egg-headed elitists.
Where did you get those quotes? References?
He wasn’t quoting Goertzel, Egan, and Hanson—though his formatting made it look like he was. He was commenting on your claim that these three “don’t see it”.
Whoops, I’m sorry, never mind.
Sorry, I don’t know what quotes you mean. You can find a link to the “heritage” post in the wiki-compilation of the debate. Though perhaps you meant to reply to someone else?
Never mind, I just skimmed over it and thought you were quoting someone. If you delete your comment I’ll delete this one. I’ll read your original comment again now.