He told me that if he came up with more evidence, like sorcery, I’d just go ahead and find new rhetorical demands.
[See context for implied meaning if the excerpt isn’t clear]. I claimed approximately the same thing that you say yourself below.
I’m a supporter and donor, and what I’m trying to do here is come up with the best possible arguments to undermine the credibility of the SIAI. Almost nobody else is doing that, so I’m trying my best here.
I’ve got nothing against the Devil; it’s the Advocacy that is mostly bullshit. Saying you are ‘Devil’s Advocate’ isn’t an excuse to use bad arguments. That would be an insult to the Devil!
I don’t even know where to start here, so I won’t. But I haven’t come across anything yet that I had trouble understanding.
You conveyed most of your argument via rhetorical questions. To the extent that they can be considered to be in good faith (and not just verbal tokens intended to influence), some of them only support the position you used them for if you genuinely do not understand them (implying that there is no answer). I believe I quoted an example in the context.
Making an assertion into a question does not give a license to say whatever you want with no risk of direct contradiction. (Even though that is how the tactic is used in practice.)
More concise answer: Then don’t ask stupid questions!
To the extent that they can be considered to be in good faith (and not just verbal tokens intended to influence), some of them only support the position you used them for if you genuinely do not understand them (implying that there is no answer).
I’m probably too tired to parse this right now. I believe there probably is an answer, but it is buried under hundreds of posts about marginal issues. As for all those writings on rationality, there is nothing in them I disagree with. Many people know about all this even outside of the LW community. But what is it that they don’t know that EY and the SIAI know? What I was trying to say is that if I have come across it, then it was not convincing enough to take it as seriously as some people here obviously do.
It looks like I’m not alone. Goertzel, Hanson, Egan, and lots of other people don’t see it either. So what are we missing? What is it that we haven’t read or understood?
Goertzel: I could and will list the errors I see in his arguments (if nobody there has done so first). For now I’ll just say his response to claim #2 seems to conflate humans and AIs. But unless I’ve missed something big, which certainly seems possible, he didn’t make his decision based on those arguments. They don’t seem good enough on their face to convince anyone. For example, I don’t think he could really believe that he and other researchers would unconsciously restrict the AI’s movement in the space of possible minds to the safe area(s), but if we reject that possibility, some version of #4 seems to follow logically from 1 and 2.
Egan: don’t know. What I’ve seen looks unimpressive, though certainly he has reason to doubt ‘transhumanist’ predictions for the near future. (SIAI instead seems to assume that if humans can produce AGI, then either we’ll do so eventually or we’ll die out first. Also, that we could produce artificial X-maximizing intelligence more easily than we can produce artificial nearly-any-other-human-trait, which seems likely based on the tool I use to write this and the history of said tool.) Do you have a particular statement or implied statement of his in mind?
Hanson: maybe I shouldn’t point any of this out, but EY started by pursuing a Heinlein Hero quest to save the world through his own rationality. He then found himself compelled to reinvent democracy and regulation (albeit in a form closely tailored to the case at hand and without any strict logical implications for normal politics). His conservative/libertarian economist friend called these new views wrongheaded despite verbally agreeing that EY should act on those views. Said friend also posted a short essay about “heritage” that allowed him to paint those who disagreed with his particular libertarian vision as egg-headed elitists.
Where did you get those quotes? References?
He wasn’t quoting Goertzel, Egan, and Hanson—though his formatting made it look like he was. He was commenting on your claim that these three “don’t see it”.
Whoops, I’m sorry, never mind.
Sorry, I don’t know what quotes you mean. You can find a link to the “heritage” post in the wiki-compilation of the debate. Though perhaps you meant to reply to someone else?
Never mind, I just skimmed over it and thought you were quoting someone. If you delete your comment I’ll delete this one. I’ll read your original comment again now.
Saying you are ‘Devil’s Advocate’ isn’t an excuse to use bad arguments.
I don’t think I used a bad argument; otherwise I wouldn’t have made it.
You conveyed most of your argument via rhetorical questions.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that. I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI. That is, data and empirical criticism are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that.
“Rhetorical question” is just the name. Asking questions to try to convince people rather than telling them outright is something most people pick up by the time they are 8.
I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI
I think this is true.
. That is
This isn’t. That is, the ‘that is’ doesn’t fit. What educated people will think really isn’t determined by things like the below. (People are stupid, the world is mad, etc.)
data and empirical criticism are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.
I agree with this. Well, not the ‘empirical’ part (that’s hard to do without destroying the universe).
Indeed, what an irony...
You conveyed most of your argument via rhetorical questions.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that.
I’m fighting against giants here, and I’m someone who only mastered elementary school. I believe it should be easy to refute my arguments, show me where I am wrong, or point me to some documents I should read up on. But I just don’t see that happening. I talk to other smart people online as well; that way I was actually able to overcome religion. But seldom have there been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity. Yes, maybe I’m unable to comprehend it right now, I grant you that. Whatever the reason, I’m not convinced and will say so as long as it takes. Of course you don’t need to convince me, but I don’t need to stop questioning either.
Here is a very good comment by Ben Goertzel that pinpoints it:
This is what discussions with SIAI people on the Scary Idea almost always come down to!
The prototypical dialogue goes like this.
SIAI Guy: If you make a human-level AGI using OpenCog, without a provably Friendly design, it will almost surely kill us all.
Ben: Why?
SIAI Guy: The argument is really complex, but if you read Less Wrong you should understand it.
Ben: I read the Less Wrong blog posts. Isn’t there somewhere that the argument is presented formally and systematically?
SIAI Guy: No. It’s really complex, and nobody in-the-know had time to really spell it out like that.
But seldom have there been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity.
I don’t know if there is a persuasive argument about all these risks. The point of all this rationality-improving blogging is that when you debug your thinking, when you can follow long chains of reasoning and feel certain you haven’t made a mistake, when you’re free from motivated cognition (when you can look where the evidence points instead of finding evidence that points where you’re looking!), then you can reason out the risks involved in recursively self-improving, self-modifying, goal-oriented optimizing processes.
My argument is fairly simple -
If humans found it sufficiently useful to wipe chimpanzees off the face of the earth, we could and would do so.
The level of AI I’m discussing is at least as much smarter than us as we are smarter than chimpanzees.