Saying you are ‘Devil’s Advocate’ isn’t an excuse to use bad arguments.
I don’t think I used a bad argument; otherwise I wouldn’t have made it.
You conveyed most of your argument via rhetorical questions.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that. I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI. That is, data and empirical criticism are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that.
“Rhetorical question” is just the name. Asking questions to try to convince people, rather than telling them outright, is something most people pick up by the time they are 8.
I honestly believe that what I have stated would be the opinion of a lot of educated people outside of this community if they came across the arguments on this site and by the SIAI
I think this is true.
. That is
This isn’t. That is, the ‘that is’ doesn’t fit. What educated people will think really isn’t determined by things like the below. (People are stupid, the world is mad, etc.)
data and empirical criticism are missing, given the extensive use of the idea of AI going FOOM to justify all kinds of further argumentation.
I agree with this. Well, not the ‘empirical’ part (that’s hard to do without destroying the universe).
You conveyed most of your argument via rhetorical questions.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education except primary school. I haven’t taken a rhetoric course or anything like that.
I’m fighting against giants here, as someone who only mastered elementary school. I believe it should be easy to refute my arguments or show me where I am wrong, to point me to some documents I should read up on. But I just don’t see that happening. I talk to other smart people online as well; that is how I was actually able to overcome religion. But seldom have there been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity. Yes, maybe I’m unable to comprehend it right now, I grant you that. Whatever the reason, I’m not convinced, and I will say so as long as it takes. Of course you don’t need to convince me, but I don’t need to stop questioning either.
Here is a very good comment by Ben Goertzel that pinpoints it:
This is what discussions with SIAI people on the Scary Idea almost always come down to!
The prototypical dialogue goes like this.
SIAI Guy: If you make a human-level AGI using OpenCog, without a provably Friendly design, it will almost surely kill us all.
Ben: Why?
SIAI Guy: The argument is really complex, but if you read Less Wrong you should understand it.
Ben: I read the Less Wrong blog posts. Isn’t there somewhere that the argument is presented formally and systematically?
SIAI Guy: No. It’s really complex, and nobody in-the-know had time to really spell it out like that.
But seldom have there been people less persuasive than you when it comes to risks associated with artificial intelligence and the technological singularity.
I don’t know if there is a persuasive argument about all these risks. The point of all this rationality-improving blogging is that when you debug your thinking, when you can follow long chains of reasoning and feel certain you haven’t made a mistake, when you’re free from motivated cognition, when you can look where the evidence points instead of finding evidence that points where you’re looking, then you can reason out the risks involved in recursively self-improving, self-modifying, goal-oriented optimizing processes.
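For readers unfamiliar with what ‘AI going FOOM’ refers to in this exchange, here is a minimal numerical sketch. The growth rule and every parameter in it are invented purely for illustration (this is nobody’s actual model, least of all SIAI’s); it only shows why capability that feeds back into the rate of self-improvement is argued to behave differently from capability improved from outside.

```python
# Toy sketch of the disputed "FOOM" dynamic. The growth rule and all
# numbers are invented for illustration; this is not a model anyone
# in the thread endorses or that SIAI has published.

def improvement_trajectory(power=1.0, feedback=0.0, steps=30):
    """Simulate capability over `steps` rounds of improvement.

    feedback = 0.0: each round adds a fixed increment, like a tool
                    being improved by outside engineers.
    feedback > 1.0: each round's gain scales superlinearly with
                    current capability, because the improver was
                    itself improved in earlier rounds.
    """
    trajectory = [power]
    for _ in range(steps):
        power += 0.1 * power ** feedback  # gain depends on current capability
        trajectory.append(power)
    return trajectory

linear = improvement_trajectory(feedback=0.0)     # steady external improvement
recursive = improvement_trajectory(feedback=1.5)  # self-improvement feedback

print(f"after 30 rounds, no feedback:   {linear[-1]:.3g}")
print(f"after 30 rounds, with feedback: {recursive[-1]:.3g}")
# Same start, same 30 rounds: roughly 4 versus roughly 6e+15.
```

With feedback at 0 growth is linear; at exactly 1.0 it would merely be exponential; above 1.0 the gains compound on themselves and diverge, which is the ‘FOOM’ shape. Note that this sketch takes no side in the dispute: nothing in it says what the real-world feedback exponent is, and that is precisely the missing empirical data the thread is arguing about.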
I agree with this. Well, not the ‘empirical’ part (that’s hard to do without destroying the universe).
Indeed, what an irony...
My argument is fairly simple:
If humans found it sufficiently useful to wipe chimpanzees off the face of the earth, we could and would do so.
The level of AI I’m discussing is at least as much smarter than us as we are smarter than chimpanzees.