You conveyed most of your argument via rhetorical questions.
Wow, you overestimate my education and maybe my intelligence here. I have no formal education beyond primary school. I haven’t taken a rhetoric course or anything like that.
I’m fighting against giants here, as someone who only finished elementary school. It should be easy to refute my arguments, show me where I am wrong, or point me to some documents I should read up on. But I just don’t see that happening. I talk to other smart people online as well; that is how I was actually able to overcome religion. But I have seldom encountered people less persuasive than you when it comes to the risks associated with artificial intelligence and the technological singularity. Yes, maybe I’m unable to comprehend it right now; I grant you that. Whatever the reason, I’m not convinced, and I will say so as long as it takes. Of course you don’t need to convince me, but I don’t need to stop questioning either.
Here is a very good comment by Ben Goertzel that pinpoints it:
This is what discussions with SIAI people on the Scary Idea almost always come down to!
The prototypical dialogue goes like this.
SIAI Guy: If you make a human-level AGI using OpenCog, without a provably Friendly design, it will almost surely kill us all.
Ben: Why?
SIAI Guy: The argument is really complex, but if you read Less Wrong you should understand it.
Ben: I read the Less Wrong blog posts. Isn’t there somewhere that the argument is presented formally and systematically?
SIAI Guy: No. It’s really complex, and nobody in the know has had time to really spell it out like that.
But I have seldom encountered people less persuasive than you when it comes to the risks associated with artificial intelligence and the technological singularity.
I don’t know if there is a persuasive argument about all these risks. The point of all this rationality-improving blogging is that when you debug your thinking, when you can follow long chains of reasoning and feel certain you haven’t made a mistake, when you are free from motivated cognition, when you can look where the evidence points instead of finding evidence that points where you’re looking, then you can reason out the risks involved in recursively self-improving, self-modifying, goal-oriented optimizing processes.
Indeed, what an irony...
My argument is fairly simple -
If humans found it sufficiently useful to wipe chimpanzees off the face of the earth, we could and would do so.
The level of AI I’m discussing is at least as much smarter than us as we are smarter than chimpanzees.