[If] someone makes an AI and uses it to rule the world with the power to modify people, it will be Eliezer Yudkowsky
What makes you think that?
For example, do you think he’s the only person working on building AI powerful enough to change the world?
Or that, of the people working on it, he’s the only one competent enough to succeed?
Or that, of the people who can succeed, he’s the only one who would “use” the resulting AI to rule the world and modify people?
Or something else?
He’s the only person I know of who wants to build an AI that will take over the world and do what he wants. He’s also smart enough to have a chance, which is disturbing.
Have you read his paper on CEV? To the best of my knowledge, that’s the clearest place he’s laid out what he wants an AGI to do, and I wouldn’t really label it “take over the world and do what [Eliezer Yudkowsky] wants”, except under a reading of those terms so broad that it drops their usual connotations.
Don’t worry. We are in good hands. Eliezer understands the dilemmas involved and will ensure that we can avoid non-Friendly AI. SI is dedicated to Friendly AI and to the completion of that goal.
I can virtually guarantee you that he’s not the only one who wants to build such an AI. Google, IBM, and the heads of major three-letter government agencies all come to mind as the kind of players who would want to implement their own pet genie, and are actively working toward that goal. That said, it’s possible that EY is the only one who has a chance of success… I personally wouldn’t give him, or any other human, that much credit, but I do acknowledge the possibility.
Thank you. I’ve just updated on that. I now consider it even more likely that the world will be destroyed within my lifetime.
For what it’s worth, I disagree with many (if not most) LessWrongers (LessWrongites? LessWrongoids?) on the subject of the Singularity. I am far from convinced that the Singularity is even possible in principle, and I am fairly certain that, even if it were possible, it would not occur within my lifetime, or my (hypothetical) children’s lifetimes.
EDIT: added a crucial “not” in the last sentence. Oops.
I also think the singularity is much less likely than most LessWrongers do, which is quite comforting, because my estimated probability of the singularity is still higher than my estimated probability that the problem of Friendly AI is tractable.
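A minimal sketch of the arithmetic behind that comparison, using made-up placeholder numbers rather than anyone’s actual estimates: the catastrophic case requires both a singularity and a Friendly AI problem that goes unsolved, so a lower probability for the singularity shrinks the joint risk even if Friendly AI looks intractable.

    # Illustrative only: these probabilities are hypothetical placeholders,
    # not anyone's actual estimates.
    p_singularity = 0.10     # hypothetical: lower than the typical LW estimate
    p_fai_tractable = 0.05   # hypothetical: judged even less likely than the singularity

    # A bad outcome needs a singularity AND an unsolved Friendly AI problem
    # (treating the two as independent, purely for illustration).
    p_unfriendly_singularity = p_singularity * (1 - p_fai_tractable)
    print(f"P(singularity without Friendly AI) = {p_unfriendly_singularity:.3f}")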
Just chiming in here: I think the question about the singularity on the LW survey was not well designed to capture the opinions of those who don’t think it is likely to happen at all, so the median LW perception of the singularity may not be what it appears.
Yeah… spending time on Less Wrong generally helps one appreciate how much existential risk there is, especially from technologies, and how little attention is paid to it. Thinking about the Great Filter just makes everything seem even worse.
A runaway AI might wind up being very destructive, but quite probably not wholly destructive. It seems likely that it would find some of the knowledge humanity has built up over the millennia useful, regardless of its specific goals. In that sense, I think that even if a paperclip optimizer is built and eats the world, we won’t have been wholly forgotten in the way we would be if, e.g., the sun exploded and vaporized our planet. I don’t find this to be much comfort, but how much comfort it offers is a matter of personal taste.
As I mentioned here, I’ve seen a presentation on Watson, and it looks to me like its architecture is compatible with recursive self-improvement (though that is not the immediate goal for it). Clippy does seem rather probable...
One caveat: I tend to overestimate risks. I overestimated the severity of Y2K, and I’ve overestimated a variety of personal risks.
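For readers unfamiliar with the term, here is a toy sketch of what “recursive self-improvement” means in the abstract. It says nothing about Watson’s actual architecture; “capability” is just a hypothetical scalar standing in for an optimizer’s power. Each cycle, the system applies its current ability to improving itself, so gains compound instead of accumulating linearly.

    # Toy model only: "capability" is a hypothetical scalar, not a description
    # of Watson or any real system.
    def self_improve(capability: float, cycles: int, gain_rate: float = 0.1) -> float:
        """Run `cycles` rounds of self-directed improvement.

        Each round's improvement is proportional to the current capability,
        so the better the system gets, the faster it gets better.
        """
        for _ in range(cycles):
            capability += gain_rate * capability
        return capability

    # Compounding growth: roughly 117x after 50 rounds at 10% per round,
    # versus 6x for a system that only ever adds its original fixed increment.
    print(self_improve(1.0, cycles=50))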
“I see that you’re trying to extrapolate human volition. Would you like some help?” converts the Earth into computronium
Soreff was probably alluding to User:Clippy, someone role-playing a non-FOOMed paperclip maximiser.
Though yours is good too :-)
Yes, I was indeed alluding to User:Clippy. Actually, I should have tweaked the reference, since it is the possibility of a paperclip maximiser that has FOOMed that really represents the threat.
Ah, thanks, that makes sense.