But in a cooperative endeavor like that, who’s going to listen to me explain that I don’t want to change in the way that would most benefit them?
Those of us who endorse respecting individual choices when we can afford to, because we prefer that our individual choices be respected when we can afford it.
I am not in principle opposed to people having all the strengths and none of the weaknesses of multiple types [..] I don’t think that in practice it will work for most people
If you think it will work for some people, but not most, are you in principle opposed to giving whatever-it-is-that-distinguishes-the-people-it-works-for to anyone who wants it?
More broadly: I mostly consider all of this “what would EY do” stuff a distraction; the question that interests me is what I ought to want done and why I ought to want it done, not who or what does it. If large-scale celibacy is a good idea, I want to understand why it’s a good idea. Being told that some authority figure (any authority figure) advocated it doesn’t achieve that. Similarly, if it’s a bad idea, I want to understand why it’s a bad idea.
If you think it will work for some people, but not most, are you in principle opposed to giving whatever-it-is-that-distinguishes-the-people-it-works-for to anyone who wants it?
Whatever-it-is-that-distinguishes-the-people-it-works-for seems to be inherent in the skills in question (that is, the configuration that brings about a certain ability also necessarily brings about a weakness in another area), so I don’t think that’s possible. If it were possible, I can only imagine it taking the form of people being able to shift configurations very rapidly into whatever works best for the situation, and in some cases I find that very implausible. If I’m wrong, sure, why not? If it’s possible, it’s only the logical extension of teaching people to use their strengths and shore up their weaknesses. Since I think this is inherently impossible (though I could be wrong), it doesn’t much matter whether I’m opposed to it or not, but yeah, it’s fine with me.
You make a good point, but I expect that if someone makes AI and uses it to rule the world with the power to modify people, it will be Eliezer Yudkowsky. So whether he would abuse that power matters more than whether my next-door neighbors would if they could, or even what I would do, and what EY wants is at least worth considering, because the failure mode if he does something bad is way too catastrophic.
[if] someone makes AI and uses it to rule the world with the power to modify people, it will be Eliezer Yudkowsky
What makes you think that?
For example, do you think he’s the only person working on building AI powerful enough to change the world? Or that, of the people working on it, he’s the only one competent enough to succeed? Or that, of the people who can succeed, he’s the only one who would “use” the resulting AI to rule the world and modify people? Or something else?
He’s the only person I know of who wants to build an AI that will take over the world and do what he wants. He’s also smart enough to have a chance, which is disturbing.
Have you read his paper on CEV? To the best of my knowledge, that’s the clearest place he’s laid out what he wants an AGI to do, and I wouldn’t really label it “take over the world and do what [Eliezer Yudkowsky] wants” except under a reading of those terms so broad that it drops their typical connotations.
Don’t worry. We are in good hands. Eliezer understands the dilemmas involved and will ensure that we can avoid non-Friendly AI. SI is dedicated to Friendly AI and to the completion of that goal.
I can virtually guarantee you that he’s not the only one who wants to build such an AI. Google, IBM, and the heads of major three-letter government agencies all come to mind as the kind of players who would want to implement their own pet genie, and are actively working toward that goal. That said, it’s possible that EY is the only one who has a chance of success… I personally wouldn’t give him, or any other human, that much credit, but I do acknowledge the possibility.
Thank you. I’ve just updated on that. I now consider it even more likely that the world will be destroyed within my lifetime.
For what it’s worth, I disagree with many (if not most) LessWrongers (LessWrongites? LessWrongoids?) on the subject of the Singularity. I am far from convinced that the Singularity is even possible in principle, and I am fairly certain that, even if it were possible, it would not occur within my lifetime, or my (hypothetical) children’s lifetimes.
EDIT: added a crucial “not” in the last sentence. Oops.
I also think the singularity is much less likely than most LessWrongers do. Which is quite comforting, because my estimated probability for the singularity is still higher than my estimated probability that the problem of friendly AI is tractable.
Just chiming in here because I think the question about the singularity on the LW survey was not well designed to capture the opinion of those who don’t think it is likely to happen at all, so the median LW perception of the singularity may not be what it appears to be.
Yeah… spending time on Less Wrong helps one appreciate, in general, how much existential risk there is, especially from technologies, and how little attention is paid to it. Thinking about the Great Filter will just make everything seem even worse.
A runaway AI might wind up being very destructive, but quite probably not wholly destructive. It seems likely that it would find some of the knowledge humanity has built up over the millennia useful, regardless of its specific goals. In that sense, I think that even if a paperclip optimizer is built and eats the world, we won’t have been wholly forgotten in the way we would be if, e.g., the sun exploded and vaporized our planet. I don’t find this to be much comfort, but how much comfort it offers is a matter of personal taste.
As I mentioned here, I’ve seen a presentation on Watson, and it looks to me like its architecture is compatible with recursive self-improvement (though that is not the immediate goal for it). Clippy does seem rather probable...
One caveat: I tend to overestimate risks. I overestimated the severity of Y2K, and I’ve overestimated a variety of personal risks.
“I see that you’re trying to extrapolate human volition. Would you like some help?” converts the Earth into computronium
Soreff was probably alluding to User:Clippy, someone role-playing a non-FOOMed paperclip maximiser.
Though yours is good too :-)
Yes, I was indeed alluding to User:Clippy. Actually, I should have tweaked the reference, since it is the possibility of a paperclip maximiser that has FOOMed that really represents the threat.
Ah, thanks, that makes sense.