I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer “It sounds like what you’re saying is we need to build a God” and Eliezer is like “Why don’t we call it a very powerful optimizing agent?” and grins like he’s just fooled someone
Like he’s just fooled someone? I see him talking like he’s patiently humoring an ignorant child who is struggling to distinguish between “Any person who gives presents at Christmas time” and “The literal freaking Santa Claus, complete with magical flying reindeer”. He isn’t acting like he has ‘fooled’ anyone or acting in any way ‘sneaky’.
and Robert Wright thinks and he’s like “Why don’t we call that a euphemism for God?” which destroys Eliezer’s grin.
While I wouldn’t have been grinning previously, whatever my expression had been it would have changed in response to that question in the direction of irritation and impatience. The answer to “Why don’t we call that a euphemism for God?” is “Because that would be wrong and totally muddled thinking”. When your mission is to create an actual very powerful optimizing agent, and that (and not gods) is what you actually spend your time researching, then a very powerful optimizing agent isn’t a ‘euphemism’ for anything. It’s the actual core goal. Maybe, at a stretch, “God” can be used as a euphemism for “very powerful optimizing agent”, but never the reverse.
I’m not commenting here on the question of whether there is a legitimate PR concern about people pattern-matching to religious themes and having dire, hysterical and murderous reactions. Let’s even assume that kind of PR concern is legitimate for the purpose of this comment. Even then there is a distinct difference between “failure to successfully fool people” and “failure to educate fools”. It is the latter task that Eliezer has failed at here, and the former charge would be invalid. (I felt the paragraph I quoted was unfair to Eliezer in that it blurred that distinction.)
I don’t think that an AI that goes FOOM would be exactly the same as any of the “Gods” humanity has been envisioning, and it may not even resemble such a God (especially because, if it were a success, it would theoretically not behave in self-contradictory ways like making sinful people, knowing exactly what they’re going to do, making them do just that, telling them not to act like what they are, and then punishing them for behaving the way it designed them to). I don’t see a reason to believe that it is possible for any intellect to be omniscient, omnipotent or perfect. That includes an AI. These, to me, would be the main differences.
Robert Wright appears to be aware of this, as his specific wording was “It seems to me that in some sense what you’re saying is that we need to build a God.”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
The way that I interpret this question is not “What do we call this thing?” but more “You think we should build a WHAT?” with the connotations of “What are you thinking?” because the salient thing is that building something even remotely similar to a God would be very, very dangerous.
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
Afterward, Robert Wright says that Eliezer is being euphemistic. This perception that Eliezer’s answer was an attempt to substitute nice sounding wording for something awful confirms, to me, that Robert’s intent was not to ask “What word should we use for this?” but was intended more like “You think we should build a WHAT? What are you thinking?”
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and genuinely thought, because of a gigantic intelligence difference between Robert and himself, that Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means). In this case, I would classify that as a “social skills / character flaw induced faux pas”.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that. I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God and have voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?” I’m assuming that Eliezer can interpret correctly when the salient part of someone’s question is not in its literal wording but in connotations relating to the situation.
This is why it looks, to me, like Eliezer’s intent was to brush him off by choosing to answer this question as if it were a question about what word to use and hoping that Robert didn’t have the nerve to go for the throat with valid and poignant questions like the examples above.
The topic of whether this was an unintentional faux pas or an intentional brush-off isn’t the most important thing here.
The most important questions, in my opinion, are:
“Does Eliezer intend to build something this powerful?”
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
“Do you and I agree/disagree that it’s a good idea to build something this powerful / that it can be controlled?”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
If forced to use that term and answer the question as you ask it, with a “Yes” or “No”, then the correct answer would be “No”. He is not trying to create a God; he has done years of work working out what he is trying to create, and it is completely different to a God in nearly all features except “very powerful”. If you insist on that vocabulary you’re going to get “No, I don’t” as an answer. That the artificial intelligence Eliezer would want to create seems to Wright (and perhaps yourself) like something that should be described as, considered a euphemism for, or reasoned about as if it were a God is a feature of Wright’s lack of domain knowledge.
There is no disingenuousness here. Eliezer can honestly say “We should create a very powerful (and carefully designed) optimizing agent” but he cannot honestly say “We should create a God”. (You may begin to understand some of the reasons why there is such a difference when you start considering questions like “Can it be controlled?”, or at least when you start considering the answers to the same.) So Eliezer gave Wright the chance to get the answer he wanted (“Hell yes, I want to make a very powerful optimizing agent!”) rather than the answer the question you suggest would have given him (“Hell no! Don’t create a God! That entails making at least two of the fundamental and critical ethical and practical blunders in FAI design that you probably aren’t able to comprehend yet!”)
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
I reject the analogy. Eliezer’s answer isn’t like the knife relocation answer. (If anything the connotations are the reverse. More transparency and candidness rather than less.)
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and genuinely thought, because of a gigantic intelligence difference between Robert and himself, that Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means).
It could be that there really is an overwhelming difference in crystallized intelligence between Eliezer and Robert. The question—at least relative to Eliezer’s standards—was moronic. Or at least had connotations of ignorance of salient features of the landscape.
In this case, I would classify that as a “social skills / character flaw induced faux pas”.
There may be a social-skills-related faux pas here, and it is one where it is usually considered socially appropriate to say wrong things within an entirely muddled model of reality rather than educate the people you are speaking to. Maybe that means that Eliezer shouldn’t talk to people like Robert. Perhaps he should get someone trained explicitly in spinning webs of eloquent bullshit to communicate optimally with the uneducated. However, the character flaws that I take it you are referring to (Eliezer’s arrogance and so forth) just aren’t at play here.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that
The net amount of credit given is low. You are ascribing a certain intention to Eliezer’s actions where that intention is clearly not achieved: “I infer he is trying to do X and he in fact fails to do X”. In such cases generosity suggests that if they don’t seem to be achieving X, haven’t said X is what they are trying to achieve, and X is inherently lacking in virtue, then by golly maybe they were in fact trying to achieve Y! (Eliezer really isn’t likely to be that actively incompetent at deviousness.)
I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God
You assign a high likelihood to people flipping out (and even persecuting Eliezer) in such a way. Nyan considers it less likely. It may be that Eliezer doesn’t have people (and particularly people of Robert Wright’s intellectual caliber) flip out at him like that.
and have voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?”
The kind of people to whom there is even a remote possibility that it would be useful to bother explaining the answers to such questions are also the kind of people who are capable of asking them without insisting on asking, and then belligerently emphasizing, wrong questions about ‘God’. This is particularly the case with the first of those questions, where the issue of ‘controlling’ only comes up because of an intuitive misunderstanding of how one would relate to such an agent, i.e. thinking of it as a “God”, which is something we already intuit as “like a human or mammal but way more powerful”.
“Does Eliezer intend to build something this powerful?”
If he can prove safety mathematically then yes, he does.
At around the time I visited Berkeley there was a jest among some of the SingInst folks “We’re thinking of renaming ourselves from The Singularity Institute For Artificial Intelligence to The Singularity Institute For Or Against Artificial Intelligence Depending On What Seems To Be The Best Altruistic Approach All Things Considered”.
There are risks to creating something this powerful and, in fact, the goal of Eliezer and SIAI isn’t “research AGI”… plenty of researchers work on that. They are focused on Friendliness. Essentially… they are focused on the very dangers that you describe here and are dedicating themselves to combating those dangers.
Note that it is impossible to evaluate a decision to take an action without considering what the alternative choices are. Choosing to dedicate one’s efforts to developing an FAI (a “safe and desirable very powerful optimizing agent”) has a very different meaning if the alternative is millennia of peace and tranquility than if the alternative is “someone is going to create a very powerful optimizing agent anyway but not bother with rigorous safety research”.
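To make that point concrete, here is a minimal toy sketch (my own illustration, not anything from the original exchange) of how the same decision can score very differently depending on which counterfactual baseline is assumed. Every probability and utility in it is an invented placeholder.

```python
# Toy decision comparison: the value of "dedicate effort to FAI research"
# depends on what you assume would happen otherwise. All numbers below are
# invented placeholders, used only to illustrate the structure of the argument.

def expected_value(outcomes):
    """Expected utility over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# The decision being evaluated (toy numbers).
work_on_fai = expected_value([
    (0.5, +50.0),   # research succeeds: safe, beneficial optimizer
    (0.5, -100.0),  # research fails: catastrophe anyway
])

# Counterfactual A: the alternative is millennia of peace and tranquility.
baseline_peace = expected_value([(1.0, 0.0)])

# Counterfactual B: someone builds a very powerful optimizer anyway,
# with no rigorous safety research.
baseline_unsafe = expected_value([
    (0.9, -100.0),  # unaligned optimizer: catastrophe
    (0.1, 0.0),     # lucky near-miss
])

print("FAI work vs. peaceful baseline:  ", work_on_fai - baseline_peace)    # -25.0
print("FAI work vs. unsafe-AGI baseline:", work_on_fai - baseline_unsafe)   # +65.0
```

The same action looks like a bad trade against the peaceful baseline and a clearly good one against the unsafe-AGI baseline; only the assumed alternative changed.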
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
If you’re planning to try to control the super-intelligence you have already lost. The task is to select, from the space of all possible mind designs, a mind that will do things that you want done.
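As a rough illustration of that distinction, here is a toy sketch (entirely my own, with an invented design space and invented scores, not a description of any real proposal) contrasting “build the most capable thing and try to rein it in afterwards” with “select, from the outset, a design whose goals already match what you want done”.

```python
# Toy contrast between "control it afterwards" and "select for what you want
# done from the start". The design space and its scores are invented for
# illustration; nothing here describes any real system.

from dataclasses import dataclass

@dataclass
class MindDesign:
    name: str
    capability: float      # how powerful an optimizer it is (0..1)
    goal_alignment: float  # how closely its goals match what we want done (0..1)

design_space = [
    MindDesign("paperclip_maximizer", capability=0.99, goal_alignment=0.00),
    MindDesign("carefully_specified_optimizer", capability=0.95, goal_alignment=0.98),
    MindDesign("weak_but_obedient_tool", capability=0.30, goal_alignment=0.90),
]

# Strategy 1: pick the most capable design, then hope to constrain it later.
most_capable = max(design_space, key=lambda m: m.capability)

# Strategy 2: restrict attention to designs whose goals already match,
# then take the most capable of those.
aligned = [m for m in design_space if m.goal_alignment >= 0.95]
selected = max(aligned, key=lambda m: m.capability) if aligned else None

print("Chosen by raw capability:", most_capable.name)
print("Chosen from the aligned subset:", selected.name if selected else "none")
```

The point is only that the selection criterion lives in the choice of design, not in some after-the-fact restraint applied to whatever happened to be most powerful.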
“Do you and I agree/disagree that it’s a good idea to build something this powerful
Estimate: Slightly disagree. The biggest differences in perception may be surrounding what the consequences of inaction are.
/ that it can be controlled?”
Estimate: Disagree significantly. I believe your understanding of likely superintelligence behavior and self development has too much of an anthropocentric bias. Your anticipations are (in my estimation) strongly influenced by how ethical, intellectual and personal development works in gifted humans.
The above disagreement doesn’t necessarily change the overall risk assessment. I just expect the specific technical problems that must be overcome in order to prevent “Super-intelligent rocks fall! Everybody dies.” to be slightly different in nature, probably with more emphasis on abstract mathematical concerns.