Okay. There’s a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares.
This is concerning. My intuitions suggest that it’s not a big deal. I infer that you think it’s a big deal. Someone is miscalibrated.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
I don’t know if there’s an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
This is concerning. My intuitions suggest that it’s not a big deal. I infer that you think it’s a big deal. Someone is miscalibrated.
I really like this nice, clear, direct observation.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
Yes, but more relevantly, humanity has a history with persecution—lots of intelligent people and people who want to change the world, from Socrates to Gandhi, have been persecuted.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
My take on the “build a God-like AI” idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don’t have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
I don’t know if there’s an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer “It sounds like what you’re saying is we need to build a God” and Eliezer is like “Why don’t we call it a very powerful optimizing agent?” and grins like he’s just fooled someone and Robert Wright thinks and he’s like “Why don’t we call that a euphemism for God?” which destroys Eliezer’s grin.
If Eliezer’s intentions are to build a God, then he’s far less risk-averse than the type of person who would simply try to avoid being burned at the stake. In that case the problem isn’t that he makes himself look bad...
I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer “It sounds like what you’re saying is we need to build a God” and Eliezer is like “Why don’t we call it a very powerful optimizing agent?” and grins like he’s just fooled someone
Like he’s just fooled someone? I see him talking like he’s patiently humoring an ignorant child who is struggling to distinguish between “Any person who gives presents at Christmas time” and “The literal freaking Santa Claus, complete with magical flying reindeer”. He isn’t acting like he has ‘fooled’ anyone or acting in any way ‘sneaky’.
and Robert Wright thinks and he’s like “Why don’t we call that a euphemism for God?” which destroys Eliezer’s grin.
While I wouldn’t have been grinning previously, whatever my expression had been, it would change in response to that question in the direction of irritation and impatience. The answer to “Why don’t we call that a euphemism for God?” is “Because that’d be wrong and totally muddled thinking”. When your mission is to create an actual very powerful optimization agent, and that—and not gods—is actually what you spend your time researching, then a very powerful optimization agent isn’t a ‘euphemism’ for anything. It’s the actual core goal. Maybe, at a stretch, “God” can be used as a euphemism for “very powerful optimizing agent”, but never the reverse.
I’m not commenting here on the question of whether there is a legitimate PR concern regarding people pattern-matching to religious themes and having dire, hysterical and murderous reactions. Let’s even assume that kind of PR concern is legitimate for the purpose of this comment. Even then there is a distinct difference between “failure to successfully fool people” and “failure to educate fools”. It would be the latter task that Eliezer has failed at here, and the former charge would be invalid. (I felt the paragraph I quoted was unfair to Eliezer with respect to blurring that distinction.)
I don’t think that an AI that goes FOOM would be exactly the same as any of the “Gods” humanity has been envisioning, and it may not even resemble such a God (especially because, if it were a success, it would theoretically not behave in self-contradictory ways like making sinful people, knowing exactly what they’re going to do, making them do just that, telling them not to act like what they are and then punishing them for behaving the way it designed them to). I don’t see a reason to believe that it is possible for any intellect to be omniscient, omnipotent or perfect. That includes an AI. These, to me, would be the main differences.
Robert Wright appears to be aware of this, as his specific wording was “It seems to me that in some sense what you’re saying is that we need to build a God.”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
The way that I interpret this question is not “What do we call this thing?” but more “You think we should build a WHAT?” with the connotations of “What are you thinking?” because the salient thing is that building something even remotely similar to a God would be very, very dangerous.
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
Afterward, Robert Wright says that Eliezer is being euphemistic. This perception that Eliezer’s answer was an attempt to substitute nice sounding wording for something awful confirms, to me, that Robert’s intent was not to ask “What word should we use for this?” but was intended more like “You think we should build a WHAT? What are you thinking?”
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and he genuinely thought, because of a gigantic intelligence difference between Robert and himself, that Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means). In this case, I would classify that as a “social skills / character flaw induced faux pas”.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that—I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God and have voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?” I’m assuming that Eliezer interpreted correctly when the salient part of someone’s question is not in its literal wording but in connotations relating to the situation.
This is why it looks, to me, like Eliezer’s intent was to brush him off by choosing to answer this question as if it were a question about what word to use and hoping that Robert didn’t have the nerve to go for the throat with valid and poignant questions like the examples above.
The topic of whether this was an unintentional faux pas or an intentional brush-off isn’t the most important thing here.
The most important questions, in my opinion, are:
“Does Eliezer intend to build something this powerful?”
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
“Do you and I agree/disagree that it’s a good idea to build something this powerful / that it can be controlled?”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
If forced to use that term and answer the question as you ask it, with a “Yes” or “No”, then the correct answer would be “No”. He is not trying to create a God; he has done years of work working out what he is trying to create, and it is completely different to a God in nearly all features except “very powerful”. If you insist on that vocabulary you’re going to get “No, I don’t” as an answer. That the artificial intelligence Eliezer would want to create seems to Wright (and perhaps yourself) like it should be described as, considered a euphemism for, or reasoned about as if it is God is a feature of Wright’s lack of domain knowledge.
There is no disingenuity here. Eliezer can honestly say “We should create a very powerful (and carefully designed) optimizing agent” but he cannot honestly say “We should create a God”. (You may begin to understand some of the reasons why there is such a difference when you start considering questions like “Can it be controlled?”. Or at least when you start considering the answers to the same.) So Eliezer gave Wright the chance to get the answer he wanted (“Hell yes, I want to make a very powerful optimising agent!”) rather than the answer the question you suggest would have given him (“Hell no! Don’t create a God! That entails making at least two of the fundamental and critical ethical and practical blunders in FAI design that you probably aren’t able to comprehend yet!”)
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
I reject the analogy. Eliezer’s answer isn’t like the knife relocation answer. (If anything the connotations are the reverse. More transparency and candidness rather than less.)
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and he genuinely thought, because of a gigantic intelligence difference between Robert and himself, that Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means).
It could be that there really is an overwhelming difference in crystallized intelligence between Eliezer and Robert. The question—at least relative to Eliezer’s standards—was moronic. Or at least had connotations of ignorance of salient features of the landscape.
In this case, I would classify that as a “social skills / character flaw induced faux pas”.
There may be a social skills related faux pas here—and it is one where it is usually socially appropriate to say wrong things in an entirely muddled model of reality rather than educate the people you are speaking to. Maybe that means that Eliezer shouldn’t talk to people like Robert. Perhaps he should get someone trained explicitly in spinning webs of eloquent bullshit to optimally communicate with the uneducated. However the character flaws that I take it you are referring to, Eliezer’s arrogance and so forth, just aren’t at play here.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that
The net amount of credit given is low. You are ascribing a certain intention to Eliezer’s actions where that intention is clearly not achieved. “I infer he is trying to do X and he in fact fails to do X”. In such cases generosity suggests that if they don’t seem to be achieving X, haven’t said X is what they are trying to achieve, and X is inherently lacking in virtue, then by golly maybe they were in fact trying to achieve Y! (Eliezer really isn’t likely to be that actively incompetent at deviousness.)
I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God
You assign a high likelihood to people flipping out (and even persecuting Eliezer) in such a way. Nyan considers it less likely. It may be that Eliezer doesn’t have people (and particularly people of Robert Wright’s intellectual caliber) flip out at him like that.
and have voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?”
The kind of people to whom there is the remote possibility that it would be useful to even bother to attempt to explain the answers to such questions are also the kind of people who are capable of asking them without insisting on asking, and then belligerently emphasizing, wrong questions about ‘God’. This is particularly the case with the first of those questions, where the question of ‘controlling’ only comes up because of an intuitive misunderstanding of how one would relate to such an agent—i.e. thinking of it as a “God”, which is something we already intuit as “like a human or mammal but way powerful”.
“Does Eliezer intend to build something this powerful?”
If he can prove safety mathematically then yes, he does.
At around the time I visited Berkeley there was a jest among some of the SingInst folks “We’re thinking of renaming ourselves from The Singularity Institute For Artificial Intelligence to The Singularity Institute For Or Against Artificial Intelligence Depending On What Seems To Be The Best Altruistic Approach All Things Considered”.
There are risks to creating something this powerful and, in fact, the goal of Eliezer and SIAI isn’t “research AGI”… plenty of researchers work on that. They are focused on Friendliness. Essentially… they are focused on the very dangers that you describe here and are dedicating themselves to combating those dangers.
Note that it is impossible to evaluate a decision to take an action without considering what alternative choice there is. Choosing to dedicate one’s efforts to developing an FAI (“safe and desirable very powerful optimizing agent”) has a very different meaning if the alternative is millennia of peace and tranquility than the same decision to work on FAI if the alternative is “someone is going to create a very powerful optimizing agent anyway but not bother with rigorous safety research”.
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
If you’re planning to try to control the super-intelligence you have already lost. The task is one of selecting, from the space of all possible mind designs, a mind that will do things that you want done.
“Do you and I agree/disagree that it’s a good idea to build something this powerful
Estimate: Slightly disagree. The biggest differences in perception may be surrounding what the consequences of inaction are.
/ that it can be controlled?”
Estimate: Disagree significantly. I believe your understanding of likely superintelligence behavior and self development has too much of an anthropocentric bias. Your anticipations are (in my estimation) strongly influenced by how ethical, intellectual and personal development works in gifted humans.
The above disagreement actually doesn’t necessarily change the overall risk assessment. I just expect the specific technical problems that must be overcome in order to prevent “Super-intelligent rocks fall! Everybody dies.” to be slightly different in nature, probably with more emphasis on abstract mathematical concerns.
I really like this nice, clear, direct observation.
Thank you. I will try to do more of that.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional Christianity. There are a few Christian terrorists left in North America, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
I don’t think that the remains of theistic Christianity could reach an effective military/propaganda arm all the way to Berkeley even if they did somehow misinterpret FAI as an assault on God.
Nontheistic Christianity, which is the ruling religion right now, could flex enough military might to shut down SI, but I can’t think of any way to make them care.
I live in Vancouver, where as far as I can tell, most people are either non-religious, or very tolerant. This may affect my perceptions.
My take on the “build a God-like AI” idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don’t have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
This is a good reaction. It is good to take seriously the threat that an AI could pose. However, the point of Friendly AI is to prevent all that and make sure that if it happens, it is something we would want.
:) You can be as direct as you want to with me. (Normal smilie to prevent the tiny sad moments.)
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional Christianity. There are a few Christian terrorists left in North America, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
Okay, good point. I agree that religion is losing ground. However, I’ve witnessed some pretty creepy stuff coming out of the churches. Some of them are saying the end is near and doing things like holding events to educate people about it. Now, that experience was one that I had in a particular location which happens to be very religious. I’m not sure that it was representative of what the churches are up to in general. I admit ignorance when it comes to what average churches are doing. But if there’s enough end-times kindling being thrown into the pit here, people who were previously losing faith may flare up into zealous Christians with the right spark. Trying to build what might be interpreted as an Antichrist would be quite the spark. The imminent arrival of an Antichrist may be seen as a fulfillment of the end-times prophecies and as a sign that the Christian religion really is true after all.
A lot is at stake here in the mind of the Christian. If it’s not the end of the world, opposing a machine “God” is still going to look like a good idea—it’s dangerous. If it is the end of the world, they’d better get their s— in gear and become all super-religious and go to battle against Satan, because judgment day is coming and if they don’t, they’re going to be condemned. Being grateful to God and following a bunch of rules is pretty hard, especially when you can’t actually SEE the God in question. How people are responding to the mundane religious stuff shouldn’t be seen as a sign of how they’ll react when something exceptional happens.
Being terrified out of your mind that someone is building a super-intelligent mind is easy. This takes no effort at all. Heck, at least half of LessWrong would probably be terrified in this case. Being extra terrified because of end-times prophecies doesn’t take any thought or effort. And fear will kill their minds, perhaps making religious feelings more likely. That, to me, seems to be a likely possibility in the event that someone attempts to build a machine “God”. You’re seeing a decline in religion and appear to be thinking that it’s going to continue decreasing. I see a decline in religion and think it may continue to decrease, but I also see the potential for the right kinds of things to trigger a conflagration of religious fervor.
There are other memes that add an interesting twist: the Bible told them that a lot of people would lose faith before the Antichrist comes. Their own lack of faith might be taken as evidence that the Bible is correct.
And I have to wonder how Christianity survived things like the plagues that wiped out half of Europe. They must have been pretty disenchanted with God—unless they interpreted it as the end of the world and became too terrified of eternal condemnation to question why God would allow such horrible things to happen.
Perhaps one of the ways the Christianity meme defends itself is to flood the minds of the religious with fear at the exact moments in history when they would have the most reason to question their faith.
Last year’s Gallup poll says that 78% of Americans are Christian. Even if they’ve lost some steam, if the majority still uses that word to self-identify, we should really acknowledge the possibility that some event could trigger zealous reactions.
I have been told that before Hitler came to power, the intelligentsia of Germany was laughing at him, thinking it would never happen. It’s a common flaw of nerds to underestimate the violence and irrationality that the average person is capable of. I think this is because we use ourselves as a model and think they’ll behave, feel and think a lot more like we do than they actually will. I try to compensate for this bias as much as possible.
I live in Vancouver, where as far as I can tell, most people are either non-religious, or very tolerant.
BTW, where I am (i.e. among twentysomething university students in central Italy) atheists take the piss out of believers waaaaay more often than the other way round.
I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
I’m not sure I’ve heard any detailed analysis of the Friendly AI project specifically in those terms—at least not any that I felt was worth my time to read—but it’s a common trope of commentary on Singularitarianism in general.
No less mainstream a work than Deus Ex, for example, quotes Voltaire’s famous “if God did not exist, it would be necessary to invent him” in one of its endings—which revolves around granting a friendly (but probably not Friendly) AI control over the world’s computer networks.
No less mainstream a work than Deus Ex, for example, quotes Voltaire’s famous “if God did not exist, it would be necessary to invent him” in one of its endings—which revolves around granting a friendly (but probably not Friendly) AI control over the world’s computer networks.
ROT-13:
Vagrerfgvatyl, va gur raqvat Abeantrfg ersref gb, Uryvbf (na NV) pubbfrf gb hfr W.P. Qragba (gur cebgntbavfg jub fgvyy unf zbfgyl-uhzna cersreraprf) nf vachg sbe n PRI-yvxr cebprff orsber sbbzvat naq znxvat vgfrys (gur zretrq NV naq anab-nhtzragrq uhzna) cuvybfbcure-xvat bs gur jbeyq va beqre gb orggre shysvyy vgf bevtvany checbfr.
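(If you’d rather decode the spoiler above locally than paste it into an online tool, here is a minimal sketch. It assumes Python 3, whose standard codecs module includes a 'rot_13' text transform; the string shown is truncated so the spoiler stays obscured on the page.)

```python
import codecs

# First part of the ROT-13 spoiler above (truncated here to keep it obscured).
spoiler = "Vagrerfgvatyl, va gur raqvat Abeantrfg ersref gb, Uryvbf (na NV) pubbfrf gb hfr..."

# ROT-13 is its own inverse: applying the same 13-letter rotation decodes it.
print(codecs.decode(spoiler, "rot_13"))
```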