Our benevolent dictator isn’t doing much dictatoring. If I understand correctly that it’s EY, he has a lot more hats to wear, and doesn’t have the time to do LW-managing full time.
Is he willing to improve LW, but not able? Then he is not a dictator. Is he able, but not willing? Then he is not benevolent. Is he both willing and able? Then whence cometh suck? Is he neither willing nor able? Then why call him God?
As with god, if we observe a lack of leadership, it is irrelevant whether we nominally have a god-emperor or not. The solution is always the same: build a new one that will actually do the job we want done.

Okay, that? That was one of the most awesome predicates of which I’ve ever been a subject.
You’re defending yourself against accusations of being a phyg leader over there, and over here, you’re enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god? And not only that, but this might even imply that you endorse the solution that is “always the same” of “building a new one (god-emperor)”.

Have you forgotten Luke’s efforts to fight the perceptions of SI’s arrogance?

That you appear to be encouraging a comment that uses the word god to refer to you in any way, directly or indirectly, is pretty disheartening.
I tend to see a fairly sharp distinction between negative aspects of phyg-leadership and the parts that seem like harmless fun, like having my own volcano island with a huge medieval castle, and sitting on a throne wearing a cape saying in dark tones, “IT IS NOT FOR YOU TO QUESTION MY FUN, MORTAL.” Ceteris paribus, I’d prefer that working environment if offered.
And how are people supposed to make the distinction between your fun and signs of pathological narcissism? You and I both know the world is full of irrationality, and that this place is public. You’ve endured the ravages of the hatchet job and Rationalwiki’s annoying behaviors. This comment could easily be interpreted by them as evidence that you really do fancy yourself a false prophet.
What’s more, I (someone who is not a heartless, self-interested reporter, who thinks you’re brilliant, who appreciates you, and who is not some completely confused person with no serious interest in rationality) am now thinking:
How do I make the distinction between a guy who has an “arrogance problem” and has fun encouraging comments that imply that people think of him as a god vs. a guy with a serious issue?
Try working in system administration for a while. Some people will think you are a god; some people will think you are a naughty child who wants to be seen as a god; and some people will think you are a sweeper. Mostly you will feel like a sweeper … except occasionally when you save the world from sin, death, and hell.
I feel the same way as a web developer. One day I’m being told I’m a genius for suggesting that a technical problem might be solved by changing a port number. The next day, I’m writing a script to compensate for the incompetent failures of a certain vendor.
When people ask me for help, they assume I can fix anything. When they give me a project, they assume they know better how to do it.

The only way to decide whether someone has a serious issue is to read a bunch from them and then see which patterns you find.
And how are people supposed to make the distinction between your fun and signs of pathological narcissism?
I don’t see this as a particular problem in this instance. The responses are, if anything, an indication that he isn’t taking himself too seriously. The more pathologically narcissistic types tend to be more somber about their power and image.

No, if there were a problem here, it would be that the joke was in poor taste: in particular, if some people had been given the impression that Eliezer’s power or narcissism really was corrupting his thinking, that he had begun to use his power arbitrarily on his own whim, or that his arrogance had left him incapable of receiving feedback or of perceiving the consequences his actions have on others or even himself. Basically, jokes about how arrogant and narcissistic one is only work when people don’t perceive you as actually having problems in that regard. If you really do have arrogance problems, then joking that you have them while not acknowledging the problem makes you look grossly out of touch and socially awkward.

For my part, however, I don’t have any direct problem with Eliezer appreciating this kind of reasoning. It does strike me as a tad naive of him, and I do agree that it is the kind of thing that makes Luke’s job harder. Just… as far as PR missteps made by Eliezer go, this seems so utterly trivial as to be barely worth mentioning.
How do I make the distinction between a guy who has an “arrogance problem” and has fun encouraging comments that imply that people think of him as a god vs. a guy with a serious issue?
The way I make such distinctions is to basically ignore ‘superficial arrogance’. I look at the real symptoms. The ones that matter and have potential direct consequences. I look at their ability to comprehend the words of others—particularly those others without the power to ‘force’ them to update. I look at how much care they take in exercising whatever power they do have. I look at how confident they are in their beliefs and compare that to how often those beliefs are correct.
srsly, brah. I think you misunderstood me.

you’re enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god?
I was drawing an analogy to Epicurus on this issue because the structure of the situation is the same, not because anyone perceives (our glorious leader) EY as a god.
And not only that, but this might even imply that you endorse the solution that is “always the same” of “building a new one (god-emperor)”.
I bet he does endorse it. His life’s work is all about building a new god to replace the negligent or nonexistent one that let the world go to shit. I got the idea from him.
My response was more about what interpretations are possible than what interpretation I took.
I was drawing an analogy to Epicurus on this issue because the structure of the situation is the same, not because anyone perceives (our glorious leader) EY as a god.
Okay. There’s a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares.
I bet he does endorse it. His life’s work is all about building a new god to replace the negligent or nonexistent one that let the world go to shit. I got the idea from him.
Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
Okay. There’s a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares.
This is concerning. My intuitions suggest that it’s not a big deal. I infer that you think it’s a big deal. Someone is miscalibrated.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
I don’t know if there’s an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
This is concerning. My intuitions suggest that it’s not a big deal. I infer that you think it’s a big deal. Someone is miscalibrated.
I really like this nice, clear, direct observation.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
Yes, but more relevantly, humanity has a history with persecution—lots of intelligent people and people who want to change the world, from Socrates to Gandhi, have been persecuted.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
My take on the “build a God-like AI” idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don’t have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
I don’t know if there’s an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer “It sounds like what you’re saying is we need to build a God” and Eliezer is like “Why don’t we call it a very powerful optimizing agent?” and grins like he’s just fooled someone and Robert Wright thinks and he’s like “Why don’t we call that a euphemism for God?” which destroys Eliezer’s grin.
If Eliezer’s intentions are to build a God, then he’s far less risk-averse than the type of person who would simply try to avoid being burned at the stake. In that case the problem isn’t that he makes himself look bad...
I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer “It sounds like what you’re saying is we need to build a God” and Eliezer is like “Why don’t we call it a very powerful optimizing agent?” and grins like he’s just fooled someone
Like he’s just fooled someone? I see him talking like he’s patiently humoring an ignorant child who is struggling to distinguish between “Any person who gives presents at Christmas time” and “The literal freaking Santa Claus, complete with magical flying reindeer”. He isn’t acting like he has ‘fooled’ anyone or acting in any way ‘sneaky’.
and Robert Wright thinks and he’s like “Why don’t we call that a euphemism for God?” which destroys Eliezer’s grin.
While I wouldn’t have been grinning previously, whatever my expression had been, it would change in response to that question in the direction of irritation and impatience. The answer to “Why don’t we call that a euphemism for God?” is “Because that’d be wrong and totally muddled thinking”. When your mission is to create an actual very powerful optimization agent, and that—and not gods—is actually what you spend your time researching, then a very powerful optimization agent isn’t a ‘euphemism’ for anything. It’s the actual core goal. Maybe, at a stretch, “God” can be used as a euphemism for “very powerful optimizing agent”, but never the reverse.

I’m not commenting here on the question of whether there is a legitimate PR concern about people pattern-matching to religious themes and having dire, hysterical and murderous reactions. Let’s even assume that kind of PR concern is legitimate for the purpose of this comment. Even then there is a distinct difference between “failure to successfully fool people” and “failure to educate fools”. It would be the latter task that Eliezer has failed at here, and the former charge would be invalid. (I felt the paragraph I quoted was unfair to Eliezer in blurring that distinction.)
I don’t think that an AI that goes FOOM would be exactly the same as any of the “Gods” humanity has been envisioning, and it may not even resemble such a God (especially because, if it were a success, it would theoretically not behave in self-contradictory ways like making sinful people, knowing exactly what they’re going to do, making them do just that, telling them not to act like what they are, and then punishing them for behaving the way it designed them to). I don’t see a reason to believe that it is possible for any intellect to be omniscient, omnipotent or perfect. That includes an AI. These, to me, would be the main differences.
Robert Wright appears to be aware of this, as his specific wording was “It seems to me that in some sense what you’re saying is that we need to build a God.”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
The way that I interpret this question is not “What do we call this thing?” but more “You think we should build a WHAT?” with the connotations of “What are you thinking?” because the salient thing is that building something even remotely similar to a God would be very, very dangerous.
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
Afterward, Robert Wright says that Eliezer is being euphemistic. This perception that Eliezer’s answer was an attempt to substitute nice sounding wording for something awful confirms, to me, that Robert’s intent was not to ask “What word should we use for this?” but was intended more like “You think we should build a WHAT? What are you thinking?”
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and he genuinely thought, because of a gigantic intelligence difference between Robert and himself, that Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means). In this case, I would classify that as a “social skills / character flaw induced faux pas”.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that—I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God and have voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?” I’m assuming that Eliezer interpreted correctly when the salient part of someone’s question is not in its literal wording but in connotations relating to the situation.
This is why it looks, to me, like Eliezer’s intent was to brush him off by choosing to answer this question as if it were a question about what word to use and hoping that Robert didn’t have the nerve to go for the throat with valid and poignant questions like the examples above.
The topic of whether this was an unintentional faux pas or an intentional brush-off isn’t the most important thing here.
The most important questions, in my opinion, are:
“Does Eliezer intend to build something this powerful?”
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
“Do you and I agree/disagree that it’s a good idea to build something this powerful / that it can be controlled?”
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said “in some sense” which means that his question is about something deeper than choosing a word. The wording he’s using is asking something more like “Do you think we should build something similar to a God?”
If forced to use that term and answer the question as you ask it, with a “Yes” or a “No”, then the correct answer would be “No”. He is not trying to create a God; he has done years of work working out what he is trying to create, and it is completely different to a God in nearly all features except “very powerful”. If you insist on that vocabulary you’re going to get “No, I don’t” as an answer. That the artificial intelligence Eliezer would want to create seems to Wright (and perhaps yourself) like it should be described as, considered a euphemism for, or reasoned about as if it is God is a feature of Wright’s lack of domain knowledge.

There is no disingenuity here. Eliezer can honestly say “We should create a very powerful (and carefully designed) optimizing agent” but he cannot honestly say “We should create a God”. (You may begin to understand some of the reasons why there is such a difference when you start considering questions like “Can it be controlled?”. Or at least when you start considering the answers to the same.) So Eliezer gave Wright the chance to get the answer he wanted (“Hell yes, I want to make a very powerful optimising agent!”) rather than the answer the question you suggest would have given him (“Hell no! Don’t create a God! That entails making at least two of the fundamental and critical ethical and practical blunders in FAI design that you probably aren’t able to comprehend yet!”).
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what’s salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked “Are you about to commit murder?” I would NOT accept “Why don’t we call it knife relocation?” as an acceptable answer.
I reject the analogy. Eliezer’s answer isn’t like the knife relocation answer. (If anything the connotations are the reverse. More transparency and candidness rather than less.)
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway), that the reason for Eliezer’s mistake is that he was having one of his arrogant moments and he genuinely thought, because of a gigantic intelligence difference between Robert and himself, that Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means).
It could be that there really is an overwhelming difference in crystallized intelligence between Eliezer and Robert. The question—at least relative to Eliezer’s standards—was moronic. Or at least had connotations of ignorance of salient features of the landscape.
In this case, I would classify that as a “social skills / character flaw induced faux pas”.
There may be a social-skills-related faux pas here—one of those situations where it is usually socially appropriate to say wrong things within an entirely muddled model of reality rather than educate the people you are speaking to. Maybe that means that Eliezer shouldn’t talk to people like Robert. Perhaps he should get someone trained explicitly in spinning webs of eloquent bullshit to optimally communicate with the uneducated. However, the character flaws that I take it you are referring to (Eliezer’s arrogance and so forth) just aren’t at play here.
In my personal interpretation of Eliezer’s behavior, I’m giving him more credit than that
The net amount of credit given is low. You are ascribing a certain intention to Eliezer’s actions where that intention is clearly not achieved. “I infer he is trying to do X and he in fact fails to do X”. In such cases generosity suggests that if they don’t seem to be achieving X, haven’t said X is what they are trying to achieve, and X is inherently lacking in virtue, then by golly maybe they were in fact trying to achieve Y! (Eliezer really isn’t likely to be that actively incompetent at deviousness.)
I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God
You assign a high likelihood to people flipping out (and even persecuting Eliezer) in such a way. Nyan considers it less likely. It may be that Eliezer doesn’t have people (and particularly people of Robert Wright’s intellectual caliber) flip out at him like that.
and have voiced valid and poignant concerns like “Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?” or “Why would you want us imperfect humans to make something so insanely powerful if it’s more or less guaranteed to be flawed?”
The kind of people for whom there is even a remote possibility that it would be useful to bother attempting to explain the answers to such questions are also the kind of people who are capable of asking them without insisting on asking, and then belligerently emphasizing, wrong questions about ‘God’. This is particularly the case with the first of those questions, where the question of ‘controlling’ only comes up because of an intuitive misunderstanding of how one would relate to such an agent—i.e. thinking of it as a “God”, which is something we already intuit as “like a human or mammal but way powerful”.
“Does Eliezer intend to build something this powerful?”
If he can prove safety mathematically then yes, he does.
At around the time I visited Berkeley there was a jest among some of the SingInst folks: “We’re thinking of renaming ourselves from The Singularity Institute For Artificial Intelligence to The Singularity Institute For Or Against Artificial Intelligence Depending On What Seems To Be The Best Altruistic Approach All Things Considered”.

There are risks to creating something this powerful and, in fact, the goal of Eliezer and SIAI isn’t “research AGI”… plenty of researchers work on that. They are focused on Friendliness. Essentially… they are focused on the very dangers that you describe here and are dedicating themselves to combating those dangers.
Note that it is impossible to evaluate a decision to take an action without considering what alternative choice there is. Choosing to dedicate one’s efforts to developing an FAI (“safe and desirable very powerful optimizing agent”) has a very different meaning if the alternative is millennia of peace and tranquility than the same decision to work on FAI if the alternative is “someone is going to create a very powerful optimizing agent anyway but not bother with rigorous safety research”.
“Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?”
If you’re planning to try to control the super-intelligence, you have already lost. The task is one of selecting, from the space of all possible mind designs, a mind that will do things that you want done.
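To make that framing concrete, here is a minimal toy sketch in Python. The names (`MindDesign`, `does_what_we_want`) are entirely hypothetical; this is only an illustration of the selection-versus-control distinction being described, not a claim about how any actual FAI research is structured:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass(frozen=True)
class MindDesign:
    """Toy stand-in for one point in 'mind design space' (hypothetical)."""
    name: str
    goals: frozenset  # what this design would actually optimize for

def select_design(
    candidates: Iterable[MindDesign],
    does_what_we_want: Callable[[MindDesign], bool],
) -> Optional[MindDesign]:
    """Return a design that already satisfies the criterion, or None.

    All of the 'control' lives in the selection criterion, applied before
    anything is built or run; there is no step where an arbitrary mind is
    instantiated first and then restrained afterwards.
    """
    for design in candidates:
        if does_what_we_want(design):
            return design
    return None  # no acceptable design found, so nothing gets built
```

The design choice the comment gestures at is that safety acts as a filter on the search, not as a cage around the result.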
“Do you and I agree/disagree that it’s a good idea to build something this powerful
Estimate: Slightly disagree. The biggest differences in perception may be about what the consequences of inaction are.
/ that it can be controlled?”
Estimate: Disagree significantly. I believe your understanding of likely superintelligence behavior and self-development has too much of an anthropocentric bias. Your anticipations are (in my estimation) strongly influenced by how ethical, intellectual and personal development works in gifted humans.

The above disagreement actually doesn’t necessarily change the overall risk assessment. I just expect the specific technical problems that must be overcome in order to prevent “Super-intelligent rocks fall! Everybody dies.” to be slightly different in nature, probably with more emphasis on abstract mathematical concerns.
I really like this nice, clear, direct observation.
Thank you. I will try to do more of that.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional Christianity. There are a few Christian terrorists left in North America, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
I don’t think that the remains of theistic Christianity could reach an effective military/propaganda arm all the way to Berkeley even if they did somehow misinterpret FAI as an assault on God.

Nontheistic Christianity, which is the ruling religion right now, could flex enough military might to shut down SI, but I can’t think of any way to make them care.
I live in Vancouver, where, as far as I can tell, most people are either non-religious or very tolerant. This may affect my perceptions.
My take on the “build a God-like AI” idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don’t have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
This is a good reaction. It is good to take seriously the threat that an AI could pose. However, the point of Friendly AI is to prevent all that and make sure that if it happens, it is something we would want.
:) You can be as direct as you want to with me. (Normal smilie to prevent the tiny sad moments.)
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional Christianity. There are a few Christian terrorists left in North America, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
Okay, good point. I agree that religion is losing ground. However, I’ve witnessed some pretty creepy stuff coming out of the churches. Some of them are saying the end is near and doing things like holding events to educate people about it. Now, that experience was one that I had in a particular location that happens to be very religious. I’m not sure that it was representative of what the churches are up to in general. I admit ignorance when it comes to what average churches are doing. But if there’s enough end-times kindling being thrown into the pit here, people who were previously losing faith may flare up into zealous Christians with the right spark. Trying to build what might be interpreted as an Antichrist would be quite the spark. The imminent arrival of an Antichrist may be seen as a fulfillment of the end-times prophecies and as a sign that the Christian religion really is true after all.

A lot is at stake here in the mind of the Christian. If it’s not the end of the world, opposing a machine “God” is still going to look like a good idea—it’s dangerous. If it is the end of the world, they’d better get their s— in gear, become all super-religious, and go to battle against Satan, because judgment day is coming and if they don’t, they’re going to be condemned. Being grateful to God and following a bunch of rules is pretty hard, especially when you can’t actually SEE the God in question. How people are responding to the mundane religious stuff shouldn’t be seen as a sign of how they’ll react when something exceptional happens.

Being terrified out of your mind that someone is building a super-intelligent mind is easy. This takes no effort at all. Heck, at least half of LessWrong would probably be terrified in this case. Being extra terrified because of end-times prophecies doesn’t take any thought or effort. And fear will kill their minds, perhaps making religious feelings more likely. That, to me, seems to be a likely possibility in the event that someone attempts to build a machine “God”. You’re seeing a decline in religion and appear to be thinking that it’s going to continue decreasing. I see a decline in religion and think it may continue to decrease, but I also see the potential for the right kinds of things to trigger a conflagration of religious fervor.

There are other memes that add an interesting twist: the Bible told them that a lot of people would lose faith before the Antichrist comes. Their own lack of faith might be taken as evidence that the Bible is correct.
And I have to wonder how Christianity survived things like the plagues that wiped out half of Europe. They must have been pretty disenchanted with God—unless they interpreted it as the end of the world and became too terrified of eternal condemnation to question why God would allow such horrible things to happen.
Perhaps one of the ways the Christianity meme defends itself is to flood the minds of the religious with fear at the exact moments in history when they would have the most reason to question their faith.
Last year’s Gallup poll says that 78% of Americans are Christian. Even if they’ve lost some steam, if the majority still uses that word to self-identify, we should really acknowledge the possibility that some event could trigger zealous reactions.

I have been told that before Hitler came to power, the intelligentsia of Germany was laughing at him, thinking it would never happen. It’s a common flaw of nerds to underestimate the violence and irrationality that the average person is capable of. I think this is because we use ourselves as a model and assume they’ll behave, feel and think a lot more like we do than they actually will. I try to compensate for this bias as much as possible.
I live in Vancouver, where, as far as I can tell, most people are either non-religious or very tolerant.
BTW, where I am (i.e. among twentysomething university students in central Italy) atheists take the piss out of believers waaaaay more often than the other way round.
I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I’ve forgotten.
I’m not sure I’ve heard any detailed analysis of the Friendly AI project specifically in those terms—at least not any that I felt was worth my time to read—but it’s a common trope of commentary on Singularitarianism in general.
No less mainstream a work than Deus Ex, for example, quotes Voltaire’s famous “if God did not exist, it would be necessary to create him” in one of its endings—which revolves around granting a friendly (but probably not Friendly) AI control over the world’s computer networks.
No less mainstream a work than Deus Ex, for example, quotes Voltaire’s famous “if God did not exist, it would be necessary to create him” in one of its endings—which revolves around granting a friendly (but probably not Friendly) AI control over the world’s computer networks.
ROT-13:
Vagrerfgvatyl, va gur raqvat Abeantrfg ersref gb, Uryvbf (na NV) pubbfrf gb hfr W.P. Qragba (gur cebgntbavfg jub fgvyy unf zbfgyl-uhzna cersreraprf) nf vachg sbe n PRI-yvxr cebprff orsber sbbzvat naq znxvat vgfrys (gur zretrq NV naq anab-nhtzragrq uhzna) cuvybfbcure-xvat bs gur jbeyq va beqre gb orggre shysvyy vgf bevtvany checbfr.
over here, you’re enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god?
I have to agree with Eliezer here: this is a terrible standard for evaluating phygishness. Simply put, enjoying that kind of comment does not correlate at all with the harmful features of phygish organizations, social clubs, etc. There are plenty of Internet projects that refer to their most prominent leaders with such titles as God-King, “benevolent dictator” and the like; it has no implication at all.
You have more faith than I do that it will not be intentionally or unintentionally misinterpreted.
Also, I am interpreting that comment within the context of other things: the “arrogance problem” thread, the b - - - - - - k, Eliezer’s dating profile, etc.

What’s not clear is whether you or I are more realistic when it comes to how people are likely to interpret it, not only in a superficial context (like some hatchet-jobbing reporter who knows only some LW gossip), but with no context, or within the context of other things with a similar theme.
Why would you believe that something is always the solution when you already have evidence that it doesn’t always work?

Let’s go to the object level: in the case of God, the fact that God is doing nothing is not evidence that Friendly AI won’t work.

In the case of EY, the supposed benevolent dictator, the fact that he is not doing any benevolent dictatoring is explained by the fact that he has many other things that are more important. That prevents us from learning anything about the general effectiveness of benevolent dictators, and we have to rely on the prior belief that it works quite well.
There are alternatives to monarchy, and an example of a disappointing monarch should suggest that alternatives might be worth considering, or at the very least that appointing a monarch isn’t invariably the answer. That was my only point.