What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons—this is ultimately a quest for power.
I’ve been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it’s OK because they’ve written papers about “CEV” and are therefore the good guys? He who can save the world can control it. I don’t trust anyone with this kind of power, and I am deeply suspicious of any small group of intelligent people that is seeking power in this way.
Am I paranoid? Absolutely. I know too much about recent human history and the horrific failures of other grandiose intellectual projects to be anything else. Call me crazy, but I firmly believe that building intelligent machines is all about power, and that everything else (i.e. most of this site) is conversation.
But if it comes down to Us or Them, I’m with Them. You have been warned.
That’s from the document where Yudkowsky described his “transfer of allegiance”.
What puzzles me is how the outfit gets any support. I mean, they are a secretive, closed-source machine intelligence outfit that makes no secret of their plan to take over the world. To me, that is like writing BAD GUY in big, black letters on your forehead.
The whole “He-he, let’s construct machine intelligence in our basement” attitude is like something out of Tintin.
Maybe the way to understand the phenomenon is as a personality cult.
That’s how it strikes me also. To me, Yudkowsky has most of the traits of a megalomaniacal supervillain, but I don’t hold that against him. I will give LessWrong this much credit: they still allow me to post here, unlike Anissimov, who simply banned me outright from his blog.
I’m pretty sure Eliezer is consciously riffing on some elements of the megalomaniacal supervillain archetype; at the very least, he name-checks the archetype here and here in somewhat favorable terms. There are any number of reasons why he might be doing so, ranging from pretty clever memetic engineering to simply thinking it’s fun or cool. As you might be implying, though, that doesn’t make him megalomaniacal or a supervillain; we live in a world where bad guys aren’t easily identified by waxed mustaches and expansive mannerisms.
Good thing, too; I lost my goatee less than a year ago.
I expect it helps to have your content come up first if people search for your name and the word “supervillain”. Currently 3 of the top 4 posts for those search terms are E.Y. posts.
Keep your friends close...
What. That quote seems to be directly at odds with the entire idea of “Friendly AI”. And of course it is, as a later version of Eliezer refuted it:
I’m also not sure it makes sense to call SIAI a “closed-source” machine intelligence outfit, given that I’m pretty sure there’s no code yet.
WTF? It says right at the top of the page:
Since the quote is obsolete, as nhamann pointed out and as it says right at the top of the page, maybe you are being struck wrong.