Another source of perspective is the fact that Eliezer turned research into engineering safety mechanisms for advanced machine agents into a seriously substantial movement, and he did this by blogging about rationality for two years and then writing a Harry Potter fanfiction over the course of the succeeding five years.
Yet another source of perspective is reading the documents he wrote on the subject around the turn of the millennium, circa the founding of SIAI. They can only be described as ‘hilarious’. There were specs for a programming language that would by its design ‘do what I mean’ (which make my programmer friends laugh), complicated AI architectures, and ideas for the social engineering they would do with the gigadollars that would be rolling in to bring about the singularity by 2010, so as to avoid the apocalyptic Nanowar that was coming.
Well, Eliezer’s about… what, 35? Not far off that, anyway. I’m sure I wrote some stuff that was at least that embarrassing when I was 20, though it wouldn’t have been under my own name, or wouldn’t have had any public exposure to speak of, or both.
I just want to note we’re not discussing laser-mounted mosquito-terminating drones anymore. That’s fine. Anyway, I’m a bit older than Eliezer was when he founded the Singularity Institute for Artificial Intelligence. While starting a non-profit organization at that age seems impressive to me, I’d guess you can slap just about any name you like on it once it’s been legally incorporated. The SIAI doesn’t seem to have achieved much in the first few years of its operation.
Based on their history as given on the Wikipedia page for the Machine Intelligence Research Institute, it seems to me the organization’s achievements are about as notable as you’d expect for the time it’s been around. For several years, as the Singularity Institute, they also ran the Singularity Summit, which they eventually sold as a property to Singularity University for one million dollars. Eliezer Yudkowsky contributed two chapters to Global Catastrophic Risks in 2008, at the age of 28, without having completed either secondary school or a university education.
On the other hand, MIRI has made serious mistakes in operations, research, and outreach over its history. Eliezer Yudkowsky is obviously an impressive person for various reasons. I think the conclusion is that Eliezer sometimes assumes he’s enough of a ‘rationalist’ that he can get away with being lazy about how he plans or portrays his ideas. He doesn’t seem to be much of a communications consequentialist, and he seems reluctant to declare mea culpa when he makes those sorts of mistakes. All else being equal, especially if one hasn’t tallied Eliezer’s track record, we should remain skeptical of plans of his that rest on shoddy grounds. I also don’t believe we should take his bonus requests and ideas at the end of the post seriously.
There were specs for a programming language that would by its design ‘do what I mean’ (which make my programmer friends laugh), complicated AI architectures, and ideas for the social engineering they would do with the gigadollars that would be rolling in to bring about the singularity by 2010, so as to avoid the apocalyptic Nanowar that was coming.
Flare (the language) didn’t sound that dumb to me. My impression wasn’t that it would inherently ‘do what I mean’, but that it would somehow be both machine- and human-readable, so that it would be easy to run advanced optimising compilers over it, and so that it would later provide a natural basis for an AI that could rewrite its own source code.
Looking back on it, this is way too much of a free lunch, and since an AI capable of understanding AI theory would probably also be able to parse the meaning of code written in conventional languages, it’s rather redundant. I still expect that ‘do what I mean’ languages will appear; for instance, the language could detect ‘obvious’ mistakes, correct them, and inform the user.
e.g. “x * y=z does not work because the dimensions do not match. Nor does x’ * y=z, but x * y’=z does, so I have taken the liberty of changing your code to x * y’=z”
or “‘inutaliseation’ is not a function or variable. I assume you meant ‘initialization’, which is a function, and I corrected this mistake”
Eventually, it might evolve into a natural-language-to-code translator.
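The second case, at least, already seems easy to prototype. Here is a minimal Python sketch of the idea, assuming a toy interpreter that resolves names through a lookup hook; dwim_lookup and the example namespace are hypothetical, purely to mirror the ‘inutaliseation’ → ‘initialization’ case above:

```python
import difflib

def dwim_lookup(name, namespace):
    """Resolve `name` in `namespace`, auto-correcting 'obvious' typos."""
    if name in namespace:
        return namespace[name]
    # Find the closest defined name; the cutoff keeps out wild guesses.
    candidates = difflib.get_close_matches(name, namespace.keys(), n=1, cutoff=0.6)
    if candidates:
        corrected = candidates[0]
        print(f"'{name}' is not a function or variable. "
              f"I assume you meant '{corrected}', and I corrected this mistake.")
        return namespace[corrected]
    raise NameError(f"'{name}' is not defined and no close match was found")

# Usage, mirroring the example above:
namespace = {"initialization": lambda: print("initializing...")}
dwim_lookup("inutaliseation", namespace)()  # reports the correction, then runs initialization
```

A real language would presumably log the correction and let the user veto it rather than silently rewriting their code, but the basic mechanism is nothing exotic.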
But yes, predicting a nanowar by 2010 wasn’t the smartest idea.