Google bought DeepMind for, reportedly, more than $500 million. Other than possibly Eliezer, MIRI probably doesn’t have the capacity to employ people that the market places such a high value on.
EY could command such a price if he had invested more time in studying neural networks rather than in writing science fiction. LessWrong is also full of clever minds who could probably be employed in any small AI project.
Has he ever demonstrated any ability to produce anything technically valuable?
He has the ability to attract groups of people and to write interesting texts, so he could attract good programmers for any task.
He has the ability to attract self-selected groups of people by writing texts that these people find interesting. He has shown no ability to attract, organize and lead a group of people to solve any significant technical task. The research output of SIAI/SI/MIRI has been relatively limited and most of the interesting stuff came out when he was not at the helm anymore.
While this may be formally right, the question is what it actually shows (or should show). On the other hand, MIRI does have considerable research output as well as impact on AI safety, and that is what they set out for.
Most MIRI research output (papers, in particular the peer-reviewed ones) was produced under the direction of Luke Muehlhauser or Nate Soares. Under the direction of EY the prevalent outputs were the LessWrong sequences and Harry Potter fanfiction.
The impact of MIRI research on the work of actual AI researchers and engineers is more difficult to measure; my impression is that it has not been very large so far.
That looks like a judgment from availability bias. How do you think MIRI went about getting researchers and these better directors? And funding? And all those connections that seem to have led to AI safety being a thing now?
IMHO, AI safety is a thing now because AI is a thing now, and when people see AI breakthroughs, they tend to think of the Terminator.
Anyway, I agree that EY is good at getting funding and publicity (though not necessarily positive publicity), my comment was about his (lack of) proven technical abilities.
Under that hypothesis, shouldn’t AI safety have become a “thing” (by which I assume you mean “gain mainstream recognition”) back when Deep Blue beat Kasparov?
If you look up mainstream news articles written back then, you’ll notice that people were indeed concerned. Also, maybe it’s a coincidence, but The Matrix, a movie whose main premise is an AI uprising, came out two years later.
The difference is that in 1997 there weren’t AI-risk organizations ready to capitalize on these concerns.
Which organizations are you referring to, and what sort of capitalization?
Was Eliezer ever in charge? I thought that during the OB, LW and HP eras his role was something like “Fellow” and other people (e.g., Goertzel, Muehlhauser) were in charge.
I’m not saying MIRI should’ve hired Shane Legg. It was more of a learning opportunity.