I think MIRI made a mistake when it decided not to get involved in actual AI research, but only in AI safety research. In retrospect the nature of this mistake is clear: MIRI was not recognised inside the AI community, and its safety recommendations are not connected to actual AI development paths.
It is like a person deciding not to study nuclear physics but only nuclear safety. It may even work up to a point, as safety principles are similar across many systems. But he will not be the first to learn about the surprises in the new technology.
Being involved in actual AI research would have helped with that only if MIRI had been able to do good AI research, and would have been a net win only if MIRI had been able to do good AI research at less cost to their AI safety research than the gain from greater recognition in the AI community (and whatever other benefits doing AI research might have brought).
I think you’re probably correct that MIRI would be more effective if it did AI research, but it’s not at all obvious.
Maybe it should be AI research which is relevant to safety, like small self-evolving agents, or an AI agent which inspects other agents. It would also generate some profit.
Agreed on all points.
LW was one handshake away from DeepMind: we interviewed Shane Legg and referred to his work many times. But I guess we didn’t have the right attitude, maybe still don’t. Now is probably a good time to “halt, melt and catch fire” as Eliezer puts it.
I’m confused about what you would have done with the benefit of hindsight (beyond having folks like Jaan Tallinn and Elon Musk who were concerned with AI safety become investors in DeepMind, which was in fact done).
What do you mean by “one handshake”?
Google bought DeepMind for, reportedly, more than $500 million. Other than possibly Eliezer, MIRI probably doesn’t have the capacity to employ people that the market places such a high value on.
EY could have commanded such a price if he had invested more time in studying neural networks rather than in writing science fiction. LessWrong is also full of clever minds who could probably be employed on a small AI project.
Has he ever demonstrated any ability to produce anything technically valuable?
He has the ability to attract groups of people and write interesting texts. So he could attract good programmers for any task.
He has the ability to attract self-selected groups of people by writing texts that these people find interesting. He has shown no ability to attract, organize and lead a group of people to solve any significant technical task. The research output of SIAI/SI/MIRI has been relatively limited and most of the interesting stuff came out when he was not at the helm anymore.
While this may be formally right, the question is what it shows (or should show). On the other hand, MIRI does have quite some research output as well as impact on AI safety, and that is what they set out to do.
Most MIRI research output (papers, in particular the peer-reviewed ones) was produced under the direction of Luke Muehlhauser or Nate Soares. Under the direction of EY the prevalent outputs were the LessWrong sequences and Harry Potter fanfiction.
The impact of MIRI’s research on the work of actual AI researchers and engineers is more difficult to measure; my impression is that it has not been very large so far.
That looks like a judgment from availability bias. How do you think MIRI went about getting researchers and these better directors? And funding? And all those connections that seem to have led to AI safety being a thing now?
IMHO, AI safety is a thing now because AI is a thing now, and when people see AI breakthroughs they tend to think of the Terminator.
Anyway, I agree that EY is good at getting funding and publicity (though not necessarily positive publicity); my comment was about his (lack of) proven technical abilities.
Under that hypothesis, shouldn’t AI safety have become a “thing” (by which I assume you mean “gain mainstream recognition”) back when Deep Blue beat Kasparov?
If you look up mainstream news articles written back then, you’ll notice that people were indeed concerned. Also, maybe it’s a coincidence, but The Matrix, which has an AI uprising as its main premise, came out two years later.
The difference is that in 1997 there weren’t AI-risk organizations ready to capitalize on these concerns.
Which organizations are you referring to, and what sort of capitalization?
Was Eliezer ever in charge? I thought that during the OB, LW and HP eras his role was something like “Fellow” and other people (e.g., Goertzel, Muehlhauser) were in charge.
I’m not saying MIRI should’ve hired Shane Legg. It was more of a learning opportunity.
MIRI will never have a comparative advantage in doing the parts of AI research that the big players think will lead to profitable outcomes.
They might indeed have comparative advantages, though not absolute ones.