That looks like a judgment from availability bias. How do you think MIRI went about getting researchers and those better directors? And funding? And all those connections that seem to have led to AI safety being a thing now?
IMHO, AI safety is a thing now because AI is a thing now, and when people see AI breakthroughs they tend to think of the Terminator.
Anyway, I agree that EY is good at getting funding and publicity (though not necessarily positive publicity); my comment was about his (lack of) proven technical abilities.
Under that hypothesis, shouldn’t AI safety have become a “thing” (by which I assume you mean “gain mainstream recognition”) back when Deep Blue beat Kasparov?
If you look up mainstream news articles written back then, you'll notice that people were indeed concerned. Also, maybe it's a coincidence, but The Matrix, which has an AI uprising as its main premise, came out two years later.
The difference is that in 1997 there weren’t AI-risk organizations ready to capitalize on these concerns.
Which organizations are you referring to, and what sort of capitalization?