On Ben’s blog post, I noted that a poll at the 2008 global catastrophic risks conference put the existential risk of machine intelligence at 5%, and that the people attending probably had some of the highest estimates of that risk of anyone on the planet, since they were a self-selected group attending a conference on the topic.
“Molecular nanotech weapons” also get 5%. Presumably there’s going to be a heavy intersection between those two figures—even though in the paper they seem to be adding them together!
Compare that with this Yudkowsky quote from 2005:
“And if Novamente should ever cross the finish line, we all die”
This looks like a rather different probability estimate. It seems to me to be a highly overconfident one.
I think the best way to model this is as FUD. Not Invented Here. A primate ego battle.
If this is how researchers deal with each other at this early stage, perhaps rough times lie ahead.
They’re probabilities for two different things. The 5% estimate is for P(AI is created & AI is unfriendly), while Yudkowsky’s estimate is for P(AI is unfriendly | AI is created & Novamente finishes first).
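One way to see that these two estimates need not conflict is a toy calculation: a near-certain conditional can sit alongside a roughly 5% joint probability when the conditioning event is itself unlikely. The sketch below uses entirely made-up numbers (the probability of AI being created, of Novamente finishing first, and the per-project unfriendliness risks are all hypothetical), purely to illustrate the arithmetic.

```python
# Toy illustration with hypothetical numbers: a near-1 conditional probability
# can coexist with a ~5% joint probability if the conditioning event is rare.

# Assumed, made-up inputs:
p_ai_created = 0.50                  # P(AI is created) over the relevant horizon
p_novamente_first = 0.02             # P(Novamente finishes first | AI is created)
p_unfriendly_given_novamente = 0.99  # the near-certain conditional estimate
p_unfriendly_given_other = 0.08      # unfriendliness risk for other projects

# Joint probability of "AI is created AND it is unfriendly":
p_joint = p_ai_created * (
    p_novamente_first * p_unfriendly_given_novamente
    + (1 - p_novamente_first) * p_unfriendly_given_other
)

print(f"P(created & unfriendly) = {p_joint:.3f}")  # ~0.05, in the ballpark of the poll's 5%
```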
“perhaps”?
Well, a tendency towards mud-slinging might be counter-balanced by wanting to appear moral. Using FUD against competitors is usually regarded as a pretty low marketing strategy. Perhaps most of the mud-slinging can be delegated to anonymous minions, though.
There’s going to be a lot of mud-slinging in this space.
More generally, there’s going to be a lot of primate tribal politics in this space. After all, not only does it have all the usual trappings of academic arguments, but it is also predicated on some pretty fundamental challenges to where power comes from and how it propagates.