No, I do not believe that the default case is friendly AI. But I believe that AI going FOOM is, if it is possible at all, very hard to accomplish; surely everyone here agrees on that. At the moment, though, I do not share the opinion that friendliness, by which I mean implementing scope boundaries, is a very likely point of failure. I see it this way: if one can figure out how to create an AGI that FOOMs (no, I do not think AGI implies FOOM), then one must have a thorough comprehension of intelligence and its associated risks. I just don't see how a group of researchers (and I don't believe a mere group is enough anyway) could be smart enough to create an AGI that FOOMs yet somehow fail to limit its scope. Please consider reading this comment, where I cover the topic in more detail. That is why I believe that only 5% of all AIs that go FOOM will be an existential risk to all of humanity. That is my current estimate; I will of course update on new evidence (e.g. arguments).