Almost certainly, the first time you run the seed AI, it’ll crash quickly. I think it’s very unlikely that you construct a successful-enough-to-be-dangerous AI without a lot of mentally crippled ones first.
If so then we are all going to die. That is, if you have that level of buggy code then it is absurdly unlikely that the first time the “intelligence” part works at all it works well enough to be friendly. (And that scenario seems likely.)
The first machine intelligences we build will be stupid ones.
By the time smarter ones are under development, we will have other trustworthy smart machines on hand to help keep the newcomers in check.