Some computer programs crash—just as some possible superintelligences would kill all humans.
No, *most* computer programs crash [...]
By “no”, you apparently mean “yes”.
With a self-modifying AI this is a lot harder to do.
Well, that is a completely different argument—and one that would appear to be in need of supporting evidence—since automated testing, linting, and the ability to program in high-level languages are all improving simultaneously.
I am not aware of any evidence that real computer programs are getting more crash-prone with the passage of time.
The point is that the first time you run the seed AI it will attempt to take over the world, so you don’t have the luxury of debugging it.
Almost certainly, the first time you run the seed AI, it’ll crash quickly. I think it’s very unlikely that you construct a successful-enough-to-be-dangerous AI without a lot of mentally crippled ones first.
If so, then we are all going to die. That is, if the code is that buggy, it is absurdly unlikely that the first time the “intelligence” part works at all, it works well enough to be friendly. (And that scenario seems likely.)
That is not a very impressive argument, IMHO.
We will have better test harnesses by then—allowing such machines to be debugged.
The first machine intelligences we build will be stupid ones.
By the time smarter ones are under development, we will have other trustworthy smart machines on hand to help keep the newcomers in check.