Would you agree that one possible route to uFAI is human-inspired?
Human-inspired systems might have the same or a similarly high fallibility rate as humans (from emulating neurons, or just from random experimentation at some level), so giving such a system access to its own machine code and low-level memory would not be a good idea. Most changes are likely to be bad.
So if an AI did manage to port its code, it would have to find some way of preventing or discouraging the copied AI on the x86-based architecture from playing with the ultimate mind-expanding/destroying drug that is machine-code modification. This is what I meant about stability.
Of course, if it’s not, it could port itself to such an architecture if doing so were advantageous.
Er, I can’t really give a better rebuttal than this: http://www.singinst.org/upload/LOGI//levels/code.html
What point are you rebutting?
The idea that a greater proportion of possible changes to a human-style mind are bad than of changes of an equal magnitude to a von Neumann-style mind.
Most random changes to a von Neumann-style mind would be bad as well.
It’s just that a von Neumann-style mind is unlikely to make the random mistakes that we do, or at least that is Eliezer’s contention.
I can’t wait until there are uploads around to make questions like this empirical.
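In the meantime, here is a toy illustration of the “most random changes are bad” point (a hypothetical sketch of mine in Python, not anything from the linked LOGI page): a made-up two-byte-per-instruction register machine whose “machine code” computes 2*x + 1, with one random bit flipped per trial. The VM, opcodes, and program are all invented for illustration; the point is just that almost no single-bit mutation leaves behaviour intact.

    import random

    # Hypothetical toy machine: each instruction is (opcode, operand), one byte each.
    # This "machine code" computes f(x) = 2*x + 1.
    OPS = {0: "add_const", 1: "mul_const", 2: "sub_const", 3: "nop"}
    PROGRAM = bytes([1, 2,   # mul_const 2
                     0, 1])  # add_const 1

    def run(program, x):
        acc = x
        for i in range(0, len(program) - 1, 2):
            op, arg = program[i], program[i + 1]
            name = OPS.get(op)
            if name is None:
                raise ValueError("illegal opcode")  # the analogue of a crash
            if name == "add_const":
                acc += arg
            elif name == "mul_const":
                acc *= arg
            elif name == "sub_const":
                acc -= arg
            # "nop" does nothing
        return acc

    rng = random.Random(0)
    trials, unchanged = 10_000, 0
    for _ in range(trials):
        mutated = bytearray(PROGRAM)
        pos = rng.randrange(len(mutated))
        mutated[pos] ^= 1 << rng.randrange(8)       # flip one random bit
        try:
            if run(bytes(mutated), 10) == run(PROGRAM, 10):
                unchanged += 1
        except ValueError:
            pass                                    # illegal opcode counts as a bad change

    print(f"{unchanged}/{trials} single-bit mutations left behaviour intact")

On this tiny program essentially every mutation either hits an illegal opcode or changes the output, which is the intuition behind “most changes are likely to be bad”; the open question in the thread is whether the same ratio holds for human-style minds versus cleaner architectures.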