It seems that the unfriendly AI is in a slightly unfavourable position. First, it has to preserve the information content of its own utility function or other value representation, on top of the information content a friendly AI must preserve.
There are two sorts of unsafe AI: one that cares and one that doesn't care.

The ignorant one is fastest: it only calculates the answer and doesn't care about anything else. Friend and enemy both have to analyse additional things...
Just don't accidentally give it a problem that is more complex than you expect. An AI that cares only about solving such a problem will tile the universe with computronium.