One problem with giving it axioms like this is that you have to be sure your axioms represent a real possibility, or at least that their impossibility cannot be proven. Eliezer believes such infinities (such as an infinite regress of simulators) to be impossible. If he is right, and the AI manages to prove this impossibility, then either it will malfunction in some unknown way from concluding that a contradiction is true, or it will realize that you simply imposed the axioms on it and will correct them.