The axioms were strong enough to create god but not strong enough to fetter it? A small target to hit, I imagine.
You can spin it that way, but there are counter-spins. E.g., you can tell by looking that the Goedel machine's formal goal system is significantly shakier than any decent initial axiomatic system. This could plausibly lead to a god that is 'bound' to a syntactic goal system whose semantics the god can interpret in arbitrary ways. There's of course little reason to expect things like that unless you expect there to be ghosts in the machines; such ghosts could get there via things like Goedelian loops or recursive-y semantic-y schtuff or other nonsense that must be included in the axiomatic system before the machine is able to reference itself and thus self-improve. I give it low probability, but in my opinion there's high structural uncertainty.
Oh, I agree the formal goal system is shaky, but that is also the method by which the system "self-improves": it uses it to find the "improvements". If there are weaknesses in the axiomatic system, then I would expect any proven "improvements" to be potentially deleterious to the system as a whole, and god would not likely be formed. If I have an axiom that "hot lead flying through my brain will lead to me taking over the world", it does not mean that shooting myself in the head will make me the lord high supreme potentate of earth, despite the fact that I can prove it given that axiom and some facts about the world. Changing one's source code could be equally messy if done on the basis of a weak system.
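To make the shape of that concrete, here is a minimal Lean sketch of the hot-lead example (the proposition names are my own toy inventions, not part of any real goal system): once the bad axiom is admitted, the proof of the grand conclusion goes through mechanically, and it is worth exactly as much as the axiom it rests on.

```lean
-- Toy propositions; purely illustrative assumptions.
axiom HotLeadThroughMyBrain : Prop
axiom IRuleTheWorld : Prop

-- The weak axiom: hot lead through my brain "leads to" taking over the world.
axiom bad_axiom : HotLeadThroughMyBrain → IRuleTheWorld

-- A fact about the world: suppose I do shoot myself in the head.
axiom i_shoot_myself : HotLeadThroughMyBrain

-- The proof checker accepts this without complaint, but the theorem is only
-- as trustworthy as bad_axiom; a machine proving "improvements" from a weak
-- axiomatic system is in the same position.
theorem lord_high_supreme_potentate : IRuleTheWorld :=
  bad_axiom i_shoot_myself
```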