I think this just begs the question:

Dynamic: When the belief pool contains “X is fuzzle”, send X to the action system.
Ah, but the tortoise would argue that this isn’t enough. Sure, the belief pool may contain “X is fuzzle”, and the system may contain this dynamic, but that by itself doesn’t mean that X actually gets sent to the action system. In addition, you need another dynamic:
Dynamic 2: When the belief pool contains “X is fuzzle”, and there is a dynamic saying “When the belief pool contains ‘X is fuzzle’, send X to the action system”, then send X to the action system.
Or, to put it another way:
Dynamic 2: When the belief pool contains “X is fuzzle”, run Dynamic 1.
Of course, then one needs Dynamic 3 to tell you to run Dynamic 2, ad infinitum—and we’re back to the original problem.
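To make the tortoise’s point concrete, here’s a minimal Python sketch (my own toy, not anything from the dialogue or the post; the names belief_pool, action_queue, and dynamic_1 are all invented for illustration). A dynamic is code that runs; a belief is inert data, and writing the dynamic down as another belief executes nothing:

```python
# A minimal sketch of the data/code distinction behind the regress.
# The belief pool is just data; the real dynamic is a function in
# the interpreter itself.

belief_pool = {
    "X is fuzzle",
    # Writing the dynamic down as another belief does not execute anything:
    "When the belief pool contains 'X is fuzzle', send X to the action system",
}

action_queue = []

def dynamic_1():
    """The real Dynamic 1: part of the interpreter, not an entry in the pool."""
    if "X is fuzzle" in belief_pool:
        action_queue.append("X")

# Nothing moves until code *outside* the belief pool actually calls dynamic_1().
# A "Dynamic 2" belief saying "run Dynamic 1" would be just as inert, and so on.
dynamic_1()
print(action_queue)  # ['X']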
I think the real point of the dialogue is that you can’t use rules of inference to derive rules of inference—even if you add them as axioms! In some sense, then, rules of inference are even more fundamental than axioms: they’re the machines that you feed the axioms into. Then one naturally starts to ask how you can “program” the machines by feeding in certain kinds of axioms, what happens if you try to feed a program’s description to itself, the various paradoxes of self-reference, and so on. This is where the connection to Gödel and Turing comes in—and probably why Hofstadter included this fable.
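To illustrate the “machines you feed the axioms into” framing, here’s an equally toy forward-chaining engine (again my own sketch, with invented names and encoding): modus ponens lives in the engine’s code, and appending a sentence that merely describes modus ponens to the axiom set would add no inferential power.

```python
# A toy forward-chaining engine: the axioms are data fed into the machine,
# while the rule of inference (modus ponens) is hard-coded in the loop.

axioms = {
    "A",
    ("A", "B"),  # an implication "A -> B", encoded as (antecedent, consequent)
    ("B", "C"),
}

def forward_chain(pool):
    """Apply modus ponens (the rule lives here, in code) until a fixpoint."""
    derived = set(pool)
    changed = True
    while changed:
        changed = False
        for item in list(derived):
            if isinstance(item, tuple):
                antecedent, consequent = item
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)  # the machine acts; no axiom told it to
                    changed = True
    return {x for x in derived if isinstance(x, str)}

print(forward_chain(axioms))  # {'A', 'B', 'C'}, in some order
```

Swapping in a different engine changes what gets derived from the very same axioms, which is the sense in which the rules are more fundamental than the statements they operate on.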
Cheers, Ari