Do you gain anything by parameterizing on PA(n)? It seems like you could get a sound algorithm by replacing PA(n) in your definition by T, for any trustworthy system of axioms T. My first instinct was to pick T=ZFC, while you pick a dynamically changing subset of PA(omega), i.e. PA + a countably infinite sequence of consistency assertions.
Certainly PA(omega) is trustworthy, but there is nothing particularly canonical about it (you could equally well pick PA(omega)+CON(PA(omega))), and I don’t think there is any loss of generality in picking the axioms up-front.
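Spelled out (assuming the iterated-consistency definition the OP uses), that hierarchy is
PA(0) = PA, PA(n+1) = PA(n) + CON(PA(n)), PA(omega) = the union of all the PA(n)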
I think you’re right. Parameterizing on PA(n) doesn’t really seem to do anything except perhaps give an illusion of self-improvement when there’s really a hard limit of PA(omega) that the AI can’t go beyond. I guess I chose it mostly because the OP was doing something similar.
If you just have PA(omega) and, say, parameterize on the proof limit instead, will your program double down on versions of itself that use weaker formal theories, not mentioned in your program?
Yes. If you call it with the argument AIQ2_T (i.e. the same algorithm but with T instead of PA(omega)), it will accept if it can find a proof that forall q,
AIQ2_T(q) != 'self-destruct'
(which is easy) and
AIQ2_T(q) == 'double' ==> AIQ2(q) == 'double'
The latter condition expands to
PA(omega) |- someFormula ==> T |- someFormula
which may be quite easy to show, depending on T.
Basically, the Quining approach avoids trouble from the 2nd incompleteness theorem by using relative consistency instead of consistency.
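For concreteness, here is a minimal Python sketch of that acceptance test, not the actual program: provable(theory, formula) is a hypothetical stand-in for the bounded proof search, formulas are plain strings, and my_source stands in for the quined description of the caller.

# Minimal sketch only; `provable` is a hypothetical oracle standing in for
# "search for a proof of `formula` from `theory` within the proof-length limit".
def provable(theory, formula):
    return False  # stub: the real program would run a bounded proof search

def accepts(my_source, theory, candidate_source):
    # Accept the rewrite `candidate_source` only if `theory` proves both:
    #   (1) the candidate never self-destructs, and
    #   (2) the candidate doubles only on inputs where the caller would double.
    never_self_destructs = (
        "forall q: " + candidate_source + "(q) != 'self-destruct'")
    doubles_only_when_i_would = (
        "forall q: " + candidate_source + "(q) == 'double' ==> "
        + my_source + "(q) == 'double'")
    return (provable(theory, never_self_destructs)
            and provable(theory, doubles_only_when_i_would))

The point the sketch makes explicit is that all proof search happens in the caller's own theory; the candidate's theory only shows up inside the formulas being proved.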
It seems to me that the two programs don’t have the same “someFormula”, because each program’s formula uses a quined description of that program, but not the other one. Or maybe I’m wrong. Can you expand on what you mean by “someFormula”?
Oops.