I don’t think he was talking about self-PA, but rather an altered decision criterion, such that rather than “if I can prove this is good, do it” it is “if I can prove that if I am consistent then this is good, do it”, which I think doesn’t have this particular problem, though it does have others, and it still can’t /increase/ in proof strength.
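A rough formalization of the contrast, in notation supplied here just for the sketch (T for the agent’s theory, A() = a for “the agent takes action a”, G for “the outcome is good”), not anything the commenter committed to:

```latex
% Schematic only; T, A, G, Con(T) are labels introduced for this sketch.
\begin{align*}
  \text{original criterion:} \quad & \text{do } a \text{ if } T \vdash \bigl(A() = a \rightarrow G\bigr)\\
  \text{altered criterion:}  \quad & \text{do } a \text{ if } T \vdash \mathrm{Con}(T) \rightarrow \bigl(A() = a \rightarrow G\bigr)
\end{align*}
```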
I don’t think he was talking about self-PA, but rather an altered decision criterion, such that rather than “if I can prove this is good, do it” it is “if I can prove that if I am consistent then this is good, do it”
Yes.
and it still can’t /increase/ in proof strength.
Mmm, I think I can see it. What about “if I can prove that if a version of me with unbounded computational resources is consistent then this is good, do it”? (*)
It seems to me that this allows an increase in proof strength up to the proof strength of that particular ideal reference agent.
(* there should probably be additional constraints specifying that the current agent, and the successor if present, must provably approximate the unbounded agent in some conservative way)
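Using the same notation as above, and writing T_ideal for the theory of the unbounded reference agent (again a label introduced here), the proposal reads roughly as:

```latex
% T_ideal = theory of the idealized, computationally unbounded version of the agent;
% the (*) side conditions about conservative approximation are left informal.
\begin{equation*}
  \text{do } a \text{ if } T \vdash \mathrm{Con}(T_{\mathrm{ideal}}) \rightarrow \bigl(A() = a \rightarrow G\bigr)
\end{equation*}
```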
“if I can prove that if a version of me with unbounded computational resources is consistent then this is good, do it”
In this formalism we generally assume infinite resources anyway. And even if this is not the case, consistent/inconsistent doesn’t depend on resources, only on the axioms and rules for deduction. So this still doesn’t let you increase in proof strength, although again it should help avoid losing it.
If we are already assuming infinite resources, then do we really need anything stronger than PA?
And even if this is not the case, consistent/inconsistent doesn’t depend on resources, only on the axioms and rules for deduction.
A formal system may be inconsistent, but a resource-bounded theorem prover working on it might never be able to prove any contradiction for a given resource bound. If you increase the resource bound, contradictions may become provable.
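A toy illustration of that point, with an invented propositional system and a prover limited to a fixed number of modus-ponens rounds (everything here is made up for the example, not a formalism from the thread):

```python
from itertools import product

# Toy inconsistent axiom set: both "P" and "~P" are derivable from it,
# but only after two rounds of modus ponens.
AXIOMS = {"P", "P->Q", "Q->~P"}

def contradiction_provable(step_bound):
    """Search for a contradiction using at most `step_bound` rounds of modus ponens."""
    theorems = set(AXIOMS)
    for _ in range(step_bound):
        derived = set()
        for a, b in product(theorems, repeat=2):
            if b.startswith(a + "->"):       # b has the form "a->c"
                derived.add(b[len(a) + 2:])  # so derive "c"
        theorems |= derived
        if any("~" + t in theorems for t in theorems):
            return True   # derived both t and ~t within the bound
    return False          # no contradiction found within this bound

print(contradiction_provable(1))  # False: the bound is too small to expose the inconsistency
print(contradiction_provable(2))  # True: raising the bound makes the contradiction provable
```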