By the discussion above, we know that no matter what axioms we impose, we’ll never be able to fully capture the integers (because any formal system will always have unprovable truths, and therefore multiple interpretations, and therefore allow for models of the integers which are different from the real ones).
Assuming we use first-order logic, right?
Here is the part where I got stuck: what exactly causes this attitude that if something cannot be accomplished using first-order logic, the reasonable thing is to give up? I assume there is a good reason behind that, but I have never heard it said explicitly, so to me it kinda sounds like “we found out that it is impossible to build a spaceship using wood, therefore there shall be no space travel”, as opposed to trying to find a more suitable material for building spaceships.
So, there is some first-order-logicness in the very concept of proof? Like, if you describe something using first-order logic, then you can use a machine to algorithmically check the validity of the proof… and using some stronger methods could allow us to make the interpretation less ambiguous, but the cost would be that we could no longer mechanically check the correctness?
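To make the “machine can check the proof” point concrete, here is a minimal illustrative sketch (all names are made up for this example, and real proof checkers handle full first-order syntax): a Hilbert-style checker for propositional proofs, where each line must either be a premise or follow from two earlier lines by modus ponens. The key property is that each check is a finite, mechanical test.

```python
# Formulas: a string is an atom, and ("->", A, B) is the implication A -> B.

def follows_by_modus_ponens(formula, earlier):
    """True if some earlier line A and earlier line ("->", A, formula)
    together yield `formula` by modus ponens."""
    return any(("->", a, formula) in earlier for a in earlier)

def check_proof(premises, proof):
    """Mechanically verify a proof: every line must be a premise
    or follow from earlier accepted lines by modus ponens."""
    accepted = []
    for line in proof:
        if line in premises or follows_by_modus_ponens(line, accepted):
            accepted.append(line)
        else:
            return False  # this line is not justified
    return True

# Usage: from premises P and P -> Q, the proof derives Q.
premises = {"P", ("->", "P", "Q")}
proof = ["P", ("->", "P", "Q"), "Q"]
print(check_proof(premises, proof))  # True
print(check_proof(premises, ["R"]))  # False: R has no justification
```

This decidability of proof-checking (as opposed to theorem-finding, which is not decidable) is exactly the feature that stronger logics like full second-order logic give up: second-order validity has no complete, machine-checkable proof system.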
Your original question (about the focus on first-order logic) was a very good one. I asked a logician friend about it, and his answer was the same as the one that you and Samuel settled on here—first-order logic is likely the best that human brains (or machines) are able to access. So yeah, basically a version of the Church-Turing thesis.
They referred me to Lindström’s theorem as a concrete result in that direction. I don’t know enough to understand the statement of the theorem, but it seems to say something along the lines of “first-order logic is the strongest logic which still has [some properties that computable systems ought to have?].”
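For reference, the bracketed properties turn out to be compactness and the downward Löwenheim–Skolem property. A schematic statement (informal, and glossing over the regularity conditions a logic must satisfy):

```latex
% Lindström's theorem (schematic):
% any logic L that extends first-order logic and satisfies
%   (1) the (countable) compactness property, and
%   (2) the downward Löwenheim–Skolem property,
% is no more expressive than first-order logic.
\mathcal{L} \geq \mathrm{FO}
  \;\wedge\; \mathrm{Compactness}(\mathcal{L})
  \;\wedge\; \mathrm{DLS}(\mathcal{L})
  \;\Longrightarrow\; \mathcal{L} \equiv \mathrm{FO}
```

Both properties fail for second-order logic, which is one precise sense in which first-order logic is the strongest “well-behaved” logic.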
Fun fact I learned in the conversation: it is a theorem of first-order ZFC that the models of second-order ZFC are constrained enough to single out a canonical model of the integers.
Thanks for the links! I will try to explore in this direction.