This is false. It only works if the statement's quantifier structure is simple enough (roughly, one unbounded quantifier over a decidable predicate: Σ₁ statements correspond to halting, Π₁ statements to non-halting). It is also limited to arithmetic and doesn't work with e.g. set theory.
Hm, how is this procedure not going to work for certain classes of statements, like set theory? Where does the encoding process fail?
Because I do remember a StackExchange post which claimed we can always take a logical statement and express it as a program such that the statement is true if the program halts and false if it doesn't.
The encoding involves looping over the domain of discourse, which works fine for arithmetic, where the domain of discourse is the natural numbers, an enumerable set. However, when the domain of discourse is the sets, you can't just loop over them, because there are too many of them, with relations too complex to be represented finitely.
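To make the arithmetic case concrete, here is a minimal sketch of the encoding (my own illustration, not from the original post; `P` stands in for whatever decidable predicate the statement bottoms out in): an ∃-statement becomes a program that halts exactly when the statement is true, and a ∀-statement becomes a program that halts exactly when it's false.

```python
# Minimal sketch of the arithmetic encoding: loop over the enumerable
# domain of discourse (the natural numbers). P stands in for any
# decidable predicate extracted from the statement.

from itertools import count

def exists_program(P):
    """Halts iff the statement "there exists n with P(n)" is true."""
    for n in count():
        if P(n):
            return n  # witness found; the statement is true

def forall_program(P):
    """Halts iff the statement "for all n, P(n)" is false
    (it searches for a counterexample)."""
    for n in count():
        if not P(n):
            return n  # counterexample found; the statement is false
```

For example, `exists_program(lambda n: n * n == 49)` halts and returns 7, while the same search for a false ∃-statement runs forever; that's exactly why this gives an encoding of truth as halting rather than a decision procedure.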
Now my question is, how complicated is the domain of discourse of the sets, exactly?
I don't know for sure, but I suspect it is so complex that mathematicians haven't come up with a way of quantifying the complexity. However, I can say a bit about what I mean by "too complex":
The natural numbers N have a computable model. That is, in a nice language L such as the language of bitstrings {0,1}∗, we can define a relation R ⊂ N × L where you can think of nRs as meaning "the number n can be written as the bitstring s". This relation can be defined in such a way as to have certain nice properties:
For each of the usual primitive operations over N such as +,⋅,0,1,…, there is a corresponding computable algorithm over L which preserves R. For instance, there is an algorithm A+ such that for all nRs and mRt, we have (n+m)RA+(s,t).
For each of the usual primitive relations over N such as =, there is a corresponding computable algorithm over L. For instance, there is an algorithm A= such that for all nRs and mRt, (n=m)⟺A=(s,t).
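Here is a minimal sketch of such a computable model (my own illustration; the names `encode`, `A_plus`, `A_eq` are just stand-ins for the abstract A+ and A= above): naturals are represented as binary bitstrings, A+ is ripple-carry addition on the strings, and A= compares them up to leading zeros.

```python
# Minimal sketch of a computable model of N over the language L = {0,1}*:
# nRs holds when s is a binary numeral for n (leading zeros allowed).

def encode(n: int) -> str:
    """Pick a canonical bitstring s with nRs."""
    return bin(n)[2:]

def A_plus(s: str, t: str) -> str:
    """If nRs and mRt, then (n+m) R A_plus(s,t): ripple-carry addition."""
    s, t = s[::-1], t[::-1]  # work least-significant bit first
    out, carry = [], 0
    for i in range(max(len(s), len(t))):
        a = int(s[i]) if i < len(s) else 0
        b = int(t[i]) if i < len(t) else 0
        carry, bit = divmod(a + b + carry, 2)
        out.append(str(bit))
    if carry:
        out.append("1")
    return "".join(reversed(out))

def A_eq(s: str, t: str) -> bool:
    """(n = m) iff A_eq(s,t): compare bitstrings up to leading zeros."""
    return s.lstrip("0") == t.lstrip("0")

assert A_eq(A_plus(encode(20), encode(22)), encode(42))
```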
When we have a relation R and algorithms A like this, the mathematical objects in question stop being abstract hypothetical thoughts and become extremely concrete, because you can stuff them into a computer and have the computer automatically answer questions about them.
This then raises the question of whether the sets V have a computable model, which depends on what one considers the primitives. When I think of primitive operations for sets, I think of operations such as ∅,∈,=,{−,−},∪,{f(−)|−∈− if ϕ(−)},P(−),N,…. However, these are extremely powerful operations because they are inherently infinitary, and therefore allow you to solve very difficult problems.
For instance, if we let h(x,y) mean "the Turing machine indexed by x halts within y steps" (which in itself is a fairly tame proposition, since you can answer it just by running Turing machine number x for y steps and seeing whether it halts), then one can solve the halting problem using set-theoretic operations just by asking {y∈N | h(x,y)} ≠ ∅. Since the halting problem is uncomputable, we must infer that any computable model of sets must be restricted to a language lacking at least one of: subset-comprehension, equality, the empty set, or the set of all the natural numbers.
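To spell out where computability breaks (my own sketch; `h` is the computable step-bounded halting predicate from above): a computer can only approximate the emptiness test {y∈N | h(x,y)} ≠ ∅ by unbounded search, and that search itself fails to halt exactly when the set is empty.

```python
# h(x, y) itself is computable: run machine x for y steps and see whether
# it halts. The set-theoretic test {y in N | h(x, y)} != {} is what a
# computer cannot perform; it can only semi-decide nonemptiness.

from itertools import count

def nonempty_search(h, x):
    """Halts (returning a witness y) iff {y | h(x, y)} is nonempty,
    i.e. iff machine x halts. It can never answer "empty"."""
    for y in count():
        if h(x, y):
            return y
    # if the set is empty, this loop runs forever
```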
A simpler way to see that no computable model of V exists is a cardinality argument: V is so big that it would be paradoxical for it to have a cardinality, whereas {0,1}∗ is countable. However, the problem with cardinality arguments is that one can get into weird stuff due to Skolem's paradox. Meanwhile, the observation that set theory would allow you to compute wild things, and therefore cannot plausibly be computable, holds even if you work in a countable setting via Skolem.
Wait, derp, this is wrong. The fancy-schmancy relational approach I used specifically has the strength that it allows models of uncountable sets, because only the constructible elements actually need to have a representation.
Hm, what does this mean for your argument that set theory is uncomputable by a Turing machine, for example?
I had two separate arguments, and it only breaks the second one (the cardinality argument), not the first (the halting-problem argument).
A neat little result, that set theory is RE-hard. And damn, V is a very large collection, so large that it exceeds every cardinality (it is a proper class rather than a set).
This might be one of the few theories that can't be completely solved even with non-uniformity, since non-uniform models of computation, if we could build them, can sometimes solve every language.
An example is provided on page 14 of this paper:
https://arxiv.org/pdf/0808.2669.pdf
And this seems like a great challenge for the Universal Hypercomputer defined here, in that it could compute the entire universe of sets V using very weird resources.
https://arxiv.org/pdf/1806.08747.pdf
I imagine you are referring to a Turing machine halting or not halting.
There are statements in set theory which Turing machines cannot interpret at all (formally, they sit at a particular level of complexity) and which require the introduction of "oracles" to assist in interpreting them. Machines augmented this way are called oracle Turing machines. They come up frequently in descriptive set theory.
So we can, in a manner of speaking, encode them as programs to be run on a Turing machine with an oracle tape. That's not too hard to do once we use stronger models of computation, and thus we can still encode set-theory statements in more powerful computers.
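A hedged sketch of what an oracle machine buys you (my own illustration; the oracle is modeled as a black-box function an ordinary program could never compute itself):

```python
# Sketch of an oracle Turing machine, modeled as an ordinary program
# handed a black-box oracle for a problem it could not compute itself.

from typing import Callable

Oracle = Callable[[int], bool]  # oracle(i) = "does machine #i halt?"

def decide_never_halts(i: int, halting_oracle: Oracle) -> bool:
    """Decides the Pi_1 statement "machine #i never halts" in one query.
    Without the oracle, this direction is not even semi-decidable."""
    return not halting_oracle(i)
```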
So I'm still basically right, philosophically speaking, in that we can always encode set-theory/mathematics statements in a program, using the trick I described of converting set theory into quantified statements and then looking at the first quantifier to determine whether halting or not halting corresponds to the statement being true or false.
You would be right in most cases. But there is still the issue of independent statements: "ZF is consistent" cannot be shown to be true or false (if ZF is consistent), by the incompleteness theorems.

So some statements cannot be shown to halt or not halt, which is the famous halting problem.

No algorithm can tell you, for every statement, whether its program halts or doesn't halt. So not all statements can be effectively sorted into true/false or halt/not-halt.
There are 2 things to be said here:
I didn't say that it had to return an answer. The halting-problem issue for Turing machines is essentially that even though a program objectively either halts or doesn't halt (which in this case can be mapped to true or false), we don't have an algorithm, running on a Turing machine, that can always tell us which of the two cases holds.
Gödel's incompleteness theorems are important, but in this context we can basically enhance the definition of a computer, philosophically, to solve essentially arbitrary problems: by introducing an oracle tape or a closed timelike curve to a Turing machine, the validity and satisfiability problems of first-order logic become decidable.
You also mentioned that oracles like oracle tapes can provide the necessary interpretation for set-theory statements to be encoded, which was my goal: a Turing machine or other computer with an oracle tape gets around the first incompleteness theorem by violating one of its assumptions (that the theory is effectively, i.e. recursively, axiomatized).
Thank you for clarifying, I misunderstood your post.
Yes, you're right. "Essentially" arbitrary problems would be fair game.
There is a hierarchy of questions one can ask, though. That is, whatever oracle you introduce, you can now ask more complex questions, which require a more complex oracle to answer (basically, you can ask questions about the first oracle itself, which require another, more complex oracle to answer).
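A hedged sketch of why this hierarchy never closes (my own illustration): the classic diagonal argument relativizes, so the same construction that defeats an ordinary halting decider also defeats any O-computable halting decider for machines with oracle O, forcing a strictly stronger oracle O′ (the Turing jump).

```python
# The classic diagonal argument: any claimed halting decider is defeated
# by a program that does the opposite of whatever the decider predicts.
# Run the same construction relative to any oracle O and it shows O
# cannot decide halting for O-machines, so each oracle begets a harder one.

def defeat(claimed_halts):
    """Given a purported halting decider, build a program it gets wrong."""
    def spite():
        if claimed_halts(spite):
            while True:  # predicted to halt, so loop forever
                pass
        return           # predicted to loop, so halt immediately
    return spite         # claimed_halts(spite) is wrong either way
```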
When I saw you use the word "computer" I thought you meant a literal computer that we could in principle build.
My focus was on the more philosophical/impractical side. As for the computers we can actually build in principle: assuming the laws of physics are unchangeable and our current understanding is basically correct, we can't even build universal Turing machines/Turing-complete systems, only linear bounded automata, due to the holographic principle bounding how much information a finite region can hold.
Also, the entire hierarchy can be solved simply by allowing non-uniform computational models, which is yet another benefit of non-uniformity.