Now my question is, how complicated is the domain of discourse of the sets, exactly?
I don’t know for sure, but I think it is too complex for mathematicians to have come up with a way of quantifying its complexity. However, I can say a bit about what I mean by “too complex”:
The natural numbers N have a computable model. That is, in a nice language L such as the language of bitstrings {0,1}∗, we can define a relation R⊂N×L, where you can think of nRs as meaning “the number n can be written as the bitstring s”. This relation can be defined in such a way as to have certain nice properties:
For each of the usual primitive operations over N such as +,⋅,0,1,…, there is a corresponding computable algorithm over L which preserves R. For instance, there is an algorithm A+ such that for all nRs and mRt, we have (n+m)RA+(s,t).
For each of the usual primitive relations over N such as =, there is a corresponding computable algorithm over L. For instance, there is an algorithm A= such that for all nRs and mRt, (n=m)⟺A=(s,t).
When we have a relation R and algorithms A like this, the mathematical objects in question stop being abstract, hypothetical thoughts and become extremely concrete, because you can stuff them into a computer and have the computer automatically answer questions about them.
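To make this concrete, here is a minimal sketch (my own illustration, not part of the original discussion) of such a computable model of N: take nRs to mean “s is a binary representation of n, leading zeros allowed”, and implement A+ and A= as algorithms that only ever touch bitstrings.

```python
# Minimal sketch of a computable model of N over the language {0,1}*.
# The representation relation R and the algorithms A_plus, A_eq below are
# one possible choice among many.

def R(n: int, s: str) -> bool:
    """n R s: the bitstring s is a binary representation of n."""
    return s != "" and set(s) <= {"0", "1"} and int(s, 2) == n

def A_plus(s: str, t: str) -> str:
    """Schoolbook binary addition on bitstrings.
    If n R s and m R t, then (n + m) R A_plus(s, t)."""
    s, t = s[::-1], t[::-1]          # work least-significant bit first
    out, carry = [], 0
    for i in range(max(len(s), len(t))):
        a = int(s[i]) if i < len(s) else 0
        b = int(t[i]) if i < len(t) else 0
        total = a + b + carry
        out.append(str(total % 2))
        carry = total // 2
    if carry:
        out.append("1")
    return "".join(reversed(out))

def A_eq(s: str, t: str) -> bool:
    """If n R s and m R t, then n = m iff A_eq(s, t).
    Compare after stripping leading zeros."""
    return s.lstrip("0") == t.lstrip("0")

assert R(5, "101") and R(5, "0101")
assert A_plus("101", "011") == "1000"   # 5 + 3 = 8
assert A_eq("0101", "101")
```

The point is that every question about +, ⋅ and = on N can now be answered by a terminating computation on strings.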
This then raises the question of whether the sets V have a computable model, which depends on what one considers the primitives to be. When I think of primitive operations for sets, I think of operations such as ∅, ∈, =, {−,−}, ∪, {f(−) | −∈− if ϕ(−)}, P(−), N, …. However, these are extremely powerful operations: they are inherently infinitary, and therefore let you solve very difficult problems.
For instance, if we let h(x,y) mean “the Turing machine indexed by x halts within y steps” (which in itself is a fairly tame proposition, since you can answer it just by running Turing machine number x for y steps and seeing whether it halts), then one can solve the halting problem using set-theoretic operations simply by asking whether {y∈N | h(x,y)} ≠ ∅. Since the halting problem is uncomputable, we must infer that any computable model of sets must be restricted to a language which lacks at least one of: subset formation (separation), equality, the empty set, or the set of all the natural numbers.
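Here is a sketch of that argument in code (again my own illustration: `simulate` stands in for an unspecified but perfectly ordinary universal Turing-machine simulator, and `set_nonempty` for the set-theoretic primitive that no algorithm can actually supply):

```python
from typing import Callable

def simulate(x: int, steps: int) -> bool:
    """Placeholder for a universal simulator: run Turing machine number x
    for the given number of steps and report whether it has halted by then.
    This part is ordinary computable work; any concrete machine encoding
    would do."""
    raise NotImplementedError("fill in with any universal TM simulator")

def h(x: int, y: int) -> bool:
    """'Turing machine number x halts within y steps' -- computable,
    given the simulator above."""
    return simulate(x, steps=y)

def set_nonempty(phi: Callable[[int], bool]) -> bool:
    """The set-theoretic step: is {y in N | phi(y)} nonempty?  Set theory
    hands us this via separation, the empty set and equality; no Turing
    machine can supply it, as halts() below shows."""
    raise NotImplementedError("this is exactly the uncomputable step")

def halts(x: int) -> bool:
    """If set_nonempty were computable, this would decide the halting
    problem -- contradiction."""
    return set_nonempty(lambda y: h(x, y))
```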
A simpler way to see that no computable model of V exists is a cardinality argument: V is so big that it would be paradoxical for it to have a cardinality at all, whereas {0,1}∗ is countable. However, the problem with cardinality arguments is that one can get into weird stuff due to Skolem’s paradox. Meanwhile, the observation that set theory would let you compute wild stuff, and therefore cannot plausibly be computable, holds even if you work in a countable setting via Skolem.
Wait, derp, this is wrong. The fancy-schmancy relational approach I used specifically has the strength that it allows models of uncountable sets, because only the constructible elements actually need to have a representation.
Hm, what does this mean for your argument that set theory is uncomputable by a Turing machine, for example?
I had two separate arguments, and it only breaks the second one (the cardinality argument), not the first.
A neat little result, that set theory is RE-hard. And damn, V is a very large collection, so large that it’s bigger than every cardinality (it’s a proper class, not a set).
This might be one of the few set theories that can’t be completely solved even with non-uniformity, since non-uniform models of computation, if we could actually build them, would let us decide every language (there is a sketch of this at the end of the thread).
An example is provided on the 14th page of this paper:
https://arxiv.org/pdf/0808.2669.pdf
And this seems like a great challenge for the Universal Hypercomputer defined here, in that it could compute the entire universe of sets V using very weird resources.
https://arxiv.org/pdf/1806.08747.pdf
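Returning to the non-uniformity point above, here is a minimal sketch (my own illustration; palindromes stand in for an arbitrary language just so the snippet runs) of why non-uniform computation is so strong: for every input length n there is a finite advice table listing exactly which length-n strings are in the language, and with the right table in hand, deciding membership is a lookup. The catch is that for an undecidable language no algorithm can produce those tables.

```python
from itertools import product
from typing import Callable, Set

def advice_for_length(L: Callable[[str], bool], n: int) -> Set[str]:
    """The advice for input length n: the finite set of length-n strings in
    the language L.  For an undecidable L this set still exists as a finite
    object, but no algorithm can hand it to you."""
    return {"".join(bits) for bits in product("01", repeat=n)
            if L("".join(bits))}

def decide_with_advice(advice: Set[str], x: str) -> bool:
    """Given the right advice for len(x), membership is a trivial lookup."""
    return x in advice

# Exercise the machinery on a decidable stand-in language (palindromes):
def is_palindrome(s: str) -> bool:
    return s == s[::-1]

table = advice_for_length(is_palindrome, 3)
assert decide_with_advice(table, "010")
assert not decide_with_advice(table, "011")
```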