Suppose we define a generalized version of Solomonoff Induction based on some second-order logic. The truth predicate for this logic can’t be defined within the logic, and therefore a device that can decide the truth value of arbitrary statements in this logic has no finite description within this logic. If an alien claimed to have such a device, this generalized Solomonoff induction would assign the hypothesis that they’re telling the truth zero probability, whereas we would assign it some small but positive probability.
It seems to me that the paradox may lie within this problem setup, not within the agent doing the induction.
We first consider that, rather than this device being assigned zero probability, it should actually be inconceivable to the agent—there should not be a finitely describable thingy that the agent assigns zero probability of having a finitely describable property.
Why would an agent using a second-order analogue of Solomonoff induction have such conceptual problems? Well, considering how Tarski’s original undefinability theorems worked, perhaps what goes wrong is this: we want to believe that the device outputs the truth of statements about the universe. But we also want to believe this device is in the universe. So what happens if we ask the device, “Does the universe entail the sentence stating that [this device] outputs ‘No’ in response to a question which looks like [this question]?”
Thus, such a device is inconceivable in the first place since it has no consistent model, and we are actually correct to assign zero probability to the alien’s assertion that the device produces correct answers to all questions about the universe, including questions about the device itself.
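This diagonal is the same move as in the proof that no program decides halting. A minimal runnable sketch in Python (the names and the toy “oracle” are illustrative, not anything from the discussion): any total predicate claiming to report what a function will return is refuted by a function built to invert that report.

```python
def diagonal(oracle):
    """Given a claimed total predicate oracle(f) -> bool, meaning
    'f() returns True', build a function that refutes the claim."""
    def f():
        # Do the opposite of whatever the oracle says about f itself.
        return not oracle(f)
    return f

def naive_oracle(f):
    # Any fixed total answer fails; 'True' is just one concrete choice.
    return True

f = diagonal(naive_oracle)
# The oracle claims f() returns True, but f() actually returns False:
assert f() != naive_oracle(f)
```

The self-referential question above plays the role of `f`: whatever consistent answer the device would give is inverted by construction, so the device has no consistent behavior on that input.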
we want to believe that the device outputs the truth of statements about the universe
I’m not sure why you say this, because the device is supposed to output the truth of statements about some second-order logic, not about the universe. The device is not describable by the second-order logic via Tarski, so if the device is in the universe, the universe must only be describable by some meta-logic, which implies the device is outputting truth of statements about something strictly simpler than the universe. The agent ought to be able to conceive of this...
In that case I’m not sure what you mean by “second-order analogue of Solomonoff induction”. Solomonoff induction is about the universe and proceeds from sensory experiences. What does it mean to induct that something may produce the truth of sentences “about a second-order logic”?
What does it mean to induct that something may produce the truth of sentences “about a second-order logic”?
I mean suppose you encounter a device that outputs “true” or “false” whenever you feed it a sentence in some second-order logic, and after doing a lot of tests, you think maybe the device outputs “true” if and only if the input sentence is actually true, regardless of what input it receives. This seems like a straightforward analogue of inducting that something may be a Halting Oracle, which I thought you agreed an agent should be able to do?
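The induction being described (test the device on sentences whose truth you can verify independently, then shift credence toward the oracle hypothesis) can be sketched as a toy Bayesian update; all the numbers here, the priors and the 90% accuracy of the rival hypothesis, are made up for illustration:

```python
# Two hypotheses about the device: it is a genuine truth oracle,
# or it merely answers correctly 90% of the time.
priors = {"oracle": 0.01, "fallible": 0.99}
accuracy = {"oracle": 1.0, "fallible": 0.9}

def update(posterior, device_was_right):
    """One Bayes update after checking the device's answer against a
    sentence whose truth value we verified by other means."""
    unnorm = {
        h: posterior[h] * (accuracy[h] if device_was_right else 1 - accuracy[h])
        for h in posterior
    }
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = dict(priors)
for _ in range(100):  # the device gets 100 checkable sentences right
    posterior = update(posterior, device_was_right=True)

# Despite the tiny prior, "oracle" now dominates (roughly 0.997).
```

Of course this only ever confirms the device on sentences we can check ourselves; the question in the thread is what hypothesis space makes “it is right about all of them” expressible at all.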
I’m not sure if this answers your question. If not, feel free to find me on Skype or Google Chat to talk more.
By “true” do you mean that the physical universe semantically entails the sentence, or that the sentence is a semantic tautology of second-order logic? I’m assuming the latter, since the former case is what I was assuming when I gave my Tarski reply.
So… I’m pretty sure you can represent a set’s semantic entailment of a Henkin interpretation of a second-order sentence in a first-order set theory. I’m less sure that you can represent entailment of a second-order sentence inside a second-order set theory, but I’m having trouble seeing why you wouldn’t be able to do that. I also think that second-order set theories are supposed to be finitely axiomatizable, though I could be wrong about this. But then I’m not quite sure why we don’t get into Tarskian trouble.
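For the first-order/Henkin case, the reason this works is ordinary Tarskian satisfaction relativized to a set: for a set model $M$ with membership relation $E \subseteq M \times M$, one defines in ZFC, by recursion on formula structure,

```latex
\mathrm{Sat}(M, E, \ulcorner\varphi\urcorner, s)
\quad \text{for assignments } s\colon \mathrm{FV}(\varphi)\to M,
```

with clauses such as $\mathrm{Sat}(M,E,\ulcorner x\in y\urcorner,s) \leftrightarrow \langle s(x),s(y)\rangle \in E$. Tarskian trouble only appears when $M$ is the whole universe rather than a set.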
Let ST be the finite axiomatization of a second-order set theory. Then in second-order logic, why doesn’t the sentence (ST->”All sets: Set entails S”) form a truth predicate for S? My guess is that there’s just no theorem which says, “X is true (entailed by the mathematical universal set / model we’re currently operating inside) iff ‘X’ is entailed by all sets (inside the current model)”.
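In symbols (my transcription of the quoted sentence, with $\models$ for semantic entailment), the candidate predicate and the schema whose absence is being conjectured are:

```latex
\mathrm{True}(\ulcorner S\urcorner) \;:\equiv\;
  \bigl(\mathrm{ST} \;\rightarrow\; \forall M\,[\,\mathrm{Set}(M) \rightarrow M \models \ulcorner S\urcorner\,]\bigr),
\qquad
S \;\leftrightarrow\; \mathrm{True}(\ulcorner S\urcorner).
```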
If this is so, then it looks to me like for any given set of axioms corresponding to a second-order set theory, second-order logic can represent the idea of a device that outputs sentences which are semantically entailed by every individual set inside that theory. So the new answer would be that you are welcome to hypothesize that a device prints out truths of second-order logic, for any given background second-order set theory which provides a universe of models against which those sentences are judged universally semantically true.
In which case the indefinite extensibility gets packed into the choice of set theory that we think this device is judging second-order validity for, if I haven’t dropped the bucket somewhere along the way.
Then in second-order logic, why doesn’t the sentence (ST->”All sets: Set entails S”) form a truth predicate for S?
Let S=”ST → false”. Then S is false in second-order logic (assuming ST is consistent), but (ST->”All sets: Set entails S”) is true, because ST’s model has to be bigger than any set (I think it’s usually taken to be the proper class of all sets), so every set entails “Not ST”.
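Spelled out, under the assumption above that every model of ST is proper-class-sized (so no set is a model of ST):

```latex
S \;\equiv\; (\mathrm{ST}\rightarrow\bot).
\quad \text{For every set } M:\ M \not\models \mathrm{ST},
\ \text{hence}\ M \models (\mathrm{ST}\rightarrow\bot),\ \text{i.e.}\ M \models S.
```

So $\mathrm{ST} \rightarrow \forall M\,(M \models S)$ holds and the predicate calls $S$ “true”, yet $S$ is false in the intended proper-class model, where ST holds.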
So the new answer would be that you are welcome to hypothesize that a device prints out truths of second-order logic, for any given background second-order set theory which provides a universe of models against which those sentences are judged universally semantically true.
On input S=”ST → false”, your device prints out “true”, while my device prints out “false”. I still want to be able to hypothesize my device. :)
There are totally models of ZFC containing sets that are models of ZFC. See “Grothendieck universe”. Is there a reason why it’d be different in second-order logic? I don’t think a second-order set theory would pin down a unique model, why would it? Unless you had some axiom stating that there were no more ordinals past a certain point in which case you might be able to get a unique model. Unless I’m getting this all completely wrong, since I’m overrunning my expertise here.
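The standard fact behind the Grothendieck-universe point: if $\kappa$ is strongly inaccessible, then $V_\kappa$ is a Grothendieck universe and

```latex
V_\kappa \models \mathrm{ZFC}_2 \quad (\text{second-order ZFC}),
```

so if $\kappa < \lambda$ are both inaccessible, the model $V_\lambda$ contains $V_\kappa$ as a set-sized model of the same theory. Zermelo’s quasi-categoricity theorem says the models of second-order ZFC are exactly the $V_\kappa$ for $\kappa$ inaccessible: unique up to which inaccessible you stop at, but not unique outright, which matches the guess above.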
So in retrospect I have to modify this for us to somehow suppose that the device is operating in a particular model of a second-order theory. And then my device prints out “true” (if it’s in one of the smallest models) or the device prints out “false” (if it’s in a larger model), unless the device is against the background of an ST with an upper bound imposing a unique model, in which case the device does print out “true” for ST → false and from the outside, we think that this device is about a small collection of sets so this result is not surprising.
Then the question is whether it makes sense to imagine that the device is about the “largest relevant” model of a set theory—i.e., for any other similar devices, you think no other device will ever refer to a larger model than the current one, nor will any set theory successfully force a model larger than the current one—I think that’s the point at which things get semantically interesting again.
Is there a reason why it’d be different in second-order logic?
Second-order set theory is beyond my expertise too, but I’m going by this paper, which on page 8 says:
We have managed to give a formal semantics for the second-order language of set theory without expanding our ontology to include classes that are not sets. The obvious alternative is to invoke the existence of proper classes. One can then tinker with the definition of a standard model so as to allow for a model with the (proper) class of all sets as its domain and the class of all ordered pairs ⟨x, y⟩ (for x an element of y) as its interpretation function. The existence of such a model is in fact all it takes to render the truth of a sentence of the language of set theory an immediate consequence of its validity.
So I was taking the “obvious alternative” of the proper class of all sets to be the standard model for second-order set theory. I don’t quite understand the paper’s own proposed model, but I don’t think it’s a set either.
I’m not sure I believe in proper classes, and in particular I’m not sure there’s a proper class of all sets that could serve as the model of a second-order theory such that you could not describe any set larger than the model; as for pinning down that model using axioms, I’m pretty sure you shouldn’t be able to do that. There are analogues of the Löwenheim–Skolem theorem for sufficiently large infinities in second-order logic, I seem to recall reading.