If I understand you correctly, then I’m not sure why you are asking the question. Supposing I generalize PA to describe time, I would still like to keep adding logical information to PA based on my observations. For example, I tentatively believe the Goldbach conjecture, and would like my probability to increase as more instances are checked. In fact, I would like my credence to increase as “something like” the fraction of Turing machines that never halt, out of all Turing machines which have not yet halted by time X, where X is the largest number I’ve checked (assuming I’m checking starting from 2 and skipping nothing).
Furthermore, I would like this process to allow me to probabilistically accept a wide range of mathematical foundations, rather than only accepting more statements in the language of PA.
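For concreteness, here is a rough, runnable sketch of that halting-fraction heuristic, scaled down to a class small enough to enumerate. Everything in it is my own illustrative choice (the 2-state machine class, the bound B standing in for “never halts”, the function names); the real quantity ranges over all Turing machines and is not computable.

```python
# A rough, runnable sketch of the heuristic (my own construction, not part of
# the original proposal): approximate "never halts" by "has not halted by a
# large bound B", over the small class of 2-state, 2-symbol Turing machines.
from itertools import product

def enumerate_machines():
    """Yield transition tables mapping (state, symbol) -> (write, move, next).
    States are 0 and 1; moves are -1 (left) and +1 (right); next of -1 = halt."""
    actions = list(product((0, 1), (-1, 1), (0, 1, -1)))
    keys = [(s, b) for s in (0, 1) for b in (0, 1)]
    for choice in product(actions, repeat=len(keys)):
        yield dict(zip(keys, choice))

def steps_to_halt(table, max_steps):
    """Return the machine's halting time, or None if still running at max_steps."""
    tape, pos, state = {}, 0, 0
    for t in range(1, max_steps + 1):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == -1:
            return t
        state = nxt
    return None

# Among machines still running at time X, what fraction "never" halt?
# B = 500 is a safe stand-in for "never" in this class: no halting 2-state,
# 2-symbol machine runs past step 6 (the busy-beaver value BB(2) = 6).
times = [steps_to_halt(m, max_steps=500) for m in enumerate_machines()]
for X in (1, 2, 3, 4, 6):
    survivors = [t for t in times if t is None or t > X]
    frac = sum(t is None for t in survivors) / len(survivors)
    print(f"X={X}: fraction of survivors that never halt = {frac:.3f}")
```

As X grows, the surviving machines are increasingly dominated by genuine non-halters, so the printed fraction rises toward 1; that is the shape of credence increase described above.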
I’m asking about the process that causes other mathematical beliefs to generalize to your beliefs about physical time in such a fashion that physical time always seems to have the smallest model allowed by any of your mathematical beliefs. When I learn that the ordinal epsilon-0 corresponds to an ordering on unordered finitely branching trees, I don’t conclude that a basket of apples is made out of little tiny unordered trees. What do the physical apples have to do with ordinals, after all?
Why, as you come to believe that Zermelo-Fraenkel set theory has a model, do you come to believe that physical time will never show you a moment when a machine checking for ZF-inconsistency proofs halts? Why shouldn’t physical time just be a random model of PA instead, allowing it to have a time where ZF is proven inconsistent? Why do you transfer beliefs from one domain to the other—or what law makes them the same domain?
This isn’t meant to be an unanswerable question; I suspect it’s answerable. I’m asking if you have any particular ideas about the mechanics.
Could you please clarify your question here?

> Why, as you come to believe that Zermelo-Fraenkel set theory has a model, do you come to believe that physical time will never show you a moment when a machine checking for ZF-inconsistency proofs halts?

I try to interpret it (granted, I interpret it within my worldview, which is different) and I cannot see the question here.
I am not 100% sure whether even PA has a model, but I find it likely that even ZFC has one. But if I say that ZFC has a model, I mean a model whose formula parts are numbered by the natural numbers derived from my notion of successive moments of time.
> This isn’t meant to be an unanswerable question; I suspect it’s answerable. I’m asking if you have any particular ideas about the mechanics.
Oh, OK. (This was in fact my main confusion about the question.)
One thing is clear to me: in the case that I’ve made a closed-world assumption about my domain (which is what the least-set description of natural numbers is), I should somehow set up the probabilities so that a hypothesis which proves the existence of a domain element with a specific property does not get an a priori chunk of probability mass just for being a hypothesis of a specific length. Ideally, we want to set this up so that it follows “something like” the halting distribution which I mentioned in my previous reply (but of course that’s not computable).
One idea is that such a hypothesis only gets probability mass from specific domain elements with that property. So, for example, if we want to measure the belief that the Goldbach conjecture is false, the probability mass we assign to that must come from the mass we assign to hypotheses like false_at(1000), false_at(1002), et cetera. As I eliminate possibilities like these, the probability of exists x. false_at(x) falls.
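A minimal sketch of that accounting, under assumptions of my own: the toy geometric prior witness_prior and the finite summation horizon are illustrative stand-ins, since the “something like the halting distribution” prior from above is heavier-tailed and not computable. The existential claim holds no mass of its own; its probability is just the total mass on witnesses not yet eliminated.

```python
# Illustrative sketch (my own toy prior, not the author's): the existential
# "exists x. false_at(x)" gets no a priori mass of its own; its probability is
# the total mass on the specific un-eliminated witnesses false_at(n).
def witness_prior(n):
    # Toy geometric prior on the even witness n = 2k; the total mass over all
    # witnesses n >= 4 is 0.5. A halting-style prior would be heavier-tailed.
    return 2.0 ** -(n // 2)

def p_goldbach_false(checked_up_to, horizon=400):
    """Mass left on witnesses false_at(n), for even n with checked_up_to < n <= horizon."""
    return sum(witness_prior(n)
               for n in range(4, horizon + 1, 2)
               if n > checked_up_to)

# As more Goldbach instances are verified, the existential's probability falls:
for X in (2, 10, 20, 40):
    print(f"checked up to {X}: P(counterexample exists) ~ {p_goldbach_false(X):.6f}")
```

Eliminating a witness simply deletes its term from the sum, which is exactly the “mass must come from specific domain elements” bookkeeping described above.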
This can be formalized by saying that for closed-world domains, as we form a theory, we must have an example before we introduce an existential statement. (I’m assuming that we want to model the probability of a theory as some process which generates theories at random, as in my recent AGI paper.) We might even want to say that we need a specific example before we will introduce any quantifiers. This doesn’t strictly rule out any theories, since we can just write down examples that we don’t yet know to be false before introducing the quantified statements, but it modifies the probabilities associated with them.
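A toy version of such a generating process might look like the following; the tiny language and every name in it (sample_theory, PREDICATES) are hypothetical, my guess at the mechanics rather than the construction from the paper. The generator may only emit an existential once it has already written down a ground instance for that predicate, so quantified statements pay for their probability through specific examples.

```python
import random

# Toy random theory generator (illustrative only): a closed-world language with
# unary predicates P and Q over numeral constants. The constraint: "exists x.
# Pred(x)" can be emitted only if some ground instance Pred(c) already appears,
# so the existential's probability is routed through specific witnesses.
PREDICATES = ("P", "Q")

def sample_theory(n_statements, rng):
    theory = []
    witnessed = set()  # predicates that already have a ground instance
    for _ in range(n_statements):
        pred = rng.choice(PREDICATES)
        if pred in witnessed and rng.random() < 0.5:
            # Allowed only after a witness: the quantifier rides on an example.
            theory.append(f"exists x. {pred}(x)")
        else:
            c = rng.randrange(100)  # an example not yet known to be false
            theory.append(f"{pred}({c})")
            witnessed.add(pred)
    return theory

rng = random.Random(0)
print(sample_theory(6, rng))
```

Note that this does not forbid any theory, matching the remark above: the generator can always write down an unrefuted example first, but doing so changes the probability of the theories that contain the quantified statement.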
However, I’m not really confident in that particular trick (I’m not sure it behaves correctly for nested quantifiers, et cetera). I would be more confident if there were an obvious way to generalize this to work nicely with both open-world and closed-world cases (and mixed cases). But I know that it is not possible to generalize “correctly” to full 2nd-order logic (for any definition of “correctly” that I’ve thought of so far).