One way to think about it is to view each universe as existing in many instances, “simulated” by various other processes/universes. The parts of a universe that can be “easily located” are the parts for which more of these simulations are instantiated.
This is not making sense for me. Maybe you could expand the argument into a longer comment or post?
This would only make me assume that the artifact runs on a theory more powerful than ZFC, possibly a relatively simple theory that is more powerful than anything we can formulate.
If it were running on a formal theory, it would fail to give answers for some P’s, but this one never fails to answer. With enough time it would be possible to rule out the artifact running any simple theory, and the hypothesis of it being a true halting oracle should become increasingly likely (unless we give that a zero prior, like you’re proposing here).
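As a toy illustration of the kind of updating I have in mind (all numbers are made up, and “theory_k”, standing for “the artifact proves its answers in a formal theory that settles only the first k test queries”, is a deliberately crude stand-in):

```python
# Toy Bayesian sketch (hypothetical numbers). "theory_k" stands for "the artifact
# proves its answers in a formal theory that settles only the first k test queries";
# beyond that it would at best guess, matching the observed answer with probability 1/2.
# "true_oracle" predicts every answer correctly.

theory_priors = {f"theory_{k}": 1.0 for k in range(1, 21)}
total = sum(theory_priors.values())
priors = {h: 0.99 * p / total for h, p in theory_priors.items()}
priors["true_oracle"] = 0.01  # small but nonzero prior on a genuine halting oracle

def posterior_oracle(n_queries):
    """Posterior probability of the oracle hypothesis after n_queries correct answers."""
    post = {}
    for h, p in priors.items():
        if h == "true_oracle":
            post[h] = p                                  # likelihood 1 on every query
        else:
            k = int(h.split("_")[1])
            post[h] = p * 0.5 ** max(0, n_queries - k)   # guesses past its strength
    return post["true_oracle"] / sum(post.values())

for n in (0, 20, 40, 80):
    print(n, round(posterior_oracle(n), 4))  # climbs toward 1 as queries accumulate
```

The point is just that every fixed formal theory eventually starts “guessing”, so its posterior weight decays geometrically, while the oracle hypothesis, given any nonzero prior, soaks up the remaining mass.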
I think logical uncertainty is going to be just a way to estimate hard computations with scarce resources.
I’m not sure I understand the relevance of this assertion here. To expand on my question, when we fully understand logical uncertainty, presumably we will have a method to assign a probability for any P halting or not halting. We could use this to help form beliefs about whether a suspected halting oracle is really a halting oracle or not, in addition to asking the artifact about programs whose halting behavior we can already prove in ZFC.
I understand your concern, but this is just one aspect of a wider problem.
I agree with you on the meta point, but back on the object level, I don’t see how we can have much certainty that uncomputable mathematical structures and ones of higher cardinalities do not really exist. A lot of mathematicians sure seem to think they are studying/reasoning about things that are real, not just playing games with symbols. Granted that it may not be feasible to build an FAI that fully incorporates our informal reasoning abilities, it still seems like the bar for building fundamental formal assumptions into an FAI has to be higher than this.
This is not making sense for me. Maybe you could expand the argument into a longer comment or post?
One way to think of Tegmark Level IV is as if there is literally a universal Turing machine which is executed on inputs made of random bits. The outputs are the various universes. Low complexity time segments of a universe will frequently appear as the beginning of such an output. I don’t have a deep point here, just trying to bridge the gap with intuition.
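Here is a quick Monte-Carlo sketch of the intuition. The machine below is not a real UTM (its instruction set is made up purely for illustration), but it shows the relevant effect: when the input tape is random, regular output prefixes get more probability mass because more (and shorter) program tapes generate them.

```python
import random
from collections import Counter

def run_toy_machine(bits, max_output=8):
    """Toy stand-in for a universal machine: reads 2-bit instructions from the
    input tape and writes output symbols.
      00 -> emit 0
      01 -> emit 1
      10 -> append a copy of the whole output so far (makes regular outputs cheap)
      11 -> halt
    """
    out = []
    i = 0
    while i + 1 < len(bits) and len(out) < max_output:
        op = (bits[i], bits[i + 1])
        i += 2
        if op == (0, 0):
            out.append(0)
        elif op == (0, 1):
            out.append(1)
        elif op == (1, 0):
            out.extend(out)
        else:
            break
    return tuple(out[:max_output])

random.seed(0)
counts = Counter()
N = 200_000
for _ in range(N):
    tape = [random.randint(0, 1) for _ in range(32)]  # random input bits
    prefix = run_toy_machine(tape)[:4]                # look at 4-symbol output prefixes
    if len(prefix) == 4:
        counts[prefix] += 1

# Regular prefixes such as (0, 0, 0, 0) come out noticeably more probable than
# irregular ones such as (0, 1, 1, 0).
for prefix, c in counts.most_common(5):
    print(prefix, c / N)
```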
If it were running on a formal theory, it would fail to give answers for some P’s, but this one never fails to answer. With enough time it would be possible to rule out the artifact running any simple theory, and the hypothesis of it being a true halting oracle should become increasingly likely (unless we give that a zero prior, like you’re proposing here).
But we can never know that the theory T in question cannot decide a given instance of the halting problem (since we don’t have an explicit grasp of it and cannot consider something like T+1), therefore the associated probability will not fall to 0.
...when we fully understand logical uncertainty, presumably we will have a method to assign a probability for any P halting or not halting.
I realize this is a part of the logical uncertainty program. I just think this part will never come to fruition. That is, we will never have a computable object that can be reasonably interpreted as “the probability of P halting”.
One way to think of Tegmark Level IV is as if there is literally a universal Turing machine which is executed on inputs made of random bits.
Yes, if reality literally consisted of an infinite array of UTMs each initialized with a different input tape, then what you’ve proposed makes more sense. I’ve had similar thoughts, but questions remain:
How do we know reality is really like this?
How do we know which UTM is making up this infinite array?
Why care only about the output bits, and not the internal computations? For example, the UTM that outputs a time fragment starting at 3^^^3 seconds still has to compute the first 3^^^3 seconds of the universe, and if we cared about what happens inside the hidden computation, that would greatly dilute how much we care about the low-complexity fragment.
But we can never know that the theory T in question cannot decide a given instance of the halting problem (since we don’t have an explicit grasp of it and cannot consider something like T+1), therefore the associated probability will not fall to 0.
But the probability approaches 0 as time goes to infinity and we test the artifact on more and more problems, right?
I realize this is a part of the logical uncertainty program. I just think this part will never come to fruition. That is, we will never have a computable object that can be reasonably interpreted as “the probability of P halting”.
An FAI likely has to make decisions that depend on whether some program halts or not. (For example, it needs to decide whether or not to invest resources into finding polynomial-time solutions to NP-complete problems, but doesn’t know whether a program searching for such a solution would halt.) How does the agent make such decisions if it has nothing that could be interpreted as the probability of P halting?
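To make the problem concrete, here is a toy sketch of the comparison the agent would need to make (the numbers and the bare expected-value rule are purely illustrative):

```python
def should_invest(p_halt, value_if_halts, cost_of_search):
    """Toy rule: invest in the search iff expected payoff exceeds the cost.
    p_halt is whatever number the agent can use as 'the probability that the
    search program halts', i.e. that a polynomial-time solution exists and is found."""
    return p_halt * value_if_halts > cost_of_search

# Illustrative numbers only; the point is that without anything playing the
# role of p_halt, the left-hand side of the comparison is simply undefined.
print(should_invest(p_halt=0.05, value_if_halts=1e9, cost_of_search=1e6))  # True
```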
...How do we know which UTM is making up this infinite array?
I have to admit this question is much easier to answer from the opposite point of view. That is, we can take the point of view that only the product of the utility function and the Solomonoff measure is meaningful, rather than each factor separately (as advocated by Coscott). This way, changing the UTM can be reinterpreted as changing the utility function. This approach is similar to what you suggested in the sense that if we allow for arbitrary bounded utility functions, the Solomonoff factor only sets the asymptotic behavior of the product. However, if we constrain our utilities to be computable, we still cannot do away with the utility boost of “easily definable places”.
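To spell out the “only the product matters” point in symbols (my notation: $U$ a bounded utility function, $\mu_T$ the Solomonoff measure of the universal machine $T$):

$$\sum_x U(x)\,\mu_{T_1}(x) \;=\; \sum_x \Big(U(x)\,\tfrac{\mu_{T_1}(x)}{\mu_{T_2}(x)}\Big)\,\mu_{T_2}(x),$$

so switching from $T_1$ to $T_2$ can be absorbed into a modified utility $U'(x) = U(x)\,\mu_{T_1}(x)/\mu_{T_2}(x)$ (wherever $\mu_{T_2}(x) > 0$) without changing any expected values. But the reweighting factor $\mu_{T_1}/\mu_{T_2}$ is not computable in general, which is why restricting to computable utilities does not let us discard the boost for easily definable places.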
But the probability approaches 0 as time goes to infinity and we test the artifact on more and more problems, right?
OK, I attacked this question from the wrong angle. Allow me to backtrack.
Consider an infinite binary sequence x. Let’s call x a “testable hypothesis” if there is a program P that, given any infinite binary sequence y, produces a sequence of probabilities P_n(y) such that P_n(x) converges to 1 whereas P_n(y) converges to 0 for y ≠ x. Then it can be proved that x is computable (I can spell out the proof if you want).
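In symbols, the claim is:

$$\Big(\exists\,\text{computable } P:\ \lim_{n\to\infty} P_n(x) = 1 \ \text{ and }\ \forall y \neq x:\ \lim_{n\to\infty} P_n(y) = 0\Big) \;\Longrightarrow\; x \text{ is computable.}$$

The relevance to your question is that the answers of a true halting oracle form an uncomputable sequence, so no testing procedure of this kind can converge on the hypothesis that the artifact is such an oracle.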
An FAI likely has to make decisions that depend on whether some program halts or not. (For example, it needs to decide whether or not to invest resources into finding polynomial-time solutions to NP-complete problems, but doesn’t know whether a program searching for such a solution would halt.)
If the utility function is upper semicontinuous, as I suggested in the post, the FAI won’t have to answer such questions. In particular, it is sufficient to decide whether a polynomial-time solution to NP-complete problems can be found within some finite time T, because finding it later won’t be worth the resource investment anyway.
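A back-of-the-envelope version of this, under the simplifying reading that anything discovered after time $T$ can contribute at most $\epsilon(T)$ utility, with $\epsilon(T) \to 0$ (where $U_{\le T}$ is the utility computed from what happens up to time $T$):

$$\big|\,\mathbb{E}[U \mid \text{invest}] \;-\; \mathbb{E}[U_{\le T} \mid \text{invest}]\,\big| \;\le\; \epsilon(T),$$

so the agent can base the investment decision on the bounded (and decidable) question “does the search succeed within time T?”, and the unbounded halting question never has to be assigned a probability at all.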