Γ = Σ^R: its elements are functions from programs to the results they output. They can be thought of as computational universes, for each one specifies what all the programs do.
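To make the type concrete, here is a minimal sketch (a hypothetical toy of mine, not the post's formalism): take a finite set of named programs R and a set of results Σ; an element of Γ = Σ^R is then just a total map from programs to results.

```python
# Toy rendering of Γ = Σ^R (hypothetical illustration, not the actual formalism).
# R: a finite set of programs (named, for illustration); Σ: the possible results.

R = ["2+2", "3*3", "first_digit_of_pi"]  # R: programs
Sigma = range(10)                        # Σ: results

# One element of Γ = Σ^R, i.e. one complete "computational universe":
gamma = {"2+2": 4, "3*3": 9, "first_digit_of_pi": 3}
assert all(gamma[p] in Sigma for p in R)  # gamma really is a map R -> Σ

# It specifies what every program does:
for program in R:
    print(program, "->", gamma[program])
```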
Should this say "elements are functions… They can be thought of as…"?

Yes, the phrasing was confusing; I fixed it, thanks.

Can you make a similar theory/special case with probability theory, or do you really need infra-Bayesianism? If the second, is there a simple explanation of where probability theory fails?
We really need infra-Bayesianism. On Bayesian hypotheses, the bridge transform degenerates: it says that, more or less, all programs are always running. And the counterfactuals degenerate too, because selecting most policies would produce "Nirvana".
The idea is that you must have Knightian uncertainty about the result of a program in order to meaningfully speak about whether the universe is running it. (Roughly speaking, if you ask "is the universe running 2+2?", the answer is always yes.) And you must have Knightian uncertainty about your own future behavior in order for counterfactuals to be meaningful.
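A cartoon of this intuition (again a hypothetical toy of mine, not the real bridge transform): represent a hypothesis as a *set* of candidate universes, and say the universe "is running" a program when the hypothesis pins down that program's result. A Bayesian hypothesis corresponds to a singleton set, so every program trivially counts as running; Knightian uncertainty is what makes the question non-trivial.

```python
# Cartoon of the degeneracy argument (hypothetical toy; is_running is my own
# stand-in, not the actual bridge transform). A hypothesis is a set of
# candidate universes (maps program -> result); Knightian uncertainty means
# the set has more than one element.

def is_running(hypothesis, program):
    """Say the universe 'runs' a program iff the hypothesis pins down its result."""
    return len({universe[program] for universe in hypothesis}) == 1

# Knightian hypothesis: certain about "2+2", uncertain about "coinflip".
knightian = [
    {"2+2": 4, "coinflip": 0},
    {"2+2": 4, "coinflip": 1},
]
print(is_running(knightian, "2+2"))       # True  -- the hypothesis constrains it
print(is_running(knightian, "coinflip"))  # False -- result not pinned down

# Bayesian hypothesis: a single fully specified universe. The notion degenerates:
bayesian = [{"2+2": 4, "coinflip": 0}]
print(all(is_running(bayesian, p) for p in ["2+2", "coinflip"]))  # True -- always "yes"
```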
It is not surprising that you need infra-Bayesianism in order to do naturalized induction: if you're thinking of the agent as part of the universe, then you are by definition in the nonrealizable setting, since the agent cannot possibly have a full description of something "larger" than itself.