Thanks for clarifying. I'll try to answer the original question first, and then expand a little on the comparison with other interpretations, which might help clarify the motivation for this work.
I'm imagining that at a high level you have something like the following in mind (correct me if not). Suppose $\Phi$ is the set of all possible states of the universe, and suppose we have a subset of possible worlds (the "multiverse") $M \subset \Phi$. Then we might simply say we have Knightian uncertainty over the possible worlds, which would correspond to the infra-belief $\Theta = \top_M \in \Box\Phi$. Given a loss function $L : \Phi \to \mathbb{R}$, we might then apply some decision rule, e.g. minimizing the worst-case expected loss $\max_{\theta \in \Theta} \mathbb{E}_\theta[L]$ (note that in this case this just reduces to $\max_{m \in M} L(m)$). Arguably, this is not a very good way to make decisions in a quantum multiverse, since it completely ignores the amplitudes of the different worlds.
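To make the reduction concrete, here is a toy sketch (with entirely hypothetical worlds, actions, and loss values) of the worst-case decision rule: under Knightian uncertainty over a finite multiverse, minimizing the worst-case expected loss collapses to minimizing the maximum per-world loss.

```python
# Hypothetical losses L(m) for each world m in M, for two candidate actions.
losses = {
    "action_a": [0.2, 0.9, 0.1],  # L(m) across worlds m1, m2, m3
    "action_b": [0.4, 0.5, 0.3],
}

def worst_case_loss(per_world_losses):
    # The max over theta in Theta = Top_M collapses to the max over m in M,
    # since every distribution on M is in the credal set.
    return max(per_world_losses)

# Pick the action whose worst world is least bad (minimax).
best = min(losses, key=lambda a: worst_case_loss(losses[a]))
print(best)  # action_b: its worst world (0.5) beats action_a's (0.9)
```

Note how the choice is driven entirely by the single worst world, regardless of how "small" that world might be in amplitude terms.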
The usual thing to do instead is to consider the distribution $\theta_M = \sum_{m \in M} p_m\, m \in \Delta\Phi$, where the probability is $p_m = |\alpha_m|^2$, assuming $M$ is a decoherent set of worlds, and $\alpha_m$ is the amplitude of world $m$. Then we can consider minimizing the expected loss $\mathbb{E}_{\theta_M}[L]$. Putting aside questions of subjective versus objective probabilities (and what the latter would even mean), this high-level approach could cover various ontological interpretations, including the Bohmian view and consistent histories (which is what Hartle uses).
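The Born-weighted rule can be sketched the same way (again with hypothetical amplitudes and losses): weight each world by $p_m = |\alpha_m|^2$ and minimize the expected loss.

```python
# Hypothetical amplitudes alpha_m for three decoherent worlds.
amps = [0.8, 0.1, 0.59]
probs = [abs(a) ** 2 for a in amps]
total = sum(probs)
probs = [p / total for p in probs]  # normalize so weights sum to 1

# Hypothetical losses L(m) for each world, for two candidate actions.
losses = {
    "action_a": [0.2, 0.9, 0.1],
    "action_b": [0.4, 0.5, 0.3],
}

def expected_loss(per_world_losses):
    # E_{theta_M}[L] = sum over m of p_m * L(m)
    return sum(p * l for p, l in zip(probs, per_world_losses))

best = min(losses, key=lambda a: expected_loss(losses[a]))
print(best)  # action_a
```

On these numbers the amplitude-weighted rule prefers `action_a` (its high loss of 0.9 sits in a world of tiny weight), whereas the worst-case rule described earlier would prefer `action_b`, illustrating how the two rules can diverge.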
I would say the main issue with the above approach is that it involves a choice of ontology $\Phi$ as well as a choice of multiverse $M$ (note that both of these are constructions made by an agent inside the universe). Even if we assume that we do have a loss function $L : \Phi \to \mathbb{R}$ (which is already very questionable), the expected loss would in general still depend on the choice of multiverse.
Infra-Bayesian physicalism aims to address some of these issues by providing a framework for talking about losses that translate meaningfully across ontologies.
Hello again. I regret that so much time has passed. My problem seems to be that I haven't yet properly understood everything that goes into the epistemology and decision-making of an infra-Bayesian agent.
For example, I don't understand how this framework "translates across ontologies". I would normally think of ontologies as mutually exclusive possibilities, which can be subsumed into a larger framework by having a broad notion of possibility that includes all the ontologies as particular cases. Does the infra-Bayesian agent think in some other way?
I would say the translation across ontologies is carried out by "computationalism" in this case, rather than by infra-Bayesianism itself. That is, (roughly speaking) we consider which computations are instantiated in various ontologies, and base our loss function on that. From this viewpoint, infra-Bayesianism comes in as an ingredient of a specific implementation of computationalism (namely, infra-Bayesian physicalism). In this perspective, the need for infra-Bayesianism is motivated by the fact that an agent needs to have Knightian uncertainty over part of the computational universe (e.g. the part relating to its own source code). Let me know if this helps clarify things.