Thanks for clarifying. I’ll try to answer the original question first, and then expand a little on the comparison with other interpretations that might help understand the motivation for this work a little better.
I’m imagining that at a high level you have something like the following in mind (correct me if not). Suppose $\Phi$ is the set of all possible states of the universe, and suppose we have a subset of possible worlds (the “multiverse”) $W \subseteq \Phi$. Then we might simply say we have Knightian uncertainty over the possible worlds, which would correspond to the infra-belief $\top_W$ (the set of all probability distributions supported on $W$). Then if we have a loss function $L : \Phi \to \mathbb{R}$, we might apply some decision rule, e.g. minimizing worst-case expected loss (note that in this case this just turns out to be minimizing $\max_{w \in W} L(w)$). Arguably, this is not a very good way to make decisions in a quantum multiverse, since we’re completely ignoring the amplitudes of the different worlds.
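To make the “worst-case expected loss collapses to a max” point concrete, here is a small numeric sketch (the losses and actions are made up for illustration): under Knightian uncertainty over $W$, the adversary can put a point mass on any world, so each action’s worst-case expected loss is just its maximum loss over $W$.

```python
import numpy as np

# Hypothetical losses L(a, w): one row per action, one column per
# possible world w in the multiverse W.
losses = np.array([
    [1.0, 5.0, 2.0],   # action 0: great in world 0, bad in world 1
    [3.0, 3.0, 3.0],   # action 1: mediocre everywhere
])

# The credal set contains *all* distributions on W, including point
# masses, so the worst-case expected loss of each action is its max
# loss over the worlds.
worst_case = losses.max(axis=1)          # array([5., 3.])

# The minimax decision rule then picks the action minimizing that max.
best_action = int(worst_case.argmin())   # 1
print(worst_case, best_action)
```

Note how the amplitudes of the worlds play no role here at all, which is exactly the objection raised above.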
The usual thing to do instead is to consider the distribution $\mu \in \Delta W$ with probabilities $\mu(w) = |\alpha_w|^2$, assuming $W$ is a decoherent set of worlds, where $\alpha_w$ is the amplitude of world $w$. Then we can consider minimizing the expected loss $\mathbb{E}_{w \sim \mu}[L(w)] = \sum_{w \in W} |\alpha_w|^2 L(w)$. Putting aside questions of subjective versus objective probabilities (and what the latter would mean), this high-level approach could cover various ontological interpretations, including the Bohmian view, and consistent histories (which is what Hartle uses).
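As a sketch of this alternative rule, with hypothetical amplitudes and losses: the Born weights $|\alpha_w|^2$ turn the amplitudes into a probability distribution over the decoherent worlds, and the loss is averaged against it.

```python
import numpy as np

# Hypothetical complex amplitudes for a decoherent pair of worlds,
# normalized so |α_0|² + |α_1|² = 1, and a made-up per-world loss L(w).
amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.8j])
loss = np.array([10.0, 0.0])

# Born weights: p_w = |α_w|².
p = np.abs(amplitudes) ** 2        # array([0.36, 0.64])

# Expected loss under μ: Σ_w |α_w|² L(w).
expected_loss = float(p @ loss)    # 0.36 * 10 + 0.64 * 0 = 3.6
print(p, expected_loss)
```

Unlike the worst-case rule, this weighting is sensitive to the amplitudes: scaling down a world’s amplitude scales down its contribution to the loss.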
I would say the main issue with the above approach is that it involves a choice of ontology as well as a choice of multiverse (note that these are both things that an agent inside the universe would construct). Even if we assumed that we do have a loss function (which is already very questionable), the expected loss in general would still depend on the choice of the multiverse.
Infra-Bayesian physicalism is aiming to address some of these issues by providing a framework to talk about losses that translate meaningfully across ontologies.
I would say the translation across ontologies is carried out by “computationalism” in this case, rather than by infra-Bayesianism itself. That is, (roughly speaking) we consider which computations are instantiated in various ontologies, and base our loss function on that. From this viewpoint infra-Bayesianism comes in as an ingredient of a specific implementation of computationalism (namely, infra-Bayesian physicalism). In this perspective the need for infra-Bayesianism is motivated by the fact that an agent needs to have Knightian uncertainty over part of the computational universe (e.g. the part relating to its own source code). Let me know if this helps clarify things.