I think the issue here is that you do not quite understand the problem.
It’s not that we “imagine that we’ve imagined the whole world, do not notice any contradictions, and call it a day”. It’s that we know there exists an idealized procedure which doesn’t produce stupid answers: for example, it can’t be money-pumped. We also know that the harder we approximate this procedure (consider more hypotheses, compute more inferences), the better our results get in expectation. This is not, say, a property of null-hypothesis testing: the more hypotheses you test, the more likely you are to either p-hack or drive your p-values into statistical insignificance through excessive multiple-testing correction.
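To make that contrast concrete, here is a toy sketch (the coin-flip setup, the grid of candidate biases, and the Bonferroni-style correction are illustrative stand-ins of my own, not anything canonical). Adding hypotheses to the Bayesian grid only refines the posterior, while the per-test significance threshold α/k shrinks as the number of tests k grows, so a fixed, real effect can be pushed below significance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.6, size=200)  # flips of a coin with true bias 0.6
heads, n = int(data.sum()), len(data)

for k in [3, 11, 101, 1001]:
    # Bayesian: uniform prior over k candidate biases. More hypotheses
    # only refine the estimate; the posterior still concentrates near 0.6.
    grid = np.linspace(0.01, 0.99, k)
    log_post = heads * np.log(grid) + (n - heads) * np.log(1 - grid)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    bayes_est = float((grid * post).sum())

    # Frequentist: the same data treated as one of k tests under a
    # Bonferroni correction. The threshold 0.05/k shrinks with k, so a
    # real effect with a fixed p-value eventually fails to clear it.
    p = stats.binomtest(heads, n, 0.5).pvalue
    print(f"k={k:5d}  posterior mean={bayes_est:.3f}  "
          f"p={p:.2e}  passes 0.05/k? {p < 0.05 / k}")
```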
The whole computationally-unbounded-Bayesian business is more about “here is an idealized procedure X, and if we don’t do anything that is visibly stupid from the perspective of X, then we can hope that our losses won’t be unbounded, under some suitable notion of boundedness”. It is not obvious that your procedure can be understood this way.
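To illustrate what “can’t be money-pumped” buys you, here is a toy sketch (the fee and the specific preference cycle are made up for illustration): an agent with intransitive preferences pays a fee on every “upgrade” and can be cycled indefinitely, while an agent whose choices come from a single utility function refuses the cycle, so its losses stay bounded.

```python
FEE = 0.01

def cyclic_prefers(have, offered):
    # Intransitive preferences: A beats B, B beats C, C beats A.
    return (have, offered) in {("B", "A"), ("C", "B"), ("A", "C")}

utility = {"A": 3, "B": 2, "C": 1}

def coherent_prefers(have, offered):
    # Transitive preferences induced by a utility function.
    return utility[offered] > utility[have]

def run_pump(prefers, rounds=10):
    # Offer the goods in a cycle; charge FEE for every accepted trade.
    have, paid = "A", 0.0
    for offered in ["C", "B", "A"] * rounds:
        if prefers(have, offered):
            have, paid = offered, paid + FEE
    return paid

print(f"cyclic agent paid:   {run_pump(cyclic_prefers):.2f}")    # grows with rounds
print(f"coherent agent paid: {run_pump(coherent_prefers):.2f}")  # 0.00
```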
I mostly think about alignment methods like “model-based RL which maximizes reward iff it outputs an action that is provably good under our specification of good”.
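As a rough sketch of what I mean (the `proves_good` checker and the certificate convention below are hypothetical stand-ins for a formal verifier, not a real API):

```python
from typing import Callable

def gated_reward(action: str,
                 proves_good: Callable[[str], bool]) -> float:
    """Reward is 1 iff the checker certifies the action, else 0.

    An agent trained to maximize this (e.g. via model-based RL) is, in
    the limit, incentivized to emit only actions it can prove are good.
    """
    return 1.0 if proves_good(action) else 0.0

# Trivial stand-in "specification": only actions carrying an explicit
# certificate string count as provably good.
demo_checker = lambda a: a.endswith("#certificate")
print(gated_reward("deploy_patch#certificate", demo_checker))  # 1.0
print(gated_reward("deploy_patch", demo_checker))              # 0.0
```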