I actually use a slightly different principle for statements like that.
I call it the “preferred action principle” (or Reaper’s Law when I’m feeling pretentious).
If a possible model of reality doesn’t give me a preferred action, i.e. if all actions, including inaction, are equally reasonable in that model (and therefore every action has a relative utility of 0), I reject that model out of hand: not as false, but as utterly useless.
Even if it’s certain to 3^^^3 nines that that model describes the real world, I might as well ignore the possibility, because it contributes no weight to the utility calculations.
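To make the “no weight” point concrete, here’s a minimal sketch with made-up toy numbers (the model names, probabilities, and utilities are my own illustration, not anything from the argument above): a model that is indifferent between all actions just adds the same constant to every action’s expected utility, so it can never change which action comes out preferred, no matter how probable it is.

```python
# Toy expected-utility calculation over candidate models of reality.
# "action_indifferent_world" is a model with no preferred action:
# every option scores identically, so it only shifts all totals by a constant.
models = {
    "ordinary_world": {
        "prob": 0.10,
        "utility": {"act": 5.0, "wait": 1.0, "do_nothing": 0.0},
    },
    "action_indifferent_world": {
        "prob": 0.90,
        "utility": {"act": 2.0, "wait": 2.0, "do_nothing": 2.0},
    },
}

actions = ["act", "wait", "do_nothing"]

def expected_utility(action, models):
    return sum(m["prob"] * m["utility"][action] for m in models.values())

# The indifferent model contributes 0.90 * 2.0 to every action alike,
# so the ranking of actions is the same whether or not it's included.
ranked = sorted(actions, key=lambda a: expected_utility(a, models), reverse=True)
print(ranked)  # 'act' still wins, exactly as it would if the indifferent model were dropped.
```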