Something analogous to what you are suggesting occurs. Specifically, let’s say you assign 95% probability to the bandit game behaving as normal, and 5% to “oh no, anything could happen, including the meteor”. As it turns out, this behaves much like being guaranteed the ordinary bandit game, because the “maybe meteor” hypothesis assigns all your possible actions the same score of “you’re dead”, so it drops out of consideration.

The property a hypothesis needs, in order for you to ignore it, is that no matter what you do you get the same outcome, whether good or bad. A “meteor of bliss hits the earth and everything is awesome forever” hypothesis would also drop out of consideration, because it doesn’t really matter what you do in that scenario.
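To spell out the arithmetic behind that (a rough sketch, with u and c as illustrative placeholders): if u(a) is an action’s expected value under the ordinary bandit hypothesis, and the meteor hypothesis, doomy or blissful, assigns every action the same value c, then the 95/5 mixture scores action a as 0.95·u(a) + 0.05·c. The 0.05·c term doesn’t depend on a, so whichever action maximizes the mixture is exactly the one that maximizes u on its own.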
To be a wee bit more mathy, a probabilistic mix of inframeasures works like this. We’ve got a probability distribution ζ∈Δℕ, and a bunch of hypotheses ψ_i∈□X: things that take functions as input and return expectation values. So your prior, the probabilistic mixture of hypotheses according to your probability distribution, would be the function
f ↦ ∑_{i∈ℕ} ζ(i)·ψ_i(f)
It gets very slightly more complicated when you’re dealing with environments instead of static probability distributions, but it’s basically the same thing. And so, if you vary your actions/vary your choice of function f, and one of the hypotheses ψ_i assigns all these functions/choices of actions the same expectation value, then it can be ignored completely when you’re trying to figure out the best function/choice of actions to plug in.
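Here’s a minimal toy sketch of that drop-out effect (purely illustrative: the action names, payoffs, and hypotheses are made up, and real hypotheses would be inframeasures over policies rather than plain Python functions):

```python
# Each "hypothesis" maps a choice to an expectation value; the prior mixes
# them with weights, as in the formula above.

choices = ["pull_lever_A", "pull_lever_B", "say_goodbyes"]

def bandit_as_normal(choice):
    # Ordinary bandit hypothesis: the choices genuinely differ in value.
    return {"pull_lever_A": 0.7, "pull_lever_B": 0.4, "say_goodbyes": 0.1}[choice]

def meteor_doom(choice):
    # "You're dead no matter what" hypothesis: every choice gets the same value.
    return 0.0

prior = [(0.95, bandit_as_normal), (0.05, meteor_doom)]

def mixture_value(choice):
    # Weighted sum of what each hypothesis says about this choice.
    return sum(weight * hypothesis(choice) for weight, hypothesis in prior)

best = max(choices, key=mixture_value)
best_ignoring_meteor = max(choices, key=bandit_as_normal)
print(best, best_ignoring_meteor)  # same action: the constant hypothesis dropped out
```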
So hypotheses like “you’re doomed no matter what you do” drop out of consideration; an infra-Bayes agent will always focus on the remaining hypotheses, which say that what it does matters.
The meteor doesn’t have to flatten things out entirely; there might be some actions that we think remain valuable (e.g. hedonism, saying tearful goodbyes).
And so if we have Knightian uncertainty about the meteor, maximin (as in Vanessa’s link) means we’ll spend a lot of time on tearful goodbyes.
Said actions, or the lack thereof, produce a fairly low utility differential compared to the actions in other, non-doomy hypotheses. Also, I want to draw a critical distinction between “full Knightian uncertainty over meteor presence or absence”, where your analysis is correct, and “ordinary probabilistic uncertainty between a high-Knightian-uncertainty hypothesis and a low-Knightian-uncertainty one that says the meteor almost certainly won’t happen”. In the latter case, the meteor hypothesis will be ignored unless there’s a meteor-inspired modification to what you do that’s also very cheap in the “ordinary uncertainty” world, like calling your parents: since we’re maximin-ing expected utility, the meteor hypothesis is suppressed in decision-making by its low expected utility differentials.
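As a toy comparison of the two regimes (again with made-up numbers, where the “worst-case” values stand in for minimizing over the meteor hypothesis’s credal set):

```python
actions = ["work_on_bandit", "work_plus_call_parents", "tearful_goodbyes"]

# Expected value of each action under the ordinary, low-Knightian hypothesis.
normal = {"work_on_bandit": 1.0, "work_plus_call_parents": 0.999, "tearful_goodbyes": 0.1}

# Worst-case value of each action within the high-Knightian meteor hypothesis.
meteor_worst = {"work_on_bandit": 0.0, "work_plus_call_parents": 0.05, "tearful_goodbyes": 0.5}

# Full Knightian uncertainty over meteor presence/absence: maximin over both hypotheses.
full_knightian = max(actions, key=lambda a: min(normal[a], meteor_worst[a]))

# 95/5 probabilistic mixture: each hypothesis contributes its own worst-case
# expectation, weighted by the prior, as in the mixture formula above.
mixture = max(actions, key=lambda a: 0.95 * normal[a] + 0.05 * meteor_worst[a])

print(full_knightian)  # tearful_goodbyes: the doomy worst cases dominate
print(mixture)         # work_plus_call_parents: only the nearly-free tweak survives
```

Under full Knightian uncertainty, the meteor hypothesis’s worst cases dominate, so the goodbyes win; under the 95/5 mixture, its small utility differentials get multiplied by 0.05, so only the nearly-free “call your parents” modification changes the decision.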