You’re completely right that hypotheses with unconstrained Murphy get ignored because you’re doomed no matter what you do, so you might as well optimize for just the other hypotheses where what you do matters. Your “−1,000,000 vs −999,999 is the same sort of problem as 0 vs 1” reasoning is good.
Again, you are making the serious mistake of trying to think about Murphy verbally, rather than thinking of Murphy as the personification of the “inf” part of the definition of expected value, EΨ[f] := inf_{(m,b)∈Ψ} m(f)+b, and writing actual equations. Ψ is the available set of possibilities for a hypothesis. If you really want to, you can think of this as constraints on Murphy, with Murphy picking from the available options, but it’s highly encouraged to just work with the math.
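To make the definition concrete, here’s a minimal Python sketch; representing a hypothesis Ψ as a finite list of (m, b) pairs and f as a single number is purely illustrative:

```python
# A hypothesis Psi as a finite set of (m, b) options, where each m is a
# linear functional of f (here f is just a number, for illustration).
def expected_value(psi, f):
    """E_Psi[f] := inf over (m, b) in Psi of m(f) + b."""
    return min(m(f) + b for m, b in psi)

# The "inf" does all the work Murphy is usually imagined doing:
psi = [(lambda f: 0.9 * f, 0.0),
       (lambda f: 0.2 * f, 0.3)]
print(expected_value(psi, 1.0))  # 0.5, the worse of 0.9 and 0.5
```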
For mixing hypotheses (several different sets of possibilities Ψi) according to a prior distribution ζ∈ΔN, you can write it as an expectation functional via ψζ(f):=Ei∼ζ[ψi(f)] (mix the expectation functionals of the component hypotheses according to your prior on hypotheses), or as a set via Ψζ:={(m,b) | ∃(mi,bi)∈Ψi : Ei∼ζ[(mi,bi)]=(m,b)} (the available possibilities for the mix of hypotheses are all of the form “pick a possibility from each hypothesis, then mix them together according to your prior on hypotheses”).
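A sketch of the functional form of the mixture; the component functionals and the prior below are made up:

```python
# Each psi_i is the worst-case expectation functional of hypothesis i,
# i.e. a map from actions to reals (toy components below).
def mix(psis, zeta):
    """psi_zeta(f) := E_{i ~ zeta}[psi_i(f)] = sum_i zeta_i * psi_i(f)."""
    return lambda f: sum(z * psi(f) for z, psi in zip(zeta, psis))

psi0 = lambda f: 0.0                            # "you lose no matter what"
psi1 = lambda f: 1.0 if f == "normal" else 0.4  # reality behaves normally
psi_zeta = mix([psi0, psi1], [0.3, 0.7])
print(psi_zeta("normal"))  # 0.3*0 + 0.7*1 = 0.7
```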
This is what I meant by “a constraint on Murphy is picked according to this probability distribution/prior, then Murphy chooses from the available options of the hypothesis they picked”, that Ψζ set (your mixture of hypotheses according to a prior) corresponds to selecting one of the Ψi sets according to your prior ζ, and then Murphy picking freely from the set Ψi.
Using ψζ(f):=Ei∼ζ[ψi(f)] (and noting that our choice of what to do is what determines f; we’re trying to pick the best function f), we can see that if the prior is composed of a bunch of “do this sequence of actions or bad things happen” hypotheses, the details of what you do sensitively depend on the probability distribution over hypotheses. Just like with AIXI, really.
Informal proof: if ψi(fi)≃1 and ψi(fj)≃0 (assuming j≠i), then we can see that
ψζ(fi)=Ej∼ζ[ψj(fi)]=∑j≠iζj⋅ψj(fi)+ζi⋅ψi(fi)≃ζi
and so, the best sequence of actions to do would be the one associated with the “you’re doomed if you don’t do blahblah action sequence” hypothesis with the highest prior. Much like AIXI does.
Using the same sort of thing, we can also see that if there’s a maximally adversarial hypothesis in there somewhere that’s just like “you get 0 reward, screw you” no matter what you do (let’s say this is ψ0), then we have
ψζ(fi)=Ej∼ζ[ψj(fi)]=∑j≥1ζj⋅ψj(fi)+ζ0⋅ψ0(fi)≃∑j≥1ζj⋅ψj(fi)
And so, that hypothesis drops out of the process of calculating the expected value, for all possible functions/actions. Just do a scale-and-shift, and you might as well be dealing with the conditional prior (ζ | i≠0), which a priori assumes you aren’t in the “screw you, you lose” environment.
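A sketch of the “drops out” claim with made-up numbers: adding a hypothesis whose functional is identically zero only rescales the mixture, so it never changes which action is best.

```python
zeta = [0.25, 0.45, 0.30]            # zeta[0] is the "you lose" hypothesis
psis = [lambda f: 0.0,               # psi_0: 0 reward no matter what
        lambda f: 0.8 if f == "a" else 0.2,
        lambda f: 0.1 if f == "a" else 0.9]

def psi_zeta(f):
    return sum(z * p(f) for z, p in zip(zeta, psis))

# The conditional prior (zeta | i != 0), renormalized over i = 1, 2:
rest = 1 - zeta[0]

def psi_cond(f):
    return sum((zeta[i] / rest) * psis[i](f) for i in (1, 2))

# psi_zeta is exactly rest * psi_cond: a positive rescaling, so both
# functionals rank every action the same way.
for f in ("a", "b"):
    assert abs(psi_zeta(f) - rest * psi_cond(f)) < 1e-9
```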
Hm, what about if you’ve just got two hypotheses: one where you’re like “my Knightian uncertainty scales with the amount of energy in the universe, so if there’s lots of energy available, things could be really bad, while if there’s little energy available, Murphy can’t make things bad” (ψ0), and one where reality behaves pretty much as you’d expect it to (ψ1)? And your two possible options would be “burn energy freely so Murphy can’t use it” (the choice f0, attaining a worst-case expected utility of x0 in ψ0 and x1 in ψ1), and “just try to make things good and don’t worry about the environment being adversarial” (the choice f1, attaining 0 utility in ψ0 and 1 utility in ψ1).
The expected utility of f0 (burn energy) would be ψζ(f0)=ζ0⋅ψ0(f0)+ζ1⋅ψ1(f0)=ζ0⋅x0+ζ1⋅x1
And the expected utility of f1 (act normally) would be
ψζ(f1)=ζ0⋅ψ0(f1)+ζ1⋅ψ1(f1)=ζ0⋅0+ζ1⋅1=ζ1
So “act normally” wins if ζ1≥ζ0⋅x0+ζ1⋅x1, which can be rearranged as ζ1(1−x1)≥ζ0(x0−0). I.e., you’ll act normally if the probability of “things are normal” times the loss from burning energy when things are normal exceeds the probability of “Murphy’s malice scales with the amount of available energy” times the gain from burning energy in that universe.
So, assuming you assign a high enough probability to “things are normal” in your prior, you’ll just act normally. Or, making the simplifying assumption that “burn energy” has similar expected utilities in both cases (i.e., x1≃x0), it comes down to questions like “is the utility of burning energy closer to the worst case, where Murphy has free rein, or the best case, where I can freely optimize?”
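The threshold can be checked numerically; the particular values of x0 and x1 below are made up:

```python
# x0 = utility of burning energy in the "malice scales with energy"
# world, x1 = utility of burning energy in the normal world (both toy).
x0, x1 = 0.6, 0.7

def best_action(zeta0, zeta1):
    burn = zeta0 * x0 + zeta1 * x1   # psi_zeta(f0)
    normal = zeta1 * 1.0             # psi_zeta(f1)
    return "normal" if normal >= burn else "burn"

# Equivalently: act normally iff zeta1 * (1 - x1) >= zeta0 * x0.
print(best_action(0.1, 0.9))  # normal (0.9*0.3 = 0.27 >= 0.1*0.6 = 0.06)
print(best_action(0.6, 0.4))  # burn   (0.4*0.3 = 0.12 <  0.6*0.6 = 0.36)
```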
And this is assuming there are just two options; the actual strategy selected would probably be something like “act normally, and if it looks like things are going to shit, start burning energy so it can’t be used to optimize against me”.
Note that, in particular, the hypothesis where the level of attainable badness scales with available energy is very different from the “screw you, you lose” hypothesis, since there are actions you can take that do better or worse in the “level of attainable badness scales with energy in the universe” hypothesis, while the “screw you, you lose” hypothesis just makes you lose. And both of these are very different from a “you lose if you don’t take this exact sequence of actions” hypothesis.
Murphy is not a physical being; it’s a personification of an equation. Thinking verbally about an actual Murphy doesn’t help, because you start confusing very different hypotheses. Think purely about what the actual set of probability distributions Ψi corresponding to hypothesis i looks like. I can’t stress this enough.
Also, remember, the goal is to maximize worst-case expected value, not worst-case value.