It is exactly one of those probabilities.
Can you spell out the full setup?
Okay, so let’s say you’re given some weak evidence about which world you’re in—for example, you’re asked the question after you’ve been awake for 4 hours if the coin was Tails vs. 3.5 hours if it was Heads. In the Doomsday problem, this would be like learning facts about the earth that would be different if we were about to go extinct vs. if we weren’t (we know lots of these, in fact).
So let’s say that your internal chronometer is telling you that it “feels like it’s been 4 hours” when you’re asked the question, but you’re not totally sure—let’s say that the only two options are “feels like it’s been 4 hours” and “feels like it’s been 3.5 hours,” and that your internal chronometer is correctly influenced by the world 75% of the time. So P(feels like 4 | heads) = 0.25, P(feels like 3.5 | heads) = 0.75, and vice versa for tails.
A utility-maximizing agent would then make decisions based on P(heads | feels like 4 hours), but an ADT agent has to do something else. In order to update on the evidence, an ADT agent can just weight the different worlds by the update ratio. For example, if told that the coin is more likely to land heads than tails, an ADT agent successfully updates in favor of heads.
However, what if the update ratio also depended on the anthropic probabilities (that is, SIA vs. SSA)? That would be bad—we couldn’t do the same updating thing. If our new probability is P(A|B), Bayes’ rule says that’s P(A)*P(B|A)/P(B), so the update ratio is P(B|A)/P(B). The numerator is easy—it’s just 0.75 or 0.25. Does the denominator, on the other hand, depend on the anthropic probabilities?
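To make the worry concrete: by the law of total probability, P(B) = P(B|A)*P(A) + P(B|¬A)*P(¬A), so the denominator needs a prior over the two worlds, and that prior is exactly what SSA and SIA disagree about. Here is a minimal sketch of the worry in code (the 1/2 and 1/3 priors are just stand-ins for the two anthropic answers, not part of the original setup):

```python
# Sketch: the numerator P(B|A) is fixed by the chronometer, but the
# denominator P(B) = P(B|A)P(A) + P(B|not A)P(not A) needs a prior over
# worlds -- which is exactly where SSA and SIA disagree.
p_feels4_given_heads = 0.25
p_feels4_given_tails = 0.75

for label, p_heads in [("SSA-style prior (1/2)", 1 / 2), ("SIA-style prior (1/3)", 1 / 3)]:
    p_feels4 = p_feels4_given_heads * p_heads + p_feels4_given_tails * (1 - p_heads)
    update_ratio = p_feels4_given_heads / p_feels4  # P(B|A) / P(B)
    print(f"{label}: P(feels like 4) = {p_feels4:.3f}, update ratio = {update_ratio:.3f}")
```

So the P(B|A)/P(B) form of the update really does drag the anthropic prior in through the denominator.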
If we look at the odds ratios, then P(A|B)/P(¬A|B)=P(A)/P(¬A) * P(B|A)/P(B|¬A). So as long as we have P(B|A) and P(B|¬A), it seems to work exactly as usual.
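As a quick numeric check of the odds-ratio form (using the same stand-in priors as above, just for illustration): the likelihood ratio P(B|A)/P(B|¬A) = 0.25/0.75 = 1/3 is fixed by the setup alone, and each anthropic camp multiplies its own prior odds by that same factor.

```python
# Posterior odds = prior odds * likelihood ratio.
# The likelihood ratio is anthropic-prior-free; only the prior odds differ.
likelihood_ratio = 0.25 / 0.75  # P(feels like 4 | heads) / P(feels like 4 | tails)

for label, prior_odds in [("SSA-style prior odds 1:1", 1.0), ("SIA-style prior odds 1:2", 0.5)]:
    posterior_odds = prior_odds * likelihood_ratio  # odds of heads given "feels like 4"
    print(f"{label}: posterior odds (heads:tails) = {posterior_odds:.3f}")
```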
Good idea. Though since it’s a ratio, you do miss out on a scale factor—in my example, you don’t know whether to scale the heads world by 1⁄3 or the tails world by 3. Or mess with both by factors of 3⁄7 and 9⁄7, who knows?
Scaling by the ratio does let you compare options across the two worlds—for example, if you know you would pay 1 in the tails world, you now know you would pay 1⁄3 in the heads world. But if you don’t know something along those lines, that missing scale factor seems like it would become an actual problem.
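To see both halves of that point numerically (the three scalings below are just the ones from the example above):

```python
# All three scalings agree on the relative weight of heads vs. tails (1/3),
# which is enough for cross-world comparisons, but they disagree on the
# overall level -- that is the missing scale factor.
scalings = {
    "scale heads by 1/3": (1 / 3, 1.0),
    "scale tails by 3": (1.0, 3.0),
    "scale both by 3/7 and 9/7": (3 / 7, 9 / 7),
}

for label, (w_heads, w_tails) in scalings.items():
    print(f"{label}: heads/tails weight ratio = {w_heads / w_tails:.3f}, "
          f"total weight = {w_heads + w_tails:.3f}")
```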
The scale ratio doesn’t matter—you can recover the probabilities from the odds ratios (and the fact that they must sum to one).
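Mechanically, that recovery is just normalization: if o is the odds of heads, then P(heads) = o/(1+o). A one-line check (the posterior odds plugged in are the ones from the stand-in priors earlier, so they aren’t claims about which anthropic answer is right):

```python
# Recover a probability from odds by normalizing: P(A) = o / (1 + o).
def prob_from_odds(o: float) -> float:
    return o / (1 + o)

for label, posterior_odds in [("SSA-style posterior odds 1:3", 1 / 3),
                              ("SIA-style posterior odds 1:6", 1 / 6)]:
    print(f"{label}: P(heads | feels like 4) = {prob_from_odds(posterior_odds):.3f}")
```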
I think you’re confusing the odds ratio (P(A)/P(¬A) * P(B|A)/P(B|¬A)), which ADT can’t touch, with the update on the odds ratio (P(B|A)/P(B|¬A)), which has to be used with a bit more creativity.