And it looks to me like we’re not actually using the last one: am I not also entangled with agents in universes where Omega is lying about whether or not it would have provided me with $1,000, and in those cases, shouldn’t I refuse to give it $100?
I found that handling the Counterfactual Mugging “correctly” (according to Eliezer’s intuitive argument of retroactively acting on rational precommitments) requires different machinery from the other problems. You’re right that we don’t seem to be “using” the last one: if we act under weak entanglement, we won’t pay Omega the $100.
The problem is that in Eliezer’s original specification of the problem, he explicitly noted that, unknown to us as the player, the coin is basically weighted. Omega isn’t a liar, but there isn’t any significant measure of MWI timelines in which the coin comes up heads and Parallel!Us actually receives the money. We’re being asked to decide in a way that favors a version of our agent who never exists outside Omega’s imagination.
I understand the notion behind this—act now according to precommitments it would have been rational to make in the past—but my own intuitions label giving Omega the money an outright loss of $100 with no real purpose, given the knowledge that the coin cannot come up heads.
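To put rough numbers on that intuition (using the $1,000 and $100 figures from the quoted question, and letting p be the probability the coin actually comes up heads), a policy of paying is worth

$$E[\text{pay}] = 1000\,p - 100\,(1 - p), \qquad E[\text{refuse}] = 0,$$

so with a fair coin (p = 1/2) paying nets an expected $450, but with the weighted coin of this variant (p ≈ 0) it is just an expected loss of $100.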
This might just mean I have badly-trained intuitions! After all, if I switch mental “scenarios” to Omega being not merely a friendly superintelligence or Time Lord but an actual Trickster Matrix Lord, then it suddenly seems plausible that I am the prediction copy, that “real me” might still have a chance at the $1,000, and that I should therefore pay Omega my imaginary, worthless simulated money.
The problem is that this presupposes my being willing to believe in some other universe entirely outside my own (i.e., outside the simulation) in which Omega’s claim to have already flipped the coin and gotten tails is simply not true. It makes Omega at least a partial liar. It confuses the hell out of me, personally.
Another version of the entanglement proposition might be able to handle this, but it sacrifices the transitivity of entanglement (at what cost, I haven’t yet worked out):
| ent : (forall (b: Beliefs), a1 b d1 = a2 b d1 /\ a1 b d2 = a2 b d2) -> entangled a1 a2 d1 d2.
On the upside, unlike “strong entanglement”, it won’t trivially lose on the Prisoners’ Dilemma.
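For concreteness, here is a minimal sketch of how that constructor could sit inside a full definition. The names Beliefs, Dilemma, Action, and Agent are my placeholders rather than necessarily what the real development uses; I’m assuming an agent is a function from a belief state and a dilemma to an action.

```coq
(* Sketch only: Beliefs, Dilemma, and Action are placeholder types, and an
   Agent is assumed to be a function from a belief state and a dilemma to
   an action. *)
Parameter Beliefs Dilemma Action : Type.
Definition Agent := Beliefs -> Dilemma -> Action.

(* a1 and a2 are entangled on the pair (d1, d2) exactly when they choose
   identically on *both* dilemmas under every possible belief state. *)
Inductive entangled (a1 a2 : Agent) (d1 d2 : Dilemma) : Prop :=
  | ent : (forall (b : Beliefs), a1 b d1 = a2 b d1 /\ a1 b d2 = a2 b d2) ->
          entangled a1 a2 d1 d2.
```

Writing a1, a2, d1, and d2 as parameters of the Inductive is just one way to match the quoted constructor; if they are section variables in the actual development, only the surrounding boilerplate changes.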
That is, there seems to me to be a difference between logical uncertainty and indexical uncertainty. It makes sense to entangle across indexical uncertainty, but it doesn’t make sense to entangle across logical uncertainty.
Assume that the causal Bayes nets given as input to our decision algorithm contain only indexical uncertainty.