I think that in Counterfactual Mugging with a logical coin, unlike CM with a quantum coin, it is incorrect to abstract away from Omega’s internal algorithm. In CM with a quantum coin, the coin toss screens off Omega’s causal influence; with a logical coin, it does not.
Good point, but what if you care only about a single world program, in which Omega is hardcoded to use the millionth digit of pi as the coinflip, and you have logical uncertainty about that digit?
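For concreteness, here is a minimal Python sketch of such a hardcoded world program. The `agent` callable and the $100/$10,000 payoffs are stand-ins for whatever the actual problem specifies, `n` defaults to a cheap digit since the millionth is expensive to compute, and mpmath is assumed for the digits:

```python
from mpmath import mp

def pi_digit(n: int) -> int:
    """Return the n-th decimal digit of pi after the point (1-indexed)."""
    mp.dps = n + 10                # working precision plus guard digits
    return int(str(mp.pi)[n + 1])  # str(mp.pi) == "3.1415...", so skip "3."

def world(agent, n: int = 1000) -> int:
    """Hardcoded logical-coin CM: the 'coinflip' is the parity of pi's n-th digit."""
    heads = pi_digit(n) % 2 == 1   # "heads" iff the digit is odd
    if heads:
        # Omega asks for $100; agent() returns True iff it pays.
        return -100 if agent() else 0
    # On "tails", Omega simulates the agent and rewards it iff it would have paid.
    return 10_000 if agent() else 0
```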
Then I’ll have to know (or have a probability distribution over) where the world program comes from. If it was created by Omega-2, then I’m back at square one. If the world program is the laws of physics, then, I suppose, logical uncertainty is equivalent to logical probability, as in a regular (non-quantum) coin toss. But then the CM problem is a very poor model of typical laws of physics. And, with laws of physics, you’ll never need to accept bargains conditioned on 2+2=5...
But you might need to accept bargains conditioned on more complicated logical facts, and the bargains may involve future versions of you who will find these logical facts as trivial as 2+2=4.
If we want decision theory to answer the question “what kind of AI should I write?”, then using “logical priors” is very likely to be the right answer. But Wei has a different way of looking at the problem which seems to make it much harder: he asks “what decision theory should I be following?”
I think I need a different problem as an intuition pump for this; I can’t reformulate the CM problem satisfactorily. It all comes back to Omega’s original motivation. Either that motivation was “fair”, and so equivalent to a quantum coin toss at some point in the causal chain, or it wasn’t. If it was fair, then the problem is equivalent to regular CM, so I should accept the bargain and not “update” later, even for 2+2=5. If it wasn’t, then everything depends on what it actually was...
Quantum juju has nothing to do with decision theory. I guess I should have included it in the post about common mistakes. What would you say about this problem if you lived in a deterministic universe and never heard about quantum physics? You know that boundedly rational agents in such universes can observe things that look awfully similar to random noise, right?
Well, tossing a quantum coin is a simple way to provably sever causal links. In a deterministic universe with boundedly rational agents, I suppose, there could be cryptographic schemes producing equivalent guarantees.
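For instance, a commit-reveal scheme would let Omega provably fix its choice before examining the logical fact. This is my own illustration of what such a guarantee could look like, not part of the original problem; a minimal sketch using a standard hash commitment:

```python
import hashlib
import secrets

def commit(choice: str) -> tuple[bytes, bytes]:
    """Bind Omega to a choice before the logical fact is examined."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + choice.encode()).digest()
    return digest, nonce  # publish digest now, keep nonce secret until reveal

def reveal_ok(digest: bytes, nonce: bytes, choice: str) -> bool:
    """Later, anyone can check that the choice was fixed in advance."""
    return hashlib.sha256(nonce + choice.encode()).digest() == digest

# Omega publishes the digest before looking at the digit of pi; the
# hiding and binding properties play the role of the quantum coin.
digest, nonce = commit("check oddness")
assert reveal_ok(digest, nonce, "check oddness")
```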
What if I reformulate as follows: Omega says that it tossed a coin to choose whether to check the oddness or the evenness of the millionth digit of pi. The coin indicated “oddness”, so bla bla bla.
The properties of the problem appear the same as in the logical-coin CM, except that now the possible causal influence from Omega is severed.
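A toy simulation (my illustration) of why the influence is severed: whatever the fixed, unknown parity of the digit, the physical coin alone sets the odds of the check passing at 1/2:

```python
import random

def check_passes(digit_is_odd: bool) -> bool:
    """Omega's fair coin picks which parity to test."""
    test_oddness = random.random() < 0.5
    return digit_is_odd == test_oddness

# The digit's parity is fixed but unknown; either way, the check
# passes with probability 1/2, independent of the digit itself.
trials = 100_000
for parity in (True, False):
    rate = sum(check_passes(parity) for _ in range(trials)) / trials
    print(f"digit odd = {parity}: check passes ~{rate:.2f}")  # ≈ 0.50
```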
If I’m Omega, and I decide to check whether the 10^10-th digit of pi is 0, 2, or 5, and reward you if it is… how would you feel about that? I chose those numbers because we have ten fingers, and I chose reward because “e” is the 5th letter of the alphabet (I went through the letters of “reward” and “punish” until I found one that was the 10th (J), 5th (E), or 2nd (B) letter of the alphabet).
Or a second variant: I implement the logical-coin CM that can be described most compactly in Python.
If it’s true that you chose the numbers because we have ten fingers (and because of nothing else), and I can verify that, then I feel I should behave as if the event is random with probability 0.3, even if it was the 10th digit of pi rather than the 10^10-th.
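(The 0.3 is just three target digits out of ten, assuming the digits of pi behave as if uniform. As a throwaway empirical sanity check of that assumption, with mpmath and a sample size of my own choosing:)

```python
from mpmath import mp

mp.dps = 10_010                  # enough precision for 10,000 digits
digits = str(mp.pi)[2:10_002]    # first 10,000 digits after the point
hits = sum(d in "025" for d in digits)
print(hits / len(digits))        # ≈ 0.3, as the uniform model predicts
```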
Yep—welcome to logical uncertainty!
I never had anything against logical uncertainty :)
The point, though, is that this setup—where I can verify Omega’s honest attempt at randomness—does not produce the paradoxes. In particular, it does not allow someone to pump money out of me. And so it seems to me that I can and should “keep paying up in Counterfactual Mugging even when the logical coinflip looks as obvious as 2+2=4.”