By “unsolvable” I mean that you’re screwed over in final outcomes, not that TDT fails to have an output.
The interesting part of the problem is that, whatever you decide, you deduce facts about the background such that you know that what you are doing is the wrong thing. However, if you did anything differently, you would have to make a different deduction about the background facts, and would again know that what you were doing was the wrong thing. Since we don’t believe that our decision is capable of affecting the background facts, the background facts ought to be a fixed constant, and we should be able to alter our decision without affecting them; however, as soon as we do so, our inference about the unalterable background facts changes. It’s not 100% clear how to square this with TDT.
This is like trying to decide whether this statement is true:
“You will decide that this statement is false.”
There is nothing paradoxical about this statement. It is either true or false. The only problem is that you can’t get it right.
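A minimal sketch of why you can’t get it right, enumerating both possible verdicts (the code is just my illustration of the logic):

```python
# "You will decide that this statement is false."
# The statement is true exactly when your verdict on it is "false".
for decision in (True, False):               # your verdict on the statement
    statement_is_true = (decision is False)  # what the statement then says
    verdict = "right" if decision == statement_is_true else "wrong"
    print(f"decide {decision} -> statement is {statement_is_true} -> you are {verdict}")
# Both branches print "wrong": the statement has a definite truth value either
# way, but it is always the opposite of whatever you decide.
```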
Actually, there is an optimal solution to this dilemma. Rather than using any internal process to decide, base the decision on a truly random process: that gives a 50% chance of survival. If you base your decision on a quantum randomness source, in principle no simulation can predict your choice (or rather, a complete simulation would correctly predict that you fail in 50% of possible worlds).
Knowing how to use randomness against an intelligent adversary is important.
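A toy sketch of that claim (the function names and the stand-in for the quantum bit are my own simplifying assumptions, not anything from the thread): a predictor that reruns the agent’s code from the same initial state is right every time against a seeded PRNG, but only about half the time against a bit it doesn’t share.

```python
import random

# Toy model: the predictor "simulates" the agent by rerunning the same code
# from the same initial state (here, the same seed).  Against a pseudorandom
# agent it is always right; against a choice driven by entropy outside the
# simulation (modelled as a bit the predictor never sees) it is right only
# about half the time.

def prng_agent(seed):
    return random.Random(seed).randint(0, 1)

def predictor(seed):
    # Has the agent's code and full initial state, so it just reruns it.
    return random.Random(seed).randint(0, 1)

TRIALS = 100_000
fresh = random.Random()   # stands in for a quantum coin the predictor lacks

prng_hits = sum(predictor(seed) == prng_agent(seed) for seed in range(TRIALS))
coin_hits = sum(predictor(seed) == fresh.randint(0, 1) for seed in range(TRIALS))

print(prng_hits / TRIALS)   # 1.0   -- pseudorandomness buys nothing
print(coin_hits / TRIALS)   # ~0.5  -- an unshared coin is guessed only at chance
```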
Gary postulated an infallible simulator, which presumably includes your entire initial state and all pseudorandom algorithms you might run. Known quantum randomness methods can only amplify existing entropy, not manufacture it ab initio. So you have no recourse to coinflips.
EDIT: Oops! pengvado is right. I was thinking of the case discussed here, where the random bits are provided by some quantum black box.
Quantum coinflips work even if Omega can predict them. It’s like a branch-both-ways instruction. Just measure some quantum variable, then measure a noncommuting variable, and voila, you’ve been split into two or more branches that observe different results and thus can perform different strategies. Omega’s perfect predictor tells it that you will do both strategies, each with half of your original measure. There is no arrangement of atoms (encoding the right answer) that Omega can choose in advance that would make both of you wrong.
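For concreteness, a small sketch of the noncommuting-measurement point (my own illustration): prepare a definite σz state, so the first measurement is perfectly predictable, and the Born probabilities for the follow-up σx measurement come out exactly 1/2 each.

```python
import numpy as np

# Start in |0>, a definite sigma_z = +1 state: the first (sigma_z) measurement
# is perfectly predictable.  The follow-up sigma_x measurement then splits you
# into two branches of equal measure -- no single pre-chosen answer is correct
# in both.

ket0 = np.array([1.0, 0.0])                 # sigma_z eigenstate, eigenvalue +1

plus  = np.array([1.0,  1.0]) / np.sqrt(2)  # sigma_x eigenstate, +1
minus = np.array([1.0, -1.0]) / np.sqrt(2)  # sigma_x eigenstate, -1

p_plus  = abs(np.dot(plus,  ket0)) ** 2
p_minus = abs(np.dot(minus, ket0)) ** 2

print(p_plus, p_minus)   # 0.5 0.5 -- each branch carries half the original measure
```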
I agree, and for this reason whenever I make descriptions I make Omega’s response to quantum smart-asses and other randomisers explicit and negative.
If Omega wants to smack down the use of randomness, I can’t stop it. But there are a number of game theoretic situations where the optimal response is random play, and any decision theory that can’t respond correctly is broken.
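Matching pennies is the standard example; a quick sketch (payoffs written for the “matcher”) showing that every deterministic strategy is fully exploitable, while the 50/50 mix guarantees expected value 0 against any reply:

```python
import numpy as np

# Matching pennies, payoffs for the "matcher": +1 if the two coins match,
# -1 otherwise.  Rows = matcher plays H/T, columns = opponent plays H/T.
payoff = np.array([[ 1, -1],
                   [-1,  1]])

# Any pure (deterministic, hence predictable) strategy can be exploited for -1.
for row, name in enumerate("HT"):
    print(f"always {name}: worst-case value {payoff[row].min()}")   # -1 both times

# The uniform mixed strategy is unexploitable: expected value 0 vs. either reply.
mixed = np.array([0.5, 0.5])
print("50/50 mix vs. each reply:", mixed @ payoff)                  # [0. 0.]
```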
Does putting the ‘quantum’ in a black box change anything?
Not sure I know which question you’re asking:
1. A black box RNG is still useless despite being based on a quantum mechanism, or
2. A quantum device will necessarily manufacture random bits.
Counterexamples to 2 are pretty straightforward (quantum computers), so I’m assuming you mean 1. I’m operating at the edge of my knowledge here (as my original mistake shows), but I think the entire point of Pironio et al.’s paper was that you can verify random bits obtained from an adversary, subject to the conditions:
1. Bell inequality violations are observable (i.e., it’s a quantum generator).
2. The adversary can’t predict your measurement strategy.
Am I misunderstanding something?
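As a rough illustration of the statistic condition 1 is pointing at (this is just the textbook CHSH quantity, not the actual protocol from the paper): with singlet correlations E(a, b) = −cos(a − b), any predetermined-output model satisfies |S| ≤ 2, while the settings below give |S| = 2√2.

```python
import numpy as np

# Textbook CHSH check: singlet-state correlations are E(a, b) = -cos(a - b).
# Outputs that were fixed in advance (i.e., predictable by the adversary)
# satisfy |S| <= 2; the quantum value with these settings is 2*sqrt(2).

def E(a, b):
    return -np.cos(a - b)

a0, a1 = 0.0, np.pi / 2              # first party's two measurement settings
b0, b1 = np.pi / 4, 3 * np.pi / 4    # second party's two measurement settings

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))   # ~2.828 > 2: the observed bits can't all have been predetermined
```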
Oh ok. So it’s unsolvable in the same sense that “Choose red or green. Then I’ll shoot you.” is unsolvable. Sometimes choice really is futile. :) [EDIT: Oops, I probably misunderstood what you’re referring to by “screwed over”.]
Yes, assuming that you’re the sort of algorithm that can (without inconsistency) know its own choice here before the choice is executed.
If you’re the sort of algorithm that may revise its intended action in response to the updated deduction, and if you have enough time left to perform the updated deduction, then the (previously) intended action may not be reliable evidence of what you will actually do, so it fails to provide sound reason for the update in the first place.