If you assume that you are a physical system and that the alien is capable of modeling that system under a variety of circumstances, there is no contradiction. The alien simply has a device that simulates you faithfully enough to reliably predict what you will do when presented with the problem. Causality isn’t running backwards; the alien’s model is just close enough to reality that it can predict your behavior in advance. So it’s:
(You[t0])>(Alien’s Model of You)>(Set Up Box)>(You[t1])
If the alien’s model of you is accurate enough, it will pick out the decision you will make in advance (or at least is extraordinarily likely to), but that doesn’t violate causality any more than my offering to take my girlfriend out for Chinese does because I predict she will say yes. If accurate models broke causality, then causality would have been snuffed out of existence somewhere around the time the first brain formed, maybe earlier.
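To make that forward-in-time chain concrete, here is a minimal Python sketch (the names are all made up, and the one-box/two-box framing plus the small model-error rate are assumptions on my part): the alien runs a copy of the decision procedure before setting up the box, so the prediction is fixed first and nothing ever flows backwards.

```python
import random

def agent_decision(seed):
    """Stand-in for You: a deterministic decision procedure.
    Given the same state (seed), it always returns the same choice."""
    rng = random.Random(seed)
    return 'one-box' if rng.random() < 0.9 else 'two-box'

def alien_predict(seed, noise=0.01):
    """The alien's model of You: the same procedure run in advance,
    with a small error rate standing in for model mismatch."""
    predicted = agent_decision(seed)   # (You[t0]) > (Alien's Model of You)
    if random.random() < noise:        # the model isn't quite perfect
        predicted = 'two-box' if predicted == 'one-box' else 'one-box'
    return predicted

def run_trial(seed):
    prediction = alien_predict(seed)            # model runs first
    opaque_box_full = prediction == 'one-box'   # (Set Up Box)
    actual = agent_decision(seed)               # (You[t1]) chooses afterwards
    return prediction == actual

# The prediction is settled before the choice is made, yet matches it
# almost every time: ordinary forward causality, no paradox.
hits = sum(run_trial(s) for s in range(10_000))
print(f"prediction accuracy: {hits / 10_000:.3f}")
```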
You don’t seem to understand what I’m getting at. I’ve already addressed this, ineptly but at some length. If causality does not run backwards, then the actual rules of the alien’s predictive method (the kind of input it requires from reality, its accuracy, and so on) become the focus of calculation. If nothing is known about any of that, then the problem has not been specified in enough detail to propose customized solutions, and we can only make general guesses at the optimal course of action. (lol, the hubris of trying to outsmart unimaginably advanced technology as though it were a crude lie detector reminds me of Artemis Fowl. The third book was awesome.) I only mentioned one ungameable system to explain why I ruled it out as a trivial consideration in the first place. (Sorry, it isn’t Sunday. No incomprehensible ranting today, only tangents involving children’s literature.)
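As a sketch of what “the focus of calculation” looks like once the predictor’s accuracy is treated as a known parameter, the choice reduces to an expected-value comparison. The payoff amounts below are the standard Newcomb ones ($1,000,000 in the opaque box, $1,000 in the transparent one), which I’m assuming since they haven’t been stated here.

```python
def expected_values(p):
    """p = probability the alien correctly predicts your actual choice."""
    ev_one_box = p * 1_000_000                # opaque box filled iff predicted to one-box
    ev_two_box = (1 - p) * 1_000_000 + 1_000  # filled only when the alien mispredicts
    return ev_one_box, ev_two_box

for p in (0.5, 0.9, 0.999):
    one, two = expected_values(p)
    best = 'one-box' if one > two else 'two-box'
    print(f"p={p}: EV(one-box)={one:>9,.0f}  EV(two-box)={two:>9,.0f}  -> {best}")
```

Under those assumed payoffs the break-even accuracy is p = 1001/2000 ≈ 0.5005, so any predictor meaningfully better than a coin flip makes one-boxing the better bet; everything hinges on knowing p, which is exactly the point about underspecification.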