Your past, Omega-observed self can cause both Omega’s prediction and your future choice without violating causality.
What you’re objecting to is your being predictable.
My past self is not the cause of my future choices; it is one of many distal causes of them. Similarly, it is not the cause of Omega’s prediction. The direct cause of my future choice is my future self and his future situation, and Omega is going to rig that situation so that my future self is screwed if he makes the usual causal analysis.
Predictable is fine. People predict my behavior all the time, and in general, it’s a good thing for both of us.
As far as Omega goes, I object to his toying with inferior beings.
We could probably rig up something to the same effect with dogs, using their biases and limitations against them so that we could predict their choices, and arranging it so that whenever they did the normally right thing, they got screwed. I think that would be a rather malicious and sadistic thing to do to a dog, and I consider the same done to me just as malicious.
As far as this “paradox” goes, I object to the smuggled recursion, which is just another game of “everything I say is a lie”. I similarly object to other “superrationality” ploys. I also object to the lack of explicit Bayesian update analysis. Talky talky is what keeps a paradox going; serious analysis makes one’s assumptions explicit.
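To make that concrete, here is a minimal sketch of the kind of explicit expected-value analysis I mean, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor that is right with probability p regardless of your choice; the payoffs and the code are illustrative assumptions on my part, nothing upthread fixes them.

```python
# Expected-value comparison for Newcomb's problem with the assumptions
# made explicit: standard payoffs ($1,000,000 opaque box, $1,000
# transparent box) and a predictor correct with probability p.

def expected_values(p: float) -> tuple[float, float]:
    """Return (EV of one-boxing, EV of two-boxing) for predictor accuracy p."""
    # One-boxing: the opaque box is filled iff the prediction was "one-box".
    ev_one_box = p * 1_000_000
    # Two-boxing: you always get the $1,000; the opaque box is filled
    # only when the predictor misfires (predicted one-box, prob 1 - p).
    ev_two_box = 1_000 + (1 - p) * 1_000_000
    return ev_one_box, ev_two_box

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.999):
        one, two = expected_values(p)
        print(f"p = {p}: one-box EV = ${one:,.0f}, two-box EV = ${two:,.0f}")
```

Under those assumptions, one-boxing has the higher expected value whenever p > 0.5005, which is exactly the rigging I object to: the better the prediction, the worse the usual causal analysis comes out.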
The obvious difference between these hypotheticals is that you’re smart enough to figure out the right thing to do in this novel situation.