To me, Newcomb’s problem seemed like a contrived trick to punish CDT, and any other decision theory seemed just as likely to run into some other strange scenario designed to punish it.
David Wolpert, of “No Free Lunch Theorem” fame, was one of my favorite researchers back in the 90s. If I remember it right, part of the No Free Lunch Theorem for generalizers was that for any world where your generalizer worked, there would be another world where it didn’t. The issue was the fit of your generalizer to the universe you were in.
Has anyone actually written out the Bayesian updating for Newcomb? It should take quite a lot of evidence to get me to give up on causality as it stands.
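Here’s a rough sketch of the kind of update I have in mind, with made-up numbers for the prior and for Omega’s accuracy (nothing in the problem statement pins them down):

```python
# Rough sketch of the Bayesian update I'm asking for. Every number here is a
# placeholder of mine; the point is only that the conclusion hangs on the numbers.

prior_predictor = 1e-9            # prior that Omega genuinely predicts my choice
prior_chance = 1 - prior_predictor

p_correct_if_predictor = 0.99     # assumed accuracy if Omega is a real predictor
p_correct_if_chance = 0.5         # accuracy if Omega is just guessing

n_successes = 100                 # suppose I watch Omega call 100 games correctly

# Likelihood of that track record under each hypothesis
like_predictor = p_correct_if_predictor ** n_successes
like_chance = p_correct_if_chance ** n_successes

# Bayes: posterior odds = prior odds * likelihood ratio
posterior_odds = (prior_predictor / prior_chance) * (like_predictor / like_chance)
posterior_predictor = posterior_odds / (1 + posterior_odds)

print(f"P(genuine predictor | 100 successes) = {posterior_predictor:.9f}")
# With these placeholders, 100 observed successes already swamps a
# one-in-a-billion prior; with a much shorter track record it doesn't.
```

Whether that update goes through depends entirely on how much weight you give the hypothetical track record against everything else you already believe about prediction.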
As it turns out, looking at the Newcomb’s Paradox Wikipedia page, Wolpert was on the job for this problem, pointing out that “It is straightforward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions for the underlying Bayes net.” Yes, that’s about my feeling. A hypothetical is constructed which contradicts something for which we have great evidence. Choosing to overturn old conclusions on the basis of new evidence is a matter of the probabilities you’ve assigned to the different and contradictory theories.
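To make that inconsistency concrete, here’s a minimal sketch of the two expected-value calculations under the two causal assumptions; the payoffs are the standard ones, and the 0.99 accuracy is again a placeholder of mine:

```python
# Minimal sketch of the two mutually inconsistent Bayes-net assumptions.
# Payoffs are the standard Newcomb amounts; the 0.99 accuracy is my placeholder.

ACCURACY = 0.99                 # assumed P(Omega predicted X | I choose X)
BIG, SMALL = 1_000_000, 1_000

def ev_evidential(one_box: bool) -> float:
    """Net where the prediction is statistically tied to the choice."""
    p_big_present = ACCURACY if one_box else 1 - ACCURACY
    return (0 if one_box else SMALL) + p_big_present * BIG

def ev_causal(one_box: bool, p_big_already_there: float) -> float:
    """Net where the prediction is fixed before, and independent of, the choice."""
    return (0 if one_box else SMALL) + p_big_already_there * BIG

print("evidential net:", ev_evidential(True), "vs", ev_evidential(False))
# one-boxing wins: 990000.0 vs 11000.0
for p in (0.0, 0.5, 1.0):
    print(f"causal net, P(big box filled)={p}:",
          ev_causal(True, p), "vs", ev_causal(False, p))
# two-boxing wins by exactly SMALL for every fixed p -- the two nets disagree
# about what my choice can influence, which is the inconsistency Wolpert means.
```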
Really nothing to see here. Hypothesizing strong evidence that contradicts something you’ve assigned high probability to naturally feels confusing. Of course it does.
Your past, Omega-observed self can cause both Omega’s prediction and your future choice without violating causality.
What you’re objecting to is your being predictable.
My past self is not the cause of my future choices; it is one of many distal causes of them. Similarly, it is not the cause of Omega’s prediction. The direct cause of my future choice is my future self and his future situation, and Omega is going to rig that situation so that my future self is screwed if he makes the usual causal analysis.
Being predictable is fine. People predict my behavior all the time, and in general it’s a good thing for both of us.
As far as Omega goes, I object to his toying with inferior beings.
We could probably rig up something to the same effect with dogs, using their biases and limitations against them so that we can predict their choices, and arrange things so that whenever they did the normally right thing, they got screwed. I think that would be a rather malicious and sadistic thing to do to a dog, just as I consider the same thing done to me.
As far as this “paradox” goes, I object to the smuggled recursion, which is just another game of “everything I say is a lie”. I similarly object to other “superrationality” ploys. I also object to the lack of explicit Bayesian update analysis. Talky talky is what keeps a paradox going; serious analysis makes one’s assumptions explicit.
The obvious difference between these hypotheticals is that you’re smart enough to figure out the right thing to do in this novel situation.