Newcomb-like problems arise when there is a causal thread passing through your cognitive algorithm which produces the correlation. There is no causality running through your cognitive algorithm to the migraine here. The author doesn't know what a Newcomb-like problem is.
Some authors define “Newcomblike problem” as one that brings evidential and decision theory into conflict, which this does.
So… in Newcomb's problem, evidential says one-box, causal says two-box, and causal clearly fails.
In the chocolate problem, evidential says avoid the chocolate, causal says eat it, and evidential clearly fails.
Thus neither theory is adequate.
Is that right?
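For concreteness, here is a minimal numerical sketch of the chocolate case; the probabilities and utilities are illustrative assumptions, not taken from this thread. It shows why conditioning on the act (evidential) penalizes eating, while intervening on the act (causal) does not:

```python
# Common-cause structure of the chocolate problem: an oncoming migraine
# causes a chocolate craving (and hence eating) AND the later migraine;
# eating has no causal effect on the migraine. All numbers are
# illustrative assumptions.

P_MIGRAINE = 0.1              # prior: a migraine is already on its way
P_EAT_GIVEN_MIGRAINE = 0.9    # the craving makes eating likely
P_EAT_GIVEN_NO_MIGRAINE = 0.3
U_CHOCOLATE = 1               # small pleasure from eating
U_MIGRAINE = -100             # large disutility of the migraine


def edt_value(eat: bool) -> float:
    """EDT treats the act as evidence: compute P(migraine | act) by Bayes."""
    p_eat = (P_EAT_GIVEN_MIGRAINE * P_MIGRAINE
             + P_EAT_GIVEN_NO_MIGRAINE * (1 - P_MIGRAINE))
    p_act = p_eat if eat else 1 - p_eat
    p_act_given_m = P_EAT_GIVEN_MIGRAINE if eat else 1 - P_EAT_GIVEN_MIGRAINE
    p_m_given_act = p_act_given_m * P_MIGRAINE / p_act
    return (U_CHOCOLATE if eat else 0) + p_m_given_act * U_MIGRAINE


def cdt_value(eat: bool) -> float:
    """CDT intervenes on the act: eating cannot cause the migraine,
    so P(migraine) stays at its prior."""
    return (U_CHOCOLATE if eat else 0) + P_MIGRAINE * U_MIGRAINE


for eat in (True, False):
    print(f"eat={eat!s:5}  EDT: {edt_value(eat):7.2f}  CDT: {cdt_value(eat):7.2f}")
# EDT: eat=-24.00, abstain= -1.56  -> avoid the chocolate
# CDT: eat= -9.00, abstain=-10.00  -> eat the chocolate
```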
I assume "evidential and decision theory" is a typo for "evidential and causal decision theories."
Evidential decision theory wins for the wrong reasons, and causal decision theory just fails.
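A parallel sketch for Newcomb, again with assumed numbers (a 99%-accurate predictor and the standard $1,000,000 / $1,000 payoffs, neither specified in this thread): conditioning on the choice tracks the predictor's accuracy, so evidential recommends one-boxing, while causal holds the box contents fixed, so two-boxing dominates for every fixed probability that the opaque box is full.

```python
# Newcomb's problem with an assumed 99%-accurate predictor; payoffs are
# the standard illustrative ones, not anything specified in this thread.

ACCURACY = 0.99      # P(the predictor predicted whatever you actually do)
OPAQUE = 1_000_000   # filled iff one-boxing was predicted
TRANSPARENT = 1_000  # always available to a two-boxer


def edt_value(one_box: bool) -> float:
    """EDT: your choice is evidence about the prediction, so
    P(opaque box full | one-box) = ACCURACY."""
    p_full = ACCURACY if one_box else 1 - ACCURACY
    return p_full * OPAQUE + (0 if one_box else TRANSPARENT)


def cdt_value(one_box: bool, p_full: float) -> float:
    """CDT: the box is already full or empty; p_full is fixed regardless
    of the act, so two-boxing adds TRANSPARENT no matter what."""
    return p_full * OPAQUE + (0 if one_box else TRANSPARENT)


print("EDT:", edt_value(True), "vs", edt_value(False))  # one-boxing wins
for p in (0.0, 0.5, 1.0):  # two-boxing dominates for every fixed p_full
    print(f"CDT (p_full={p}):", cdt_value(True, p), "vs", cdt_value(False, p))
```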
But evidential actually tells you not to eat the chocolate? That’s a pretty spectacular failure mode—it seems like it could be extended to not taking your loved ones to the hospital because people tend to die there.
Yeah, that was awkwardly worded; I was only referring to Newcomb's problem.