In the smoking lesion problem, I consider two worlds: in one I have the lesion, in the other I don't, weighted by p and 1-p. That's just how I process uncertainty. Then I apply my predictions to both worlds, given my action, and I obtain results which I weight by p and 1-p (I have never seen the possible worlds interact). Then I can decide on an action, assuming 0 < p < 1. I don't even need to know p, and updates to my estimate of p that result from my actions don't change the decision.
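Here is a minimal sketch of that two-worlds calculation; the specific payoff numbers (enjoyment of smoking, cost of cancer) are hypothetical illustrations, not from the original, but they make the point: the preferred action comes out the same for every 0 < p < 1.

```python
# A minimal sketch of the two-worlds calculation for the smoking lesion.
# The payoff numbers are hypothetical illustrations.

def expected_utility(action, p_lesion):
    """Weight the payoff of `action` across the lesion / no-lesion worlds."""
    def payoff(has_lesion):
        u = 0.0
        if action == "smoke":
            u += 1_000          # enjoyment of smoking (hypothetical utility)
        if has_lesion:
            u -= 1_000_000      # cost of cancer, caused by the lesion alone,
                                # not by the act of smoking
        return u

    return p_lesion * payoff(True) + (1 - p_lesion) * payoff(False)

# The preferred action is the same for every 0 < p < 1, so the exact value
# of p (or updates to it caused by my action) never changes the decision.
for p in (0.01, 0.5, 0.99):
    smoke = expected_utility("smoke", p)
    abstain = expected_utility("abstain", p)
    print(p, "smoke" if smoke > abstain else "abstain")
```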
In Newcomb's problem, I'm inclined to do exactly the same thing: let p be the probability that one-boxing was predicted; then one-box < two-box, since 1,000,000·p + 0·(1-p) < 1,001,000·p + 1,000·(1-p). And I am totally going to do this if I am being predicted based on a psychology test I took back in elementary school, or based on genetics. But I get told that the 1,001,000·p and 0·(1-p) outcomes never happen, i.e. I get told that the equation is wrong, and if I assign high enough confidence to that, higher than to my equation, I can strike the 1,001,000·p and 0·(1-p) terms out of the equation (and get some nonsense, which I fix by removing the probabilities altogether), deciding to one-box as the best effort I can make when I'm told that my equation won't work and I don't quite know why.
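A sketch of the same comparison for Newcomb's problem, using the $1,000 / $1,000,000 / $1,001,000 payoffs quoted above; the second function shows what is left after striking out the terms that a reliable predictor rules out.

```python
# Newcomb's problem: the naive two-worlds calculation, and the corrected one.

def naive_evs(p):
    """EVs when p = probability that one-boxing was predicted."""
    one_box = 1_000_000 * p + 0 * (1 - p)
    two_box = 1_001_000 * p + 1_000 * (1 - p)
    return one_box, two_box        # two-boxing wins for every 0 < p < 1

def evs_with_reliable_predictor():
    """Strike out the 'predictor was wrong' terms (1,001,000*p and 0*(1-p)).
    What remains, 1,000,000*p vs 1,000*(1-p), is nonsense as an expected
    value, so drop the probabilities and compare the surviving payoffs."""
    one_box = 1_000_000            # predicted one-box, took one box
    two_box = 1_000                # predicted two-box, took both boxes
    return one_box, two_box        # now one-boxing wins

print(naive_evs(0.5))                 # (500000.0, 501000.0) -> two-box
print(evs_with_reliable_predictor())  # (1000000, 1000)      -> one-box
```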
(My world model being what it is, I'll also have to come up with some explanation for how the predictor works before I assign high enough probability to it predicting me correctly. E.g. I could say that the predictor uses a quantum coin flip and then cuts off the branches in MWI where it was wrong, or that the predictor works via mind simulation, or even that my actions somehow go into the past.)
Of course, it is bloody hard to formalize an agent that has a world model of some kind and that can correct its equations when it is convinced, by good enough evidence, that an equation is somehow wrong (which is pretty much the premise of Newcomb's paradox).
I propose a non-stupid decision theory, then.