This doesn’t follow. Your estimate of your actions can be correct or relevant even if you’ve been predicted.
Humans can precommit just like simple machines—just run the algorithm in your mind and do what it says. There is nothing more to it.
Huh? You break the simulation if you act differently than the prediction. Sure you can estimate or say whatever you want, but you can be wrong, and Omega can’t.
This really does not match my lived experience of predicting and committing myself, nor the vast majority of fiction or biographical work I’ve read. Actual studies on commitment levels and follow-through are generally more complicated, so it’s a little less clear how strongly they count against this, but they’re certainly not evidence that humans are rational in these dimensions. You can claim to precommit. You can WANT to precommit. You can even believe it’s in your best interest to have precommitted. But when the time comes, that commitment is weaker than you thought.
I didn’t say you could act differently than the prediction. It’s correct that you can’t, but that’s not relevant for either variant of the problem.
Precommitment is a completely different concept from commitment. Commitment involves feelings, strength of will, etc. Precommitment involves none of those, and it only means running the simple algorithm. It doesn’t have a strength—it’s binary (either I run it, or not).
It’s this running of the simple algorithm in your mind that gives you the pseudomagical powers in Newcomb’s problem that manifest as the seeming ability to influence the past. (Omega already left, but because I’m precommitted to one-box, his prediction will have been that I would one-box. This goes both ways, of course—if I would take both boxes, I will lose, even though Omega already left.)
You could use the word precommitment to mean something else—like wishing really hard to execute action X beforehand, and then updating on evidence and doing whatever appears to result in most utility. We could call this precommitment_2 (and the previous kind precommitment_1). The problem is that precommitting_2 to one-box implies precommitting_1 to two-box, and so it guarantees losing.
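To make the difference concrete, here is a rough sketch in code, on the assumption that Omega predicts simply by running the agent’s own decision procedure ahead of time (the function names and payoffs here are just for illustration):

```python
# A minimal sketch, assuming Omega predicts by running the agent's own
# decision procedure ahead of time; names and payoffs are illustrative.

def play_newcomb(decide):
    prediction = decide()                  # Omega simulates the agent, then leaves
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000                    # the visible box, always filled
    choice = decide()                      # the agent's actual choice, made later
    return opaque if choice == "one-box" else opaque + transparent

# precommitment_1: just run the fixed algorithm; it either runs or it doesn't.
def one_boxer():
    return "one-box"

# precommitment_2: re-derive the choice at decision time. The boxes are already
# filled, so taking both always adds $1,000, which is exactly why the prediction
# (made by the same procedure) was "two-box" all along.
def utility_maximizer_at_decision_time():
    return "two-box"

print(play_newcomb(one_boxer))                           # 1000000
print(play_newcomb(utility_maximizer_at_decision_time))  # 1000
```

The only point of the sketch is that the procedure Omega reads and the procedure you later execute are the same object, so “deciding at the time” is already part of what gets predicted.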
That doesn’t seem like something a human being could do.
Then you’re wrong as a matter of biology. Neural networks can do that in general.
I could see an argument being made that if the precommitment algorithm contains a line “jump off a cliff,” the human might freeze in fear instead of being capable of doing that.
But if that line is “take one box,” I don’t see why a human being couldn’t do it.
You mean artificial neural networks? Which can also do things like running forever without resting. I think a citation is needed.
An algorithm would be, to put it simply, a list of instructions.
So are you saying that a human isn’t capable of following a list of instructions, and if so, do you mean any list of instructions at all, or only some specific ones?
A human isn’t capable of following a list of instructions perfectly, relentlessly, forever. The problem with a precommitment is sticking to it... whether you think of it as an algorithm or a resolution or a promise or an oath.
So you’re saying humans can’t follow an algorithm that would need to be followed perfectly, relentlessly, and forever.
But one-boxing is neither relentless, nor forever. That leaves perfection.
Are you suggesting that humans can’t perfectly one-box? If so, are you saying they can only imperfectly one-box?