But your Omega knowably (agent-provably) gives it the award, so it doesn’t play the intended role and doesn’t implement the thought experiment.
I think it would be fair to say that cousin_it’s (ha! Take that, English grammar!) description of Omega’s behaviour does fit the problem specification we have given, but it certainly doesn’t match the problem we intended. That leaves us to fix the wording without making it look too obfuscated.
Taking another look at the actual problem specification, it doesn’t look all that bad. The translation into logical propositions didn’t really do it justice. We have:
He will award you $1000 if he predicts you would pay him if he asked.
cousin_it allows “if” to resolve to “iff”, but translates “The player would pay if asked” into the material conditional Asked → Pays; since Omega never actually asks, !Asked holds and the conditional is vacuously true, therefore ‘whatever’. That is not quite what we mean when we use the phrase in English. We are trying to refer to the predicted outcome in a “possibly counterfactual but possibly real” reality.
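To make the contrast concrete, here is a minimal sketch (purely illustrative; the agent strategies and function names are mine, not anything from cousin_it’s comment) of how the material-conditional reading awards every agent when Omega never asks, while the intended counterfactual reading still depends on what the agent would do if asked.

```python
# Hypothetical illustration of the two readings of "the player would pay if asked".

def pays_when_asked(asked: bool) -> bool:
    """An agent that pays the $100 whenever Omega actually asks."""
    return asked

def never_pays(asked: bool) -> bool:
    """An agent that refuses to pay no matter what."""
    return False

def material_reading(agent, asked_in_fact: bool) -> bool:
    """Material-conditional reading: (asked -> pays), evaluated in the actual world."""
    return (not asked_in_fact) or agent(asked_in_fact)

def counterfactual_reading(agent) -> bool:
    """Intended reading: what the agent does in the (possibly counterfactual)
    branch where Omega does ask."""
    return agent(True)

# Omega never asks in the actual world:
for agent in (pays_when_asked, never_pays):
    print(agent.__name__,
          material_reading(agent, asked_in_fact=False),  # True for both agents
          counterfactual_reading(agent))                 # distinguishes the two agents
```

Under the material reading both agents “would pay if asked” simply because the asking never happens, so Omega awards everyone; the counterfactual reading is the one the problem statement is trying to pin down.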
Can you think of a way to say what we mean without any ambiguity and without changing the problem itself too much?