I’ll grant you that my formulation had a serious bug, but--
> There are four possibilities:
>
> 1. The AI will press the button, the digit is even.
> 2. The AI will not press the button, the digit is even, you don't exist.
> 3. The AI will press the button, the digit is odd, the world will go kaboom.
> 4. The AI will not press the button, the digit is odd.
>
> Updating on the fact that the second possibility is not true is precisely equivalent to concluding that if the AI does not press the button the digit must be odd
Yes, if by that sentence you mean the logical proposition (the AI does not press the button ⇒ the digit is odd), also known as (digit odd \/ AI presses button).
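To make the equivalence explicit, here is a minimal truth-table check; the variable names B and E are mine, not from the thread. Ruling out the second possibility is the same proposition as (digit odd \/ AI presses button) on every row:

```python
from itertools import product

# B: the AI presses the button; E: the digit is even (hypothetical names).
for B, E in product([True, False], repeat=2):
    possibility_two_ruled_out = not ((not B) and E)  # "the second possibility is not true"
    digit_odd_or_press = (not E) or B                # (digit odd \/ AI presses button)
    assert possibility_two_ruled_out == digit_odd_or_press
print("Equivalent on all four rows")
```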
> and ensuring that the AI does not means choosing the digit to be odd.
I’ll only grant that if I actually end up building an AI that presses the button, and the digit is even, then Omega is a bad predictor, which would make the problem statement contradictory. Which is bad enough, but I don’t think I can be accused of minting causality from logical implication signs...
In any case,
> If you already know that the digit is odd independently of the choice of the AI, the whole thing reduces to a high-stakes counterfactual mugging.
That’s true. I think that’s also what Wei Dai had in mind in the great filter post, http://lesswrong.com/lw/214/late_great_filter_is_not_bad_news/ (and not the ability to change Omega’s coin to tails by not pressing the button!). My position is that you should not pay in counterfactual muggings whose counterfactuality was already known prior to your decision to become a timeless decision theorist, although you should program (yourself | your AI) to pay in counterfactual muggings you don’t yet know to be counterfactual.
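As a rough sketch of that policy (my illustration, not anything stated in the thread), using the usual counterfactual-mugging payoffs of a $100 cost and a $10,000 prize on a fair coin; the flag name is hypothetical:

```python
def should_pay(known_counterfactual_at_commitment_time: bool,
               cost: float = 100.0, prize: float = 10_000.0,
               p_win: float = 0.5) -> bool:
    """Pay only if, when the policy was adopted, the mugging was not
    already known to be counterfactual; in that case committing to pay
    has positive expected value ex ante."""
    if known_counterfactual_at_commitment_time:
        # The winning branch was already known to be closed off when you
        # self-modified, so paying buys nothing.
        return False
    # Ex-ante expected value of committing to pay.
    return p_win * prize - (1 - p_win) * cost > 0

print(should_pay(known_counterfactual_at_commitment_time=False))  # True: program the AI to pay
print(should_pay(known_counterfactual_at_commitment_time=True))   # False: don't pay
```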