@Nick_Tarleton
Agreed, the problem immediately reminded me of “retroactive preparation” and time-loop logic. It is not really the same reasoning, but it has the same “turn causality on its head” aspect.
If I don’t have proof of the reliability of Omega’s predictions, I find myself less likely to be “unreasonable” when the stakes are higher (that is, I’m more likely to two-box when it’s about saving the world).
I find it highly unlikely that an entity wandering across worlds can predict my actions to this level of detail, as that seems far harder than traveling through space or teleporting money. I might risk a net loss of $1,000 to find out (much as I’d be willing to spend $1,000 to interact with such a space-traveling, stuff-teleporting entity), but not the loss of a thousand lives. In the game as the article describes it, I would only one-box if “losing what box A contains and getting nothing from B” were an acceptable outcome.
I would be increasingly likely to one-box as the probability that the AI can actually predict my actions in advance increases.
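For concreteness, here is a minimal expected-value sketch of that threshold, assuming the standard payoffs ($1,000 visible in box A, $1,000,000 in box B iff Omega predicted one-boxing); the function names are my own illustration, not anything from the problem statement:

```python
# A minimal sketch, assuming the standard Newcomb payoffs:
# box A holds $1,000; box B holds $1,000,000 iff Omega predicted one-boxing.
# p is the assumed probability that Omega's prediction is correct.

def ev_one_box(p: float) -> float:
    """Expected value of taking only box B, given predictor accuracy p."""
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    """Expected value of taking both boxes, given predictor accuracy p."""
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.5005, 0.6, 0.9, 0.99):
    print(f"p={p}: one-box={ev_one_box(p):>12,.0f}  two-box={ev_two_box(p):>12,.0f}")
```

Under these payoffs, one-boxing pulls ahead once p exceeds roughly 0.5005, i.e. barely better than chance, which is part of why the size of the stakes (lives versus dollars) does more work in my decision than the expected-value arithmetic alone.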