For the record, when I first really considered the problem, my reasoning was still very similar. It ran approximately as follows:
“The more strongly I am able to convince myself to one-box, the higher the probability that any simulations of me would also have one-boxed. Since I am currently able to strongly convince myself to one-box without prior exposure to the problem, it is extremely likely that my simulations would also one-box; therefore it is in our best interests to one-box.”
Note that I did not run estimated probabilities and tradeoffs based on the sizes of the rewards, the error probability of Omega, and my confidence in my ability to one-box reliably. I am certain that there are combinations of those parameters which would make two-boxing better than one-boxing, but I did not do the math.
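For illustration only, here is a minimal sketch of the expected-value comparison being gestured at, using the conventional Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and treating Omega's accuracy as a free parameter. These numbers and the break-even point are assumptions of the sketch, not figures from the comment above.

```python
# Hedged sketch: expected value of one-boxing vs. two-boxing as a function
# of the predictor's accuracy. Payoff amounts are the conventional Newcomb
# values and are assumed, not taken from the original comment.

def expected_values(accuracy, big=1_000_000, small=1_000):
    """Return (EV of one-boxing, EV of two-boxing) for a predictor
    that is correct with probability `accuracy`."""
    ev_one_box = accuracy * big                 # opaque box filled iff one-boxing was predicted
    ev_two_box = (1 - accuracy) * big + small   # opaque box filled only if the predictor erred
    return ev_one_box, ev_two_box

if __name__ == "__main__":
    for acc in (0.5, 0.5005, 0.6, 0.99):
        one, two = expected_values(acc)
        better = "one-box" if one > two else "two-box"
        print(f"accuracy={acc}: one-box EV={one:,.0f}, two-box EV={two:,.0f} -> {better}")
```

Under these assumed payoffs the break-even accuracy is (big + small) / (2 * big) = 0.5005, so with smaller rewards in the opaque box or a less reliable Omega the comparison can indeed flip toward two-boxing, as acknowledged above.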