Now we are getting somewhere good! Certainty rarely shows up in predictions, especially about the future. Your decision theory may be timeless, but don’t confuse the map with the territory: the universe may not be timeless.
Unless you are assigning a numerical, non-zero, non-unity probability to Omega’s accuracy, you do not know when to one-box and when to two-box with arbitrary amounts of money in the boxes. And unless your FAI is a chump, it is considering LOTS of details in estimating Omega’s accuracy, no doubt including how little its own finite knowledge and computation constrain the possibility that Omega is tricking it.
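To make that concrete, here is a minimal sketch of the expected-value arithmetic, assuming the standard Newcomb payoffs of $1,000 (transparent box) and $1,000,000 (opaque box), which aren't specified above; the function names and the Python framing are mine, not anything from the thread:

```python
# Expected value of each choice given an estimate p of Omega's accuracy,
# treating p as the probability that Omega's prediction matches your act.

def one_box_ev(p, small=1_000, big=1_000_000):
    # One-boxers get the big box iff Omega predicted correctly.
    return p * big

def two_box_ev(p, small=1_000, big=1_000_000):
    # Two-boxers always get the small box, plus the big box iff Omega
    # mispredicted (probability 1 - p).
    return small + (1 - p) * big

def one_boxing_threshold(small=1_000, big=1_000_000):
    # One-box iff p * big > small + (1 - p) * big,
    # i.e. iff p > 1/2 + small / (2 * big).
    return 0.5 + small / (2 * big)

print(one_boxing_threshold())  # 0.5005 with the standard amounts
```

The threshold depends on the amounts: push the two payoffs closer together and it climbs toward 1, which is exactly why "arbitrary amounts of money in the boxes" forces you to put an actual number on Omega's accuracy.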
A NASA engineer had been telling Feynman that the liquid rocket motor had a zero probability of exploding on takeoff. Feynman convinced him that this was not an engineering answer. The NASA engineer then smiled and told Feynman the probability of the liquid rocket motor exploding on takeoff was “epsilon.” Feynman replied (and I paraphrase from memory) “Good! Now we are getting somewhere! Now all you have to tell me is what your estimate for the value of epsilon is, and how you arrived at that number.”
Any calculation of your estimate of Omega’s reliability which does not include gigantic terms for the probability that Omega is tricking you in a way you haven’t figured out yet is likely to fail. I base that on the prevalence and importance of con games in the best natural experiment on intelligence we have: humans.
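One hedged way to cash out those “gigantic terms” (the mixture model and every number here are my illustration, not anything claimed above): treat your accuracy estimate as a mixture over an honest-Omega hypothesis and a con hypothesis under which the observed track record tells you nothing.

```python
def effective_accuracy(p_observed, q_con, p_under_con=0.5):
    # With probability q_con the track record is staged and uninformative,
    # so the estimate falls back to chance -- itself an assumption; a con
    # artist might do worse than chance by your lights.
    return (1 - q_con) * p_observed + q_con * p_under_con

# Threshold formula from the sketch above: one-box iff p > 1/2 + small/(2*big).
small, big = 900_000, 1_000_000          # illustrative amounts only
threshold = 0.5 + small / (2 * big)      # 0.95

p = effective_accuracy(p_observed=0.999, q_con=0.10)  # ~0.949

# A 10% credence that you're being conned already tips the decision.
print(p > threshold)  # False -> two-box
```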