No, it proves I will not decide everything rationally if I don’t decide everything rationally. Which is pretty tautologous.
The Omega example requires that I will not decide everything rationally.
The real world permits the possibility of a rational agent. Thus it makes sense to ask what a rational agent would do. Your scenario doesn’t permit a rational agent, so it makes no sense to ask what a rational agent would do.
You’re missing the point, Unknowns. In your scenario, my decision doesn’t depend on how I decide. It just depends on the setting of the box. So I might as well just decide arbitrarily, and save effort.
What would you do in your own scenario?
In real life, your decision doesn’t depend on how you decide it. It just depends on the positions of your atoms and the laws of physics. So you might as well just decide arbitrarily, and save effort.
I would one-box.
So, if Omega programmed you to two-box, you would one-box? That’s not exactly consistent. In fact, that’s logically impossible. Essentially, you’re denying your own scenario.
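A minimal sketch may make the disputed contrast concrete. It assumes the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a perfect predictor; the function names, amounts, and the way the "programmed" variant is modeled are all illustrative assumptions, not anything stated in the thread. In the standard setup, Omega's prediction tracks how the agent decides, so the decision procedure determines the payoff; in the variant being debated, the choice itself is fixed by Omega's setting, so deliberation cannot affect the outcome.

```python
# Toy contrast between the standard Newcomb setup and the "Omega programs
# your choice" variant debated above. The payoff amounts, function names,
# and perfect-predictor assumption are illustrative, not from the thread.

BOX_A = 1_000_000  # opaque box: filled only if Omega expects one-boxing
BOX_B = 1_000      # transparent box: always contains $1,000

def payoff(choice, box_a_filled):
    """Payout for 'one-box' or 'two-box' given the opaque box's contents."""
    a = BOX_A if box_a_filled else 0
    return a if choice == "one-box" else a + BOX_B

def standard_newcomb(decision_procedure):
    """Standard case: the prediction tracks how the agent decides,
    so the decision procedure determines the payoff."""
    choice = decision_procedure()
    box_a_filled = (choice == "one-box")  # perfect predictor
    return payoff(choice, box_a_filled)

def programmed_variant(decision_procedure, programmed_choice):
    """Debated variant: Omega fixes the choice itself, so whatever the
    agent deliberates has no effect on what actually happens."""
    decision_procedure()  # runs, but cannot change the outcome
    box_a_filled = (programmed_choice == "one-box")
    return payoff(programmed_choice, box_a_filled)

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(standard_newcomb(one_boxer))               # 1000000
print(standard_newcomb(two_boxer))               # 1000
print(programmed_variant(one_boxer, "two-box"))  # 1000: the procedure is ignored
```

In this toy model, the last call shows the point under dispute: whatever `decision_procedure` computes, the outcome is fixed by `programmed_choice`, which is the sense in which "Omega programmed you to two-box, yet you one-box" has no consistent reading.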