I think you’re missing my point. After the $1,000,000 has been taken, Irene doesn’t suddenly lose her free will. She’s perfectly capable of taking the $1000; she’s just decided not to.
You seem to think I’m making some claim like “one-boxing is irrational” or “Newcomb’s problem is impossible”, which is not at all what I’m doing. I’m trying to demonstrate that the idea of “rational agents just do what maximizes their utility and don’t worry about having to have a consistent underlying decision theory” appears to result in a contradiction as soon as Irene’s decision has been made.
I understood your point. What I’m saying is that Irene is indeed capable of also taking the $1000, but if Omega isn’t wrong, she only gets the million in cases where, for some reason, she doesn’t (and I gave a few examples).
I think your scenario is just too narrow. Sure, if Omega is wrong, and it’s not a simulation, and it’s a complete one-shot, then the rational decision is to then also take the $1000. But if any of these aren’t true, then you’d better find some reason or way not to take that $1000, or you’ll never see the million in the first place, or you’ll never see it in reality, or you’ll never see it in the future.
Put more simply, two-boxing is the right answer in the cases where Omega is wrong.
How can you know what maximises your utility without having a sound underlying theory? (But NOT, as I said in my other comment, a sound decision theory. You have to know whether free will is real, or whether predictors are impossible. Then you might be able to have a decision theory adequate to the problem.)