Maybe I’m missing something (I’m new to Bayes), but I honestly don’t see how any of this is actually a problem. I may just be repeating Yudkowsky’s point, but…
Omega is a superintelligence who has been right in every known prediction. Essentially, he looks at you and predicts what you’ll do, and he’s been right 100 times out of 100. A perfect record so far. He’s probably not going to mess up on you.
If you’re not trying to look at this with CDT, the answer is obvious: take only box B. Omega knows you’ll do that, so you’ll get the million. It’s not about the contents changing after the boxes are put down; it’s about a prediction made about the person choosing.
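To make the payoff asymmetry concrete, here is a minimal sketch of the expected-value arithmetic, assuming the usual Newcomb payoffs ($1,000 in box A, $1,000,000 in box B if Omega predicted one-boxing) and an illustrative 99% predictor accuracy; the thought experiment itself only says Omega has been right every time so far.

```python
# Illustrative numbers only: the $1,000 / $1,000,000 payoffs are the standard
# Newcomb setup, and the 0.99 accuracy is an assumption standing in for
# "right 100 times out of 100 so far".
PREDICTOR_ACCURACY = 0.99
SMALL_PRIZE = 1_000        # always sitting in box A
BIG_PRIZE = 1_000_000      # in box B only if Omega predicted you would one-box

def expected_value(one_box: bool) -> float:
    """Expected payoff, treating the choice as evidence about Omega's prediction."""
    if one_box:
        # With probability ~p, Omega predicted one-boxing and filled box B.
        return PREDICTOR_ACCURACY * BIG_PRIZE
    # Two-boxing always collects box A; box B is full only if Omega mispredicted.
    return SMALL_PRIZE + (1 - PREDICTOR_ACCURACY) * BIG_PRIZE

print(f"one-box expected payoff: ${expected_value(True):,.0f}")   # about $990,000
print(f"two-box expected payoff: ${expected_value(False):,.0f}")  # about $11,000
```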
This should not be taken as an authoritative response. I’m answering as much to get my own understanding checked as to answer your question:
Omega doesn’t exist. How we respond to the specific case of Omega setting up boxes is pretty irrelevant. The question we actually care about is what general principle we can use to decide Newcomb’s problem and other decision-theoretically analogous problems. It’s one thing to say that one-boxing is the correct choice; it is another thing to formulate a coherent principle which outputs that choice in this case without producing deranged behavior in some other case.
If we’re looking at the problem without CDT, we want to figure out and formalize what we are looking at it with.
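As a rough illustration of why the choice of principle, not the specific numbers, does the work here, this toy sketch contrasts a causal-style calculation (which treats box B’s contents as already fixed) with an evidential-style one (which conditions on the choice as evidence about Omega’s prediction). The 0.99 accuracy and the payoffs are assumptions for illustration, not part of anything stated above.

```python
# Toy contrast between two candidate decision principles on Newcomb's problem.
# Payoffs and the 0.99 accuracy are made-up illustrative values.
ACCURACY = 0.99
BOX_A, BOX_B = 1_000, 1_000_000

def causal_value(one_box: bool, prob_b_full: float) -> float:
    # CDT-style: the act cannot change whether box B is already full,
    # so B's contents enter as a fixed probability regardless of the choice.
    base = prob_b_full * BOX_B
    return base if one_box else base + BOX_A

def evidential_value(one_box: bool) -> float:
    # EDT-style: condition the probability that B is full on the act itself.
    prob_b_full = ACCURACY if one_box else 1 - ACCURACY
    return prob_b_full * BOX_B + (0 if one_box else BOX_A)

# For any fixed belief about box B, the causal calculation recommends two-boxing...
for q in (0.0, 0.5, 1.0):
    assert causal_value(False, q) > causal_value(True, q)
# ...while the evidential calculation recommends one-boxing.
assert evidential_value(True) > evidential_value(False)
```

The point is not that either toy function is the right formalization, only that the disagreement lives in the principle being applied, not in the arithmetic.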
Ahh. Thank you, that actually resolved my confusion. I was thinking about solving the problem, not about how to solve the problem. I shall have to look through my responses to other thought experiments now.