In the other Omega scenarios, the predictions are an integral part of the scenario. Remove the prediction and the whole thing falls apart.
In your scenario, the prediction doesn’t matter. Remove the prediction, and everything else is exactly the same.
The specific prediction isn’t important here, but the definition of Omega as a perfect predictor sure is important. This is exactly what I wanted to do: Ignore the details of the prediction and talk about Omega.
Removing the prediction entirely would cause the scenario to fall apart because then we could replace Omega with anything. Omega needs to be here and it needs to be making some prediction. The prediction itself is a causal fact only in the sense that Omega wouldn’t appear before you if it didn’t expect to get $5.
It’s a tautology, and that is my point. The only time Omega would ever appear is if its request would be granted.
In my opinion, it is more accurate to say that the reason behind your action is completely irrelevant. It doesn’t matter that it isn’t the prediction itself that causes you to give Omega $5.
It is therefore absurd that you think your scenario says something about the others just because they all involve predictions.
It isn’t really absurd. Placing restrictions on the scenario will cause things to go crazy and it is this craziness that I want to look at.
People still argue about one-boxing. The most obvious, direct application of this post is to show why one-boxing is the correct answer. Newcomb’s problem is actually why I ended up writing this. Every time I started working on the math behind Newcomb’s, I would bump into the claim presented in this post and realize that people were going to object.
So, instead of talking about this claim inside of a post on Newcomb’s, I isolated it and presented it on its own. And people still objected to it, so I am glad I did this.
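For concreteness, here is a minimal sketch of that one-boxing math under the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) with a perfect predictor; the function name and framing are just my own illustration, not anything from the post itself:

```python
# A minimal sketch, assuming the standard Newcomb payoffs and a perfect predictor.

def payoff(action: str, prediction: str) -> int:
    """Return the agent's payoff given its action and Omega's prediction."""
    opaque = 1_000_000 if prediction == "one-box" else 0   # filled only if one-boxing was predicted
    transparent = 1_000                                     # always visible in the clear box
    return opaque if action == "one-box" else opaque + transparent

# With a perfect predictor, the prediction always matches the action,
# so the only reachable outcomes are:
print(payoff("one-box", "one-box"))    # 1,000,000
print(payoff("two-box", "two-box"))    # 1,000
```

The point is just that once the predictor is stipulated to be perfect, the mismatched rows of the payoff table never occur, and the comparison reduces to the two lines above.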