In Newcomb’s problem, the effect on your behavior doesn’t come from Omega’s simulation function. Your behavior is modified by the information that Omega is simulating you. This information is either independent of or dependent on the simulation function. If it is independent, this is not a side effect of the simulation function. If it is dependent, we can model this as an explicit effect of the simulation function.
While we can view the change in your behavior as a side effect, we don’t need to. This article does not convince me that there is a benefit to viewing it as a side effect.
Your action is probably dependent only on the output of the Omega function (including its output in counterfactual worlds).
Your action is not dependent only on the output of the function in the actual world.
We can model it as an effect of the function, but not as an effect of the output. I noticed this, which is why I put in the statement:
No, I took one box because of the function from my actions to states of the box. The side effect is in no way dependent on the interior workings of Omega, but only on the output of Omega’s function in counterfactual universes. Omega’s code does not matter. All that matters is the mathematical function from the input to the output.
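The distinction being drawn here, between an agent that responds only to Omega’s single actual output and an agent that responds to Omega’s whole function (its outputs across counterfactual inputs), can be sketched in code. This is a minimal toy model, not a serious formalization; all names (`omega`, `whole_function_agent`, the payoff amounts) are hypothetical illustrations.

```python
def omega(policy):
    """Omega fills the opaque box iff it predicts the agent one-boxes."""
    return 1_000_000 if policy("predicted") == "one-box" else 0

def whole_function_agent():
    """An agent whose choice depends on omega as a function, not on any
    single output: it evaluates what omega returns for each counterfactual
    policy, then picks the policy with the best total payoff."""
    def payoff(choice):
        policy = lambda _: choice          # counterfactual policy to test
        box = omega(policy)                # omega's output for that policy
        return box + (1000 if choice == "two-box" else 0)
    return max(["one-box", "two-box"], key=payoff)

print(whole_function_agent())  # "one-box"
```

On this toy model, an agent that only looked at the one output Omega produces in the actual world would see a fixed box and two-box; the agent above one-boxes precisely because it consults Omega’s outputs in counterfactual universes, matching the claim that only the mathematical function from input to output matters.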
If we define “side effect” as relative to the one output of the function, then both examples are side effects. If we define “side effect” as relative to the entire function, then only the second example is a side effect.
Actually, I take that back; I think that both examples are side effects. Your output is a side effect of the Omega function, because it is dependent not just on what Omega does on different inputs, but also on what Omega does in counterfactual universes.
I am confused by this issue, and I am not trying to present a coherent solution as much as I am trying to stimulate discussion on thinking outside of the input-output model of decision theory.