Some people say this fails to account for the agent in the simulator, but Omega may well be able to determine what action you will take through high-level reasoning, as opposed to having to run a complete simulation of you.
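To make that distinction concrete, here is a toy sketch (all names and the crude source-inspection check are hypothetical illustrations, not anyone's actual proposal): one predictor infers the agent's choice by reading its decision rule without ever running it, while the other predicts by executing the agent.

```python
import inspect

def one_boxing_agent(boxes):
    # This agent's policy is simply "take only the opaque box".
    return ["opaque"]

def predict_by_inspection(agent):
    """Predict the agent's choice from the text of its decision rule.

    This stands in for 'high-level reasoning': the agent is never called,
    so nothing resembling a simulation of it is run.
    """
    source = inspect.getsource(agent)
    if '["opaque"]' in source:
        return ["opaque"]                 # predicted: one-boxing
    return ["opaque", "transparent"]      # otherwise assume two-boxing

def predict_by_simulation(agent):
    """Predict by actually running the agent on the choice it will face."""
    return agent(["opaque", "transparent"])

print(predict_by_inspection(one_boxing_agent))   # ['opaque'], agent never executed
print(predict_by_simulation(one_boxing_agent))   # ['opaque'], agent executed
```

Both predictors agree here, but only the second instantiates anything that could count as "the agent in the simulator".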
Unless you are the simulation?
Insofar as the paraconsistent approach may be more convenient from an implementation perspective than the first, we can justify it by tying it to raw counterfactuals.
Like one might justify deontology in terms of consequentialism?
However, when “you” and the environment are defined down to the atom, you can only implement one decision.
Does QM enable ‘true randomness’ (generators)?
They fail to realize that they can’t actually “change” their decision as there is a single decision that they will inevitably implement.
Or they fail to realize others can change their minds.