Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast?
Omega is not obviously impossible: in theory, someone could scan your brain and simulate how you would react in a specific situation. If you're already an upload running as pure code, this is even easier.
The question is particularly relevant when trying to develop a decision theory for artificial intelligences: there's nothing impossible about two adversarial AIs having acquired each other's source code and basing their actions on how a simulated copy of the other would react (a toy version is sketched below). If you presume that this scenario is possible, and there seems to be no reason why it wouldn't be, then developing a decision theory capable of handling it is an important part of building an AI.
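To make this concrete, here's a minimal sketch, not anything from the original discussion, of one well-known way agents with access to each other's source code can condition on it without infinite regress. Naively simulating the opponent (who is simulating you, who is simulating them...) never terminates, so this strategy, sometimes discussed under the name "program equilibrium," instead checks for an exact source match. All names here (`clique_bot`, `defect_bot`, `play`) are hypothetical illustrations:

```python
# Hypothetical sketch: one-shot Prisoner's Dilemma where each agent
# receives the other's source code before choosing to cooperate ("C")
# or defect ("D").

import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent runs this exact source code.

    Comparing source text instead of simulating the opponent avoids
    the infinite regress of two agents each simulating the other.
    """
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, ignoring the opponent's code."""
    return "D"

def play(agent_a, agent_b) -> tuple[str, str]:
    """Run one round, handing each agent the other's source code."""
    src_a = inspect.getsource(agent_a)
    src_b = inspect.getsource(agent_b)
    return agent_a(src_b), agent_b(src_a)

print(play(clique_bot, clique_bot))  # ('C', 'C'): mutual cooperation
print(play(clique_bot, defect_bot))  # ('D', 'D'): no exploitation
```

The point of the sketch is only that conditioning on another agent's code is computationally unmysterious; a decision theory for AIs needs to say what the *right* way to do it is.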