As I said, that argument is the most commonly presented one. However, there is a causal chain under which adopting a Basilisk strategy would actually benefit an agent: namely, if the agent believes it may itself be a simulation and will be punished unless it adopts that strategy.