This is a bad argument, and to understand why it is bad, you should consider why you don’t routinely have the thought “I am probably in a simulation, and since value is fragile the people running the simulation probably have values wildly different than human values so I should do something insane right now”
Even assuming that the simulators have wildly different values, why would doing something insane be a good thing to do?
yes