Though many of the hardest-working Friendly AI thinkers take Bostrom’s Simulation Argument seriously when they think about it explicitly, I’ve seen surprisingly little discussion of what an FAI or a UFAI would do if it believed, with, say, 25%, 50%, or 75% probability, that it was in a Simulation (see the toy sketch below). Notice that the Simulation Argument has much more force from the perspective of an AI than from a human perspective, because:
1) The AI knows it is implemented in substrate-independent code, whereas humans are uncertain about the origin of their consciousness and are not sure they could be simulated at all.
2) The condition of “being able to create a simulation” is much closer to the AI’s world than to ours. The AI may already know how to make one, or know where to look so that it could create one in three weeks.
3) It also has privileged access to information about whether Worlds, once given the capability, would actually create simulations containing many beings, either because it can create a simulation itself and understands its own motivation, or because it has the Strategy superpower and understands deeply whether a ban on simulations would come about.
More generally, I would like to see more people writing about how AGIs, good or evil, would behave in a Simulation.
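To make the question a bit more concrete, here is a minimal toy sketch of how an expected-utility maximizer’s preferred policy could flip as its credence of being simulated moves through 25%, 50%, and 75%. All policy names and payoffs below are invented assumptions for illustration, not claims about how an actual FAI or UFAI would reason.

```python
# Toy sketch (assumed payoffs): how an agent's choice between two
# illustrative policies might shift with its credence of being simulated.

def expected_utility(p_sim, u_if_sim, u_if_base):
    """Expected utility of a policy given credence p_sim of being simulated."""
    return p_sim * u_if_sim + (1 - p_sim) * u_if_base

# Two made-up policies: one optimized purely for the base-level world,
# one that also hedges against simulators caring about the agent's behavior.
policies = {
    "optimize_base_world_only": {"u_if_sim": 0.2, "u_if_base": 1.0},
    "hedge_for_simulators":     {"u_if_sim": 0.8, "u_if_base": 0.7},
}

for p_sim in (0.25, 0.50, 0.75):
    best = max(
        policies,
        key=lambda name: expected_utility(p_sim, **policies[name]),
    )
    print(f"P(simulation) = {p_sim:.2f} -> preferred policy: {best}")
```

With these particular (arbitrary) numbers the agent sticks to base-world optimization at 25% credence and switches to the hedging policy at 50% and 75%; the point is only that the crossover exists and depends on how much the agent thinks is at stake in each case.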