If the problem here is that the entity being simulated ceases to exist, an alternative solution would be to move the entity into an ongoing simulation that won’t be terminated. Clearly, this would require an ever-increasing amount of resources as the number of simulations grew, but perhaps that would be a good thing: the AI’s finite ability to support conscious entities would impose an upper bound on the number of simulations it would run. If running such a simulation were important enough, the AI could still do it; it just wouldn’t do so frivolously.
Before you say anything, I don’t actually think the above is a good solution. It’s more like a constraint to be routed around than a goal to be achieved. Plus, it’s far too situational and probably wouldn’t produce desirable results in situations we didn’t design it for.
The thing is, it isn’t important for us to come up with the correct solution(s) ourselves. We should instead make the AI understand the problem and want to solve it. We need to carefully design the AI’s utility function so that it treats conscious entities in simulations as normal people and respects simulated lives as it respects ours. We will, of course, need a highly precise definition of consciousness, one that applies not just to modern-day humans but also to any entity the AI could simulate.
Here’s how I see it—as long as the AI values the life of any entity that values its own life, the AI will find ways to keep those entities alive. As long as it considers entities in simulations to be equivalent to entities outside, it will avoid terminating their simulation and killing them. The AI (probably) would still make predictions using simulations; it would just avoid the specific situation of destroying a conscious entity that wanted to continue living.
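As a purely illustrative sketch (the names, the structure, and the idea of a simple additive utility are my own assumptions for the example, not an actual design), the core idea can be written as a utility term that is blind to whether an entity lives inside a simulation:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    is_conscious: bool   # meets whatever precise definition of consciousness we settle on
    wants_to_live: bool  # the entity values its own continued existence
    is_simulated: bool   # deliberately ignored by the utility term below

def life_value(entity: Entity) -> float:
    """Value assigned to keeping this entity alive."""
    if entity.is_conscious and entity.wants_to_live:
        return 1.0       # same weight whether the entity is simulated or not
    return 0.0

def utility(world: list[Entity]) -> float:
    """Total utility: sum of life values over every entity, real or simulated."""
    return sum(life_value(e) for e in world)
```

The point isn’t the code itself; it’s that `is_simulated` never enters the calculation, so terminating a simulation full of entities that want to keep living is strictly a loss of utility.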