She will be unconscious, but will still send messages about pain. Current LLMs can do this. Also, as it is a simulation, there are recordings of her previous messages, or of a similar woman, so they can be copy-pasted. Her memories can be computed without actually putting her in pain.
So if I am understanding your proposal correctly, a Friendly AI will make a woman unconscious during moments of intense suffering and then implant memories of the pain. Why would it do that, though? Why not just remove the experience of pain entirely? In fact, why does this Friendly AI seem so insistent on keeping billions of people in a state of false belief by planting false memories? That seems like manipulation to me.
The Friendly AI could just reveal the truth to the people in the simulation and let them decide whether they want to stay in the simulation or move to the “real” world. I expect that at least some people (including me) would choose to move to a higher plane of reality if that were the case.
Furthermore, why not just resurrect all these people into worlds with no suffering? Such worlds would also take up less computing power than our world, so the Friendly AI running the simulation would have another reason to pursue this option.
Resurrection of the dead is part of the human value system. We would need a completely non-human form of bliss, like hedonium, to escape this.
Creating new happy people seems similarly valuable. After all, most arguments against creating new happy people would also apply to resurrecting the dead. I would expect most people who oppose the creation of new happy people to oppose the Resurrection Simulation as well.
But leaving that aside, I don’t think we need to invoke hedonium here. Simulations full of happy, blissful people would be enough. For example, it is not obvious to me that resurrecting one person into our world is better than creating two happy people in a blissful world. I don’t think that my value system is extremely weird, either. A person following standard classical utilitarianism would probably arrive at the same conclusion.
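To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The welfare numbers are invented placeholders, not estimates; the only point is that a total utilitarian sums welfare across people, so two happier people can outweigh one resurrection into a mixed world.

```python
# Toy total-utilitarian comparison. All welfare numbers below are invented
# placeholders for illustration, not estimates of actual welfare.

welfare_our_world = 0.3   # assumed net lifetime welfare of a person in a world like ours
welfare_blissful = 0.9    # assumed net lifetime welfare of a person in a blissful world

option_a = 1 * welfare_our_world   # resurrect one person into a world like ours
option_b = 2 * welfare_blissful    # create two happy people in a blissful world

print(f"Option A (resurrect 1 into our world):   {option_a:.2f}")
print(f"Option B (create 2 in a blissful world): {option_b:.2f}")
# Under these assumed numbers, a classical (total) utilitarian prefers option B.
```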
There is an even deeper issue. It might be the case that the proposed theory of personal identity somehow fails, and all the “resurrections” would really just be the creation of new people. That would be really unpleasant, since it would then turn out that the Friendly AI spent more resources to create fewer people, who experience more suffering and less happiness than they would have if it had followed my proposal.
Even the people who don’t consistently follow classical utilitarianism should be happy with my proposed solution of resurrecting dead people into blissful worlds, which kills two birds with one stone.
Moreover, even creating a new human is affected by this argument. What if my children suffer? So it is basically an anti-natalist argument.
It’s not an anti-natalist argument to say that you should create (or resurrect) people into a world with more happiness and less suffering instead of a world with less happiness and more suffering.
To use an analogy: if you are presented with two options, a) have a happy child with no chronic diseases, or b) have a suffering child with a chronic disease, then option (a) is the more moral option under my value system.
This is similar to choosing between a) resurrecting people into a blissful world with no chronic diseases and b) resurrecting people into a world with chronic diseases.
The discussion about anti-natalism actually made me think of another argument for why we are probably not in a simulation of the kind you’ve described. I think that creating new happy people is good (an explicitly anti-anti-natalist position). I expect (based on our conversation so far) that you do too. If that’s the case, then we would still expect ourselves to be in a blissful simulation rather than in a simulation of our world. Here is my thought process:
The history of the “real” world would presumably be similar to ours. That means that (if the Friendly AI were to follow your strategy) there would be 110 billion dead people to resurrect. This AI happens to agree completely with everything you’ve said so far in our conversation, so it goes ahead and resurrects 110 billion people.
Perfect: now it’s left with a lot of resources on its hands, because an AI pursuing a strategy that depends on so many assumptions should have more than enough resources to tolerate a scenario in which one of those assumptions turns out to be false.
Thus, this Friendly AI spends a big chunk of its resources on creating new happy people in blissful simulations. Given that such simulations require fewer resources per person, we would expect more people to be in them than in simulations of worlds like ours.
Even if you don’t agree with the reasoning above, you should agree that it would be pretty weird and ad hoc if the Friendly AI had exactly enough resources to resurrect 110 billion people into a world like ours, but not enough to put (110 + N) billion people into blissful simulations. Thus, we ought to expect more people to be in blissful simulations than in worlds like ours.
Given plausible anthropics, we should thus expect that, if we are being simulated by a Friendly AI, we would be in a blissful world (like the ones I described). Since we are not in such a world, we should decrease our credence in the hypothesis of us being simulated by a Friendly AI.
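As a rough illustration of the anthropic step, here is a small sketch with hypothetical numbers: the spare-compute figure and the cost ratio are assumptions for illustration only. The point is just that if blissful simulations are cheaper per observer, then under a self-sampling assumption a random simulated observer is more likely to find themselves in a blissful world.

```python
# All numbers below are hypothetical placeholders.
resurrected = 110e9      # observers in simulations of a world like ours (the ~110 billion dead)
spare_compute = 110e9    # assume as much compute again is left over, in "our-world observer" units
cost_ratio = 5.0         # assume a blissful-world observer is 5x cheaper to simulate

blissful = spare_compute * cost_ratio   # observers the spare compute buys in blissful simulations

# Self-sampling assumption: treat yourself as a random draw from all simulated observers.
p_blissful = blissful / (blissful + resurrected)
print(f"P(blissful world | simulated by this Friendly AI) = {p_blissful:.2f}")  # ~0.83 here
# Finding ourselves *not* in a blissful world is then evidence against this hypothesis.
```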
Furthermore, why not just resurrect all these people into worlds with no suffering?
My point is that, in this model, it is impossible to resurrect anyone without him reliving his life again first; after that, he obviously gets an eternal blissful life in the real (not simulated) world.
This may not be factually true, btw: current LLMs can create good models of past people without explicitly running a simulation of their previous life.
The discussion about anti-natalism actually made me think of another argument for why we are probably not in a simulation of the kind you’ve described
It is a variant of the Doomsday argument. This idea is even more controversial than the simulation argument. It implies that there is no future with many people in it. A Friendly AI can fight the DA curse via simulations, by creating many people who do not know their real position in time. This could be one more argument for the simulation, but it requires a rather weird decision theory.
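In its simplest (Gott/Leslie-style) form, the Doomsday Argument can be sketched like this; the ~110 billion figure is the same rough estimate used above, and the numbers only show the shape of the reasoning, not an endorsement of it.

```python
# Gott/Leslie-style Doomsday Argument, illustrative sketch only.
# Core assumption: your birth rank is a uniform random draw from all humans who will ever live.
birth_rank = 110e9    # rough estimate of humans born so far
confidence = 0.95

# If rank/total is uniform on (0, 1], then with probability `confidence`
# you are in the last `confidence` fraction of all humans, i.e.
# rank/total >= 1 - confidence, so total <= rank / (1 - confidence).
upper_bound_total = birth_rank / (1 - confidence)
print(f"With {confidence:.0%} confidence, total humans ever <= {upper_bound_total:.1e}")
# ~2.2e12: a bound that leaves little room for an enormous far future.
# This is the "DA curse" that a Friendly AI might try to dissolve by creating
# many simulated observers who do not know their true position in time.
```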
This may not be factually true, btw: current LLMs can create good models of past people without explicitly running a simulation of their previous life.
Yup, I agree.
It is a variant of the Doomsday argument. This idea is even more controversial than the simulation argument. It implies that there is no future with many people in it.
This makes my case even stronger! Basically, if a Friendly AI has no issues with simulating conscious beings in general, then we have good reasons to expect it to simulate more observers in blissful worlds than in worlds like ours.
If the Doomsday Argument tells us that the Friendly AI didn’t simulate more observers in blissful worlds than in worlds like ours, then that gives us even more reason to think that we are not being simulated by a Friendly AI in the way you have described.