I am sorry to butt into your conversation, but I do have some points of disagreement.
I think a more meta-argument is valid: it is almost impossible to prove that all possible civilizations will not run simulations despite having all data about us (or being able to generate it from scratch).
I think that’s a very high bar to set. It’s almost impossible to definitively prove that we are not in a Cartesian demon or brain-in-a-vat scenario. But this doesn’t mean that those scenarios are likely. I think it is fair to say that more than a possibility is required to establish that we are living in a simulation.
I also polled people in my social network, and 70 percent said they would want to create a simulation with sentient beings. The creation of simulations is a powerful human value.
I think that some clarifications are needed here. How was the question phrased? I expect that some people would be fine with creating simulations of worlds where people experience pure bliss, but not necessarily our world. I would especially expect this if the possibility of a “pure bliss” world was explicitly mentioned, something like: “Would you want to spend resources to create a simulation of a world like ours (with all of its ugliness) when you could use them to instead create a world of pure bliss?”
However, I am against repeating intense suffering in simulations, and I think this can be addressed by blocking people’s feelings during extreme suffering (temporarily turning them into p-zombies). Since I am not in intense suffering now, I could still be in a simulation.
Would you say that someone who experiences intense suffering should drastically decrease their credence in being in a simulation? Would someone else reporting to have experienced intense suffering decrease your credence in being in a simulation? Why would only moments of intense suffering be replaced by p-zombies? Why not replace all moments of non-trivial suffering (like breaking a leg/an arm, dental procedures without anesthesia, etc) with p-zombies? Some might consider these to be examples of pretty unbearable suffering (especially as they are experiencing it).
(1) Resurrection simulation by Friendly AI. It simulates the whole history of the Earth, incorporating all known data, to bring back to life all people who ever lived. It can also run a lot of simulations to win the “measure war” against unfriendly AI, and even to cure the suffering of people who lived in the past.
(2) Consider that every moment in pain will be compensated by 100 years in bliss, which is good from a utilitarian view.
From a utilitarian view, why would simulators opt for a Resurrection Simulation? Why not just simulate a world that’s maximally efficient at converting computational resources into utility? Our world has quite a bit of suffering (both intense and non-intense), as well as a lot of wasted resources (lots of empty space in our universe, complicated quantum mechanics, etc). It seems very suboptimal from a utilitarian view.
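To make the efficiency point concrete, here is a toy back-of-the-envelope sketch. Every number in it (the compute budget, the per-person costs, the per-person utilities) is a made-up assumption for illustration only; the point is just that if blissful worlds are both cheaper and better per observer, they dominate on any fixed budget.

```python
# Toy comparison of two ways to spend a fixed compute budget.
# All numbers are purely illustrative assumptions, not estimates of real costs or utilities.

COMPUTE_BUDGET = 1_000_000          # abstract compute units available to the simulator

RESURRECTION_COST_PER_PERSON = 100  # full-fidelity world like ours: physics, suffering, empty space
RESURRECTION_UTILITY_PER_PERSON = 1

BLISS_COST_PER_PERSON = 10          # stripped-down blissful world: assumed cheaper to run
BLISS_UTILITY_PER_PERSON = 5        # assumed higher well-being per simulated person

def total_utility(cost_per_person: float, utility_per_person: float) -> float:
    """Utility produced if the whole budget goes to one strategy."""
    people_simulated = COMPUTE_BUDGET / cost_per_person
    return people_simulated * utility_per_person

print("Resurrection simulation:", total_utility(RESURRECTION_COST_PER_PERSON, RESURRECTION_UTILITY_PER_PERSON))
print("Bliss-optimized simulation:", total_utility(BLISS_COST_PER_PERSON, BLISS_UTILITY_PER_PERSON))
# Under these assumed numbers the bliss-optimized strategy simulates more people
# and produces far more total utility.
```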
Any Unfriendly AI will be interested in solving the Fermi paradox, and thus will simulate many possible civilizations around the time of global catastrophic risks (the time we live in). An interesting thing here is that we may not be an ancestor simulation in that case.
Why would an Unfriendly AI go through the trouble of actually making us conscious? Surely, if we already accept the notion of p-zombies, then an Unfriendly AI could just create simulations full of p-zombies and save a lot of computational power.
But also, there is an interesting question of why this superintelligence would choose to make our world the way it is. Presumably, in the “real world” we have an unfriendly superintelligence (with vast amounts of resources), who wants to avoid dying. Why would it not start the simulations from that moment? Surely, by starting the simulation “earlier” than the current moment in the “real world” it adds a lot of unnecessary noise into the results of its experiment (all of the outcomes that can happen in our simulation but can’t happen in the real world).
We have to create a map of possible scenarios of simulations first; I attempted to do it in 2015. I have now created a new poll on Twitter. For now, the results are:
“If you will be able to create and completely own simulation, you would prefer that it will be occupied by conscious beings, conscious without sufferings (they are blocked after some level), or NPC”
The poll results show:
Conscious: 18.2%
Conscious, no suffering: 72.7%
NPC: 0%
Will not create simulatio[n]: 9.1%
The poll had 11 votes with 6 days left.
Would you say that someone who experiences intense suffering should drastically decrease their credence in being in a simulation?
Yes. But I have never experienced such intense suffering in my long life.
Would someone else reporting to have experienced intense suffering decrease your credence in being in a simulation?
No. Memories of intense suffering are not intense.
Why would only moments of intense suffering be replaced by p-zombies? Why not replace all moments of non-trivial suffering (like breaking a leg/an arm, dental procedures without anesthesia, etc) with p-zombies? Some might consider these to be examples of pretty unbearable suffering (especially as they are experiencing it).
Yes, only moments. The badness of non-intense suffering is overestimated, in my personal view, but this may depend on the person.
More generally speaking, what you are presenting as global showstoppers are technical problems that can be solved.
In my view, individuality is valuable.
As we don’t know the nature of consciousness, it could be just a side effect of computation, not an extra trouble. Also, the AI may want to have maximal fidelity, or even run biological simulations: something akin to the Zoo solution of the Fermi paradox.
We are living in one of the most interesting periods of history, which will surely be studied and simulated.
If the preliminary poll results hold, that would be pretty much in line with my hypothesis that most people prefer creating simulations with no suffering over a world like ours. However, it is important to note that this might not be representative of human values in general, because, looking at your Twitter account, your audience comes mostly from very specific circles of people (those interested in futurism and AI).
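Just to make the sample size explicit, here is a quick sanity check recovering the raw vote counts implied by the reported percentages; the only inputs are the numbers quoted above.

```python
# Recover approximate raw vote counts from the reported poll percentages (11 votes total).
total_votes = 11
reported = {
    "Conscious": 18.2,
    "Conscious, no suffering": 72.7,
    "NPC": 0.0,
    "Will not create simulation": 9.1,
}

for option, pct in reported.items():
    raw = round(total_votes * pct / 100)  # 18.2% of 11 is about 2, 72.7% about 8, 9.1% about 1
    print(f"{option}: ~{raw} vote(s)")
# Roughly 2, 8, 0, and 1 votes respectively, i.e. a very small sample so far.
```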
Would someone else reporting to have experienced intense suffering decrease your credence in being in a simulation?
No. Memories of intense suffering are not intense.
I was mostly trying to approach the problem from a slightly different angle. I didn’t mean to suggest that memories of intense suffering are themselves intense.
As far as I understand it, your hypothesis was that Friendly AI temporarily turns people into p-zombies during moments of intense suffering. So, it seems that someone experiencing intense suffering while conscious (p-zombies aren’t conscious) would count as evidence against it.
Reports of conscious intense suffering are abundant. Pain from endometriosis (a condition that affects 10% of women in the world) has been so brutal that it made completely unrelated women tell the internet that their pain was so bad they wanted to die (here and here).
If moments of intense suffering were replaced by p-zombies, then these women would’ve just suddenly lost consciousness and wouldn’t have told the internet about their experience.
From their perspective, it would’ve looked like this: as the condition progresses, the pain gets worse, and at some point, they lose consciousness, only to regain it when everything is already over. They wouldn’t have experienced the intense pain that they reported having experienced. Ditto for all POWs who have experienced torture.
Yes, only moments. The badness of non-intense suffering is overestimated, in my personal view, but this may depend on the person.
That’s a totally valid view as far as axiological views go, but for us to be in your proposed simulation, the Friendly AI must also share it. After all, we are imagining a situation where it goes on to perform a complicated scheme that depends on a lot of controversial assumptions. To me, that suggests the AI has so many resources that it wouldn’t feel bad about one of the assumptions turning out to be false and losing all the invested resources. If the AI has that many resources, I think it isn’t unreasonable to ask why it didn’t prevent suffering that is not intense (at least in the way I think you are using the word) but is still very bad, like breaking an arm or having a hard dental procedure without anesthesia.
This Friendly AI would have a very peculiar value system. It is utilitarian, but it has a very specific view of suffering, where suffering basically doesn’t count for much below a certain threshold. It is seemingly rational (a Friendly AI that managed to get its hands on so many resources should possess at least some level of rationality), but it chooses to go for the highly risky and relatively costly plan of Resurrection Simulation over just creating simulations that are maximally efficient at converting resources into value.
There is another, somewhat related issue. Imagine a population of Friendly AIs that consists of two different versions of Friendly AI, both of which really like the idea of simulations.
Type A: AIs that would opt for Resurrection Simulation.
Type B: AIs that would opt for simulations that are maximally efficient at converting resources into value.
Given the unnecessary complexity of our world (all of the empty space, quantum mechanics, etc), it seems fair to say that Type B AIs would be able to simulate more humans, because they would have more resources left for this task (Type A AIs are spending some amount of their resources on the aforementioned complexity). Given plausible anthropics and assuming that the number of Type A AIs is equal to the number of Type B AIs in our population, we would expect ourselves to be in a simulation by Type B AI (but we are, unfortunately, not).
For us to be in a Resurrection Simulation (just focusing on these two types of value systems a future Friendly AI might have), there would have to be more Type A AIs than Type B AIs. And I think this fact is going to be very hard to prove. And this isn’t me being nitpicky; Type B AI is genuinely much closer to my personal value system than Type A AI.
More generally speaking, what you are presenting as global showstoppers are technical problems that can be solved.
I don’t think the simulations that you described are technically impossible. I am not even necessarily against simulations in general. I just think that, given observable evidence, we are not that likely to be in either of the simulations that you have described.
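For what it’s worth, the anthropic step in this argument can be written down as a tiny sketch. All counts below (one AI of each type, 110 billion observers per Type A run, 10^12 observers per Type B run) are placeholder assumptions chosen only to illustrate the direction of the reasoning.

```python
# Self-indication-style sketch of the Type A vs Type B argument.
# All counts are hypothetical placeholders, not estimates.

num_type_a = 1            # AIs running Resurrection Simulations (worlds like ours)
num_type_b = 1            # AIs running maximally efficient blissful simulations

observers_per_a = 110e9   # roughly "everyone who ever lived", re-simulated once
observers_per_b = 1e12    # assumed larger, since blissful worlds are cheaper per observer

total_a = num_type_a * observers_per_a
total_b = num_type_b * observers_per_b

p_type_a = total_a / (total_a + total_b)
print(f"P(random simulated observer is in a Type A world) = {p_type_a:.3f}")
# With equal numbers of both AI types, almost all simulated observers end up in
# Type B (blissful) worlds, so finding ourselves in a world like ours is evidence
# against this setup unless Type A AIs heavily outnumber Type B AIs.
```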
She will be unconscious but still send messages about pain. Current LLMs can do it. Also, as it is a simulation, there are recordings of her previous messages, or of a similar woman, so they can be copy-pasted. Her memories can be computed without actually putting her in pain.
Resurrection of the dead is part of the human value system. We would need a completely non-human bliss, like hedonium, to escape this. Hedonium is not part of my reference class and thus not part of the simulation argument.
Moreover, even creating a new human is affected by this argument. What if my children suffer? So it is basically an anti-natalist argument.
She will be unconscious but still send messages about pain. Current LLMs can do it. Also, as it is a simulation, there are recordings of her previous messages, or of a similar woman, so they can be copy-pasted. Her memories can be computed without actually putting her in pain.
So, if I am understanding your proposal correctly, a Friendly AI will make a woman unconscious during moments of intense suffering and then implant memories of pain in her. Why would it do that, though? Why not just remove the experience of pain entirely? In fact, why does the Friendly AI seem so insistent on keeping billions of people in a state of false belief by planting false memories? That seems like manipulation to me.
The Friendly AI could just reveal the truth to the people in the simulation and let them decide whether they want to stay in the simulation or move to the “real” world. I expect that at least some people (including me) would choose to move to a higher plane of reality if that were the case.
Furthermore, why not just resurrect all these people into worlds with no suffering? Such worlds would also require less computing power than our world, so the Friendly AI doing the simulation would have another reason to pursue this option.
Resurrection of the dead is part of the human value system. We would need a completely non-human bliss, like hedonium, to escape this.
Creation of new happy people also seems to be similarly valuable. After all, most arguments against creating new happy people would apply to resurrecting the dead. I would expect most people who oppose the creation of new happy people to oppose the Resurrection Simulation.
But leaving that aside, I don’t think we need to invoke hedonium here. Simulations full of happy, blissful people would be enough. For example, it is not obvious to me that resurrecting one person into our world is better than creating two happy people in a blissful world. I don’t think that my value system is extremely weird, either. A person following regular classical utilitarianism would probably arrive at the same conclusion.
There is an even deeper issue. It might be the case that, somehow, the proposed theory of personal identity fails and all the “resurrections” would just be the creation of new people. This would be really unpleasant, considering that it would then turn out that the Friendly AI spent more resources to create fewer people, who experience more suffering and less happiness than they would have under my proposal.
Even the people who don’t consistently follow classical utilitarianism should be happy with my proposed solution of resurrecting dead people into blissful worlds, which kills two birds with one stone.
Moreover, even creating a new human is affected by this argument. What if my children suffer? So it is basically an anti-natalist argument.
It’s not an anti-natalist argument to say that you should create (or resurrect) people into a world with more happiness and less suffering instead of a world with less happiness and more suffering.
To put it into an analogy, if you are presented with two options: a) have a happy child with no chronic diseases and b) have a suffering child with a chronic disease, then option (a) is the more moral option under my value system.
This is similar to choosing between a) resurrecting people into a blissful world with no chronic diseases and b) resurrecting people into a world with chronic diseases.
The discussion about anti-natalism actually made me think of another argument for why we are probably not in a simulation that you’ve described. I think that creating new happy people is good (an explicitly anti-anti-natalist position). I expect (based on our conversation so far) that so do you. If that’s the case, then we would still expect ourselves to be in a blissful simulation as opposed to being in a simulation of our world. Here is my thought process:
The history of the “real” world would presumably be similar to ours. That means that (if Friendly AI was to follow your strategy) there would be 110 billion dead people to resurrect. This AI happens to completely agree with everything you’ve said so far in our conversation. So it goes ahead and resurrects 110 billion people.
Perfect, now it’s left with a lot of resources on its hands because an AI pursuing a strategy that depends on so many assumptions should have more than enough resources to tolerate a scenario where one of the assumptions turns out to be false.
Thus, this Friendly AI spends a big chunk of its resources on creating new happy people in blissful simulations. Given that such simulations require fewer resources, we would expect more people to be in such simulations than in simulations of worlds like ours.
Even if you don’t agree with the reasoning above, you should agree that it would be pretty weird and ad hoc if the Friendly AI had exactly enough resources to resurrect 110 billion people into a world like ours but not enough resources to resurrect (110 + N) billion people into a blissful simulation. Thus, we ought to expect more people to be in a blissful simulation than in a world like ours.
Given plausible anthropics, we should thus expect that, if we are being simulated by a Friendly AI, we would be in a blissful world (like the ones I described). Since we are not in such a world, we should decrease our credence in the hypothesis of us being simulated by a Friendly AI.
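If it helps, the credence update in the previous paragraph can be phrased as a small Bayesian sketch. The prior and both likelihoods below are placeholder assumptions, chosen only to show the direction of the update, not its size.

```python
# Hedged Bayesian sketch: how observing a world with abundant suffering should
# shift credence in "we are in a Friendly-AI resurrection simulation".
# All probabilities are illustrative placeholders.

p_friendly_sim = 0.10            # assumed prior
p_not_friendly_sim = 1 - p_friendly_sim

p_obs_given_friendly = 0.05      # low, if Friendly AIs mostly build blissful worlds
p_obs_given_not = 0.50           # assumed higher under the alternatives

posterior = (p_obs_given_friendly * p_friendly_sim) / (
    p_obs_given_friendly * p_friendly_sim + p_obs_given_not * p_not_friendly_sim
)
print(f"posterior = {posterior:.3f}")  # about 0.011, down from the 0.10 prior
```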
Furthermore, why not just resurrect all these people into worlds with no suffering?
My point is that it is impossible to resurrect anyone (in this model) without him reliving his life again first; after that, he obviously gets an eternal blissful life in the real (not simulated) world.
This may not be factually true, btw: current LLMs can create good models of past people without explicitly running a simulation of their previous life.
The discussion about anti-natalism actually made me think of another argument for why we are probably not in a simulation that you’ve described
It is a variant of the Doomsday argument. This idea is even more controversial than the simulation argument. There is no future with many people in it. Friendly AI can fight the DA curse via simulations, by creating many people who do not know their real position in time, which can be one more argument for simulation, but it requires a rather weird decision theory.
This may not be factually true, btw: current LLMs can create good models of past people without explicitly running a simulation of their previous life.
Yup, I agree.
It is a variant of the Doomsday argument. This idea is even more controversial than the simulation argument. There is no future with many people in it.
This makes my case even stronger! Basically, if a Friendly AI has no issues with simulating conscious beings in general, then we have good reasons to expect it to simulate more observers in blissful worlds than in worlds like ours.
If the Doomsday Argument tells us that Friendly AI didn’t simulate more observers in blissful worlds than in worlds like ours, then that gives us even more reasons to think that we are not being simulated by a Friendly AI in the way that you have described.