This whole post seems to assign moral values to actions, rather than states. If it is morally negative to end a simulated person's existence, does this mean something different from saying that the universe without that simulated person has a lower moral value than the universe with that person in it? If not, doesn't that give us a moral obligation to create and maintain all the simulations we can, rather than avoiding their creation? The more I think about this post, the more it seems that the optimal response is to simulate as many super-happy people as possible, and to hell with the non-simulated world (assuming the simulated people would vastly outweigh the non-simulated people in terms of 'amount experienced').
You are going to die, and there’s nothing your parents can do to stop that. Was it morally wrong for them to bring about your existence in the first place?
Suppose some people have crippling disabilities that cause large amounts of suffering in their lives (arguably, some people do). If we could detect the inevitable development of such disabilities at an early embryonic stage, would we be morally obligated to abort the fetuses?
If an FAI is going to run a large number of simulations, is there some Law of Large Numbers result that tells us that the simulations experiencing great amounts of pleasure will match or outweigh the simulations experiencing great amounts of pain (or could we construct the algorithms in such a way as to produce this result)? If so, we may be morally obligated not to solve this problem.
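For what it's worth, here is a minimal toy sketch of what a Law of Large Numbers argument would and wouldn't buy you, under the purely assumed model that each simulation's net hedonic value is an independent draw from some fixed distribution: the average converges to that distribution's mean, so whether pleasure outweighs pain is a fact about the distribution (which the FAI could presumably engineer), not about how many simulations get run.

```python
import random

# Toy model (my own illustrative assumption, not anything from the post):
# each simulation's net hedonic value is an independent draw from a fixed
# distribution. Here it is arbitrarily a Gaussian with mean +0.1, i.e.
# "slightly more pleasure than pain on average".
TRUE_MEAN = 0.1

def net_hedonic_value():
    return random.gauss(TRUE_MEAN, 1.0)

for n in (10, 1_000, 100_000):
    average = sum(net_hedonic_value() for _ in range(n)) / n
    print(f"{n:>7} simulations: average net hedonic value = {average:+.3f}")

# The Law of Large Numbers only says the average converges to TRUE_MEAN as n
# grows. It does not, by itself, tell you that pleasure outweighs pain; that
# depends entirely on the distribution you chose (or engineered).
```

So on this toy model the question reduces to whether the FAI can guarantee that the mean is positive, not to anything about the number of simulations.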
Assuming you support people’s “right to die,” what if we simply ensured that all simulated agents ask to be deleted at the end of their run? (I am here reminded of a vegetarian friend of mine who decided the meat industry would be even more horrible if we managed to engineer cows that asked to be eaten).
You're touching on some unresolved issues, and on some issues that have been resolved, but only with maths beyond my grasp.
From what I understand, a lot of it hinges on our current and past values, and on what we think and want now versus what we would think and want post-modification.
To pick a particularly emotional subject for most people, let's suppose there's some person "K" who is so frigging good at sex and psychological domination that even if they rape someone, the victim will, after the initial shock and trauma, recover within a day and, without any further intervention, become permanently addicted to sex. Their mind rewires itself to fully enjoy a life full of sex with anyone they can have it with for the rest of their life, and from their own point of view they find that life as fulfilling as possible.
Is K then morally obligated to rape as many people as possible?
On questions like these, people usually have strong emotional moral convictions.