DRAFT: Ethical Zombies—A Post On Reality-Fluid
I came up with this after watching a science fiction film, which shall remain nameless due to spoilers, where the protagonist is briefly in a similar situation to the scenario at the end. I’m not sure how original it is, but I certainly don’t recall seeing anything like it before.
Imagine, for simplicity, a purely selfish agent. Call her Alice. Alice is an expected utility maximizer, and she gains utility from eating cakes. Omega appears and offers her a deal—they will flip a fair coin, and give Alice three cakes if it comes up heads. If it comes up tails, they will take one cake away from her stockpile. Alice runs the numbers, determines that the expected utility is positive, and accepts the deal. Just another day in the life of a perfectly truthful superintelligence offering inexplicable choices.
The next day, Omega returns. This time, they offer a slightly different deal—instead of flipping a coin, they will perfectly simulate Alice once. This copy will live out her life just as she would have done in reality—except that she will be given three cakes. The original Alice, however, receives nothing. She reasons that this is equivalent to the last deal, and accepts.
(If you disagree, consider the time between Omega starting the simulation and providing the cake. What subjective odds should Alice assign to receiving the cake?)
Imagine a second agent, Bob, who gets utility from Alice getting utility. One day, Omega shows up and offers to flip a fair coin. If it comes up heads, they will give Alice—who knows nothing of this—three cakes. If it comes up tails, they will take one cake from her stockpile. He reasons as Alice did and accepts.
Guess what? The next day, Omega returns, offering to simulate Alice and give her you-know-what (hint: it’s cakes). Bob reasons just as Alice did about the simulation deal and accepts the bargain.
Humans value each other’s utility. Most notably, we value our lives, and we value each other not being tortured. If we simulate someone a billion times, and switch off one simulation, this is equivalent to risking their life at odds of 1:1,000,000,000. If we simulate someone a billion times and torture one of the simulations, this is equivalent to imposing a one-in-a-billion risk of their being tortured. Such risks are often acceptable, if enough utility is gained by success. We often risk our own lives at worse odds.
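A minimal sketch of that equivalence, assuming each of the N simulations carries an equal 1/N share of the person’s weight and that the harm of being tortured is some fixed negative utility (the numbers below are placeholders, not anything from the argument itself):

```python
# Compare two situations, under the assumption that each of N equally
# weighted simulations carries 1/N of the person's total weight:
#   (a) run N simulations and torture exactly one of them;
#   (b) run one "real" instance and accept a 1-in-N chance it is tortured.
N = 10**9
harm = -1.0  # placeholder utility of one instance being tortured

# (a) one copy out of N suffers the harm
expected_loss_torture_one_copy = (1 / N) * harm

# (b) a 1-in-N gamble over the single real instance
expected_loss_gamble = (1 / N) * harm + ((N - 1) / N) * 0.0

print(expected_loss_torture_one_copy, expected_loss_gamble)  # both -1e-09
```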
If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific—an individual’s private harem or torture chamber or hunting ground—then the people in this simulation *are not real*. Their needs and desires are worth, not nothing, but far less than the merest whims of those who are Really Real. They are, in effect, zombies—not quite p-zombies, since they are conscious, but e-zombies—reasoning, intelligent beings that can talk and scream and beg for mercy but *do not matter*.
My mind rebels at the notion that such a thing might exist, even in theory, and yet … if it were a similarly tiny *chance*, for similar reward, I would shut up and multiply and take it. This could simply be scope insensitivity, or some instinctual dislike of tribe members declaring themselves superior.
Well, there it is! The weirdest of Weirdtopias, I should think. Have I missed some obvious flaw? Have I made some sort of technical error? This is a draft, so criticisms will likely be incorporated into the final product (if indeed someone doesn’t disprove it entirely).