Isn’t the second option the same as Omega offering to clone you, put the clone in hell for a finite amount of time and then destroy it, and give you the money immediately (assuming the money is adjusted to compensate for any lost time in hell in the original example)? So the option is actually to be paid a lot of money in exchange for allowing Omega to torture a person (nominally “you”) who will never experience any further positive utility. I would take two slaps in the face even without compensation instead of that option. I don’t consider my similarity to a person as a reason to treat them as a redundant copy.
That option runs into the problem that you’ve just let Omega extort money by threatening to create a person, torture it, and then destroy it. That seems problematic in other ways.
Everything Omega does is horribly problematic because Omega is at best a UFAI. I’ve never seen “preemptively neutralize Omega completely” offered as an option in any of these hypothetical scenarios, even though that’s obviously the very best choice.
Is it really in anyone’s best interest to ever cooperate with Omega given that Omega seems intent on continuing a belligerent campaign of threats against humanity? “I’ll give you a choice between $1,000,000 or $1,000 today, but tomorrow I might threaten to slap you or throw you in hell. Oh, btw, I just simulated you against your will 2^100 times to maintain my perfect record on one/two-boxing.”
I may be overly tired and that may sound like hyperbole, but I do think that any rational agent encountering a far more powerful agent known to be at least not-friendly should think long and hard about the probability that the powerful agent can be trusted with even seemingly innocuous situations. There may be no way to Win. Some form of defection or defiance of the powerful agent may yield more dignity utilons than playing along with any of the choices offered by Omega. Survival machines may not value dignity and self-determination, but many humans value them quite highly.
I’d totally go for the memory loss/clone destruction option. To me it’s the final outcome that matters most, so if you start with one poor me and end with one rich me without the memory of anything unpleasant, it’s clearly a better option than ending up with one still-pretty-poor me with smarting cheeks. This is, of course, my subjective utility; I make no claim that it is better than anyone else’s for them.
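A minimal sketch of that purely outcome-based accounting, with made-up numbers (the payout figure and the “smarting cheeks” penalty are illustrative assumptions, not anything from the scenario); it only shows that if you score the final state alone, the memory-wipe option wins:

```python
# Toy outcome-based comparison. All numbers are illustrative assumptions;
# nothing here comes from the original scenario.

def final_state_utility(wealth, remembered_suffering):
    """Score only the final state: what I end up with and what I remember."""
    return wealth - remembered_suffering

# Option A: hell (or the clone's torture), then a memory wipe, then the payout.
option_a = final_state_utility(wealth=1_000_000, remembered_suffering=0)

# Option B: two slaps, remembered, and no payout.
option_b = final_state_utility(wealth=0, remembered_suffering=10)

print(option_a > option_b)  # True under this purely outcome-based accounting
```

Someone who also scores experiences as they happen, memory wipe or not, would add a term for the torture itself and could rank the options the other way, which is where the disagreement in this thread comes from.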
To me it’s the final outcome that matters most … it’s clearly a better option than ending up with one still-pretty-poor me … This is, of course, my subjective utility; I make no claim that it is better than anyone else’s for them.
How could one know with any certainty what’s better for them (in the murkier cases)? Alternatively, if you do have a process that lets you learn what’s better for you, you should claim that you can also help others apply that process to figure out what’s better for them (which may be different from what the process says about you).
You can of course decide what to do, but having the ability to implement your own decisions is separate from having the ability to find decisions that are reliably correct, from knowing that the decisions you make are clearly right, or from pursuing what in fact matters most.
Does that apply only to copies of you or to people in general? Would you choose to torture all of humanity for a finite time, make them forget it, and then receive 1 utilon?
Does that apply only to copies of you or to people in general?
As I explained, I do not presume to make decisions for others.
Would you choose to torture all of humanity for a finite time, make them forget it, and then receive 1 utilon?
I would not; see above. A better question would have been “Would you choose to slightly inconvenience a person you dislike for a short time, make them forget it, and then receive 3^^^3 utilons?” If I answered “yes” (and I probably would), then you could probe further to see where exactly my self-professed non-interference breaks down. This is the standard way of probing the dust-specks-vs-torture boundary and exposing the resulting inconsistency (see the sketch below).
Similar strategies apply to clarifying other seemingly absolute positions, including yours (“I don’t consider my similarity to a person as a reason to treat them as a redundant copy.”). Presumably at some point the answers become “I don’t know” rather than Yes/No.
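A rough sketch of that probing, assuming the respondent behaves like a simple threshold-crosser; `would_accept` and its threshold are hypothetical placeholders, not anything either of us has claimed, and the upper bound is just a large finite number since 3^^^3 itself cannot be represented directly:

```python
# Toy bisection over the payout to locate where a professed refusal flips to
# acceptance. `would_accept` is a hypothetical stand-in for the respondent.

def would_accept(payout_utilons, threshold=1_000_000.0):
    """Toy respondent: agrees to the trade once the payout exceeds a threshold."""
    return payout_utilons > threshold

def find_breaking_point(lo=0.0, hi=1e100, iterations=500):
    """Bisect to approximate the payout at which the refusal breaks down."""
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if would_accept(mid):
            hi = mid  # accepted: the breaking point is at or below mid
        else:
            lo = mid  # refused: the breaking point is above mid
    return hi

print(find_breaking_point())  # roughly the toy threshold of 1,000,000
```

A real respondent is unlikely to be this consistent, which is the point: the probing is meant to surface where the answers stop being a clean Yes/No.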
I am fairly certain the only way that I would treat a clone of myself differently than another independent person is if we continued to share internal mental experiences. Then again, I would probably stop thinking of myself and a random person off the street as different people if I started sharing mental experiences with them, too.
In other words, while I would reject sending my fully independent clone to hell in order to gain utility, I might agree to fully share the mental experience with the clone in hell so long as the clone also got to experience the extra utility Omega paid me to balance out hell. That brings up a rather interesting question: if two people share mental experiences, do they achieve double the utility of each person individually, or merely the set union of their individual utilities? Or something else?
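One way to make that question concrete is to compare the two obvious aggregation rules, summing each sharer’s utility versus taking the union of distinct experiences; the experiences and utilon values below are illustrative assumptions only:

```python
# Two toy aggregation rules for a fully shared experience. The experiences and
# their utilon values are illustrative assumptions, not claims about the scenario.

person_a = {"payout": 100, "hell": -80}
person_b = {"payout": 100, "hell": -80}  # identical because the experience is fully shared

def sum_rule(*people):
    """Each sharer's utility counts separately ("double")."""
    return sum(sum(p.values()) for p in people)

def union_rule(*people):
    """Each distinct experience counts once, however many minds share it ("set union")."""
    merged = {}
    for p in people:
        merged.update(p)
    return sum(merged.values())

print(sum_rule(person_a, person_b))    # 40: (100 - 80) counted twice
print(union_rule(person_a, person_b))  # 20: (100 - 80) counted once
```

Which rule is right is exactly the open question; the sketch only shows that the two can disagree by a factor equal to the number of sharers.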
while I would reject sending my fully independent clone to hell in order to gain utility, I might agree to fully share the mental experience with the clone in hell so long as the clone also got to experience the extra utility Omega paid me to balance out hell.
This seems to contradict your earlier assertion that
the second option the same as Omega offering to clone you, put the clone in hell for a finite amount of time and then destroy it, and give you the money immediately
because if you and the clone are one and the same (no cloning happened, you were tortured and then memory-wiped), “both” of you reap the rewards.
because if you and the clone are one and the same (no cloning happened, you were tortured and then memory-wiped), “both” of you reap the rewards.
We are not the same person after the point of the decision. There’s no continuity of experience. The tortured me experiences none of the utility, and the enriched me experiences none of the torture. That was why I thought of the cloning interpretation to begin with.