I’m starting to wonder if much of the confusion around topics like this is due to undefined and inconsistent (or changing) intuitions about how knowledge of copies/clones should feed into utility. It seems weird not to care about clones, and even weirder not to care about whether a potential clone is reified.
Also, since perfect prediction removes causality from the equation, you can just let the genie cheat and perform whatever evil he’s going to do after you choose. And finally, “perfect life” implies infinite utility, so I’m ignoring that part and replacing it with “gets a thing they really want”, to avoid mixing up multiple distinct thought problems.
The puzzle seems equivalent to: “there are a million and one people somewhat similar to you, in that they think they’ve found a lamp and are being offered this puzzle. One of them gets a perfect life if that’s their choice, or gets pelted with eggs if that’s their choice. A million of them get tortured if the one chooses the perfect life, and simply disappear, as if they never existed, if the one chooses the eggs.”
I don’t see any regret in choosing the eggs. I want to be the kind of person who’ll make that sacrifice if it erases, rather than tortures, a million similar entities. Never existing is only slightly worse than existing briefly and then being erased, so the logic holds.
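For concreteness, a minimal Python sketch of that comparison, with entirely made-up utility numbers (every constant below is an illustrative assumption, not part of the thought experiment):

```python
# Hypothetical utilities; only the ordering and rough magnitudes matter.
U_EGGS = -1            # me: pelted with eggs
U_REALLY_WANT = 10     # me: gets a thing I really want
U_TORTURED = -1000     # each similar entity: tortured
U_NEVER_EXISTED = -1   # each similar entity: never existed (assumed only
                       # slightly worse than existing briefly, then erased)

N = 1_000_000          # similar entities at stake

choose_wish = U_REALLY_WANT + N * U_TORTURED    # -999_999_990
choose_eggs = U_EGGS + N * U_NEVER_EXISTED      # -1_000_001

print("wish:", choose_wish)
print("eggs:", choose_eggs)
```

Under any weights in this spirit, where torture is far worse per entity than never having existed, the eggs win by a wide margin.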
Did you see my discussion of semi-clones above (ctrl-f should find it for you)? Do you believe that you necessarily care about them? You might also want to look at the True Prisoner’s Dilemma to better understand the intuition behind not co-operating.
Yes, and I’m not sure it helps: I care about everyone, at least a little bit. I seem to care about people close to me more than people distant from me, but I think I’d agree to be pelted with eggs to prevent a million tortured people from existing. There is some number less than a million where my intuition flips, and I suspect that I’m inconsistent and Dutch-book-able in the details (although log utility on the number of other-tortures-prevented might save me).
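To illustrate how much that flip point depends on the shape of the utility function, here is a rough Python sketch; the egg-cost and per-person caring weight are invented for the example and are not claims about anyone’s actual values:

```python
import math

COST_OF_EGGS = 5.0      # assumed disutility (to me) of being pelted with eggs
CARE_PER_PERSON = 0.01  # assumed weight on each somewhat-similar stranger

def linear_benefit(n: int) -> float:
    # Caring scales linearly with the number of tortures prevented.
    return CARE_PER_PERSON * n

def log_benefit(n: int) -> float:
    # Caring grows only logarithmically with tortures prevented.
    return math.log1p(CARE_PER_PERSON * n)

for n in (10, 1_000, 100_000, 1_000_000):
    print(n, linear_benefit(n) > COST_OF_EGGS, log_benefit(n) > COST_OF_EGGS)

# With these numbers the linear version flips just above n = 500, while the
# log version doesn't flip until roughly n = 15,000; where the intuition flips
# (and whether it can be Dutch-booked) hinges on such details.
```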
I don’t know a good way to construct the conundrum you’re aiming for: one where I don’t care about any of the copies except the one which is me. I kind of think identity doesn’t work that way: a perfect copy _is_ me, an imperfect copy is imperfectly-me. I am you, if I had your genes and experiences rather than mine.
EDIT: epistemic status for this theory of identity is speculative. There is something wrong with the naive intuition that “me” is necessarily a singular thread through time, and that things like sleeping and significant model updates (and perhaps every update) don’t create any drift in how much I care about some future situations relative to others.
Sure, that is what you prefer, but is a selfish agent incoherent?
I think so. Caring about a future being (“self”) only in the case where there’s physical continuity (imperfect as it is), while excluding continuity through copying, is wrong. The distinction between (imperfect) continuity and (partial) similarity likewise seems broken.