I’m not sure I consider the two scenarios identical, but I’m still struggling to construct a model of identity and the value of life that works under cloning. And ultimately I think the differences are relevant.
But I agree that your version raises some of the same questions, so let’s start there.
I agree that there are versions of P and B for which it is in everyone’s best interests that the button not be pushed. Again, just to be concrete, I’ll propose a specific such example: B = $1, P = -$1,000,000. (I’m using $ here to denote a certain amount of constant-value stuff, not just burning a million-dollar bill.)
To press that button is simply foolish, for the same reason that spending $500,000 to purchase $0.50 is foolish. And I agree that Pavitra’s proposal is a foolish choice in the same way.
And I agree that when a sufficiently costly mistake is sufficiently compelling, we do well to collectively eliminate the choice—take away the button—from one another, and that determining when that threshold has been met is basically an empirical question. (I mostly think that words like “government” and “society” confuse the issue here, but I don’t disagree with your use of them.)
I’m not sure I agree that these aren’t ethical questions, but I’m not sure that matters.
So far, so good.
Where the questions of the nature of identity creep back in for me is precisely in the equation of a thousand copies of me, created on demand, with a thousand existing people. It just stops being quite so clear whose interests are being protected, and from whom, and what kinds of social entities are entitled to make those kinds of laws.
I guess the intuition I am struggling with is that we derive our collective right to restrict one another’s individual freedom of choice in part from the collective consequences of our individual choices… that if there truly are no externalities to your behavior, then I have no right to interfere with that behavior. Call that the Principle of Independence.
If you exchange N hours of unpleasantness for you for an hour of pleasantness for you, and it all happens inside a black box with negligible externalities… the POI says I have negligible say in that matter. And that seems to scale more or less indefinitely, although at larger scales I start to care about externalities (like opportunity costs) that seemed negligible at smaller scales.
And if what you do inside that black box is create a thousand clones of yourself and set them to mining toothpicks in the Pointlessly Unpleasant Toothpick Mines for twenty years, and then sell the resulting box of toothpicks for a dollar… well, um… I mean, you’re insane, but… I guess I’m saying I don’t have standing there either.
I’m not happy with that conclusion, but I’m not unhappy enough to want to throw out the POI, either.
So that’s kind of where I am.