If FAI paternalism is okay, then the FAI can decide that you would be best-off being wire-headed, apply the modification, and there you are, happy for all eternity.
Merely being unable to regret an occurrence doesn’t make that occurrence coincide with one’s preferences. I couldn’t regret an unexpected, instantaneous death from which I was never revived, either; I emphatically don’t prefer one.
But wire-heading is not death. It is the opposite—the most fulfilling experience possible, to which everything else pales in comparison.
It seems you think paternalism is okay if it is pure in intent and flawless in execution.
Suppose it were shown that vulnerability to smoking addiction is due to a certain gene. Suppose we could create a virus that would silently spread through the human population and fix this gene in everyone, willing or not. Suppose our intent is pure, and we know that this virus would operate flawlessly, only affecting this gene and having no other effects.
But wire-heading is not death. It is the opposite—the most fulfilling experience possible, to which everything else pales in comparison.
...”fulfilling”? Wire-heading only fulfills “make me happy”—it doesn’t fulfill any other goal that a person may have.
“Fulfilling”—in the sense of “To accomplish or carry into effect, as an intention, promise, or prophecy, a desire, prayer, or requirement, etc.; to complete by performance; to answer the requisitions of; to bring to pass, as a purpose or design; to effectuate” (Webster 1913)—is precisely what wire-heading cannot do.
Your other goals are immaterial and pointless to the outside world.
Nevertheless, suppose the FAI respects such a desire. This is questionable, because in the FAI’s mind, this is tantamount to letting a depressed patient stay depressed, simply because a neurotransmitter imbalance causes them to want to stay depressed. But suppose it respects this tendency.
In that case, the cheapest way to satisfy your desire, in terms of consumption of resources, is to create a simulation where you feel like you are thinking, learning and exploring, though in reality your brain is in a vat.
You’d probably be better off just being happy and sharing in the FAI’s infinite wisdom.
Would you do me a favor and refer to this hypothesized agent as a DAI (Denis Artificial Intelligence)? Such an entity is nothing I would call Friendly, and, given the widespread disagreement on what is Friendly, I believe any rhetorical candidates should be referred to by other names. In the meantime:
Your other goals are immaterial and pointless to the outside world.
I reject this point. Let me give a concrete example.
Recently I have been playing a lot of Forza Motorsport 2 on the Xbox 360. I have made some gaming buddies who are more experienced in the game than I am—both better at driving in the game and better at tuning cars in the game. (Like Magic: the Gathering, Forza 2 is explicitly played on both the preparation and performance levels, although tilted more towards the latter.) I admire the skills they have developed in creating and controlling their vehicles and, wishing to be able to admire myself in a similar fashion, I wish to develop my own skills to a similar degree.
Consider you want to explore and learn and build ad infinitum. Progress in your activities requires you to control increasing amounts of matter and consume increasing amounts of energy, until such point as you conflict with others who also want to build and explore. When that point is reached, the only way the FAI can make you all happy is to intervene while you all sleep, put you in separate vats, and from then on let each of you explore an instance of the universe that it simulates for you.
Should it let you wage Star Wars on each other instead? And how would that be different from no AI to begin with?
You seem to be engaging in all-or-nothing thinking. Because I want more X does not mean that I want to maximize X to the exclusion of all other possibilities. I want to explore and learn and build, but I also want to act fairly toward my fellow sapients/sentients. And I want to be happy, and I want my happiness to stem causally from exploring, learning, building, and fairness. And I want a thousand other things I’m not aware of.
An AI which examines my field of desires and maximizes one to the exclusion of all others is actively inimical to my current desires, and to all extrapolations of my current desires I can see.
But everything you do is temporary. All the results you get from it are temporary.
If you seek quality of experience, then the AI can wirehead you and give you that, with minimal consumption of resources. Even if you do not want a constant ultimate experience, all the thousands of your needs are more efficiently fulfilled in a simulation than by letting you directly manipulate matter. Allowing you to waste real resources is inimical both to the length of your life and to everyone else’s.
If you seek personal growth, then the AI already is everything you can aspire to be. Your best bet at personal growth is interfacing or merging with its consciousness. And everyone can do that, as opposed to isolated growth of individual beings, which would consume resources that need to be available for others and for the AI.
If you seek personal growth, then the AI already is everything you can aspire to be.
Why would I build an AI which would steal everything I want to do and leave me with nothing worth doing? That doesn’t sound like the kind of future I want to build.
Edit:
But everything you do is temporary. All the results you get from it are temporary.
That just adds a constraint to what I may accomplish—it doesn’t change my preferences.
Why would I build an AI which would steal everything I want to do and leave me with nothing worth doing? That doesn’t sound like the kind of future I want to build.
Because only one creature can be maximized, and it’s better that it be an AI than a person.
Even if we don’t necessarily want the AI to maximize itself immediately, it will always need to be more powerful than any possible threat, and therefore more powerful than any other creature.
If you want the ultimate protector, it has to be the ultimate thing.
I don’t want it maximized, I want it satisficed—and I, at least, am willing to exchange a small existential risk for a better world. “They who can give up essential liberty to obtain a little temporary safety” &c.
If the AI can search the universe and determine that it is adequately secure from existential threats, I don’t want it expanding very quickly beyond that. Leave some room for us!
But the AI has to plan for a maximized outcome until the end of the universe. In order to maximize the benefit from energy before thermal death, resource efficiency is as important right now as it will be when resources are scarcest.
This is unless the AI discovers that thermal death can be overcome, in which case, great! But what we know so far indicates that the universe will eventually die, even if many billions of years in the future. So conservative resource management is important from day 1.
There are things I could say in reply, but I suspect we are simply talking past each other. I may reply later if I have some new insight into the nature of our disagreement.
The way I understand our disagreement is, you see FAI as a limited-functionality add-on that makes a few aspects of our lives easier for us, while I see it as an unstoppable force, with great implications for everything in its causal future, which just can’t not revolutionize everything, including how we feel, how we think, what we do. I believe I’m following the chain of reasoning to the end, whereas you appear to think we can stop after the first couple steps.
You also keep claiming to know in which particular way FAI is going to change things. Rather than repeating the same statements, you should recognise the disagreement and address it directly, instead of continuing to profess the original assertions.
I don’t think that’s the source of our disagreement—as I mentioned in another thread, if prudence demanded that the population (or some large fraction thereof) be uploaded in software to free up the material substance for other purposes, I would not object. I could even accept major changes to social norms (such as legalization of nonconsensual sex, to use Eliezer Yudkowsky’s example). Our confirmed point of disagreement is not your thesis that “a human population which acquired an FAI would become immensely different from today’s”, it is your thesis that “a human population which acquired an FAI would become wireheads”. Super Happy People, maybe—not wireheads.
One quality that’s relevant to Friendly AI is that it does stop, when appropriate. It’s entirely plausible (according to Eliezer; last time I checked) that a FAI would never do anything that wasn’t a response to an existential threat (i.e. something that could wipe out or severely alter humanity), if that was the course of action most in keeping with our CEV.
It seems you think paternalism is okay if it is pure in intent and flawless in execution.
Whoa whoa whoa wait what? No. Not under a blanket description like that, at any rate. If you want to wirehead, and that’s your considered and stable desire, I say go for it. Have a blast. Just don’t drag us into it.
Would you be in favor of releasing this virus?
No. I’d be in favor of making it available in a controlled non-contagious form to individuals who were interested, though.
I would save the drunk friend (unless I had some kind of special knowledge, such as that the friend got drunk in order to enable him or herself to go through with a plan to indulge a considered and stable sober desire for death). In the case of the depressed friend, I’d want to refer to my best available knowledge of what that friend would have said about the situation prior to acquiring the neurotransmitter imbalance, and act accordingly.
It seems you think paternalism is okay if it is pure in intent and flawless in execution.
You’re twisting my words. I said that FAI paternalism would be different—which it would be, qualitatively and quantitatively. “Pure in intent and flawless in execution” are very fuzzy words, prone to being interpreted differently by different people, and only a very specific set of interpretations of those words would describe FAI.
Suppose it were shown that vulnerability to smoking addiction is due to a certain gene. Suppose we could create a virus that would silently spread through the human population and fix this gene in everyone, willing or not. Suppose our intent is pure, and we know that this virus would operate flawlessly, only affecting this gene and having no other effects.
Would you be in favor of releasing this virus?
I’m with Alicorn on this one: If it can be made into a contagious virus, it can almost certainly be made into a non-contagious one, and that would be the ethical thing to do. However, if it can’t be made into a non-contagious virus, I would personally not release it, and I’m going to refrain from predicting what a FAI would do in that case; part of the point of building a FAI is to be able to give those kinds of decisions to a mind that’s able to make unbiased (or much less biased, if you prefer; there’s a lot of room for improvement in any case) decisions that affect groups of people too large for humans to effectively model.
I understand. That makes some sense. Though smokers’ judgement is impaired by their addiction, one can imagine that at least they will have periods of sanity when they can choose to fix the addiction gene themselves.
We do appear to differ in the case when an infectious virus is the only option to help smokers fix that gene. I would release the virus in that case. I have no qualms taking that decision and absorbing the responsibility.
This seems contradictory to your earlier claims about wireheading. Say that some smokers get a lot of pleasure from smoking, and don’t want to stop, and in fact would experience more pleasure in their lives if they kept the addiction. You’d release the virus?
Is it friendly to rescue a drunk friend who is about to commit suicide, knowing that they’ll come to their senses? Or is it friendly to let them die, because their current preference is to die?
That depends on whether they decided to commit suicide while in a normal-for-them frame of mind, not on their current preference. The first part of the question implies that they didn’t, in which case the correct response is to rescue them, wait for them to get sober, and talk it out—and then they can commit suicide, if they still feel the need.
Very well, then. Next example. Your friend is depressed, and they want to commit suicide. You know that their real problem is a neurotransmitter imbalance that can be easily fixed. However, that same neurotransmitter imbalance is depriving them of any will to fix it, and in fact they refuse to cooperate. You know that if you fix their imbalance regardless, they will be happy, they will live a fulfilled life, and they will be grateful to you for it. Is it friendly to intervene and fix the imbalance, or is it friendly to let them die, seeing as depression and thoughts of suicide are a normal-for-them frame of mind?
This is an excellent answer, and squares well with mine: If they merely want to commit suicide, they may not have considered all the alternatives. If they have considered all the achievable alternatives, and their preference is to commit suicide, I’d support them doing so.
If this is leading in a direction where “wireheading” is identified with “being happy and living a fulfilled life”, then we might as well head it off at the pass.
Being happy—being in a pleasurable state—isn’t enough, we would insist that our future lives should also be meaningful (which I would argue is part of “fulfilled”).
This isn’t merely a subjective attribute, as is “happy”, which could be satisfied by permanently blissing out. It has objective consequences; you can tell “meaningful” from the outside. Meaningful arrangements of matter are improbable but lawful, structured but hard to predict, and so on.
“Being totally happy all the time” is a state of mind, the full description of which would compress very well, just as the description of zillions of molecules of gas can be compressed to a handful of parameters. “Meaningful” corresponds to states of mind with more structure and order.
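As a rough, purely illustrative way to see the compressibility claim (the corpora below are arbitrary stand-ins invented for this sketch, not anything from the discussion), a general-purpose compressor treats a uniformly repeated description, a structured-but-lawful one, and pure noise very differently:

```python
import os
import zlib

def compressed_fraction(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 9)) / len(data)

# Stand-in for "totally happy all the time": one state, endlessly repeated.
uniform = b"blissfully happy " * 600

# Stand-in for a "meaningful" state: lawful and structured, but not a mere
# repetition (here, the squares of 0..1999 written out as text).
structured = " ".join(str(n * n) for n in range(2000)).encode()

# Pure randomness: hard to predict, but not lawful either.
noise = os.urandom(10_000)

for name, data in [("uniform", uniform), ("structured", structured), ("noise", noise)]:
    print(f"{name:10s} compresses to {compressed_fraction(data):.0%} of its original size")
```

On a typical run the uniform text shrinks to a tiny fraction of its size, the structured text to an intermediate fraction, and the noise barely at all; the exact figures depend on the compressor, but the ordering is the point being made above.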
If we are to be somehow “fixed” we would want the “fix” to preserve or restore the property we have now, of being the type of creature who can (and in fact do) choose for themselves.
The preference for “objective meaningfulness”—for states which do not compress very well—seems to me a fairly arbitrary (meaningless) preference. I don’t think it’s much different from paperclip maximization.
Who is to observe the “meaningful” states, if everyone is in a state where they are happy?
I am not even convinced that “happy and fulfilled” compresses easily. But if it did, what is the issue? Everyone will be so happy as to not mind the absence of complicated states.
I would go so far as to say that seeking complicated states is something we do right now because it is the most engaging substitute we have for being happy.
And not everyone does this. Most people prefer to empty their minds instead. It may even be that seeking complexity is a type of neurotic tendency.
Should the FAI be designed with a neurotic tendency?
I can give you my general class of answers to this kind of problem: I will always attempt to the best of my ability to talk someone I care about out of doing something that will cause them to irretrievably cease to function as a person—a category which includes both suicide and wireheading. However, if in spite of my best persuasive efforts—which are likely to have a significant effect, if I’m actually friends with the person—they still want to go through with such a thing, I will support them in doing so.
The specific implementation of the first part in this case would be to try to talk them into trying the meds, with the (accurate) promise that I would be willing to help them suicide if they still wanted to do that after a certain number of months (dependent on how long the meds take to work).
There are so many different anti-depressants, and the methods for choosing which ones are optimal basically come down to the intuition of the psychiatrist. It can take years to iterate through all the possible combinations of psychiatric medication if they keep failing to fix the neurotransmitter imbalance. I think anything short of 2 years is not long enough to conclude that a person’s brain is irreparably broken. It’s also a field that has a good chance of rapid development, such that a brain that seems irreparably broken today will certainly not remain unfixable forever.
--
I explored a business in psychiatric genetic testing and identified about 20 different mutations that could help psychiatrists make treatment decisions, but it is infeasible to bring to market right now without millions of dollars for research, and the business case is not strong enough for me to raise that kind of money. It’ll hit the market within 10 years, sooner if the business case becomes stronger for me doing it or if I have the spare $20k to go out and get the relevant patent to see what doors that opens.
I expect the first consequence of widespread genetic testing for mental health will be for NRIs to become much more widely prescribed as the first-line treatment for depression.
I’m OK with the deletion of very-short-lived copies of myself if there are good reasons to do it. For example, supposing after cryonic suspension I’m revived with scanning and WBE. Unfortunately, unbeknownst to those reviving me, I have a phobia of the Michelin Man and the picture of him on the wall means I deal with the shock of my revival very badly. I’d want the revival team to just shut down, change the picture on the wall and try again.
I can also of course imagine lots of circumstances where deletion of copies would be much less morally justifiable.
I’m OK with the deletion of very-short-lived copies of myself if there are good reasons to do it.
There’s a very nice thought experiment that helps demonstrate this (I think it’s from Nozick). Imagine a sleeping pill that makes you fall asleep in thirty minutes, but you won’t remember the last fifteen minutes of being awake. From the point of view of your future self, the fifteen minutes you don’t remember is exactly like a short-lived copy that got deleted after fifteen minutes. It’s unlikely that anyone would claim taking the pill is unethical, or that you’re killing a version of yourself by doing so.
It’s unlikely that anyone would claim taking the pill is unethical, or that you’re killing a version of yourself by doing so.
Armchair reasoning: I can imagine the mental clone and the original existing at the same time, side-by-side. I cannot imagine myself with the memory loss and myself without the memory loss existing at the same time. Also, whatever actions my past self takes actually affect my future self regardless of what I remember. As such, my instinct is to think of the copy as a separate identity and my past self as the same identity.
Imagine a scenario where I cut off my arm. I am responsible. If my copy cuts off my arm, he would be responsible, not “me.”
This is all playing semantics with personal identity. I am not trying to espouse any particular belief; I am only offering one possible difference between the idea of forgetting your past and copying yourself.
Yeah, okay. You are illustrating my point exactly. Not everyone thinks the way you do about identity and not everyone thinks the way I mentioned about identity. I don’t hold hard and fast about it one way or the other.
But the original example, in which someone who loses 15 minutes is compared to killing off a copy who only lived for 15 minutes, implies a whole ton of things about identity. The word “copy” is too ambiguous to say, “Your copy is you.”
If I switch in “X’s copy is X” and then start talking about various cultural examples of copying, we quickly run into trouble. Why does “X’s copy is X” work for people? Unless I missed a definition of terms comment or post somewhere, I don’t see how we can just assume that is true.
The first use of “copy” I found in this thread is:
Probably the “friendly” action would be to create an un-drunk copy of them, and ask the copy to decide.
It was followed by:
And what do you do with the copy? Kill it?
As best as I can tell, you take the sentence “Your copy is you” to be a tautology or definition or something along those lines. (I could obviously be wrong; please correct me if I am.) What would you call a functionally identical version of X with a separate, distinct identity? Is it even possible? If it is, use that instead of “copy” when reading my comment:
Imagine a scenario where I cut off my arm. I am responsible. If my copy cuts off my arm, he would be responsible, not “me.”
When I read the original comment I responded to:
From the point of view of your future self, the fifteen minutes you don’t remember is exactly like a short-lived copy that got deleted after fifteen minutes.
I was not assuming your definition of copy. Which could entirely be my fault, but I find it hard to believe that you didn’t understand my point enough to predict this response. If you did, it would have been much faster to simply say, “When people at LessWrong talk about copies they mean blah.” In which case I would have responded, “Oh, okay, that makes sense. Ignore my comment.”
The semantics get easier if you think of both as being copies, so you have past-self, copy-1, and copy-2. Then you can ask which copy is you, or if they’re both you. (If past-self is drunk, copy-1 is drunk, and copy-2 is sober, which copy is really more “you”?)
I’d actually be kinda hesitant of such pills and would need to think it out. The version of me that is in those 15 minutes might be a bit unhappy about the situation, for one thing.
And it basically results in 15 minutes of experience that simply “go away”? No gradual transition/merging into the mainline experience, simply 15 minutes that get completely wiped?
Certainly—this is the restore-from-backup scenario, for which Blueberry’s sleeping-pill comparison was apt. (I would definitely like to make a secure backup before taking a risk, personally.) What I wanted to suggest was that duplicate-for-analysis was less clear-cut.
What’s the difference? Supposing that as a matter of course the revival team try a whole bunch of different virtual environments looking for the best results, is that restore-from-backup or duplicate-for-analysis?
Suppose that we ironically find that the limitations on compute hardware mean that no matter how much we spend we hit an exact 1:1 ratio between subjective and real time, but that the hardware is super-cheap. Also, there’s no brain “merge” function. I might fork off a copy to watch a movie to review it for myself, to decide whether the “real me” should watch it.
As MrHen pointed out, you can imagine the ‘duplicate’ and ‘original’ existing side-by-side—this affects intuitions in a number of ways. To pump intuition for a moment, we consider identical twins to be different people due to the differences in their experiences, despite their being nearly identical on a macro level. I haven’t done the calculations to decide where the border of acceptable use of duplication lies, but deleting a copy which diverged from the original twenty years before clearly appears to be over the line.
It’s very hard to know how I would face the prospect of being deleted and replaced with a twenty-minute-old backup in real life!
I may be answering an un-asked question, since I haven’t been following this conversation, but the following solution to the issue of clones occurs to me:
Leave it up to the clone.
Make suicide fully legal and easily available (possibly ‘suicide of any copy of a person in cases where more than one copy exists’, though that could allow twins greater leeway depending on how you define ‘person’; perhaps also add a time limit: the split must have occurred within N years). When a clone is created, it’s automatically given the rights to 1⁄2 of the original’s wealth. If the clone suicides, the original ‘inherits’ the wealth back. If the clone decides not to suicide, it automatically keeps the wealth that it has the rights to.
Given that a clone is functionally the same person as the original, this should be an ethical solution (assuming that you consider suicide ethical at all): someone would have to be very sure that they’d be able to go through with suicide, or very comfortable with the idea of splitting their wealth in half, in order to be willing to take the risk of creating a clone. The only problem that I see is with unsplittable things like careers and relationships. (Flip a coin? Let the other people involved decide?)
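A minimal sketch of the wealth-split rule proposed above, just to make the cases explicit (the class and method names are invented for illustration and are not part of the proposal):

```python
class CloneLedger:
    """Toy model of the proposed rule: a newly created clone automatically
    holds the rights to half the original's wealth; if the clone suicides,
    the original 'inherits' that share back; otherwise the clone keeps it."""

    def __init__(self, original_wealth: float) -> None:
        self.original_wealth = original_wealth
        self.clone_wealth = 0.0
        self.clone_exists = False

    def create_clone(self) -> None:
        # On creation, rights to half the wealth transfer to the clone.
        self.clone_wealth = self.original_wealth / 2
        self.original_wealth -= self.clone_wealth
        self.clone_exists = True

    def clone_suicides(self) -> None:
        # The original inherits the clone's share back.
        self.original_wealth += self.clone_wealth
        self.clone_wealth = 0.0
        self.clone_exists = False

    # If the clone simply decides to keep living, nothing further happens:
    # it keeps the share it already holds.


ledger = CloneLedger(original_wealth=100.0)
ledger.create_clone()      # original: 50.0, clone: 50.0
ledger.clone_suicides()    # original: 100.0, clone: 0.0
```

This is only a bookkeeping sketch; it deliberately ignores the unsplittable cases (careers, relationships) raised above.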
This seems like a good solution. If I cloned myself, I’d want it to be established beforehand which copy would stay around, and which copy would go away. For instance, if you’re going to make a copy that goes to watch a movie to see if the movie is worth your time, the copy that watches the movie should go away, because if it’s good the surviving version of yourself will watch it anyway.
someone would have to be very sure that they’d be able to go through with suicide
I (and thus my clones) don’t see it as suicide, more like amnesia, so we’d have no problem going through with it if the benefit outweighed the amnesia.
If you keep the clone around, in terms of splitting their wealth, both clones can work and make money, so you should get about twice the income for less than twice the expenses (you could share some things). In terms of relationships, you could always bring the clones into a relationship. A four way relationship, made up of two copies of each original person, might be interesting.
A four way relationship, made up of two copies of each original person, might be interesting.
Hmm… *Imagines such a relationship with significant other.* Holy hell that would be weird. The number of puzzling scenarios I can think of just by sitting here is extravagant. Does anyone know of a decent novel based on this premise?
I don’t think those kinds of situations will need to be spelled out in advance, actually. Coming up with a plan that’s acceptable to both versions of yourself before going through with the cloning should be about as easy as coming up with a plan that’s acceptable to just one version, once you’re using the right kind of framework to think about it. (You should be about equally willing to take either role, in other words, otherwise your clone is likely to rebel, and since they’re considered independent from the get-go (and not bound by any contracts they didn’t sign, I assume), there’s not much you can do about that.)
Setting up four-way relationships would definitely be interesting. Another scenario that I like is one where you make a clone to pursue an alternate life-path that you suspect might be better but think is too risky—after a year (or whatever), whichever of you is less happy could suicide and give their wealth to the other one, or both could decide that their respective paths are good and continue with half-wealth.
The more I think about this, the more I want to make a bunch of clones of myself. I don’t even see why I’d need to destroy them. I shouldn’t have to pay for them; they can get their own jobs, so wealth isn’t that much of a concern.
Coming up with a plan that’s acceptable to both versions of yourself before going through with the cloning should be about as easy as coming up with a plan that’s acceptable to just one version, once you’re using the right kind of framework to think about it.
The concern is that immediately after you clone, both copies agree that Copy 1 should live and Copy 2 should die, but afterwards, Copy 2 doesn’t want to lose those experiences. If you decide beforehand that you only want one of you around, and Copy 2 is created specifically to be destroyed, there should be a way to bind Copy 2 to suicide.
Calling it murder seems extreme, since you end up surviving. What’s the difference between binding a copy to suicide and binding yourself to take a sleep-amnesia pill?
If it’s not utterly voluntary when committed, I don’t class it as suicide. (I also consider ‘driving someone to suicide’ to actually be murder.)
My solution to resolving the ethical dilemma is, to reword it, to give the clone full human rights from the moment it’s created (actually a slightly expanded version of current human rights, since we’re currently prohibited from suiciding). I assume that it’s not currently possible to enforce a contract that will directly cause one party’s death; that aspect of inter-human interaction should remain. The wealth-split serves as a balance in two ways: Suddenly having your wealth halved would be traumatic for almost anyone, which gives a clone that had planned to suicide extra impetus to do so, and also should strongly discourage people from taking unnecessary risks when making clones. In other words, that’s not a bug, it’s a feature.
The difference between what you proposed and the sleeping pill scenario is that in the latter, there’s never a situation where an individual is deprived of rights.
If it’s not utterly voluntary when committed, I don’t class it as suicide.
I’m still unclear why you classify it as death at all. You end up surviving it.
I think you’re thinking of a each copy as an individual. I’m thinking of the copies collectively as a tool used by an individual.
The difference between what you proposed and the sleeping pill scenario is that in the latter, there’s never a situation where an individual is deprived of rights.
Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow. You have someone there to enforce it if necessary. The next day, you change your mind, and the person forces you to take the pill anyway. Have you been deprived of rights? (If it helps, substitute eating dessert, or gambling, or doing heroin for taking the pill.)
I think you’re thinking of a each copy as an individual. I’m thinking of the copies collectively as a tool used by an individual.
Yes, I am, and as far as I can tell mine’s the accurate model. Each copy is separately alive and conscious; they should no more be treated as the same individual than twins are treated as the same individual. (Otherwise, why is there any ethical question at all?)
Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow. … Have you been deprived of rights?
This kind of question comes up every so often here, and I still haven’t heard or thought of an answer that satisfies me. I don’t see it as relevant here, though, because I do recognize the clone as a separate individual who shouldn’t be coerced.
Yes, I am, and as far as I can tell mine’s the accurate model.
But if my copies and I don’t think that way, is it still accurate for us? We agree to be bound by any original agreement, and we think any of us are still alive as long as one of us is, so there’s no death involved. Well, death of a living organism, but not death of a person.
I don’t see it as relevant here, though, because I do recognize the clone as a separate individual who shouldn’t be coerced.
It’s the same question, because I’m assuming both copy A and copy B agree to be bound by the agreement immediately after copying (which is the same as the original making a plan immediately before copying). Both copies share a past, so if you can be bound by your past agreements, so can each copy. Even if the copies are separate individuals, they don’t have separate pasts.
If you and all your copies think that way, then you shouldn’t have to worry about them defecting in the first place, and the rule is irrelevant for you. How sure are you that that’s what you really believe, though? Sure enough to bet 1⁄2 your wealth?
My concern with having specific copies be bound to past agreements is that I don’t trust that people won’t abuse that: It’s easy not to see the clone as ‘yourself’, but as an easily exploitable other. Here’s a possible solution to that problem (though one that I don’t like as well as not having the clone bound by prior agreements at all): Clones can only be bound by prior agreements that randomly determine which one acts as the ‘new’ clone and which acts as the ‘old’ clone. So, if you split off a clone to go review a movie for you, and pre-bind the clone to die after reporting back, there’s a 50% chance—determined by a coin flip—that it’s you, the original, who will review the movie, and the clone who will continue with your life.
There isn’t an “original”. After the copying, there’s Copy A and Copy B. Both are me. I’m fine with randomly selecting whether Copy A or Copy B goes to see the movie, but it doesn’t matter, since they’re identical (until one sees the movie). In fact, there is no way to not randomly select which copy sees the movie.
From the point of view of the clone who sees the movie (say it’s bad), “suiciding” is the same as him going back in time and not seeing the movie. So I’d always stick to a prior agreement in a case like that.
If you and all your copies think that way, then you shouldn’t have to worry about them defecting in the first place, and the rule is irrelevant for you. How sure are you that that’s what you really believe, though? Sure enough to bet 1⁄2 your wealth?
I don’t really have any wealth to speak of. But they’re all me. If I won’t defect, then they won’t. The question is just whether or not we might disagree on what’s best for me. In which case, we can either go by prior agreement, or just let them all live. If the other mes really wanted to live, I’d let them. For instance, say I made 5 copies and all 5 of us went out to try different approaches to a career, agreeing the best one would survive. If a year later more than one claimed to have the best result for Blueberry, I might as well let more than one live.
ETA: However, there might be situations where I can only have one copy survive. For instance, I’m in a grad program now that I’d like to finish, and more than one of me can’t be enrolled for administrative reasons. So if I really need only one of me, I guess we could decide randomly which one would survive. I’m all right with forcing a copy to suicide if he changes his mind, since I’m making that decision for all the clones ahead of time to lead to the best outcome for Blueberry.
If one of the clones developed enough individuality to change his mind and disagree with the others, I definitely don’t see how you could consider that one anything other than an individual.
Likewise, if all of the clones decided to change their minds and go their separate ways, that would be functionally the same as you-as-a-single-person-with-a-single-body changing your mind about something, and the general rule there is that humans are allowed to do that, without being interfered with. I don’t see any reason to change that rule.
Be careful of generalizing from one example. I’m relatively certain that the vast majority of people who might consider cloning themselves wouldn’t see it the way you do, and would in fact need significant safeguards to protect the version of themselves who remembers waking up in a lab from being abused by the version of themselves who remembers going home after having their DNA sampled and their brain scanned.
I did have people like you in mind, at least peripherally, in my original suggestion, though: I’m fairly sure that the original proposal doesn’t take away any rights that you already have. (To the best of my knowledge, it is illegal for someone to force you to take a sleeping pill, even if you previously agreed to it, and my knowledge there is a bit better than average; remember that I worked at a nursing home.)
I’m relatively certain that the vast majority of people who might consider cloning themselves wouldn’t see it the way you do, and would in fact need significant safeguards to protect the version of themselves who remembers waking up in a lab from being abused by the version of themselves who remembers going home after having their DNA sampled and their brain scanned.
I’d like to hear more about this. First, I was imagining an identical atom-for-atom duplicate being constructed, in such a way that there is no fact of the matter who’s the original. As in, you press a button and there are two of you. I wasn’t thinking about an organism grown in a lab. But I’m not sure that matters, except that the lab scenario makes it easier to think of one copy being in control of the other copy.
You think the majority of people would worry about, and would need to worry about, one copy abusing the other copy? Why? The copies would have to fight for control first, which should be an even fight. And what would the point be?
I’m fairly sure that the original proposal doesn’t take away any rights that you already have. To the best of my knowledge, it is illegal for someone to force you to take a sleeping pill, even if you previously agreed to it.
Yes, that’s illegal except maybe in an emergency psychiatric situation. Here’s an idea: a time-delayed suicide pill, with no antidote, that one of the copies can take immediately after the cloning. That’s equivalent to having the agreement enforced, but it doesn’t take away any rights either. I think that addresses your concern.
I expect to get back to this; I had to take care of something for work and now I’m too tired to do it justice. If I haven’t responded to it within 18 hours, please remind me.
After conferring with Blueberry via PM, we agree that we’ll need to talk in realtime to get much further with this. Our schedules are both fairly busy right now, but we intend to try to turn the discussion into a top post. (I’d also be amenable to making the log public, or letting other people observe or participate, but I haven’t talked to Blue about that.)
I imagine it would be much like a case of amnesia, only with less disorientation.
Edit: Wait, I’m looking at the wrong half. One moment.
Edit: I suppose it would depend on the circumstances—“fear” is an obvious one, although mitigated to an extent by knowing that I would not be leaving a hole behind me (no grieving relatives, etc.).
Depends on how much it cost me to make it, and how much it costs to keep it around. I’m permanently busy, I’m sure I could use a couple of extra hands around the house ;)
If FAI paternalism is okay, then the FAI can decide that you would be best-off being wire-headed, apply the modification, and there you are, happy for all eternity.
You’ll never regret it.
Merely being unable to regret an occurrence doesn’t make that occurrence coincide with one’s preferences. I couldn’t regret an unexpected, instantaneous death from which I was never revived, either; I emphatically don’t prefer one.
But wire-heading is not death. It is the opposite—the most fulfilling experience possible, to which everything else pales in comparison.
It seems you think paternalism is okay if it is pure in intent and flawless in execution.
Suppose it were shown that vulnerability to smoking addiction is due to a certain gene. Suppose we could create a virus that would silently spread through the human population and fix this gene in everyone, willing or not. Suppose our intent is pure, and we know that this virus would operate flawlessly, only affecting this gene and having no other effects.
Would you be in favor of releasing this virus?
...”fulfilling”? Wire-heading only fulfills “make me happy”—it doesn’t fulfill any other goal that a person may have.
“Fulfilling”—in the sense of “To accomplish or carry into effect, as an intention, promise, or prophecy, a desire, prayer, or requirement, etc.; to complete by performance; to answer the requisitions of; to bring to pass, as a purpose or design; to effectuate” (Webster 1913)—is precisely what wire-heading cannot do.
Your other goals are immaterial and pointless to the outside world.
Nevertheless, suppose the FAI respects such a desire. This is questionable, because in the FAI’s mind, this is tantamount to letting a depressed patient stay depressed, simply because a neurotransmitter imbalance causes them to want to stay depressed. But suppose it respects this tendency.
In that case, the cheapest way to satisfy your desire, in terms of consumption of resources, is to create a simulation where you feel like you are thinking, learning and exploring, though in reality your brain is in a vat.
You’d probably be better off just being happy and sharing in the FAI’s infinite wisdom.
Would you do me a favor and refer to this hypothesized agent as a DAI (Denis Artificial Intelligence)? Such an entity is nothing I would call Friendly, and, given the widespread disagreement on what is Friendly, I believe any rhetorical candidates should be referred to by other names. In the meantime:
I reject this point. Let me give a concrete example.
Recently I have been playing a lot of Forza Motorsport 2 on the Xbox 360. I have made some gaming buddies who are more experienced in the game than I am—both better at driving in the game and better at tuning cars in the game. (Like Magic: the Gathering, Forza 2 is explicitly played on both the preparation and performance levels, although tilted more towards the latter.) I admire the skills they have developed in creating and controlling their vehicles and, wishing to be able to admire myself in a similar fashion, I wish to develop my own skills to a similar degree.
What is the DAI response to this?
An FAI-enhanced World of Warcraft?
You can still interact with others even though you’re in a vat.
Though as I commented elsewhere, chances are that FAI could fabricate more engaging companions for you than mere human beings.
And chances are that all this is inferior to being the ultimate wirehead.
That could be fairly awesome.
If it comes to that, I could see making the compromise.
This relates to subjects discussed in the other thread—I’ll let that conversation stand in for my reply to it.
Well...
Consider you want to explore and learn and build ad infinitum. Progress in your activities requires you to control increasing amounts of matter and consume increasing amounts of energy, until such point as you conflict with others who also want to build and explore. When that point is reached, the only way the FAI can make you all happy is to intervene while you all sleep, put you in separate vats, and from then on let each of you explore an instance of the universe that it simulates for you.
Should it let you wage Star Wars on each other instead? And how would that be different from no AI to begin with?
You seem to be engaging in all-or-nothing thinking. Because I want more X does not mean that I want to maximize X to the exclusion of all other possibilities. I want to explore and learn and build, but I also want to act fairly toward my fellow sapients/sentients. And I want to be happy, and I want my happiness to stem causally from exploring, learning, building, and fairness. And I want a thousand other things I’m not aware of.
An AI which examines my field of desires and maximizes one to the exclusion of all others is actively inimical to my current desires, and to all extrapolations of my current desires I can see.
But everything you do is temporary. All the results you get from it are temporary.
If you seek quality of experience, then the AI can wirehead you and give you that, with minimal consumption of resources. Even if you do not want a constant ultimate experience, all the thousands of your needs are more efficiently fulfilled in a simulation than by letting you directly manipulate matter. Allowing you to waste real resources is inimical both to the length of your life and to everyone else’s.
If you seek personal growth, then the AI already is everything you can aspire to be. Your best bet at personal growth is interfacing or merging with its consciousness. And everyone can do that, as opposed to isolated growth of individual beings, which would consume resources that need to be available for others and for the AI.
Why would I build an AI which would steal everything I want to do and leave me with nothing worth doing? That doesn’t sound like the kind of future I want to build.
Edit:
That just adds a constraint to what I may accomplish—it doesn’t change my preferences.
Because only one creature can be maximized, and it’s better that it be an AI than a person.
Even if we don’t necessarily want the AI to maximize itself immediately, it will always need to be more powerful than any possible threat, and therefore more powerful than any other creature.
If you want the ultimate protector, it has to be the ultimate thing.
I don’t want it maximized, I want it satisficed—and I, at least, am willing to exchange a small existential risk for a better world. “They who can give up essential liberty to obtain a little temporary safety” &c.
If the AI can search the universe and determine that it is adequately secure from existential threats, I don’t want it expanding very quickly beyond that. Leave some room for us!
But the AI has to plan for a maximized outcome until the end of the universe. In order to maximize the benefit from energy before thermal death, resource efficiency is as important right now as it will be when resources are scarcest.
This is unless the AI discovers that thermal death can be overcome, in which case, great! But what we know so far indicates that the universe will eventually die, even if many billions of years in the future. So conservative resource management is important from day 1.
There are things I could say in reply, but I suspect we are simply talking past each other. I may reply later if I have some new insight into the nature of our disagreement.
The way I understand our disagreement is, you see FAI as a limited-functionality add-on that makes a few aspects of our lives easier for us, while I see it as an unstoppable force, with great implications for everything in its causal future, which just can’t not revolutionize everything, including how we feel, how we think, what we do. I believe I’m following the chain of reasoning to the end, whereas you appear to think we can stop after the first couple steps.
You also keep claiming to know in which particular way FAI is going to change things. Rather than repeating the same statements, you should recognise the disagreement and address it directly, instead of continuing to profess the original assertions.
I don’t think that’s the source of our disagreement—as I mentioned in another thread, if prudence demanded that the population (or some large fraction thereof) be uploaded in software to free up the material substance for other purposes, I would not object. I could even accept major changes to social norms (such as legalization of nonconsensual sex, to use Eliezer Yudkowsky’s example). Our confirmed point of disagreement is not your thesis that “a human population which acquired an FAI would become immensely different from today’s”, it is your thesis that “a human population which acquired an FAI would become wireheads”. Super Happy People, maybe—not wireheads.
One quality that’s relevant to Friendly AI is that it does stop, when appropriate. It’s entirely plausible (according to Eliezer; last time I checked) that a FAI would never do anything that wasn’t a response to an existential threat (i.e. something that could wipe out or severely alter humanity), if that was the course of action most in keeping with our CEV.
Whoa whoa whoa wait what? No. Not under a blanket description like that, at any rate. If you want to wirehead, and that’s your considered and stable desire, I say go for it. Have a blast. Just don’t drag us into it.
No. I’d be in favor of making it available in a controlled non-contagious form to individuals who were interested, though.
Apologies, Alicorn—I was confusing you with Adelene. I was paying all my attention to the content and not enough to who the author was.
Only the first paragraph (but wire-heading is not death) is directed at your comment. The rest is actually directed at Adelene.
My point was that you used “you won’t regret it” as a point in favor of wireheading, whereas it does not serve as a point in favor of death.
Can you check the thread of this comment:
http://lesswrong.com/lw/1o9/welcome_to_heaven/1iia?context=3#comments
and let me know what your response to that thread is?
I would save the drunk friend (unless I had some kind of special knowledge, such as that the friend got drunk in order to enable him or herself to go through with a plan to indulge a considered and stable sober desire for death). In the case of the depressed friend, I’d want to refer to my best available knowledge of what that friend would have said about the situation prior to acquiring the neurotransmitter imbalance, and act accordingly.
You’re twisting my words. I said that FAI paternalism would be different—which it would be, qualitatively and quantitatively. “Pure in intent and flawless in execution” are very fuzzy words, prone to being interpreted differently by different people, and only a very specific set of interpretations of those words would describe FAI.
I’m with Alicorn on this one: If it can be made into a contagious virus, it can almost certainly be made into a non-contagious one, and that would be the ethical thing to do. However, if it can’t be made into a non-contagious virus, I would personally not release it, and I’m going to refrain from predicting what a FAI would do in that case; part of the point of building a FAI is to be able to give those kinds of decisions to a mind that’s able to make unbiased (or much less biased, if you prefer; there’s a lot of room for improvement in any case) decisions that affect groups of people too large for humans to effectively model.
I understand. That makes some sense. Though smokers’ judgement is impaired by their addiction, one can imagine that at least they will have periods of sanity when they can choose to fix the addiction gene themselves.
We do appear to differ in the case when an infectious virus is the only option to help smokers fix that gene. I would release the virus in that case. I have no qualms taking that decision and absorbing the responsibility.
This seems contradictory to your earlier claims about wireheading. Say that some smokers get a lot of pleasure from smoking, and don’t want to stop, and in fact would experience more pleasure in their lives if they kept the addiction. You’d release the virus?
I maintain that an AI that would do that isn’t Friendly.
I believe that my definition of Friendliness is in keeping with the standard definition that’s in use here.
How are you defining Friendliness, that you would consider an AI that would wirehead someone against their will to be Friendly?
Is it friendly to rescue a drunk friend who is about to commit suicide, knowing that they’ll come to their senses? Or is it friendly to let them die, because their current preference is to die?
That depends on whether they decided to commit suicide while in a normal-for-them frame of mind, not on their current preference. The first part of the question implies that they didn’t, in which case the correct response is to rescue them, wait for them to get sober, and talk it out—and then they can commit suicide, if they still feel the need.
Very well, then. Next example. Your friend is depressed, and they want to commit suicide. You know that their real problem is a neurotransmitter imbalance that can be easily fixed. However, that same neurotransmitter imbalance is depriving them of any will to fix it, and in fact they refuse to cooperate. You know that if you fix their imbalance regardless, they will be happy, they will live a fulfilled life, and they will be grateful to you for it. Is it friendly to intervene and fix the imbalance, or is it friendly to let them die, seeing as depression and thoughts of suicide are a normal-for-them frame of mind?
It doesn’t follow that they prefer to commit suicide.
This is an excellent answer, and squares well with mine: If they merely want to commit suicide, they may not have considered all the alternatives. If they have considered all the achievable alternatives, and their preference is to commit suicide, I’d support them doing so.
If this is leading in a direction where “wireheading” is identified with “being happy and living a fulfilled life”, then we might as well head it off at the pass.
Being happy—being in a pleasurable state—isn’t enough, we would insist that our future lives should also be meaningful (which I would argue is part of “fulfilled”).
This isn’t merely a subjective attribute, as is “happy”, which could be satisfied by permanently blissing out. It has objective consequences; you can tell “meaningful” from the outside. Meaningful arrangements of matter are improbable but lawful, structured but hard to predict, and so on.
“Being totally happy all the time” is a state of mind, the full description of which would compress very well, just as the description of zillions of molecules of gas can be compressed to a handful of parameters. “Meaningful” corresponds to states of mind with more structure and order.
If we are to be somehow “fixed” we would want the “fix” to preserve or restore the property we have now, of being the type of creature who can (and in fact do) choose for themselves.
The preference for “objective meaningfulness”—for states which do not compress very well—seems to me a fairly arbitrary (meaningless) preference. I don’t think it’s much different from paperclip maximization.
Who is to observe the “meaningful” states, if everyone is in a state where they are happy?
I am not even convinced that “happy and fulfilled” compresses easily. But if it did, what is the issue? Everyone will be so happy as to not mind the absence of complicated states.
I would go so far as to say that seeking complicated states is something we do right now because it is the most engaging substitute we have for being happy.
And not everyone does this. Most people prefer to empty their minds instead. It may even be that seeking complexity is a type of neurotic tendency.
Should the FAI be designed with a neurotic tendency?
I’m not so sure.
I can give you my general class of answers to this kind of problem: I will always attempt to the best of my ability to talk someone I care about out of doing something that will cause them to irretrievably cease to function as a person—a category which includes both suicide and wireheading. However, if in spite of my best persuasive efforts—which are likely to have a significant effect, if I’m actually friends with the person—they still want to go through with such a thing, I will support them in doing so.
The specific implementation of the first part in this case would be to try to talk them into trying the meds, with the (accurate) promise that I would be willing to help them suicide if they still wanted to do that after a certain number of months (dependent on how long the meds take to work).
There are so many different anti-depressants, and the methods for choosing which ones are optimal basically come down to the intuition of the psychiatrist. It can take years to iterate through all the possible combinations of psychiatric medication if they keep failing to fix the neurotransmitter imbalance. I think anything short of 2 years is not long enough to conclude that a person’s brain is irreparably broken. It’s also a field that has a good chance of rapid development, such that a brain that seems irreparably broken today will certainly not remain unfixable forever.
--
I explored a business in psychiatric genetic testing and identified about 20 different mutations that could help psychiatrists make treatment decisions, but it is infeasible to bring to market right now without millions of dollars for research, and the business case is not strong enough for me to raise that kind of money. It’ll hit the market within 10 years, sooner if the business case becomes stronger for me doing it or if I have the spare $20k to go out and get the relevant patent to see what doors that opens.
I expect the first consequence of widespread genetic testing for mental health will be for NRIs to become much more widely prescribed as the first-line treatment for depression.
Probably the “friendly” action would be to create an un-drunk copy of them, and ask the copy to decide.
And what do you do with the copy? Kill it?
I’m OK with the deletion of very-short-lived copies of myself if there are good reasons to do it. For example, supposing after cryonic suspension I’m revived with scanning and WBE. Unfortunately, unbeknownst to those reviving me, I have a phobia of the Michelin Man and the picture of him on the wall means I deal with the shock of my revival very badly. I’d want the revival team to just shut down, change the picture on the wall and try again.
I can also of course imagine lots of circumstances where deletion of copies would be much less morally justifiable.
There’s a very nice thought experiment that helps demonstrate this (I think it’s from Nozick). Imagine a sleeping pill that makes you fall asleep in thirty minutes, but you won’t remember the last fifteen minutes of being awake. From the point of view of your future self, the fifteen minutes you don’t remember is exactly like a short-lived copy that got deleted after fifteen minutes. It’s unlikely that anyone would claim taking the pill is unethical, or that you’re killing a version of yourself by doing so.
Armchair reasoning: I can imagine the mental clone and the original existing at the same time, side-by-side. I cannot imagine myself with the memory loss and myself without the memory loss as existing at the same time. Also, whatever actions my past self takes actually affect my future self, regardless of what I remember. As such, my instinct is to think of the copy as a separate identity and of my past self as the same identity.
Your copy would also take actions that affect your future self. What is the difference here?
Imagine a scenario where I cut off my arm. I am responsible. If my copy cuts off my arm, he would be responsible, not “me.”
This is all playing semantics with personal identity. I am not trying to espouse any particular belief; I am only offering one possible difference between the idea of forgetting your past and copying yourself.
That doesn’t make any sense. Your copy is you.
Yeah, okay. You are illustrating my point exactly. Not everyone thinks the way you do about identity, and not everyone thinks the way I mentioned about identity. I don’t hold a hard-and-fast position on it one way or the other.
But the original example, where someone who loses 15 minutes is compared to a copy who lived for only 15 minutes and was then deleted, implies a whole ton of things about identity. The word “copy” is too ambiguous to simply assert, “Your copy is you.”
If I substituted “X’s copy is X” and then started talking about various cultural examples of copying, we would quickly run into trouble. Why does “X’s copy is X” work for people? Unless I missed a definition-of-terms comment or post somewhere, I don’t see how we can just assume it is true.
The first use of “copy” I found in this thread is:
It was followed by:
As best as I can tell, you take the sentence “Your copy is you” to be a tautology or a definition or something along those lines. (I could obviously be wrong; please correct me if I am.) What would you call a functionally identical version of X with a separate, distinct identity? Is it even possible? If it is, use that instead of “copy” when reading my comment:
When I read the original comment I responded to:
I was not assuming your definition of copy, which could entirely be my fault, but I find it hard to believe that you didn’t understand my point well enough to predict this response. If you did, it would have been much faster to simply say, “When people at LessWrong talk about copies they mean blah.” In which case I would have responded, “Oh, okay, that makes sense. Ignore my comment.”
The semantics get easier if you think of both as being copies, so you have past-self, copy-1, and copy-2. Then you can ask which copy is you, or if they’re both you. (If past-self is drunk, copy-1 is drunk, and copy-2 is sober, which copy is really more “you”?)
Yeah, actually, that helps a lot. Using that language, most of the follow-up questions I have are obvious enough to skip bringing up. Thanks.
I’d actually be kinda hesitant about such pills and would need to think it out. The version of me that exists during those 15 minutes might be a bit unhappy about the situation, for one thing.
Such pills do exist in the real world: a lot of sleeping pills have similar effects, as does consuming significant amounts of alcohol.
And it basically results in 15 minutes of experience that simply “go away”? No gradual transition or merging into the mainline experience, just 15 minutes that get completely wiped?
eeew.
For that matter, so does falling asleep in the normal way.
Certainly—this is the restore-from-backup scenario, for which Blueberry’s sleeping-pill comparison was apt. (I would definitely like to make a secure backup before taking a risk, personally.) What I wanted to suggest was that duplicate-for-analysis was less clear-cut.
What’s the difference? Supposing that as a matter of course the revival team try a whole bunch of different virtual environments looking for the best results, is that restore-from-backup or duplicate-for-analysis?
Suppose that we ironically find that the limitations on compute hardware mean that no matter how much we spend we hit an exact 1:1 ratio between subjective and real time, but that the hardware is super-cheap. Also, there’s no brain “merge” function. I might fork off a copy to watch a movie to review it for myself, to decide whether the “real me” should watch it.
As MrHen pointed out, you can imagine the ‘duplicate’ and ‘original’ existing side-by-side—this affects intuitions in a number of ways. To pump intuition for a moment, we consider identical twins to be different people due to the differences in their experiences, despite their being nearly identical on a macro level. I haven’t done the calculations to decide where the border of acceptable use of duplication lies, but deleting a copy which diverged from the original twenty years before clearly appears to be over the line.
Absolutely, which is why I specified short-lived above.
Though it’s very hard to know how I would face the prospect of being deleted and replaced with a twenty-minute-old backup in real life!
I may be answering an un-asked question, since I haven’t been following this conversation, but the following solution to the issue of clones occurs to me:
Leave it up to the clone.
Make suicide fully legal and easily available (possibly ‘suicide of any copy of a person in cases where more than one copy exists’, though that could allow twins greater leeway depending on how you define ‘person’ - perhaps also add a time limit: the split must have occurred within N years). When a clone is created, it’s automatically given the rights to 1⁄2 of the original’s wealth. If the clone suicides, the original ‘inherits’ the wealth back. If the clone decides not to suicide, it automatically keeps the wealth that it has the rights to.
Given that a clone is functionally the same person as the original, this should be an ethical solution (assuming that you consider suicide ethical at all) - someone would have to be very sure that they’d be able to go through with suicide, or very comfortable with the idea of splitting their wealth in half, in order to be willing to take the risk of creating a clone. The only problem that I see is with unsplittable things like careers and relationships. (Flip a coin? Let the other people involved decide?)
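If it helps to see the bookkeeping spelled out, here’s a toy sketch of the wealth rule as I understand it. The class, the function names, and the $100k figure are mine, purely for illustration, not part of the proposal:

```python
import dataclasses

@dataclasses.dataclass
class Person:
    name: str
    wealth: float
    alive: bool = True

def create_clone(original: Person) -> Person:
    """On creation, the clone automatically gets rights to half the original's wealth."""
    original.wealth /= 2
    return Person(original.name + " (clone)", original.wealth)

def clone_suicides(original: Person, clone: Person) -> None:
    """If the clone suicides, the original 'inherits' the wealth back."""
    clone.alive = False
    original.wealth += clone.wealth
    clone.wealth = 0

# Illustrative run: an original with $100k of wealth.
alice = Person("Alice", 100_000)
alice_2 = create_clone(alice)
print(alice.wealth, alice_2.wealth)   # 50000.0 50000.0 -- each holds half
clone_suicides(alice, alice_2)        # clone opts out; wealth reverts
print(alice.wealth)                   # 100000.0
```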
This seems like a good solution. If I cloned myself, I’d want it to be established beforehand which copy would stay around, and which copy would go away. For instance, if you’re going to make a copy that goes to watch a movie to see if the movie is worth your time, the copy that watches the movie should go away, because if it’s good the surviving version of yourself will watch it anyway.
I (and thus my clones) don’t see it as suicide, more like amnesia, so we’d have no problem going through with it if the benefit outweighed the amnesia.
If you keep the clone around, in terms of splitting their wealth, both clones can work and make money, so you should get about twice the income for less than twice the expenses (you could share some things). In terms of relationships, you could always bring the clones into a relationship. A four way relationship, made up of two copies of each original person, might be interesting.
Hmm… *Imagines such a relationship with significant other.* Holy hell, that would be weird. The number of puzzling scenarios I can think of just by sitting here is extravagant. Does anyone know of a decent novel based on this premise?
I don’t think those kinds of situations will need to be spelled out in advance, actually. Coming up with a plan that’s acceptable to both versions of yourself before going through with the cloning should be about as easy as coming up with a plan that’s acceptable to just one version, once you’re using the right kind of framework to think about it. (In other words, you should be about equally willing to take either role; otherwise your clone is likely to rebel, and since they’re considered independent from the get-go, and not bound by any contracts they didn’t sign, I assume, there’s not much you can do about that.)
Setting up four-way relationships would definitely be interesting. Another scenario that I like is one where you make a clone to pursue an alternate life-path that you suspect might be better but think is too risky—after a year (or whatever), whichever of you is less happy could suicide and give their wealth to the other one, or both could decide that their respective paths are good and continue with half-wealth.
The more I think about this, the more I want to make a bunch of clones of myself. I don’t even see why I’d need to destroy them. I shouldn’t have to pay for them; they can get their own jobs, so wealth isn’t that much of a concern.
The concern is that immediately after you clone, both copies agree that Copy 1 should live and Copy 2 should die, but afterwards, Copy 2 doesn’t want to lose those experiences. If you decide beforehand that you only want one of you around, and Copy 2 is created specifically to be destroyed, there should be a way to bind Copy 2 to suicide.
Disagree. I would class that as murder, not suicide, and consider creating a clone who would be subject to such binding to be unethical.
Calling it murder seems extreme, since you end up surviving. What’s the difference between binding a copy to suicide and binding yourself to take a sleep-amnesia pill?
If it’s not utterly voluntary when committed, I don’t class it as suicide. (I also consider ‘driving someone to suicide’ to actually be murder.)
My solution to resolving the ethical dilemma is, to reword it, to give the clone full human rights from the moment it’s created (actually a slightly expanded version of current human rights, since we’re currently prohibited from suiciding). I assume that it’s not currently possible to enforce a contract that will directly cause one party’s death; that aspect of inter-human interaction should remain. The wealth-split serves as a balance in two ways: Suddenly having your wealth halved would be traumatic for almost anyone, which gives a clone that had planned to suicide extra impetus to do so, and also should strongly discourage people from taking unnecessary risks when making clones. In other words, that’s not a bug, it’s a feature.
The difference between what you proposed and the sleeping pill scenario is that in the latter, there’s never a situation where an individual is deprived of rights.
I’m still unclear why you classify it as death at all. You end up surviving it.
I think you’re thinking of each copy as an individual. I’m thinking of the copies collectively as a tool used by an individual.
Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow. You have someone there to enforce it if necessary. The next day, you change your mind, and the person forces you to take the pill anyway. Have you been deprived of rights? (If it helps, substitute eating dessert, or gambling, or doing heroin for taking the pill.)
I don’t think any such agreement could be legally binding under current law, which is relevant since we’re talking about rights.
Yes, I am, and as far as I can tell mine’s the accurate model. Each copy is separately alive and conscious; they should no more be treated as the same individual than twins are treated as the same individual. (Otherwise, why is there any ethical question at all?)
This kind of question comes up every so often here, and I still haven’t heard or thought of an answer that satisfies me. I don’t see it as relevant here, though, because I do recognize the clone as a separate individual who shouldn’t be coerced.
But if my copies and I don’t think that way, is it still accurate for us? We agree to be bound by any original agreement, and we think any of us are still alive as long as one of us is, so there’s no death involved. Well, death of a living organism, but not death of a person.
It’s the same question, because I’m assuming both copy A and copy B agree to be bound by the agreement immediately after copying (which is the same as the original making a plan immediately before copying). Both copies share a past, so if you can be bound by your past agreements, so can each copy. Even if the copies are separate individuals, they don’t have separate pasts.
If you and all your copies think that way, then you shouldn’t have to worry about them defecting in the first place, and the rule is irrelevant for you. How sure are you that that’s what you really believe, though? Sure enough to bet 1⁄2 your wealth?
My concern with having specific copies be bound to past agreements is that I don’t trust that people won’t abuse that: It’s easy not to see the clone as ‘yourself’, but as an easily exploitable other. Here’s a possible solution to that problem (though one that I don’t like as well as not having the clone bound by prior agreements at all): Clones can only be bound by prior agreements that randomly determine which one acts as the ‘new’ clone and which acts as the ‘old’ clone. So, if you split off a clone to go review a movie for you, and pre-bind the clone to die after reporting back, there’s a 50% chance—determined by a coin flip—that it’s you, the original, who will review the movie, and the clone who will continue with your life.
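To make the coin-flip safeguard concrete, here’s a minimal sketch; the function and role names are hypothetical, just to show the 50/50 assignment:

```python
import random

def assign_roles() -> dict:
    """Randomly decide which instance continues the original's life and which
    carries out the pre-agreed errand (e.g. reviewing the movie) and is then
    deleted, per the rule that prior agreements may only bind a randomly
    chosen instance. Neither side can predict the outcome in advance, so the
    agreement can't be written to exploit 'the clone'."""
    roles = ["continue_original_life", "run_errand_then_terminate"]
    random.shuffle(roles)  # the 50% coin flip
    return {"instance_a": roles[0], "instance_b": roles[1]}

print(assign_roles())
# e.g. {'instance_a': 'run_errand_then_terminate', 'instance_b': 'continue_original_life'}
```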
There isn’t an “original”. After the copying, there’s Copy A and Copy B. Both are me. I’m fine with randomly selecting whether Copy A or Copy B goes to see the movie, but it doesn’t matter, since they’re identical (until one sees the movie). In fact, there is no way to not randomly select which copy sees the movie.
From the point of view of the clone who sees the movie (say it’s bad), “suiciding” is the same as him going back in time and not seeing the movie. So I’d always stick to a prior agreement in a case like that.
I don’t really have any wealth to speak of. But they’re all me. If I won’t defect, then they won’t. The question is just whether or not we might disagree on what’s best for me. In which case, we can either go by prior agreement, or just let them all live. If the other mes really wanted to live, I’d let them. For instance, say I made 5 copies and all 5 of us went out to try different approaches to a career, agreeing the best one would survive. If a year later more than one claimed to have the best result for Blueberry, I might as well let more than one live.
ETA: However, there might be situations where I can only have one copy survive. For instance, I’m in a grad program now that I’d like to finish, and more than one of me can’t be enrolled for administrative reasons. So if I really need only one of me, I guess we could decide randomly which one would survive. I’m all right with forcing a copy to suicide if he changes his mind, since I’m making that decision for all the clones ahead of time to lead to the best outcome for Blueberry.
Response to ETA:
If one of the clones developed enough individuality to change his mind and disagree with the others, I definitely don’t see how you could consider that one anything other than an individual.
Likewise, if all of the clones decided to change their minds and go their separate ways, that would be functionally the same as you-as-a-single-person-with-a-single-body changing your mind about something, and the general rule there is that humans are allowed to do that, without being interfered with. I don’t see any reason to change that rule.
Be careful of generalizing from one example. I’m relatively certain that the vast majority of people who might consider cloning themselves wouldn’t see it the way you do, and would in fact need significant safeguards to protect the version of themselves who remembers waking up in a lab from being abused by the version of themselves who remembers going home after having their DNA sampled and their brain scanned.
I did have people like you in mind, at least peripherally, in my original suggestion, though: I’m fairly sure that the original proposal doesn’t take away any rights that you already have. (To the best of my knowledge, it is illegal for someone to force you to take a sleeping pill, even if you previously agreed to it, and my knowledge there is a bit better than average; remember that I worked at a nursing home.)
I’d like to hear more about this. First, I was imagining an identical atom-for-atom duplicate being constructed, in such a way that there is no fact of the matter as to who is the original. As in, you press a button and there are two of you. I wasn’t thinking about an organism grown in a lab. But I’m not sure that matters, except that the lab scenario makes it easier to think of one copy being in control of the other copy.
You think the majority of people would worry about, and would need to worry about, one copy abusing the other copy? Why? The copies would have to fight for control first, which should be an even fight. And what would the point be?
Yes, that’s illegal except maybe in an emergency psychiatric situation. Here’s an idea: a time-delayed suicide pill, with no antidote, that one of the copies can take immediately after the cloning. That’s equivalent to having the agreement enforced, but it doesn’t take away any rights either. I think that addresses your concern.
Next up: a game of Russian Roulette against YOURSELF!
I expect to get back to this; I had to take care of something for work and now I’m too tired to do it justice. If I haven’t responded to it within 18 hours, please remind me.
After conferring with Blueberry via PM, we agree that we’ll need to talk in realtime to get much further with this. Our schedules are both fairly busy right now, but we intend to try to turn the discussion into a top post. (I’d also be amenable to making the log public, or letting other people observe or participate, but I haven’t talked to Blue about that.)
I imagine it would be much like a case of amnesia, only with less disorientation.
Edit: Wait, I’m looking at the wrong half. One moment.
Edit: I suppose it would depend on the circumstances—“fear” is an obvious one, although mitigated to an extent by knowing that I would not be leaving a hole behind me (no grieving relatives, etc.).
Depends on how much it cost me to make it, and how much it costs to keep it around. I’m permanently busy; I’m sure I could use a couple of extra hands around the house ;)