Would you step through the transporter? If you answered no, would it be moral to force you through the transporter? What if I didn’t know your wishes, but had to extrapolate? Under what conditions would it be okay?
I don’t see how any of that depends on the question of which computations (copies of me) get labeled with “personal identity” and which don’t.
Also, take the more vile forms of Pascal’s mugging and acausal trades. If something threatens torture to a simulation of you, should you be concerned about actually experiencing the torture, thereby subverting your rationalist impulse to shut up and multiply utility?
Depending on specifics, yes. But I don’t see how this depends on the labeling question. This just boils down to “what do I expect to experience in the future?” which I don’t see as being related to “personal identity”.
> This just boils down to “what do I expect to experience in the future?” which I don’t see as being related to “personal identity”.
Forget the phrase “personal identity”. If I am a powerful AI from the future and I come back to tell you that I will run a simulation of you so we can go bowling together, do you or do you not expect to experience bowling with me in the future, and why?
Suppose that my husband and I believe that while we’re sleeping, someone will paint a blue dot on either my forehead, or my husband’s, determined randomly. We expect to see a blue dot when we wake up… and we also expect not to see a blue dot when we wake up. This is a perfectly reasonable state for two people to be in, and not at all problematic.
Suppose I believe that while I’m sleeping, a powerful AI will duplicate me (if you like, in such a way that both duplicates experience computational continuity with the original) and paint a blue dot on one duplicate’s forehead. When I wake up, I expect to see a blue dot when I wake up… and I also expect not to see a blue dot when I wake up. This is a perfectly reasonable state for a duplicated person to be in, and not at all problematic.
Similarly, I both expect to experience bowling with you, and expect to not experience bowling with you (supposing that the original continues to operate while the simulation goes bowling).
The situation isn’t analogous, however. Let’s posit that you’re still alive when the simulation is run. In fact, aside from the technology there’s no reason to put it in the future or involve an AI. I’m a brain-scanning researcher who shows up at your house tomorrow, with all the equipment to do a non-destructive mind upload and whole-brain simulation. I tell you that I am going to scan your brain, start the simulation, then don VR goggles and go virtual-bowling with “you”. Once the scanning is done you and your husband are free to go to the beach or whatever, while I go bowling with TheVirtualDave.
What probability would you put on you ending up bowling instead of at the beach?
Well, let’s call P1 my probability of actually going to the beach, even if you never show up. That is, (1-P1) is the probability that traffic keeps me from getting there, or my car breaks down, or whatever. And let’s call P2 my probability of your VR/simulation rig working. That is, (1-P2) is the probability that the scanner fails, etc. etc.
In your scenario, I put a P1 probability of ending up at the beach, and a P2 probability of ending up bowling. If both are high, then I’m confident that I will do both.
There is no “instead of”. Going to the beach does not prevent me from bowling. Going bowling does not prevent me from going to the beach. Someone will go to the beach, and someone will go bowling, and both of those someones will be me.
As I alluded to in another reply, assuming perfectly reliable scanning, and assuming that you hate losing in bowling to MarkAI, how do you decide whether to go practice bowling or to do something else you like more?
If it’s important to me not to lose in bowling, I practice bowling, since I expect to go bowling. (Assuming uninteresting scanning tech.) If it’s also important to me to show off my rocking abs at the beach, I do sit-ups, since I expect to go to the beach. If I don’t have the time to do both, I make a tradeoff, and I’m not sure exactly how I make that tradeoff, but it doesn’t involve assuming that going to the beach somehow happens more or less than going bowling.
Admittedly, this presumes that the bowling-me will go on to live a normal lifetime. If I know the simulation will be turned off right after the bowling match, I might not care so much about winning the bowling match. (Then again, I might care a lot more.) By the same token, if I know the original will be shot tomorrow morning I might not care so much about my abs. (Then again, I might care more. I’m really not confident about how the prospect of upcoming death affects my choices; still less how it does so when I expect to keep surviving as well.)
> Your probabilities add up to more than 1...

Of course they do. Why shouldn’t they? What is your probability that you will wake up tomorrow morning? What is your probability that you will wake up Friday morning? I expect to do both, so my probabilities of those two things add up to ~2.
In Mark’s scenario, I expect to go bowling and I expect to go to the beach. My probabilities of those two things similarly add up to ~2.
I think we have the same model of the situation, but I feel compelled to normalize my probability. A guess as to why:
I can rephrase Mark’s question as, “In 10 hours, will you remember having gone to the beach or having bowled?” (Assume the simulation will continue running!) There’ll be a you that went bowling and a you that went to the beach, but no single you that did both of those things. Your successive wakings example doesn’t have this property.
I suppose I answer 50% to indicate my uncertainty about which future self we’re talking about, since there are two possible referents. Maybe this is unhelpful.
Yes, that seems to be what’s going on.

That said, normalizing my probability as though there were only going to be one of me at the end of the process doesn’t seem at all compelling to me. I don’t have any uncertainty about which future self we’re talking about—we’re talking about both of them.
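The two answers being traded here can be put side by side in a toy sketch (illustrative names only; not anyone’s actual model):

```python
# Two readings of "what's the probability I go bowling?" after duplication.
futures = ["beach", "bowling"]  # what each post-duplication copy will remember

# Reading 1 ("both copies are me"): P(some copy of me remembers X)
p_bowl_any = float(any(f == "bowling" for f in futures))
p_beach_any = float(any(f == "beach" for f in futures))
assert p_bowl_any + p_beach_any == 2.0  # the probabilities add up to ~2

# Reading 2 ("pick one future self as the referent"): P(that copy remembers X)
p_bowl_one = sum(f == "bowling" for f in futures) / len(futures)
p_beach_one = sum(f == "beach" for f in futures) / len(futures)
assert p_bowl_one + p_beach_one == 1.0  # normalized, i.e. the 50% answer
```

Both readings are internally consistent; the disagreement is over which question “what do I expect?” is asking, not over the physical model.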
Suppose that you and your husband are planning to take the day off tomorrow, and he is planning to go bowling, and you are planning to go to the beach, and I ask the two of you “what’s y’all’s probability that one of y’all will go bowling, and what’s y’all’s probability that one of y’all will go to the beach?” It seems the correct answers to those questions will add up to more than 1, even though no one person will experience bowling AND going to the beach. In 10 hours, one of you will remember having gone to the beach, and one will remember having bowled.
This is utterly unproblematic when we’re talking about two people.
In the duplication case, we’re still talking about two people, it’s just that right now they are both me, so I get to answer for both of them. So, in 10 hours, I (aka “one of me”) will remember having gone to the beach. I will also remember having bowled. I will not remember having gone to the beach and having bowled. And my probabilities add up to more than 1.
I recognize that it doesn’t seem that way to you, but it really does seem like the obvious way to think about it to me.
I think your description is coherent and describes the same model of reality I have. :)

Yes. Probabilities aside, this is what I was asking. I was asking a disguised question. I really wanted to know: “which of the two future selves do you identify with, and why?”
Oh, that’s easy. Both of them, equally. Assuming accurate enough simulations etc., of course.
ETA: Why? Well, they’ll both think that they’re me, and I can’t think of a way to disprove the claim of one without also disproving the claim of the other.
> ETA: Why? Well, they’ll both think that they’re me, and I can’t think of a way to disprove the claim of one without also disproving the claim of the other.
Any of the models of consciousness-as-continuity would offer a definitive prediction.
> Any of the models of consciousness-as-continuity would offer a definitive prediction.
IMO, there literally is no fact of the matter here, so I will bite the bullet and say that any model that supposes there is one is wrong. :) I’ll reconsider if you can point to an objective feature of reality that changes depending on the answer to this. (“So-and-so will think it to be immoral” doesn’t count!)
PS: The up/down karma vote isn’t a record of what you agree with, but of whether a post has been reasonably argued.
It is neither of those things. This isn’t debate club. We don’t have to give people credit for finding the most clever arguments for a wrong position.
I make no comment about what the subject of debate is in this context (I don’t know or care which party is saying crazy things about ‘consciousness’). I downvoted the parent specifically because it made a normative assertion about how people should use the karma mechanism which is neither something I support nor an accurate description of an accepted cultural norm. This is an example of voting being used legitimately in a way that has nothing to do with whether the post has been reasonably argued.
I did use the term “reasonably argued” but I didn’t mean clever. Maybe “rationally argued”? By my own algorithm a cleverly argued but clearly wrong argument would not garner an up vote.
I gave you an upvote for explaining your down vote.
> I did use the term “reasonably argued” but I didn’t mean clever. Maybe “rationally argued”? By my own algorithm a cleverly argued but clearly wrong argument would not garner an up vote.
You are right, ‘clever’ contains connotations that you wouldn’t intend. I myself have used ‘clever’ as a term of disdain and I don’t want to apply that to what you are talking about. Let’s stick with either of the terms you used and agree that we are talking about arguments that are sound, cogent and reasonable rather than artful rhetoric that exploits known biases in human social behaviour to score persuasion points. I maintain that even then down-votes are sometimes appropriate. Allow me to illustrate.
There are two outwardly indistinguishable boxes with buttons that display heads or tails when pressed. You know that one of the boxes returns heads 70% of the time, the other returns heads 40% of the time. A third party, Joe, has experimented with the first box three times and tells you that each time it returned heads. This represents an argument that the first box is the “70%” box. Now, assume that I have observed the internals of the boxes and know that the first box is, in fact, the 40% box.
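As an aside, the strength of Joe’s evidence can be quantified with a quick Bayes update; a sketch, assuming a 50/50 prior over which box is which:

```python
# Posterior that the first box is the 70% box, given three heads in a row.
prior = 0.5
p_hhh_if_70 = 0.7 ** 3  # P(HHH | first box is the 70% box) = 0.343
p_hhh_if_40 = 0.4 ** 3  # P(HHH | first box is the 40% box) = 0.064

posterior_70 = prior * p_hhh_if_70 / (prior * p_hhh_if_70 + prior * p_hhh_if_40)
print(round(posterior_70, 3))  # ~0.843
```

So Joe’s argument is genuinely strong evidence (about 84% posterior on the wrong box), which is what makes the downvote question interesting rather than trivial.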
Whether I downvote Joe’s comment depends on many things. Obviously, tone matters a lot, as does my impression of whether Joe’s bias stems from disingenuousness or more innocent ignorance. But even in the case where Joe is arguing in good faith, there are some cases where a policy aimed at improving the community will advocate downvoting the contribution. For example, if there is a significant selection bias in what kind of evidence people like Joe have exposed themselves to, then popular perception after such people share their opinions will tend to be even more biased than the individuals alone. In that case downvoting Joe’s comment improves the discussion. The ideal outcome would be for Joe to learn to stfu until he learns more.
More simply I observe that even the most ‘rational’ of arguments can be harmful if the selection process for the creation and repetition of those arguments is at all biased.
I won’t because that’s not what I’m arguing. My position is that subjective experience has moral consequences, and therefore matters.
OK, that’s fine, but I’m not convinced—I’m having trouble thinking of something that I consider to be a moral issue that doesn’t have a corresponding consequence in the territory.
PS: That downvote wasn’t me. I’m aware of how votes work around here. :)
Example: is it moral to power-cycle (hibernate, turn off, power on, restore) a computer running a self-aware AI? Will future machine intelligences view any less-than-necessary AGI experiments I run the same way we do Josef Mengele’s work in Auschwitz? Is it a possible failure mode that an unfriendly/not-provably-friendly AI that experiences routine power cycling might uncover this line of reasoning and decide it doesn’t want to “die” every night when the lights go off? What would it do then?
OK, in a hypothetical world where somehow pausing a conscious computation—maintaining all data such that it could be restarted losslessly—is murder, those are concerns. Agreed. I’m not arguing against that.
My position is that pausing a computation as above happens to not be murder/death, and that those who believe it is murder/death are mistaken. The example I’m looking for is something objective that would demonstrate this sort of pausing is murder/death. (In my view, the bad thing about death is its permanence, that’s most of why we care about murder and what makes it a moral issue.)
As Eliezer mentioned in his reply (in different words), if power cycling is death, what’s the shortest suspension time that isn’t? Currently most computers run synchronously off a common clock. The computation is completely suspended between clock cycles. Does this mean that an AI running on such a computer is murdered billions of times every second? If so, then a morality leading to this absurd conclusion is not a useful one.
Edit: it’s actually worse than that: digital computation happens mostly within a short time of the clock level switch. The rest of the time between transitions is just to ensure that the electrical signals relax to within their tolerance levels. Which means that the AI in question is likely dead 90% of the time.
What Eliezer and you describe is more analogous to task switching on a timesharing system, and yes, my understanding of computational continuity theory is that such a machine would not be sent to oblivion 120 times a second. Rather, such a computer would be strangely schizophrenic, but also completely self-consistent at any moment in time.
But computational continuity does have a different answer in the case of intermediate non-computational states. For example, saving the state of a whole brain emulation to magnetic disk, shutting off the machine, and restarting it sometime later. In the mean time, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise. This does equal death-of-identity, and is similar to the transporter thought experiment. The relevance may be more obvious when you think about taking the drive out and loading it in another machine, copying the contents of the disk, or running multiple simulations from a single checkpoint (none of these change the facts, however).
> In the mean time, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise.
It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states “between the computational elements”, whatever you may mean by that, decohere and relax to “thermal noise” much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.
Maybe what you mean is more logic-related? For example, when a self-aware algorithm (including a human) expects one second to pass and instead measures a full hour (because it was suspended), it interprets that discrepancy of inputs as death? If so, shouldn’t any unexpected discrepancy, like sleeping past your alarm clock, or day-dreaming in class, be treated the same way?
> This does equal death-of-identity, and is similar to the transporter thought experiment.
I agree that forking a consciousness is not a morally trivial issue, but that’s different from temporary suspension and restarting, which happens all the time to people and machines. I don’t think that conflating the two is helpful.
> It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states “between the computational elements”, whatever you may mean by that, decohere and relax to “thermal noise” much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.

> Maybe what you mean is more logic-related?...
No, I meant the physical explanation (I am a physicist, btw). It is possible for a system to exhibit features at certain frequencies, whilst only showing noise at others. Think standing waves, for example.
> I agree that forking a consciousness is not a morally trivial issue, but that’s different from temporary suspension and restarting, which happens all the time to people and machines. I don’t think that conflating the two is helpful.
When does it ever happen to people? When does your brain, or even just a region of it, ever stop functioning entirely? You do not remember deep sleep because you are not forming memories, not because your brain has stopped functioning. What else could you be talking about?
Hmm, I get a feeling that none of these are your true objections and that, for some reason, you want to equate suspension to death. I should have stayed disengaged from this conversation. I’ll try to do so now. Hope you get your doubts resolved to your satisfaction eventually.
I don’t want to, I just think that the alternatives lead to absurd outcomes that can’t possibly be correct (see my analysis of the teleporter scenario).
I really have a hard time imagining a universe where there exists a thing that is preserved when 10^-9 seconds pass between computational steps but not when 10^3 seconds pass between steps (while I move the hard drive to another box).
Prediction: TheOtherDave will say 50%, Beach!Dave and Bowling!Dave would both consider both to be the “original”. Assuming sufficiently accurate scanning & simulating.
I’ll give a 50% chance that I’ll experience that. (One copy of me continues in the “real” world, another copy of me appears in a simulation and goes bowling.)
(If you ask this question as “the AI is going to run N copies of the bowling simulation”, then I’m not sure how to answer—I’m not sure how to weight N copies of the exact same experience. My intuition is that I should still give a 50% chance, unless the simulations are going to differ in some respect, then I’d give a N/(N+1) chance.)
I need to think about your answer, as right now it doesn’t make any sense to me. I suspect that whatever intuition underlies it is the source of our disagreement/confusion.
@linkhyrule5 had an answer better than the one I had in mind. The probability of us going bowling together is approximately equal to the probability that you are already in said simulation, if computational continuity is what matters.
If there were a 6th Day-like service I could sign up for, where if anything were to happen to me a clone/simulation with my memories would be created, I’d sign up for it in a heartbeat. Because if something were to happen to me I wouldn’t want to deprive my wife of her husband, or my daughters of their father. But that is purely altruistic: I would have P(~0) expectation that I would actually experience that resurrection. Rather, some doppelganger twin that in every outward way behaves like me will take up my life where I left off. And that’s fine, but let’s be clear about the difference.
If you are not the simulation the AI was referring to, then you and it will not go bowling together, period. Because when said bowling occurs, you’ll be dead. Or maybe you’ll be alive and well and off doing other things while the simulation is going on. But under no circumstances should you expect to wake up as the simulation, as we are assuming them to be causally separate.
At least from my way of thinking. I’m not sure I understand yet where you are coming from well enough to predict what you’d expect to experience.
> @linkhyrule5 had an answer better than the one I had in mind. The probability of us going bowling together is approximately equal to the probability that you are already in said simulation, if computational continuity is what matters.
You could understand my 50% answer to be expressing my uncertainty as to whether I’m in the simulation or not. It’s the same thing.
I don’t understand what “computational continuity” means. Can you explain it using a program that computes the digits of pi as an example?
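Taking up the pi request: here is a sketch of how that question can be made concrete, using Gibbons’ unbounded spigot algorithm (the checkpoint/restore stand-ins are illustrative, with `copy.deepcopy` standing in for a disk). The computation’s entire state is an explicit dict, so it can be suspended, copied, and resumed with an identical future:

```python
# A checkpointable pi-digit computation (Gibbons' unbounded spigot).
# All state lives in an explicit dict, so "save to disk and restore later"
# is just copying the dict.
import copy

def initial_state():
    return {"q": 1, "r": 0, "t": 1, "k": 1, "n": 3, "l": 3}

def step(s):
    """Advance the computation until it emits the next decimal digit of pi."""
    while True:
        q, r, t, k, n, l = s["q"], s["r"], s["t"], s["k"], s["n"], s["l"]
        if 4 * q + r - t < n * t:
            # emit a digit; only q, r, n change on this branch
            s["q"], s["r"], s["n"] = (
                10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            )
            return n
        # otherwise, consume another term of the series
        s["q"], s["r"], s["t"], s["k"], s["n"], s["l"] = (
            q * k, (2 * q + r) * l, t * l, k + 1,
            (q * (7 * k + 2) + r * l) // (t * l), l + 2,
        )

s = initial_state()
first = [step(s) for _ in range(5)]      # 3, 1, 4, 1, 5

checkpoint = copy.deepcopy(s)            # "save to disk, power off"
resumed = copy.deepcopy(checkpoint)      # "restore, perhaps on another machine"

next_live = [step(s) for _ in range(5)]
next_resumed = [step(resumed) for _ in range(5)]
print(first, next_live == next_resumed)  # the two futures are identical
```

Whatever “computational continuity” adds, it would have to be something over and above this state, since the never-paused run and the restored run are indistinguishable from the inside; presumably that is exactly what the disagreement is about.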
> Rather, some doppelganger twin that in every outward way behaves like me will take up my life where I left off. And that’s fine, but let’s be clear about the difference.
I think you’re making a distinction that exists only in the map, not in the territory. Can you point to something in the territory that this matters for?
> I come back to tell you that I will run a simulation of you so we can go bowling together
Presumably you create a sim-me which includes the experience of having this conversation with you (the AI).
> do you or do you not expect to experience bowling with me in the future, and why?
Let me interpret the term “expect” concretely as “I better go practice bowling now, so that sim-me can do well against you later” (assuming I hate losing). If I don’t particularly enjoy bowling and rather do something else, how much effort is warranted vs doing something I like?
The answer is not unambiguous and depends on how much I (meat-me) care about future sim-me having fun and not embarrassing sim-self. If sim-me continues on after meat-me passes away, I care very much about sim-me’s well being. On the other hand, if the sim-me program is halted after the bowling game, then I (meat-me) don’t care much about that sim-loser. After all, meat-me (who will not go bowling) will continue to exist, at least for a while. You might feel differently about sim-you, of course. There is a whole range of possible scenarios here. Feel free to specify one in more detail.
TL;DR: If the simulation will be the only copy of “me” in existence, I act as if I expect to experience bowling.
I don’t see how any of that depends on the question of which computations (copies of me) get labeled with “personal identity” and which don’t.
Depending on specifics, yes. But I don’t see how this depends on the labeling question. This just boils down to “what do I expect to experience in the future?” which I don’t see as being related to “personal identity”.
Forget the phrase “personal identity”. If I am a powerful AI from the future and I come back to tell you that I will run a simulation of you so we can go bowling together, do you or do you not expect to experience bowling with me in the future, and why?
Yes, with probability P(simulation), or no, with probability P(not simulation), depending.
Suppose that my husband and I believe that while we’re sleeping, someone will paint a blue dot on either my forehead, or my husband’s, determined randomly. We expect to see a blue dot when we wake up… and we also expect not to see a blue dot when we wake up. This is a perfectly reasonable state for two people to be in, and not at all problematic.
Suppose I believe that while I’m sleeping, a powerful AI will duplicate me (if you like, in such a way that both duplicates experience computational continuity with the original) and paint a blue dot on one duplicate’s forehead. When I wake up, I expect to see a blue dot when I wake up… and I also expect not to see a blue dot when I wake up. This is a perfectly reasonable state for a duplicated person to be in, and not at all problematic.
Similarly, I both expect to experience bowling with you, and expect to not experience bowling with you (supposing that the original continues to operate while the simulation goes bowling).
The situation isn’t analogous, however. Let’s posit that you’re still alive when the simulation is ran. In fact, aside from technology there’s no reason to put it in the future or involve an AI. I’m a brain scanning researcher that shows up at your house tomorrow, with all the equipment to do a non-destructive mind upload and whole-brain simulation. I tell you that I am going to scan your brain, start the simulation, then don VR goggles and go virtual-bowling with “you”. Once the scanning is done you and your husband are free to go to the beach or whatever, while I go bowling with TheVirtualDave.
What probability would you put on you ending up bowling instead of at the beach?
Well, let’s call P1 my probability of actually going to the beach, even if you never show up. That is, (1-P1) is the probability that traffic keeps me from getting there, or my car breaks down, or whatever. And let’s call P2 my probability of your VR/simulation rig working. That is, (1-P2) is the probability that the scanner fails, etc. etc.
In your scenario, I put a P1 probability of ending up at the beach, and a P2 probability of ending up bowling. If both are high, then I’m confident that I will do both.
There is no “instead of”. Going to the beach does not prevent me from bowling. Going bowling does not prevent me from going to the beach. Someone will go to the beach, and someone will go bowling, and both of those someones will be me.
As I alluded to in another reply, assuming perfectly reliable scanning, and assuming that you hate losing in bowling to MarkAI, how do you decide whether to go practice bowling or to do something else you like more?
If it’s important to me not to lose in bowling, I practice bowling, since I expect to go bowling. (Assuming uninteresting scanning tech.)
If it’s also important to me to show off my rocking abs at the beach, I do sit-ups, since I expect to go to the beach.
If I don’t have the time to do both, I make a tradeoff, and I’m not sure exactly how I make that tradeoff, but it doesn’t include assuming that the going to the beach somehow happens more or happens less or anything like that than the going bowling.
Admittedly, this presumes that the bowling-me will go on to live a normal lifetime. If I know the simulation will be turned off right after the bowling match, I might not care so much about winning the bowling match. (Then again, I might care a lot more.) By the same token, if I know the original will be shot tomorrow morning I might not care so much abuot my abs. (Then again, I might care more. I’m really not confident about how the prospect of upcoming death affects my choices; still less how it does so when I expect to keep surviving as well.)
Your probabilities add up to more than 1...
Of course they do. Why shouldn’t they?
What is your probability that you will wake up tomorrow morning?
What is your probability that you will wake up Friday morning?
I expect to do both, so my probabilities of those two things add up to ~2.
In Mark’s scenario, I expect to go bowling and I expect to go to the beach.
My probabilities of those two things similarly add up to ~2.
I think we have the same model of the situation, but I feel compelled to normalize my probability. A guess as to why:
I can rephrase Mark’s question as, “In 10 hours, will you remember having gone to the beach or having bowled?” (Assume the simulation will continue running!) There’ll be a you that went bowling and a you that went to the beach, but no single you that did both of those things. Your successive wakings example doesn’t have this property.
I suppose I answer 50% to indicate my uncertainty about which future self we’re talking about, since there are two possible referents. Maybe this is unhelpful.
Yes, that seems to be what’s going on.
That said, normalizing my probability as though there were only going to be one of me at the end of the process doesn’t seem at all compelling to me. I don’t have any uncertainty about which future self we’re talking about—we’re talking about both of them.
Suppose that you and your husband are planning to take the day off tomorrow, and he is planning to go bowling, and you are planning to go to the beach, and I ask the two of you “what’s y’all’s probability that one of y’all will go bowling, and what’s y’all’s probability that one of y’all will go to the beach?” It seems the correct answers to those questions will add up to more than 1, even though no one person will experience bowling AND going to the beach. In 10 hours, one of you will will remember having gone to the beach, and one will remember having bowled.
This is utterly unproblematic when we’re talking about two people.
In the duplication case, we’re still talking about two people, it’s just that right now they are both me, so I get to answer for both of them. So, in 10 hours, I (aka “one of me”) will remember having gone to the beach. I will also remember having bowled. I will not remember having gone to the beach and having bowled. And my probabilities add up to more than 1.
I recognize that it doesn’t seem that way to you, but it really does seem like the obvious way to think about it to me.
I think your description is coherent and describes the same model of reality I have. :)
Yes. Probabilities aside, this is what I was asking.
I was asking a disguised question. I really wanted to know: “which of the two future selfs do you identify with, and why?”
Oh, that’s easy. Both of them, equally. Assuming accurate enough simulations etc., of course.
ETA: Why? Well, they’ll both think that they’re me, and I can’t think of a way to disprove the claim of one without also disproving the claim of the other.
Any of the models of consciousness-as-continuity would offer a definitive prediction.
IMO, there literally is no fact of the matter here, so I will bite the bullet and say that any model that supposes there is one is wrong. :) I’ll reconsider if you can point to an objective feature of reality that changes depending on the answer to this. (So-and-so will think it to be immoral doesn’t count!)
I won’t because that’s not what I’m arguing. My position is that subjective experience has moral consequences, and therefore matters.
PS: The up/down karma vote isn’t a record of what you agree with, but whether a post has been reasonably argued.
For many people, the up/down karma vote is a record of what we want more/less of.
It is neither of those things. This isn’t debate club. We don’t have to give people credit for finding the most clever arguments for a wrong position.
I make no comment about the subject of debate is in this context (I don’t know or care which party is saying crazy things about ‘conciousness’). I downvoted the parent specifically because it made a normative assertion about how people should use the karma mechanism which is neither something I support nor an accurate description of an accepted cultural norm. This is an example of voting being used legitimately in a way that is nothing to do with whether the post has been reasonably argued.
I did use the term “reasonably argued” but I didn’t mean clever. Maybe “rationally argued”? By my own algorithm a cleverly argued but clearly wrong argument would not garner an up vote.
I gave you an upvote for explaining your down vote.
You are right, ‘clever’ contains connotations that you wouldn’t intend. I myself have used ‘clever’ as term of disdain and I don’t want to apply that to what you are talking about. Let’s say stick with either of the terms you used and agree that we are talking about arguments that are sound, cogent and reasonable rather than artful rhetoric that exploits known biases in human social behaviour to score persuasion points. I maintain that even then down-votes are sometimes appropriate. Allow me to illustrate.
There are two outwardly indistinguishable boxes with buttons that display heads or tails when pressed. You know that one of the boxes returns true 70% of the time, the other returns heads 40% of the time. A third party, Joe, has experimented with the first box three times and tells you that each time it returned true. This represents an argument that the first box is the “70%” box. Now, assume that I have observed the internals of the boxes and know that the first box is, in fact, the 40% box.
Whether I downvote Joe’s comment depends on many things. Obviously, tone matters a lot, as does my impression of whether Joe’s bias is based on dis-ingenuity or more innocent ignorance. But even in the case when Joe is arguing in good faith there are some cases where a policy attempting to improve the community will advocate downvoting the contribution. For example if there is a significant selection bias in what kind of evidence people like Joe have exposed themselves to then popular perception after such people share their opinions will tend to be even more biased than the individuals alone. In that case downvoting Joe’s comment improves the discussion. The ideal outcome would be for Joe to learn to stfu until he learns more.
More simply I observe that even the most ‘rational’ of arguments can be harmful if the selection process for the creation and repetition of those arguments is at all biased.
OK, that’s fine, but I’m not convinced—I’m having trouble thinking of something that I consider to be a moral issue that doesn’t have a corresponding consequence in the territory.
PS: That downvote wasn’t me. I’m aware of how votes work around here. :)
Example: is it moral to power-cycle (hibernate, turn off, power on, restore) a computer running a self-aware AI? Will future machine intelligences view any less-than-necessary AGI experiments I run the same way we view Josef Mengele’s work at Auschwitz? Is it a possible failure mode that an unfriendly/not-provably-friendly AI that experiences routine power cycling might uncover this line of reasoning and decide it doesn’t want to “die” every night when the lights go off? What would it do then?
OK, in a hypothetical world where somehow pausing a conscious computation—maintaining all data such that it could be restarted losslessly—is murder, those are concerns. Agreed. I’m not arguing against that.
My position is that pausing a computation as above happens to not be murder/death, and that those who believe it is murder/death are mistaken. The example I’m looking for is something objective that would demonstrate this sort of pausing is murder/death. (In my view, the bad thing about death is its permanence, that’s most of why we care about murder and what makes it a moral issue.)
As Eliezer mentioned in his reply (in different words), if power cycling is death, what’s the shortest suspension time that isn’t? Currently, most computers run synchronously off a common clock, and the computation is completely suspended between clock cycles. Does this mean that an AI running on such a computer is murdered billions of times every second? If so, then a morality leading to this absurd conclusion is not a useful one.
Edit: it’s actually worse than that: digital computation happens mostly within a short time of the clock level switch. The rest of the time between transitions is just to ensure that the electrical signals relax to within their tolerance levels. Which means that the AI in question is likely dead 90% of the time.
What Eliezer and you describe is more analogous to task switching on a timesharing system, and yes, my understanding of computational continuity theory is that such a machine would not be sent to oblivion 120 times a second. Rather, such a computer would be strangely schizophrenic, but still completely self-consistent at any moment in time.
But computational continuity does give a different answer in the case of intermediate non-computational states. For example: saving the state of a whole brain emulation to magnetic disk, shutting off the machine, and restarting it sometime later. In the meantime, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and a general reversion to thermal noise. This does equal death-of-identity, and is similar to the transporter thought experiment. The relevance may be more obvious when you think about taking the drive out and loading it in another machine, copying the contents of the disk, or running multiple simulations from a single checkpoint (none of these change the facts, however).
It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states “between the computational elements”, whatever you may mean by that, decohere and relax to “thermal noise” much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.
Maybe what you mean is more logic-related? For example, when a self-aware algorithm (including a human) expects one second to pass and instead measures a full hour (because it was suspended), it interprets that discrepancy of inputs as death? If so, shouldn’t any unexpected discrepancy, like sleeping past your alarm clock, or day-dreaming in class, be treated the same way?
I agree that forking a consciousness is not a morally trivial issue, but that’s different from temporary suspension and restarting, which happens all the time to people and machines. I don’t think that conflating the two is helpful.
No, I meant the physical explanation (I am a physicist, btw). It is possible for a system to exhibit features at certain frequencies, whilst only showing noise at others. Think standing waves, for example.
When does that ever happen to people? When does your brain, or even just a region of it, ever stop functioning entirely? You do not remember deep sleep because you are not forming memories, not because your brain has stopped functioning. What else could you be talking about?
Hmm, I get a feeling that none of these are your true objections and that, for some reason, you want to equate suspension to death. I should have stayed disengaged from this conversation. I’ll try to do so now. Hope you get your doubts resolved to your satisfaction eventually.
I don’t want to, I just think that the alternatives lead to absurd outcomes that can’t possibly be correct (see my analysis of the teleporter scenario).
I really have a hard time imagining a universe where there exists a thing that is preserved when 10^-9 seconds pass between computational steps but not when 10^3 seconds pass (while I move the hard drive to another box).
Prediction: TheOtherDave will say 50%, Beach!Dave and Bowling!Dave would both consider both to be the “original”. Assuming sufficiently accurate scanning & simulating.
Here’s what TheOtherDave actually said.
Yes, looks like that prediction is falsified. At least the first sentence. :)
I’ll give a 50% chance that I’ll experience that. (One copy of me continues in the “real” world, another copy of me appears in a simulation and goes bowling.)
(If you ask this question as “the AI is going to run N copies of the bowling simulation”, then I’m not sure how to answer—I’m not sure how to weight N copies of the exact same experience. My intuition is that I should still give a 50% chance, unless the simulations are going to differ in some respect, in which case I’d give an N/(N+1) chance.)
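The two weighting rules in that parenthetical can be spelled out explicitly (a toy sketch of the stated intuition only; the function names are mine, not from the thread):

```python
# Two ways to weight anticipated experience across N simulated copies,
# matching the intuition above. Hypothetical model, not a settled theory.

def p_bowling_identical(n_copies: int) -> float:
    # If N identical simulations count as a single experience, the odds
    # stay at 50/50: one "real" continuation vs one simulated one.
    return 1 / 2

def p_bowling_distinct(n_copies: int) -> float:
    # If the N simulations each differ in some respect, each counts
    # separately: N simulated continuations vs 1 real one.
    return n_copies / (n_copies + 1)

print(p_bowling_identical(5))  # → 0.5
print(p_bowling_distinct(5))   # → 0.8333333333333334 (i.e. 5/6)
```

The disagreement in the thread is precisely over which of these counting rules, if either, tracks anything real.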
I need to think about your answer, as right now it doesn’t make any sense to me. I suspect that whatever intuition underlies it is the source of our disagreement/confusion.
@linkhyrule5 had an answer better than the one I had in mind. The probability of us going bowling together is approximately equal to the probability that you are already in said simulation, if computational continuity is what matters.
If there were a 6th Day-style service I could sign up for where, if anything were to happen to me, a clone/simulation with my memories would be created, I’d sign up for it in a heartbeat. Because if something were to happen to me, I wouldn’t want to deprive my wife of her husband, or my daughters of their father. But that is purely altruistic: I would assign P(~0) to actually experiencing that resurrection. Rather, some doppelganger twin that in every outward way behaves like me will take up my life where I left off. And that’s fine, but let’s be clear about the difference.
If you are not the simulation the AI was referring to, then you and it will not go bowling together, period. Because when said bowling occurs, you’ll be dead. Or maybe you’ll be alive and well and off doing other things while the simulation is going on. But under no circumstances should you expect to wake up as the simulation, as we are assuming them to be causally separate.
At least from my way of thinking. I’m not sure I understand yet where you are coming from well enough to predict what you’d expect to experience.
You could understand my 50% answer to be expressing my uncertainty as to whether I’m in the simulation or not. It’s the same thing.
I don’t understand what “computational continuity” means. Can you explain it using a program that computes the digits of pi as an example?
I think you’re making a distinction that exists only in the map, not in the territory. Can you point to something in the territory that this matters for?
Presumably you create a sim-me which includes the experience of having this conversation with you (the AI).
Let me interpret the term “expect” concretely as “I better go practice bowling now, so that sim-me can do well against you later” (assuming I hate losing). If I don’t particularly enjoy bowling and rather do something else, how much effort is warranted vs doing something I like?
The answer is ambiguous: it depends on how much I (meat-me) care about future sim-me having fun and not embarrassing sim-self. If sim-me continues on after meat-me passes away, I care very much about sim-me’s well-being. On the other hand, if the sim-me program is halted after the bowling game, then I (meat-me) don’t care much about that sim-loser. After all, meat-me (who will not go bowling) will continue to exist, at least for a while. You might feel differently about sim-you, of course. There is a whole range of possible scenarios here. Feel free to specify one in more detail.
TL;DR: If the simulation will be the only copy of “me” in existence, I act as if I expect to experience bowling.