Being unhappy that in 0.01% of Many Worlds you are a murderer, is like being unhappy that with probability 0.01% you are a murderer in One World.
That doesn’t seem plausible. If there’s a 0.01% probability that I’m a murderer (and there is only one world), then if I’m not in fact a murderer, I have committed no murders. If there are many worlds, then I have committed no murders in this world, but the ‘me’ in another world (whose path approximates mine closely enough that I would call that person ‘me’) in fact is a murderer. It seems like a difference between some murders and no murders.
Because this is what you are saying, just adding Many Worlds.
I’m saying that depending on what I do, I end up in a non-murder path or a murder path. But nothing I do can change the number of non-murder or murder paths. So it’s not deterministic as regards my position in this selection, just deterministic as regards the selection itself. I can’t causally interact with other worlds, so my not murdering in one world has no effect on any other worlds. If there are five murder worlds branching off from myself at 17, then there are five no matter what. Maybe I can adjust that number prior to the day I turn 17, but there’s still a fixed number of murder worlds extending from the day I was born, and there’s nothing I can do to change that. Is that a faulty case of determinism?
Because you can’t influence what happens in the other branches.
That’s a good point. Would you be willing to commit to an a priori ethical principle such that ought implies can?
If there are five murder worlds branching off from myself at 17, then there are five no matter what.
That’s equivalent to saying “if at the moment of my 17th birthday there is a 5% probability that I will murder someone, then in that moment there is a 5% probability that I will murder someone no matter what”. I agree with this.
there’s still a fixed number of murder worlds extending from the day I was born, and there’s nothing I can do to change that.
That’s equivalent to saying “if on the day I was born there is an X% chance that I will become a murderer, there is nothing I can do to change that probability on that day”. True; you can’t travel back in time and create a counterfactual universe.
Short summary: You are mixing together two different views—timeful and timeless view. In the timeful view you can say “today at 12:00 I decided to kill my neighbor”, and it makes sense. Then you switch to the position of a ceiling cat, an independent observer outside of our universe and outside of our time, and say “I cannot change the fact that today at 12:00 I killed my neighbor”. Yes, that also makes sense; if something happened, it cannot non-happen. But we are confusing two narrators here: the real you, and the ceiling cat. You decided to kill your neighbor. The ceiling cat cannot decide that you didn’t, because the ceiling cat does not live in this universe; it can only observe what you did. The reason you killed your neighbor is that you, existing in this universe, decided to do so. You are the cause. The ceiling cat sees your action as determined, because it is outside of the universe.
If we apply this to the Many Worlds hypothesis, there are 100 different yous, and one ceiling cat. Of those, 5 yous commit murder (because they decided to do so), and 95 don’t (because they decided otherwise, or just failed to murder successfully). Inside the universes, the 5 yous are murderers and the 95 are not. The ceiling cat may decide to blame those 95 for the actions of those 5, but that’s the ceiling cat’s decision. It should at least give you credit for keeping the ratio at 5:95 instead of e.g. 50:50.
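As a purely illustrative aside (my own sketch, not part of the original exchange; the decision rule, the provocation numbers, and the function names are all invented), the branch-counting point can be made concrete: if you model the branches as circumstances that differ only in how much provocation they contain, and the agent as a deterministic decision rule applied in each branch, then the murder/non-murder ratio is fixed once the rule is fixed, but it is fixed by the rule, that is, by what the agent decides.

```python
# Toy model (invented for illustration): branches differ only in provocation,
# and the agent is a deterministic decision rule applied in each branch.

def decides_to_murder(provocation: float, restraint: float) -> bool:
    """Hypothetical decision rule: murder only when provocation exceeds restraint."""
    return provocation > restraint

def branch_ratio(restraint: float, branches: list[float]) -> tuple[int, int]:
    """Count (murder, non-murder) branches for a given level of restraint."""
    murders = sum(decides_to_murder(p, restraint) for p in branches)
    return murders, len(branches) - murders

# 100 branches whose provocation levels range from 0.00 to 0.99.
branches = [i / 100 for i in range(100)]

print(branch_ratio(restraint=0.94, branches=branches))  # (5, 95)  -> the "5:95" ratio
print(branch_ratio(restraint=0.50, branches=branches))  # (49, 51) -> roughly "50:50"
```

The only point of the toy model is that “the ratio is fixed no matter what” and “the ratio depends on your decisions” are both true: the fixed ratio is a summary of those decisions.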
Would you be willing to commit to an a priori ethical principle such that ought implies can?
That’s tricky. In one sense, we can’t do anything unless the atoms in our bodies do it; and our atoms are following the laws of physics. In that sense, there is no such thing as “can”, if we want to examine things at the level of atoms. (And that’s equally true in Many Worlds as in One World; only in One World there is also randomness in the equations.) In another sense, humans are decision-makers. But we are decision-makers built from atoms, not decision-makers about the atoms we are built from.
So my answer would be that “ought” implies psychological “can”, not atomic “can”. (Because ethics as a whole exists on the psychological level, not on the atomic level.)
Short summary: You are mixing together two different views—timeful and timeless view.
This sounds right to me, and I think your subsequent analysis is on target. So we have two views, the timeless view and the timeful view, and we can’t (at least directly) translate ethical principles like ‘minimize evils’ across the views. So say we grant this and move on from here. Maybe my question is just that the timeless view is one in which ethics seems to make no sense (or at least not the same kind of sense), while the timeful view is one in which it is a pressing concern. Would you object to that?
the timeless view is one in which ethics seems to make no sense
I didn’t fully realize that previously, but yes—in the timeless view there is no time, no change, no choice. Ethics is all about choices.
Ethical reasoning only makes sense in time, because the process of ethical reasoning is moving the particles in your brain, and the physical consequence of that can be a good or evil action. Ethics can have an influence on the universe only if it is a part of the universe. The whole universe is determined only by its laws and its contents. The only way ethics can act is through the brains of people who contemplate it. Ethics is a human product (though we can discuss how much freedom we had in creating this product; whether it would be different if we had a different history or biology) and it makes sense only on the human level, not on the level of particles.
I just stick with the timeless view and don’t have any trouble with ethics in it, but that’s because I’ve got all the phenomena of time fully embedded in the timeless view, including choice and morality. :)
Ethics is a human product (though we can discuss how much freedom we had in creating this product; whether it would be different if we had a different history or biology) and it makes sense only on the human level, not on the level of particles.
I’m happy with the idea that ethics is a human product (since this doesn’t imply that it’s arbitrary or illusory or anything like that). I take this to mean, basically, that ethics concerns the relation of some subsystems with others. There’s no ethical language which makes sense from the ‘top-down’ or from a global perspective. But there’s also nothing to prevent (this is Eliezer’s meaning, I guess) a non-global perspective from being worked out in which ethical language does make sense. And this perspective isn’t arbitrary, because the subsystems working it out have always occupied that perspective as subsystems. To see an algorithm from the inside is to see the world as a whole by seeing it as potentially involved in this algorithm. And this is what leads to the confusion between the global, timeless view and the (no less global, in some sense) timeful, inside-an-algorithm view.
If that’s all passably normal (as skeptical as I am about the coherence of the idea of ‘adding up to normality’), then the question that remains is what I should do with my idea of things mattering ethically. Maybe the answer here is to see ethical agents as ontologically fundamental or something, though that sounds dangerously anthropocentric. But I don’t know how to justify the idea that physically-fundamental = ontologically-fundamental either.
Would you be willing to commit to an a priori ethical principle such that ought implies can?
I’m not Viliam Bur, but I wouldn’t quite agree with this, in that time matters. It’s not incoherent to talk about a system that can’t do X, could have done X, and ought to have done X, for example. It’s similarly not incoherent to talk about a system that can’t do X now but ought to have acted in the past so as to be able to do X now.
But yes, in general I would say the purpose of ethics is to determine right action. If we’re talking about the ethical status of a system with respect to actions we are virtually certain the system could not have taken, can not take, and will not be able to take, then we’re no longer talking about ethics in any straightforward sense.
Okay, so let’s adopt ‘ought implies can’ then, and restrict it to the same tense: if I ought to do X, I can do X. If I could have done (but can no longer do) X, then I ought to have done (but no longer ought to do) X.
How does this, in connection with MW, interact with consequentialism? The consequences of my actions can’t determine how much murdering I do (in the big world sense), just whether or not I fall on a murder-path. In the big world sense, I can’t (and therefore ought not) change the number of murder-paths. The consequence at which I should aim is the nature of the path I inhabit, because that’s what I can change.
Maybe this is right, but if it is, it seems to me to be an oddly subjective form of consequentialism. I’m not sure if this captures my thought, but it seems that it’s not as if I’m making the world a better place; I’m just putting myself in a better world.
it seems that it’s not as if I’m making the world a better place; I’m just putting myself in a better world.
It seems like you are not making the world a better place because you think about a fixed probability of becoming a murderer, which your decisions cannot change. But the probability of you becoming a murderer is a result of your decisions.
You have reversed the causality, because you imagine the probability of you ever being a murderer as something that existed earlier, and your decisions about murdering as something that happens later.
You treat the probability of something happening in the future as a fact that happened in the past. (Which is a common error. When humans talk about “outside of time”, they always imagine it in the past. No, the past is not outside of time; it is a part of time.)
The consequences of my actions can’t determine how much murdering I do (in the big world sense), [...] the nature of the path I inhabit, because that’s what I can change.
I’m not at all convinced that I endorse what you are doing with the word “I” here.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit; I (as defined here) inhabits all paths.
Conversely, if we want to say that there exists a single path that I inhabit (a much more conventional way of speaking), then murders committed on other branches are not murders I commit.
I’m not sure if that affects your point or not, but I have trouble refactoring your point to eliminate that confusion, so it seems relevant.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit; I (as defined here) inhabits all paths.
True, good point. That seems to rub salt in the wound, though. What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai). If Ai and Bi are not the same person (even if they’re very similar people) then I’m not responsible for Bi at all, but I’m also no longer de-coherent: there is always only one world with me in it. I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart. Am I not responsible for the actions of my counterparts?
That’s a hard question to answer, but say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing? If we say ‘no’ then it seems like we’re committed to at least some kind of libertarianism as regards free will. I understood the compatibilist view around here to be that you are responsible for your actions by way of being constituted in such and such a way. But my peaceful copies are constituted in largely the same way as the killer copies are. We only count them as numerically different on the basis of seemingly trivial distinctions like the fact that they’re embodied in different hardware.
What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
Well, OK. We are, of course, free to consider any entity we like an extension of our own identity in the sense you describe here. (I might similarly consider some other entity in my own path to be a “parallel me” if I wish. Heck, I might consider you a parallel me.)
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai).
It is not at all clear that I know what the reasons are that I’m ethically responsible for myself, if I am the sort of complex mostly-ignorant-of-its-own-activities entity scattered across multiple branches that you are positing I am. Again, transplanting an ethical intuition (like “I am ethically responsible for my actions”) unexamined from one context to a vastly different one is rarely justified.
So a good place to start might be to ask why I’m ethically responsible for myself, and why it matters.
I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart.
Can you say more about that preference? I don’t share it, myself. I would say, rather, that I have some degree of confidence in the claim “Ai and Bi are the same person” and some degree of confidence that “Ai and Bi are different people,” and that multiple observers can have different degrees of confidence in these claims about a given (Ai, Bi) pair, and there’s no fact of the matter.
say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing?
Say I belong to a group of distinct individuals, who are born and raised in the usual way, with no copying involved. A year later, some large percentage of the individuals in my group become serial killers, while others do not. Are the peaceful individuals morally responsible for the serial killing?
Almost all of the relevant factors governing my answer to your example seem to apply to mine as well. (My own answer to both questions is “Yes, within limits,” those limits largely being a function of the degree to which observations of Ai can serve as evidence about Bi.)