I can’t change the fundamental amount of goodness, I can just push it around.
Wrong (even when assuming there is an exact definition of goodness).
You can’t fix all branches of the universe, because (1) in most branches you don’t exist, and (2) in a very few branches totally random events may prevent your actions. But this does not mean that your actions don’t increase the amount of goodness.
First, you are responsible only for the branches where you exist, so let’s just remove the other branches from our moral equation. Second, exceptionally random events happen only in an exceptionally small proportion of branches. So even if some kind of Maxwell’s demon can ruin your actions in 0.000 … … … 001 of branches, there are still 0.999 … … … 999 of branches where your actions worked normally. And improving such a majority of branches is a good thing.
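To make the arithmetic behind this explicit (a sketch in my own notation, not the commenter’s): write ε for the measure of branches in which your action gets derailed and ΔG for the goodness it adds in the branches where it goes through. The measure-weighted change in goodness is then

    (1 - \varepsilon)\,\Delta G \;+\; \varepsilon \cdot 0 \;=\; (1 - \varepsilon)\,\Delta G \;\approx\; \Delta G \quad \text{when } \varepsilon \approx 0,

so a vanishingly small fraction of ruined branches barely dents the total improvement.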
In each world, people choose the course that seems best to them. Maybe they happen on a different line of thinking, and see new implications or miss others, and come to a different choice. But it’s not that one world chooses each choice. It’s not that one version of you chooses what seems best, and another version chooses what seems worst. In each world, apples go on falling and people go on doing what seems like a good idea.
In all the worlds, people’s choices determine outcomes in the same way they would in just one single world. The choice you make here does not have some strange balancing influence on some world elsewhere. There is no causal communication between decoherent worlds. In each world, people’s choices control the future of that world, not some other world. If you can imagine decisionmaking in one world, you can imagine decision-making in many worlds: just have the world constantly splitting while otherwise obeying all the same rules.
Well, let’s say we posit some starting condition, say the condition of the universe on the day I turned 17. I am down one path from that initial condition, and a great many other worlds exist in which things went a little differently. I take it that it’s not (unfortunately) a physical or logical impossibility that in one or more of those branches, ten years down the line I have committed a murder.
Now, there are a finite number of murder-paths, and a finite number of non-murder-paths, and my path is identical to one of them. But it seems to me that whether or not I murder someone, the total number of murder-paths and the total number of non-murder-paths is the same? Is this totally off base? I hope that it is.
Anyway, if that’s true, then by not murdering, all I’ve done is put myself off of a murder-path. There’s one less murder in my world, but not one less murder absolutely. So, fine, live in my world and don’t worry about the others. But whence that rule? That seems arbitrary, and I’m not allowed to apply it in order to localize my ethical considerations in any other case.
On a macro level, a Many Worlds model should be mathematically equivalent to a One World + Probabilities model. Being unhappy that in 0.01% of Many Worlds you are a murderer is like being unhappy that with probability 0.01% you are a murderer in One World. The difference is that in One World you can later say “I was lucky” or “I was unlucky”, while in the Many Worlds model you can just say “this is a lucky branch” or “this is an unlucky branch”.
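One way to see the claimed equivalence (a sketch under the usual Born-measure reading, with notation of my own): let μ(b) be the measure of branch b and let M(b) be 1 if the murder happens in b and 0 otherwise. The measure-weighted proportion of murder-branches is then exactly the single-world probability of the murder:

    \sum_{b} \mu(b)\, M(b) \;=\; \Pr[\text{murder}] \;=\; 0.0001,

so any decision that lowers the right-hand side lowers the left-hand side by the same amount.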
But it seems to me that whether or not I murder someone, the total number of murder-paths and the total number of non-murder-paths is the same?
At this point it seems to me that you are mixing a Many Worlds model with a naive determinism, and the problem is with the naive determinism. Imagine saying this: “on the day I turned 17, there is one fixed path towards the future, where I either commit a murder or don’t, and the result is the same whatever I do”. Is this right, or wrong, or confused, or...? Because this is what you are saying, just adding Many Worlds. The difference is that in the One World model, if you say “I will flip a coin, and based on the result I will kill him or not” and you mean it, then you are a murderer with probability 50%, while in Many Worlds you are a murderer in 50% of branches. (Of course, with naive determinism the probability is also only in the mind—you were already determined to throw the coin with a given direction and speed.)
Simply speaking, in the Many Worlds model all probabilities happen, but higher probabilities happen “more” and lower probabilities happen “less”. You don’t want to be a murderer? Then behave so that your probability of murdering someone is as small as possible! This is equally valid advice for One World and Many Worlds.
So, fine, live in my world and don’t worry about the others. But whence that rule?
Because you can’t influence what happens in the other branches. However, if you did something that could lead with some probability to another person’s death (e.g. shooting at them and missing), you should understand that it was a bad thing which made you (in some other branch) a murderer, so you should not do it again (but neither should you do it again in One World). On the other hand, if you did something that could lead to a good outcome, but you randomly failed, you did (in some other branch) a good thing. (Careful! You have a strong bias toward overestimating the probability of the good outcome. So don’t reward yourself too much for trying.)
Being unhappy that in 0.01% of Many Worlds you are a murderer is like being unhappy that with probability 0.01% you are a murderer in One World.
That doesn’t seem plausible. If there’s a 0.01% probability that I’m a murderer (and there is only one world), then if I’m not in fact a murderer, I have committed no murders. If there are many worlds, then I have committed no murders in this world, but the ‘me’ in another world (whose path approximates mine to the extent that I would call that person ‘me’) in fact is a murderer. It seems like a difference between some murders and no murders.
Because this is what you are saying, just adding Many Worlds.
I’m saying that depending on what I do, I end up in a non-murder path or a murder path. But nothing I do can change the number of non-murder or murder paths. So it’s not deterministic as regards my position in this selection, just deterministic as regards the selection itself. I can’t causally interact with other worlds, so my not murdering in one world has no effect on any other worlds. If there are five murder worlds branching off from myself at 17, then there are five no matter what. Maybe I can adjust that number prior to the day I turn 17, but there’s still a fixed number of murder worlds extending from the day I was born, and there’s nothing I can do to change that. Is that a faulty case of determinism?
Because you can’t influence what happens in the other branches.
That’s a good point. Would you be willing to commit to an a priori ethical principle such that ought implies can?
If there are five murder worlds branching off from myself at 17, then there are five no matter what.
That’s equivalent to saying “if at the moment of my 17th birthday there is a 5% probability that I will murder someone, then in that moment there is a 5% probability that I will murder someone no matter what”. I agree with this.
there’s still a fixed number of murder worlds extending from the day I was born, and there’s nothing I can do to change that.
That’s equivalent to saying “if on the day I was born there is an X% chance that I will become a murderer, there is nothing I can do to change that probability on that day”. True; you can’t travel back in time and create a counterfactual universe.
Short summary: You are mixing together two different views—the timeful and the timeless view. In the timeful view you can say “today at 12:00 I decided to kill my neighbor”, and it makes sense. Then you switch to the position of a ceiling cat, an independent observer outside of our universe, outside of our time, and say “I cannot change the fact that today at 12:00 I killed my neighbor”. Yes, it also makes sense; if something happened, it cannot non-happen. But we are confusing two narrators here: the real you, and the ceiling cat. You decided to kill your neighbor. The ceiling cat cannot decide that you didn’t, because the ceiling cat does not live in this universe; it can only observe what you did. The reason you killed your neighbor is that you, existing in this universe, decided to do so. You are the cause. The ceiling cat sees your action as determined, because it is outside of the universe.
If we apply this to the Many Worlds hypothesis, there are 100 different yous, and one ceiling cat. Of those, 5 yous commit murder (because they decided to do so), and 95 don’t (because they decided otherwise, or just failed to murder successfully). Inside the universes, the 5 yous are murderers and the 95 are not. The ceiling cat may decide to blame those 95 for the actions of those 5, but that’s the ceiling cat’s decision. It should at least give you credit for keeping the ratio at 5:95 instead of e.g. 50:50.
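As a small worked example of why the ratio deserves credit (my arithmetic, continuing the 100-branch toy model above): the measure-weighted count of murders is

    \tfrac{5}{100} = 0.05 \quad \text{versus} \quad \tfrac{50}{100} = 0.5,

so the 5:95 decision-maker commits a tenth as much murder in the only sense that aggregates across branches.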
Would you be willing to commit to an a priori ethical principle such that ought implies can?
That’s tricky. In some sense, we can’t do anything unless the atoms in our bodies do it; and our atoms are following the laws of physics. In some sense, there is no such thing as “can”, if we want to examine things on the atomic level. (And that’s just as true in Many Worlds as in One World; only in One World there is also randomness in the equations.) In another sense, humans are decision-makers. But we are decision-makers built from atoms, not decision-makers about the atoms we are built from.
So my answer would be that “ought” implies psychological “can”, not atomic “can”. (Because the whole of ethics exists on the psychological level, not on the atomic level.)
Short summary: You are mixing together two different views—the timeful and the timeless view.
This sounds right to me, and I think your subsequent analysis is on target. So we have two views, the timeless view and the timeful view, and we can’t (at least directly) translate ethical principles like ‘minimize evils’ across the views. So say we grant this and move on from here. Maybe my question is just that the timeless view is one in which ethics seems to make no sense (or at least not the same kind of sense), while the timeful view is one in which it is a pressing concern. Would you object to that?
the timeless view is one in which ethics seems to make no sense
I didn’t fully realize that previously, but yes—in the timeless view there is no time, no change, no choice. Ethics is all about choices.
Ethical reasoning only makes sense in time, because the process of ethical reasoning is moving the particles in your brain, and the physical consequence of that can be a good or evil action. Ethics can have an influence on the universe only if it is a part of the universe. The whole universe is determined only by its laws and its contents. The only way ethics can act is through the brains of people who contemplate it. Ethics is a human product (though we can discuss how much freedom we had in creating this product; whether it would be different if we had a different history or biology) and it makes sense only on the human level, not on the level of particles.
I just stick with the timeless view and don’t have any trouble with ethics in it, but that’s because I’ve got all the phenomena of time fully embedded in the timeless view, including choice and morality. :)
Ethics is a human product (though we can discuss how much freedom we had in creating this product; whether it would be different if we had a different history or biology) and it makes sense only on the human level, not on the level of particles.
I’m happy with the idea that ethics is a human product (since this doesn’t imply that it’s arbitrary or illusory or anything like that). I take this to mean, basically, that ethics concerns the relation of some subsystems with others. There’s no ethical language which makes sense from the ‘top-down’ or from a global perspective. But there’s also nothing to prevent (this is Eliezer’s meaning, I guess) a non-global perspective from being worked out in which ethical language does make sense. And this perspective isn’t arbitrary, because the subsystems working it out have always occupied that perspective as subsystems. To see an algorithm from the inside is to see the world as a whole by seeing it as potentially involved in this algorithm. And this is what leads to the confusion between the global, timeless view and the (no less global, in some sense) timeful inside-an-algorithm view.
If that’s all passably normal (as skeptical as I am about the coherence of the idea of ‘adding up to normality’), then the question that remains is what I should do with my idea of things mattering ethically. Maybe the answer here is to see ethical agents as ontologically fundamental or something, though that sounds dangerously anthropocentric. But I don’t know how to justify the idea that physically-fundamental = ontologically-fundamental either.
Would you be willing to commit to an a priori ethical principle such that ought implies can?
I’m not Viliam Bur, but I wouldn’t quite agree with this, in that time matters. It’s not incoherent to talk about a system that can’t do X, could have done X, and ought to have done X, for example. It’s similarly not incoherent to talk about a system that can’t do X now but ought to have acted in the past so as to be able to do X now.
But yes, in general I would say the purpose of ethics is to determine right action. If we’re talking about the ethical status of a system with respect to actions we are virtually certain the system could not have taken, cannot take, and will not be able to take, then we’re no longer talking about ethics in any straightforward sense.
Okay, so let’s adopt ‘ought implies can’ then, and restrict it to the same tense: if I ought to do X, I can do X. If I could have done (but can no longer do) X, then I ought to have done (but no longer ought to do) X.
How does this, in connection with MW, interact with consequentialism? The consequences of my actions can’t determine how much murdering I do (in the big world sense), just whether or not I fall on a murder-path. In the big world sense, I can’t (and therefore ought not) change the number of murder-paths. The consequence at which I should aim is the nature of the path I inhabit, because that’s what I can change.
Maybe this is right, but if it is, it seems to me to be an oddly subjective form of consequentialism. I’m not sure if this captures my thought, but it seems that it’s not as if I’m making the world a better place; I’m just putting myself in a better world.
it seems that it’s not as if I’m making the world a better place; I’m just putting myself in a better world.
It seems like you are not making the world a better place because you think about a fixed probability of becoming a murderer, which your decisions cannot change. But the probability of you becoming a murderer is a result of your decisions.
You have reversed the causality, because you imagine the probability of you ever being a murderer as something that existed earlier, and your decisions about murdering as something that happens later.
You treat the probability of something happening in the future as a fact that happened in the past. (Which is a common error. When humans talk about “outside of time”, they always imagine it in the past. No, the past is not outside of time; it is a part of time.)
The consequences of my actions can’t determine how much murdering I do (in the big world sense), [...] the nature of the path I inhabit, because that’s what I can change.
I’m not at all convinced that I endorse what you are doing with the word “I” here.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit; I (as defined here) inhabits all paths.
Conversely, if we want to say that there exists a single path that I inhabit (a much more conventional way of speaking), then murders committed on other branches are not murders I commit.
I’m not sure if that affects your point or not, but I have trouble refactoring your point to eliminate that confusion, so it seems relevant.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit; I (as defined here) inhabits all paths.
True, good point. That seems to be salt on the wound though. What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai). If Ai and Bi are not the same person (even if they’re very similar people), then I’m not responsible for Bi at all, but I’m also no longer decoherent: there is always only one world with me in it. I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart. Am I not responsible for the actions of my counterparts?
That’s a hard question to answer, but say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing? If we say ‘no’ then it seems like we’re committed to at least some kind of libertarianism as regards free will. I understood the compatibilist view around here to be that you are responsible for your actions by way of being constituted in such and such a way. But my peaceful copies are constituted in largely the same way as the killer copies are. We only count them as numerically different on the basis of seemingly trivial distinctions like the fact that they’re embodied in different hardware.
What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
Well, OK. We are, of course, free to consider any entity we like an extension of our own identity in the sense you describe here. (I might similarly consider some other entity in my own path to be a “parallel me” if I wish. Heck, I might consider you a parallel me.)
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai).
It is not at all clear that I know what the reasons are that I’m ethically responsible for myself, if I am the sort of complex mostly-ignorant-of-its-own-activities entity scattered across multiple branches that you are positing I am. Again, transplanting an ethical intuition (like “I am ethically responsible for my actions”) unexamined from one context to a vastly different one is rarely justified.
So a good place to start might be to ask why I’m ethically responsible for myself, and why it matters.
I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart.
Can you say more about that preference? I don’t share it, myself. I would say, rather, that I have some degree of confidence in the claim “Ai and Bi are the same person” and some degree of confidence that “Ai and Bi are different people,” and that multiple observers can have different degrees of confidence in these claims about a given (Ai, Bi) pair, and there’s no fact of the matter.
say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing?
Say I belong to a group of distinct individuals, who are born and raised in the usual way, with no copying involved. A year later, some large percentage of the individuals in my group become serial killers, while others do not. Are the peaceful individuals morally responsible for the serial killing?
Almost all of the relevant factors governing my answer to your example seem to apply to mine as well. (My own answer to both questions is “Yes, within limits,” those limits largely being a function of the degree to which observations of Ai can serve as evidence about Bi.)
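What “serve as evidence” cashes out to here can be written as an ordinary Bayesian update (my formalization, not anything stated in the thread): an observation of Ai shifts your estimate about Bi only insofar as the likelihoods differ,

    \Pr[B_i \text{ kills} \mid A_i \text{ peaceful}] \;=\; \frac{\Pr[A_i \text{ peaceful} \mid B_i \text{ kills}]\;\Pr[B_i \text{ kills}]}{\Pr[A_i \text{ peaceful}]},

and the “limits” in question track how far that posterior can move away from the prior.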
But it seems to me that whether or not I murder someone, the total number of murder-paths and the total number of non-murder-paths is the same? Is this totally off base? I hope that it is.
Good news! It is totally off base. There is nothing in quantum mechanics requiring that the number of branches corresponding to an arbitrary macroscopic event and its negation must be equal.
There is nothing in quantum mechanics requiring that the number of branches corresponding to an arbitrary macroscopic event and its negation must be equal.
Aww, you got my hopes up. There’s nothing in my set-up that requires them to be equal either, just that the numbers be fixed.
So, fine, live in my world and don’t worry about the others. But whence that rule? That seems arbitrary
That feeling of arbitrariness is, IMHO, worth exploring more carefully.
Suppose, for example, it turns out that we don’t live in a Big World… that this is all there is, and that events either happen in this world or they don’t happen at all. Suppose you somehow were to receive confirmation of this. Big relief, right? Now you really can reduce the total amount of whatever in all of existence everywhere, so actions have meaning again.
But then you meet someone who says “But what about hypothetical people? No matter how many people I don’t actually murder, there are still countless hypothetical people being hypothetically murdered! And, sure, you can tell me to just worry about actual people and not worry about the others, but whence that rule? It seems arbitrary.”
Would you find their position reasonable? What would you say to them, if not?
But then you meet someone who says “But what about hypothetical people? No matter how many people I don’t actually murder, there are still countless hypothetical people being hypothetically murdered! And, sure, you can tell me to just worry about actual people and not worry about the others, but whence that rule? It seems arbitrary.”
Well put. This actually does come up in a philosophical view known as modal realism. Roughly, if we can make true or false claims about possible worlds, then those worlds must be actual in order to be truth-makers. So all possible worlds are actual.
If this someone said what you say he’d say, suppose I ask this in reply:
E: “Wait, are those hypothetical people being hypothetically murdered? Is that true?”
S: “Yes! And there’s nothing you can do!”
E: “And there’s some reality to which this part of the map, the hypothetical-people-being-murdered, corresponds? Such that the hypothetical murder of these people is a real part of our world?”
S: “Well, sure.”
E: “Okay, well if we’re going to venture into modal realism, then these hypothetical murders just raise the same conflict as the other-branch murders do.”
S: “Suppose we’re not modal realists then. Suppose there’s just not really a fact of the matter about whether or not hypothetical, and therefore non-existent, people are being murdered.”
E: “No problem. I’m just interested in reducing real evils.”
S: “Isn’t that an arbitrary determination?”
E: “No, it’s the exact opposite of arbitrary. I also don’t take non-existent evidence as evidence, I don’t eat non-existent fruit, etc. If we call this arbitrary, then what isn’t?”
I would certainly say you’re justified in not caring about hypothetical murders. I would also say you’re justified in not caring about murders in other MW branches.
What you seem to want to say here is that because murders in other MW branches are “actual”, you care about them, but since murders in my imagination are not “actual”, you don’t.
I have no idea what the word “actual” could possibly refer to so as to do the work you want it to do here.
There are certainly clusters of consistent experience to which a hypothetical murder of a hypothetical person corresponds. Those clusters might, for example, take the form of certain patterns of neural activation in my brain… that’s how I usually model it, anyway. I’m happy to say that those are “actual” patterns of neural activation. I would not say that they are “actual” murdered human beings.
That said, I’m not really sure it matters if they are. I mean, if they are, then… hold on, let me visualize… there: I just “actually” resurrected them and they are now “actually” extremely happy. Was their former murder still evil? At best, it seems all of my preconceived notions about murder (e.g., that it’s a permanent state change of some kind) have just been thrown out the window, and I should give some serious thought to why I think murder is evil in the first place.
It seems something similar is true about existence in a Big World… if I want to incorporate that into my thinking, it seems I ought to rethink all of my assumptions. Transplanting a moral intuition about murder derived in a small world into a big world without any alteration seems like a recipe for walking off conceptual cliffs.
What you seem to want to say here is that because murders in other MW branches are “actual”, you care about them, but since murders in my imagination are not “actual”, you don’t.
Right, exactly. I’m taking this sense of ‘actual’ (not literally) from the sequences. This is from ‘On Being Decoherent’:
You only see nearby objects, not objects light-years away, because photons from those objects can’t reach you, therefore you can’t see them. By a similar locality principle, you don’t interact with distant configurations.
Later on in this post EY says that the Big World is already at issue in spatial terms: somewhere far away, there is another Esar (or someone enough like me to count as me). The implication is that existing in another world is analogous to existing in another place. And I certainly don’t think I’m allowed to apply the ‘keep your own corner clean’ principle to spatial zones.
In ‘Living in Many Worlds’, EY says:
“Oh, there are a few implications of many-worlds for ethics. Average utilitarianism suddenly looks a lot more attractive—you don’t need to worry about creating as many people as possible, because there are already plenty of people exploring person-space. You just want the average quality of life to be as high as possible, in the future worlds that are your responsibility.
And you should always take joy in discovery, as long as you personally don’t know a thing. It is meaningless to talk of being the “first” or the “only” person to know a thing, when everything knowable is known within worlds that are in neither your past nor your future, and are neither before or after you.”
I take him to mean that there are really, actually many other people who exist (just in different worlds) and that I’m responsible for the quality of life for some sub-set of those people. And that there really are, actually, many people in other worlds who have discovered or know things I might take myself to have discovered or be the first to know. Such that it’s a small but real overturning of normality that I can’t really be the first to know something. (That, I assume, is what an implication of MW for ethics amounts to: some overturning of some ethical normality.)
I’m happy to say that those are “actual” patterns of neural activation. I would not say that they are “actual” murdered human beings.
If you modeled it to the point that you fully modeled a human being in your brain, and then murdered them, it seems obvious that you did actually kill someone. Hypothetical (but considered) murders fail to be murders because they fail to be good enough models.
Ordinarily, I would describe someone who is uncertain about obvious things as a fool. It’s not clear to me that I’m a fool, but it is also not at all clear to me that murder as you’ve defined it in this conversation is evil.
If you could explain that obvious truth to me, I might learn something.
Ordinarily, I would describe someone who is uncertain about obvious things as a fool. It’s not clear to me that I’m a fool, but it is also not at all clear to me that murder as you’ve defined it in this conversation is evil.
I didn’t mean to call you a fool, only I don’t think the disruption of your intuitions is a disruption of your ethical intuitions. It’s unintuitive to think of a human being as something fully emulated within another human being’s brain, but if this is actually possible, it’s not unintuitive that ending this neural activity would be murder (if it weren’t some other form of killing a human being). My point was just that the distinction in hardware can’t make a difference to the question of whether or not ending a neural activity is killing, and, given certain further conditions, murder.
Since I don’t think we’re any longer talking about my original question, I think I’ll tap out.