So, fine, live in my world and don’t worry about the others. But whence that rule? That seems arbitrary.
That feeling of arbitrariness is, IMHO, worth exploring more carefully.
Suppose, for example, it turns out that we don’t live in a Big World… that this is all there is, and that events either happen in this world or they don’t happen at all. Suppose you somehow were to receive confirmation of this. Big relief, right? Now you really can reduce the total amount of whatever in all of existence everywhere, so actions have meaning again.
But then you meet someone who says “But what about hypothetical people? No matter how many people I don’t actually murder, there are still countless hypothetical people being hypothetically murdered! And, sure, you can tell me to just worry about actual people and not worry about the others, but whence that rule? It seems arbitrary.”
Would you find their position reasonable? If not, what would you say to them?
But then you meet someone who says “But what about hypothetical people? No matter how many people I don’t actually murder, there are still countless hypothetical people being hypothetically murdered! And, sure, you can tell me to just worry about actual people and not worry about the others, but whence that rule? It seems arbitrary.”
Well put. This actually does come up in a philosophical view known as modal realism (most associated with David Lewis). Roughly, if we can make true or false claims about possible worlds, then those worlds must be actual in order to serve as truth-makers. So all possible worlds are actual.
If this someone said what you say he’d say, suppose I ask this in reply:
E: “Wait, are those hypothetical people being hypothetically murdered? Is that true?”
S: “Yes! And there’s nothing you can do!”
E: “And there’s some reality to which this part of the map, the hypothetical-people-being-murdered, corresponds? Such that the hypothetical murder of these people is a real part of our world?”
S: “Well, sure.”
E: “Okay, but if we’re going to venture into modal realism, then this just runs into the same conflict as before.”
S: “Suppose we’re not modal realists, then. Suppose there’s just not really a fact of the matter about whether or not hypothetical, and therefore non-existent, people are being murdered.”
E: “No problem. I’m just interested in reducing real evils.”
S: “Isn’t that an arbitrary determination?”
E: “No, it’s the exact opposite of arbitrary. I also don’t take non-existent evidence as evidence, I don’t eat non-existent fruit, etc. If we call this arbitrary, then what isn’t?”
I would certainly say you’re justified in not caring about hypothetical murders. I would also say you’re justified in not caring about murders in other many-worlds (MW) branches.
What you seem to want to say here is that because murders in other MW branches are “actual”, you care about them, but since murders in my imagination are not “actual”, you don’t.
I have no idea what the word “actual” could possibly refer to so as to do the work you want it to do here.
There are certainly clusters of consistent experience to which a hypothetical murder of a hypothetical person corresponds. Those clusters might, for example, take the form of certain patterns of neural activation in my brain… that’s how I usually model it, anyway. I’m happy to say that those are “actual” patterns of neural activation. I would not say that they are “actual” murdered human beings.
That said, I’m not really sure it matters if they are. I mean, if they are, then… hold on, let me visualize… there: I just “actually” resurrected them and they are now “actually” extremely happy. Was their former murder still evil? At best, it seems all of my preconceived notions about murder (e.g., that it’s a permanent state change of some kind) have just been thrown out the window, and I should give some serious thought to why I think murder is evil in the first place.
It seems something similar is true about existence in a Big World… if I want to incorporate that into my thinking, it seems I ought to rethink all of my assumptions. Transplanting a moral intuition about murder derived in a small world into a big world without any alteration seems like a recipe for walking off conceptual cliffs.
What you seem to want to say here is that because murders in other MW branches are “actual”, you care about them, but since murders in my imagination are not “actual”, you don’t.
Right, exactly. I’m taking this sense of ‘actual’ (not literally) from the Sequences. This is from ‘On Being Decoherent’:
You only see nearby objects, not objects light-years away, because photons from those objects can’t reach you, therefore you can’t see them. By a similar locality principle, you don’t interact with distant configurations.
Later on in this post EY says that the Big World is already at issue in spatial terms: somewhere far away, there is another Esar (or someone enough like me to count as me). The implication is that existing in another world is analogous to existing in another place. And I certainly don’t think I’m allowed to apply the ‘keep your own corner clean’ principle to spatial zones.
In ‘Living in Many Worlds’, EY says:
“Oh, there are a few implications of many-worlds for ethics. Average utilitarianism suddenly looks a lot more attractive—you don’t need to worry about creating as many people as possible, because there are already plenty of people exploring person-space. You just want the average quality of life to be as high as possible, in the future worlds that are your responsibility.
And you should always take joy in discovery, as long as you personally don’t know a thing. It is meaningless to talk of being the “first” or the “only” person to know a thing, when everything knowable is known within worlds that are in neither your past nor your future, and are neither before nor after you.”
I take him to mean that there are really, actually, many other people who exist (just in different worlds) and that I’m responsible for the quality of life of some subset of those people. And that there really are, actually, many people in other worlds who have discovered or know things I might take myself to have discovered or be the first to know. Such that it’s a small but real overturning of normality that I can’t really be the first to know something. (That, I assume, is what an implication of MW for ethics amounts to: some overturning of some ethical normality.)
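To make the contrast in that quoted passage concrete, here is a rough formalization (my own gloss, not EY’s notation, with u_i standing for the welfare of person i): total utilitarianism sums welfare, so creating additional people with positive welfare raises the score, while average utilitarianism divides by the head count, so only quality of life matters:

$$U_{\text{total}} = \sum_{i=1}^{n} u_i \qquad\qquad U_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} u_i$$

If the Big World already fixes n at some enormous value outside my control, then maximizing the total over the branches I can influence comes to the same thing as maximizing the average within them, which I take to be why average utilitarianism “suddenly looks a lot more attractive.”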
I’m happy to say that those are “actual” patterns of neural activation. I would not say that they are “actual” murdered human beings.
If you modeled it to the point that you fully modeled a human being in your brain, and then murdered them, it seems obvious that you did actually kill someone. Hypothetical (but considered) murders fail to be murders because they fail to be good enough models.
Yes...obviously!
Ordinarily, I would describe someone who is uncertain about obvious things as a fool. It’s not clear to me that I’m a fool, but it is also not at all clear to me that murder as you’ve defined it in this conversation is evil.
If you could explain that obvious truth to me, I might learn something.
Ordinarily, I would describe someone who is uncertain about obvious things as a fool. It’s not clear to me that I’m a fool, but it is also not at all clear to me that murder as you’ve defined it in this conversation is evil.
I didn’t mean to call you a fool, only that I don’t think the disruption of your intuitions is a disruption of your ethical intuitions. It’s unintuitive to think of a human being as something fully emulated within another human being’s brain, but if this is actually possible, it’s not unintuitive that ending this neural activity would be murder (if it weren’t some other form of killing a human being). My point was just that the distinction in hardware can’t make a difference to the question of whether or not ending a neural activity is killing, and, given a set of constants, murder.
Since I don’t think we’re any longer talking about my original question, I think I’ll tap out.