Would you be willing to commit to an a priori ethical principle such that ought implies can?
I’m not Viliam Bur, but I wouldn’t quite agree with this, in that time matters. It’s not incoherent to talk about a system that can’t do X, could have done X, and ought to have done X, for example. It’s similarly not incoherent to talk about a system that can’t do X now but ought to have acted in the past so as to be able to do X now.
But yes, in general I would say the purpose of ethics is to determine right action. If we’re talking about the ethical status of a system with respect to actions we are virtually certain the system could not have taken, can not take, and will not be able to take, then we’re no longer talking about ethics in any straightforward sense.
Okay, so let’s adopt ‘ought implies can’ then, and restrict it to the same tense: if I ought to do X, I can do X. If I could have done (but can no longer do) X, then I ought to have done (but no longer ought to do) X.
How does this, in connection with MW, interact with consequentialism? The consequences of my actions can’t determine how much murdering I do (in the big world sense), just whether or not I fall on a murder-path. In the big world sense, I can’t (and therefore ought not) change the number of murder-paths. The consequence at which I should aim is the nature of the path I inhabit, because that’s what I can change.
Maybe this is right, but if it is, it seems to me to be an oddly subjective form of consequentialism. I’m not sure if this captures my thought, but it seems that it’s not as if I’m making the world a better place, I’m just putting myself in a better world.
it seems that it’s not as if I’m making the world a better place, I’m just putting myself in a better world.
It seems to you that you are not making the world a better place because you think of the probability of becoming a murderer as fixed, as something your decisions cannot change. But the probability of you becoming a murderer is a result of your decisions.
You have reversed the causality: you imagine the probability of your ever being a murderer as something that existed earlier, and your decisions about murdering as something that happens later.
You treat the probability of something happening in the future as a fact that already happened in the past. (Which is a common error. When humans talk about “outside of time”, they always imagine it as being in the past. No, the past is not outside of time; it is a part of time.)
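To put that a bit more formally (my notation, not something anyone here has committed to): write \pi for your decision algorithm and w(b \mid \pi) for the weight of branch b given that algorithm. Then, roughly,

P(\text{you ever murder}) = \sum_{b} w(b \mid \pi) \cdot \mathbf{1}[\text{a murder by you occurs on branch } b],

and this quantity is a function of \pi. It is not a constant that was fixed before you decided anything; it is fixed, among other things, by which deciding algorithm you are.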
The consequences of my actions can’t determine how much murdering I do (in the big world sense), [...] the nature of the path I inhabit, because that’s what I can change.
I’m not at all convinced that I endorse what you are doing with the word “I” here.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit; I (as defined here) inhabits all paths.
Conversely, if we want to say that there exists a single path that I inhabit (a much more conventional way of speaking), then murders committed on other branches are not murders I commit.
I’m not sure if that affects your point or not, but I have trouble refactoring your point to eliminate that confusion, so it seems relevant.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit; I (as defined here) inhabits all paths.
True, good point. That seems to be salt in the wound, though. What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai). If Ai and Bi are not the same person (even if they’re very similar people) then I’m not responsible for Bi at all, but I’m also no longer decoherent: there is always only one world with me in it. I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart. Am I not responsible for the actions of my counterparts?
That’s a hard question to answer, but say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing? If we say ‘no’ then it seems like we’re committed to at least some kind of libertarianism as regards free will. I understood the compatibilist view around here to be that you are responsible for your actions by way of being constituted in such and such a way. But my peaceful copies are constituted in largely the same way as the killer copies are. We only count them as numerically different on the basis of seemingly trivial distinctions like the fact that they’re embodied in different hardware.
What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
Well, OK. We are, of course, free to consider any entity we like an extension of our own identity in the sense you describe here. (I might similarly consider some other entity in my own path to be a “parallel me” if I wish. Heck, I might consider you a parallel me.)
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai).
It is not at all clear that I know what the reasons are that I’m ethically responsible for myself, if I am the sort of complex mostly-ignorant-of-its-own-activities entity scattered across multiple branches that you are positing I am. Again, transplanting an ethical intuition (like “I am ethically responsible for my actions”) unexamined from one context to a vastly different one is rarely justified.
So a good place to start might be to ask why I’m ethically responsible for myself, and why it matters.
I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart.
Can you say more about that preference? I don’t share it, myself. I would say, rather, that I have some degree of confidence in the claim “Ai and Bi are the same person” and some degree of confidence that “Ai and Bi are different people,” and that multiple observers can have different degrees of confidence in these claims about a given (Ai, Bi) pair, and there’s no fact of the matter.
say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing?
Say I belong to a group of distinct individuals, who are born and raised in the usual way, with no copying involved. A year later, some large percentage of the individuals in my group become serial killers, while others do not. Are the peaceful individuals morally responsible for the serial killing?
Almost all of the relevant factors governing my answer to your example seem to apply to mine as well. (My own answer to both questions is “Yes, within limits,” those limits largely being a function of the degree to which observations of Ai can serve as evidence about Bi.)
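To gesture at what I mean by “evidence” here, in my own notation and under the simplifying assumption that the copies’ later behavior is independent given whatever constitution c they shared at the moment of copying: observing that Ai stayed peaceful tells me something about c, and through c something about Bi, roughly

P(\text{Bi kills} \mid \text{Ai peaceful}) = \sum_{c} P(\text{Bi kills} \mid c) \, P(c \mid \text{Ai peaceful}).

The more the copies’ behavior depends on that shared constitution rather than on what happened to them after the copying, the stronger the evidential link, and the further those limits extend.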