Is there a separate name for “consequentialism over world histories” in comparison to “consequentialism over world states”?
What I mean is, say you have a scenario where you can kill off person A and replace him with a happier person B. As I understand the terms, deontology might say “don’t do it, killing people is bad”. Consequentialism over world states would say “do it, utility will increase” (maybe with provisos that no-one notices or remembers the killing). Consequentialism over world histories would say “the utility contribution of the final state is higher with the happy person in it, but the killing event subtracts utility and makes a net negative, so don’t do it”.
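To make the distinction concrete, here’s a minimal Python sketch (my own illustration; the +10/−15 numbers and function names are made up, not anything from the scenario itself). A state-based consequentialist scores only the final world state, while a history-based one also scores the events that led there.

    # A "history" here is just a final state plus the events along the way.
    def utility_over_states(history, state_value):
        # Score only the world state the history ends in.
        return state_value(history["final_state"])

    def utility_over_histories(history, state_value, event_value):
        # Score the final state plus every event along the way.
        return state_value(history["final_state"]) + sum(
            event_value(e) for e in history["events"]
        )

    # Hypothetical numbers for the replacement scenario: the happier person
    # adds +10 to the final state, but the killing event itself is worth -15.
    history = {"final_state": {"happiness_gain": 10}, "events": ["killing"]}
    state_value = lambda s: s["happiness_gain"]
    event_value = lambda e: -15 if e == "killing" else 0

    print(utility_over_states(history, state_value))                   # 10 -> "do it"
    print(utility_over_histories(history, state_value, event_value))   # -5 -> "don't do it"

With those made-up numbers, the state-based rule endorses the replacement (+10) while the history-based rule rejects it (−5), matching the verdicts above.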
I don’t know if there’s a name for it. In general, consequentialism is over the entire timeline. You could value events that happen in a specific order, or value events that happen earlier, and so on. I don’t like the idea of judging based on things like that, though that’s just part of my general dislike of judging based on things that cannot be subjectively experienced. (You can subjectively experience the memory of things happening in a certain order, but each instant of remembering is itself instantaneous, and you’d have no way of knowing whether those instants happened in a different order, or even whether some of them happened at all.)
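For instance, an order-sensitive or earlier-weighted history utility might look like this, continuing the sketch above (the decay factor and the event values are made up for illustration):

    # An illustrative history utility that weights earlier events more heavily,
    # so the same set of events scores differently depending on when they occur.
    def earlier_weighted_utility(events, event_value, decay=0.9):
        # The event at time t gets weight decay**t, so earlier events count more.
        return sum((decay ** t) * event_value(e) for t, e in enumerate(events))

    value = lambda e: 1 if e == "good" else -1
    print(earlier_weighted_utility(["good", "bad"], value))   #  1.0 - 0.9 =  0.1
    print(earlier_weighted_utility(["bad", "good"], value))   # -1.0 + 0.9 = -0.1

Same events, reversed order, opposite sign: that’s the kind of judgment I’m uneasy about, since nothing in anyone’s experience distinguishes the two timelines after the fact.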
It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob. I was talking about preventing the existence of Alice to make way for Bob. Alice is not dying. I am removing the potential for her to exist. But potential is just an abstraction. There is not some platonic potential of Alice floating out in space that I just killed.
Due to loss aversion, losing the potential for Alice may seem worse than gaining the potential for Bob, but this isn’t something that can be justified on consequentialist grounds.
I don’t know if there’s a name for it. In general, consequentialism is over the entire timeline.
Yes, that makes the most sense.
It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob.
No no, I understand that you’re not talking about killing people off and replacing them, I was just trying (unsuccessfully) to give the clearest example I could.
And I agree with your consequentialist analysis of indifference between the creation of Alice and Bob if they have the same utility … unless “playing god” events have negative utility.