There aren’t any. It’s an inherently deontological idea.
Suppose Alice or Bob can exist, but not both. Under deontology, you could talk about which one of them will exist if you do nothing, and ask whether it’s a good idea to change that. You might decide that you can’t play god and have to leave it alone, that you should make sure the one with the better life comes into existence, or that since they’re not born yet neither of them has any rights and you can choose whichever you like.
Under consequentialism, it’s a meaningless question. There is one universe with Alice. There is one with Bob. You must choose which you value more. Choosing not to act is a choice.
If Alice and Bob have the same utility, then you should be indifferent. If you consider preventing the birth of Alice with X utility and causing the birth of Bob with Y utility, that’s the same as preventing the birth of Alice with X utility and causing the birth of Bob with X utility plus increasing the utility of Bob from X to Y. This has a total utility of 0 + (Y-X) = Y-X.
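Writing that bookkeeping out explicitly (X and Y are just the placeholder utilities from above):

$$\underbrace{(-X)}_{\text{prevent Alice}} \;+\; \underbrace{(+X)}_{\text{create Bob at }X} \;+\; \underbrace{(Y-X)}_{\text{raise Bob to }Y} \;=\; 0 + (Y-X) \;=\; Y-X$$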
Is there a separate name for “consequentialism over world histories” as opposed to “consequentialism over world states”?
What I mean is, say you have a scenario where you can kill off person A and replace him with a happier person B. As I understand the terms, deontology might say “don’t do it, killing people is bad”. Consequentialism over world states would say “do it, utility will increase” (maybe with provisos that no one notices or remembers the killing). Consequentialism over world histories would say “the utility contribution of the final state is higher with the happy person in it, but the killing event subtracts utility and makes a net negative, so don’t do it”.
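Here is a rough numerical sketch of what I mean (the utility numbers are arbitrary placeholders, only there to show the bookkeeping):

```python
# Toy comparison of the two evaluation rules with made-up utilities.
# World-state consequentialism scores only the final state; world-history
# consequentialism also counts the utility of events along the way.

utility_of_A_life = 5      # hypothetical value of A's continued life
utility_of_B_life = 8      # hypothetical value of B's (happier) life
utility_of_killing = -10   # hypothetical disutility of the killing event itself

# Option 1: do nothing, A keeps living.
state_do_nothing = utility_of_A_life
history_do_nothing = utility_of_A_life

# Option 2: kill A and replace him with B.
state_replace = utility_of_B_life                        # only the end state counts
history_replace = utility_of_B_life + utility_of_killing # the event counts too

print("world states:    replace?", state_replace > state_do_nothing)      # True  (8 > 5)
print("world histories: replace?", history_replace > history_do_nothing)  # False (-2 < 5)
```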
I don’t know if there’s a name for it. In general, consequentialism is over the entire timeline. You could value events that have a specific order, or value events that happen earlier, etc. I don’t like the idea of judging based on things like that, but it’s just part of my general dislike of judging based on things that cannot be subjectively experienced. (You can subjectively experience the memory of things happening in a certain order, but each instant of you remembering it is instantaneous, and you’d have no way of knowing if the instants happened in a different order, or even if some of them didn’t happen.)
It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob. I was talking about preventing the existence of Alice to make way for Bob. Alice is not dying. I am removing the potential for her to exist. But potential is just an abstraction. There is not some platonic potential of Alice floating out in space that I just killed.
Due to loss aversion, losing the potential for Alice may seem worse than gaining the potential for Bob, but this isn’t something that can be justified on consequentialist grounds.
I don’t know if there’s a name for it. In general, consequentialism is over the entire timeline.
Yes, that makes the most sense.
It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob.
No no, I understand that you’re not talking about killing people off and replacing them, I was just trying (unsuccessfully) to give the clearest example I could.
And I agree with your consequentialist analysis of indifference between creating Alice and creating Bob if they have the same utility … unless “playing god” events themselves have negative utility.