That’s just very poor consequentialism in my eyes. Instead of me pointing out the most abominable scenarios that I believe immediately follow from such a consequentialism, why don’t you supply one that you think would be objectionable to others, but which you’d be willing to defend?
As for your spin on the question, while I think it is a different question than the original, I see no need to shy away from it. Some people are worth killing. That’s not to say there isn’t something of value in them, but choice is about tradeoffs, and I don’t expect that to change with greater technology. The particular tradeoffs will change, but that there are tradeoffs will not.
And in the same way, a great many more people are not worth saving either.
Sure, assuming we’re clear on what the question means.